Hacker News | Frannky's comments

Opencode go with open models is pretty good

Give opencode + mimo V2 pro a try...

I am honestly just happy they haven't figured out a way to lock users in, and that there are alternatives that can get the job done. I feel like they treat the user as a dumb peasant.

Self-hosted MinIO?

I was always very curious why people use Azure. It's clunky, difficult to set up, and crazily priced. I know one person who is very happy with them, but only because of the credits they gave him. I feel like I don't have a model that explains what's going on there, and it would be cool to know why people pay them versus the competition.


In my experience, the Azure endpoint was way faster and significantly cheaper than the OpenAI endpoint.


Tangential. I want a coffee maker that prepares coffee and cleans up after itself automatically. The ones I found are very expensive. Any DIY hack?


I know a person. Ex-Googler. Doing his own startup. He spent a year on a crazily complex product. Investors don't get it. Users don't get it. He spends 99% of his time explaining why his ideas are so good. You ask to try the product; weeks pass and it's just slides and video demos. When you eventually do try it, it's confusing and nothing really works.

I tried to make him understand that the constraint he needs to fit is that users must be able to grasp quickly why it's useful to them, and that it should work. He doesn't care. He says it's about the story, and that the story will drive millions, unlocking a super big team to build his idea. I said that's cool, but why not, in the meantime, ship one thing that is useful and works, then evolve step by step toward your vision while interacting with users and learning more about their problems? The answer was a one-hour speech on Google's leveling system. Maybe he's right. Time will tell.


I model LLMs as searchers: given an input, they search for and match an output. The massive number of parameters and the massive training data let them map data in a way that makes the search look like human thinking. They can also permute a little and still stay in a space that overlaps with reality.

The human brain may be doing something very similar: search, plus permutation via learned rules. It may just be doing it more functionally, with a greater ability to search over massive data that may have holes but gets filled in with synthetic data produced by mental subprocesses running on learned rules.

I think machines can eventually get there, especially if we figure out how to harness continuous models instead of discrete ones. And I have a feeling that functional analysis may be the key.


It's an interesting way to think about it. For every word you say, every message you write, every task you do, every thought you have, every subtle cue you give, there is a statistically best response / follow-up / output.

And all of that can be distilled and stored in such a small amount of data. If that's really how consciousness works in our minds (just another representation of "output"), it's fascinating.

The repercussions, though, could be concerning. On one hand, it means things like consciousness upload would be possible. On the other hand, it means security agencies could monitor people and figure out who is (literally) committing thoughtcrime. They'd just need to search the space and figure out what weights a person's internal model runs on, and you wouldn't actually need that much reference material to do it. Basically Minority Report.


I think you are mixing two concepts. I was just talking about having an LLM that can replicate human thinking in general, which is different from turning a specific person's brain into LLM weights.

In that second case, the problems you describe do emerge. But I can understand why you conflate the two, since having a model that works like a human may unlock the ability to dump a brain into model weights.


I don't use it for coding but as an agent backend. Maybe opencode was designed mainly for coding, but for me it's incredibly good as an agent, especially when paired with skills and a FastAPI server; opencode go (minimax) is just so much intelligence at an incredibly cheap price. Plus, you can talk to it via channels if you use a claw.


I see great potential in this use case, but haven’t found that many documented cases of people doing this.

Do you have resources you can point to / mind sharing your setup? What were the biggest problems / delights doing this?


No resources; I used Claude Code and did what I described in the original message. The experience was easy thanks to Claude doing the coding and deploying.


By "agent" you mean what?

Coding is mostly "agentic" already, so I'm a bit puzzled.


It's defined in the opencode docs, but it's also a general cross-industry term for a custom system prompt with its own permissions:

https://opencode.ai/docs/agents/
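
For context, the docs linked above describe declaring agents in opencode's config. A minimal sketch of what such a definition might look like in an opencode.json file; the agent name "docs-writer" and the exact field names here are illustrative from memory, so verify the schema against the linked page:

```json
{
  "agent": {
    "docs-writer": {
      "description": "Writes and updates project documentation",
      "prompt": "You are a careful technical writer. Keep edits minimal.",
      "tools": {
        "write": true,
        "bash": false
      }
    }
  }
}
```

The point is the pairing the comment describes: a custom system prompt plus its own permission set (here, file writes allowed but shell access denied).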


Thanks for referencing the docs. For me, an agent is an entity you can ask for something; it talks to you and tries to do what you asked it to do.

In this case, if you have a server with an endpoint, you can run opencode when the endpoint is called and pass it the prompt. opencode then thinks, plans, and acts according to your request, possibly using tools, skills, calling endpoints, etc.
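
The earlier comment mentions a FastAPI server for this; here is a stdlib-only Python sketch of the same pattern, an HTTP endpoint that shells out to opencode with the posted prompt. The `opencode run <prompt>` invocation is an assumption about opencode's non-interactive mode, so check `opencode --help` for the actual CLI:

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer


def build_agent_command(prompt: str) -> list[str]:
    # Assumed non-interactive invocation; verify against `opencode --help`.
    return ["opencode", "run", prompt]


def run_agent(prompt: str) -> str:
    # Hand the prompt to opencode and capture whatever it prints.
    result = subprocess.run(
        build_agent_command(prompt),
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout


class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect a JSON body like {"prompt": "..."}.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        answer = run_agent(body.get("prompt", ""))
        payload = json.dumps({"answer": answer}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


# To serve: HTTPServer(("127.0.0.1", 8000), AgentHandler).serve_forever()
```

Anything that can make an HTTP POST (a chat channel bot, a cron job, another agent) can then drive the opencode session through this one endpoint.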


I'm still kind of confused, but opencode itself comes with several agents built-in, and you can also build your own. So what does it mean to use opencode itself as an agent?


Claude Code also has build and plan agents.


This user has a comment history of hyping Show HN posts.


I mean, isn't that what Show HN is for? Plus it wasn't just hype; I was genuinely interested in getting more info on the SEO side of this project.

I've had a lot of nice people try out my own projects and leave comments in the past, and it meant a lot to me, so I'm just trying to pay that forward.

