officialchicken's comments

The investors are their customers - not the users of the end-product.

This shows a lack of understanding of how markets work. Investors make money when the company's valuation increases, and that valuation is the market's best risk-adjusted prediction of future profit.

How would Anthropic increase future profits without satisfying customers?


Early investors make money when later investors buy them out at inflated valuations.

Well sure, all market signals should be considered. As a casual observer, my received signals have been indicating that AI is getting sold at a loss to get market share, and more recent signals have indicated that users are really really sensitive to both costs and performance.

The weakest signal to me is investor money because, when you think about it, investors are betting on a future that may or may not arrive. Heck, even trends aren't guaranteed: "past performance is no guarantee," etc.


Have you seen the business models for these companies? Literal underpants gnome memes. OpenAI's goes like this:

1. Build AGI

2. Use said AGI to tell us how to become profitable

3. Profit!

Anthropic seems to be going all in on enterprise sales. Which means they don't actually have to please customers, or it's what ThePrimeagen humorously calls a "yacht problem"—a problem that only needs a solution after the IPO. For now all they have to do is convince corporate leadership that this is the future of work and sow enough FOMO to close those sales contracts; then their projected sales, and stock valuation, go through the roof.

Of course that value will collapse if they go long enough without delivering on their promises. That's why they call it a bubble. But by then, hopefully, Dario and the early investors will be long gone and even richer than when they started. Their only competitor, OpenAI, faces the same issues: the scalability problems won't go away, and addressing them doesn't drive stock valuation the way promising high rollers that AGI and total workforce automation are just around the corner does.


> What damage is that? (excluding the present case)

That seems to be an introspective question.


Extrospection is valid spection

I must have a really really outdated version of K&R C.


Think of it as RGB lighting in DIMM format and it makes a lot more nonsense.


I'm excited! Almost everyone here can look forward to big future payouts from gigs cleaning up emergency slop outages that take down a critical line of business.


And the odds are good that you use the models and understand them in detail, while the CEO is just buying the hype, ill-informed or not.


Well, I want to reduce the rate at which I have to intervene in the work my agents do as well. I spend more time improving how long agents can work without my input than I spend writing actual code these days.


Except, there is correlation.

You're holding the statistics wrong - the chart you're looking at is upside-down.


I have been porting an existing pub-sub system to Rust (no_std) that runs over a serial UART. The published serial protocol is very similar to this one: COBS encoding with a CRC32 checksum instead of CRC16. These docs are a great reference on backpressure for any micro and will be helpful.
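For readers unfamiliar with that framing scheme, here's a minimal Python sketch of COBS + CRC32 over a byte stream. The 0x00 frame delimiter, the little-endian CRC byte order, and appending the CRC after the payload are assumptions for illustration, not details from the published protocol mentioned above.

```python
import zlib


def cobs_encode(data: bytes) -> bytes:
    """COBS-encode data so the result contains no 0x00 bytes."""
    out = bytearray([0])  # placeholder for the first code byte
    code_idx = 0          # where the current code byte lives
    code = 1              # distance to the next zero (or block end)
    for byte in data:
        if byte == 0:
            out[code_idx] = code
            code_idx = len(out)
            out.append(0)  # new placeholder
            code = 1
        else:
            out.append(byte)
            code += 1
            if code == 0xFF:  # max run of 254 non-zero bytes
                out[code_idx] = code
                code_idx = len(out)
                out.append(0)
                code = 1
    out[code_idx] = code
    return bytes(out)


def cobs_decode(data: bytes) -> bytes:
    """Reverse cobs_encode; raises on a stray zero inside the frame."""
    out = bytearray()
    i = 0
    while i < len(data):
        code = data[i]
        if code == 0:
            raise ValueError("zero byte inside COBS frame")
        out.extend(data[i + 1:i + code])
        i += code
        # A 0xFF code means the zero was elided (max-length block).
        if code < 0xFF and i < len(data):
            out.append(0)
    return bytes(out)


def frame(payload: bytes) -> bytes:
    """Append CRC32 (little-endian, an assumption), encode, delimit."""
    crc = zlib.crc32(payload).to_bytes(4, "little")
    return cobs_encode(payload + crc) + b"\x00"


def unframe(wire: bytes) -> bytes:
    """Strip the delimiter, decode, and verify the trailing CRC32."""
    raw = cobs_decode(wire.rstrip(b"\x00"))
    payload, crc = raw[:-4], raw[-4:]
    if zlib.crc32(payload).to_bytes(4, "little") != crc:
        raise ValueError("CRC mismatch")
    return payload
```

The appeal of COBS on a micro is that the receiver can resynchronize on any 0x00 byte after a glitch, and the encoding overhead is bounded at one byte per 254 bytes of payload.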


Thanks for sharing this, that sounds really interesting.

COBS with CRC32 over UART is very close to the kind of problems I’ve been thinking about too. Glad the docs were helpful!


It works fine for webapps and other slop-adjacent projects.

If you try to do anything outside of typical n-tiered apps (e.g. implement a well documented wire protocol with several reference implementations on a microcontroller) it all falls apart very very quickly.

If the protocol is even slightly complex then the docs/reqs won't fit in the context with the code. Bootstrapping / initial bring-up of a protocol should be really easy but Claude struggles immensely.


> (e.g. implement a well documented wire protocol with several reference implementations on a microcontroller)

I have had an AI assistant reverse engineer a complex TCP protocol (three simultaneous connections, each with a different purpose, all binary) from a bunch of PCAPs and then build a working Python server that speaks that protocol to a 20-year-old Windows XP client. Granted, it took two tries: Claude Opus 4.1 (this was late September) was almost up to the task, but kept making small mistakes in its implementation that were getting annoying. So I started fresh with Codex CLI, and GPT-5.1-Codex had a working version in a couple hours. Model and tool quality can have a huge impact on this stuff.


I just vibe coded a VST. Runs a mix of realtime DSP and ML models. Really nontrivial stuff. It does exactly what I want.

Claude Opus 4.5 is truly impressive.


Care to share your output? I doubt your VST is on the same level as something released by a company like Native Instruments, Spectrasonics, etc.


That's an app (running ML), not a protocol.


I hear people report the opposite.

The sloppier a web app is, the more CSS frameworks fight for control of every pixel, and simply deleting 500,000 files to clear out your node_modules brings Windows to its knees.

On the other hand, anything you can fit in a small AVR-8 isn't very big.

Whatever you do, your mileage may vary.


Yep, but I don’t intend to let that happen to my web app! It’s not that big and I intend to keep it that way.

Dependencies are minimal. There’s no CSS framework yet and it’s a little messy, but I plan to do an audit of HTML tag usage, CSS class usage, and JSX component usage. We (the coding agent and I) will consider whether Tailwind or some other framework would help or not. I’ll ask it to write a design doc.

I’m also using Deno which helps.

Greenfield personal projects can be fun. It’s tough to talk about programming in the abstract when projects vary so much.


I've been working with an agent to make a web-based biofeedback "application" which is really a toolbox of components you can slap together to support:

  - heart rate via Polar H10
  - respiration rate via strap-on device
  - GSR and EMG via arduino + web serial
  - radar-based respiration (SOTA says you can get R-R intervals as good as the H10 if you're not moving)
and even do things like a two-player experience. The code is beautiful, pure CSS the way it was supposed to be, visualizations with D3.js. I do "npm install" and can't get over the 0 vulnerability count. It's coding with React that's 100% fun, with none of the complaints I usually have.


Given the amount of Arduino code that existed at the time LLMs were trained, I would have to agree that AVR-8 might be fine. For now it's on the Cortex-M struggle bus.


Let's not forget the myriad basic problems that still remain, like data deployment, caching/distribution, and server resilience.

There is absolutely NO reason why that PDF shouldn't load today.

