"none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be"
Going big-name doesn't even help you here. It's the same story with Nvidia's Jetson platforms; they show up, then within 2-3 years they're abandonware, trapped on an ancient kernel and EOL Ubuntu distro.
You can't build a product on this kind of support timeline.
For what it’s worth, Jetson at least has documentation, forward-ported/maintained patches, and some effort toward upstreaming. With only moderate effort and no extensive non-OEM source modification, it’s possible to have an Orin NX running an OpenEmbedded-based system using the OE4T recipes and a modern kernel, for example — something that isn’t really possible on most random no-name SBCs.
Yup, I work a lot with Jetsons, and having the Orin NX stuck on 22.04 is quite limiting sometimes, even for the most basic things. I got a random USB Wi-Fi dongle for it, and nope! Not supported in kernel 5.15; now have fun figuring out what to do with it.
> I want to pick up and move to another harness and/or model with minimal fuss. Buying in to things like this would make that much harder.
Yes, I expect that is very much the point here. A bunch of product guys got on a whiteboard and said: okay, the thing is in wide use, but the main moat is that our competitors are even more distrusted in the market than we are; other than that it's completely undifferentiated and can be swapped out in a heartbeat for multiple other offerings. How do we persuade our investors we have a locked-in customer base that won't just up stakes in favour of other options, or just run open source models themselves?
I think they really kneecapped themselves when they released Claude for GitHub integrations, which allows anyone to use their Claude subscription to run Claude Code in GitHub Actions for code reviews and arbitrary prompts. Now they’re trying to backtrack on that with a cloud solution.
I've been running an RPi-based torrent client 24/7 in several countries and never experienced that. It eats a few TBs per month — not the full line rate, but a pretty decent amount. I guess it really depends on the country.
I’ve used Spectrum and their predecessors since the 90s. Never ran into this, although the upstream speeds are ridiculously slow, and they used to force Netflix traffic onto an undersized peering circuit.
I'm unsure if you're being sarcastic or not. I've never used an ISP that would throttle you, for any reason; this is unheard of in the countries I've lived in, and I'm not sure many people would even subscribe to something like that. It sounds like the reverse of how a typical at-home broadband connection works.
Of course, in countries where the internet isn't as developed as in other parts of the world, this might make sense, but modern countries don't tend to do that, at least in my experience.
I think most are familiar with throttling because most (all?) phone plans have some data cap at some point, but I don't think I've heard of any broadband connections here with data caps; that wouldn't make any sense.
Data caps are just documenting the reality that ISPs oversubscribe: if they sell a hundred 1 Gb/s connections to a neighborhood, it's highly unlikely they're peering that neighborhood onto the Internet at large at 100 Gb/s. I don't know what the current standard is, but in the past it's been anywhere from 10-to-1 to 100-to-1, so a hundred 1 Gb/s connections might be sharing 1-10 Gb/s of uplink. If usage starts to saturate that, they need a way of backing off that is "fair", and data caps are one of the ways they inform the customer of such.
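The back-of-the-envelope arithmetic above is easy to sketch (illustrative numbers only, not any real ISP's figures):

```shell
# Oversubscription math for a hypothetical neighborhood: 100 customers
# each sold 1 Gb/s, sharing a 10 Gb/s uplink.
SUBSCRIBERS=100
SOLD_MBPS=1000      # each customer's plan, in Mb/s
UPLINK_MBPS=10000   # shared uplink, in Mb/s

RATIO=$(( SUBSCRIBERS * SOLD_MBPS / UPLINK_MBPS ))
FAIR_SHARE=$(( UPLINK_MBPS / SUBSCRIBERS ))  # if everyone maxed out at once

echo "oversubscription ratio: ${RATIO}:1"       # 10:1
echo "worst-case fair share: ${FAIR_SHARE} Mb/s" # 100 Mb/s
```

At a 100-to-1 ratio the same math gives each customer a worst-case 10 Mb/s, which is why caps or throttling show up once usage grows.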
I've seen it with my new fiber rollout: every single customer, no matter their purchased speed, got 1 Gb up and down. As more customers came online and usage grew, they're not limiting anyone, but you get closer to your advertised rate. My upload is still faster than my download, though, because most of my neighborhood is downloading and few are uploading.
I have 5 Gbps symmetric at home. I and my fiancee both work from home, so our backup fiber connection from another provider is 2 Gbps. We can also both tether to cell phones if necessary. We can get 5G home wireless Internet here, too, and we might ditch our 2 Gbps line in favor of that as a backup. We moved from Texas back home to Illinois last year, and one of the biggest considerations was who had service at what tiers due to remote work. Some of the houses we looked at in the same three-county area in the Chicago suburbs didn’t even have 5G home available (not from AT&T, Verizon, or T-Mobile anyway).
My parents have 5G wireless home as their primary connection, and that was only introduced in their area a couple of years ago. Before that, they could get dial-up, 512 kbps wireless with about a $1000 startup cost, ISDN (although the phone company really didn’t want to sell it to them), Starlink, or HughesNet. The folks across the asphalt road from them had 20 Mbps Ethernet over power lines years ago, and that’s now I think 250 Mbps. It’s a different power company, though, so they aren’t eligible.
Around 80% of the US population lives in large urban areas. The other 20% of the population range from smaller towns to living many kilometers from any town at all. There’s a lot of land in the US.
Here in dense NYC, most apartments I've lived in have but a single ISP available. It's common to hunt for apartments by searching the address on service maps.
I'm pretty sure one landlord was cut in by his ISP, as he skipped town when I tried to ask about getting fiber, and his office locked their door and drew their shades when I went there with a technician on two occasions. The final time, we got there before they opened and the woman ran into the office and slammed the door on us.
It’s pretty common for apartments to have a single provider, especially high-rises or buildings built before broadband was common. It’s unfortunate, but the cost of running wiring for multiple providers through old buildings can be prohibitive. The providers won’t pay to install it for a single unit, and other tenants might not like the disruption if they’re not going to use the new service. If you get a big enough block of tenants to pre-sign, then it becomes a conversation more worth having for the provider and the landlords.
In codebases where PRs are squashed on merge, the commit messages on the main branch end up being the PR description text, and since that's actually reviewed, it tends to be much better, I find.
And in every codebase I've been in charge of, each PR has one or more issue numbers linked which describe every possible agonizing detail behind that work.
I understand this isn't in line with traditional git SCM, but it's a very powerful workflow if you're OK with some hybridization.
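For anyone unfamiliar, here's a throwaway-repo sketch of what GitHub's "Squash and merge" does in plain git. Branch names, file contents, and the "#123" reference are all made up for illustration:

```shell
# Set up a disposable repo (so this runs anywhere).
cd "$(mktemp -d)"
git init -q -b main
git config user.email dev@example.com
git config user.name dev

echo "v1" > client.txt
git add client.txt
git commit -qm "initial commit"

# A feature branch full of messy WIP commits.
git checkout -qb feature/add-retry
echo "retry" >> client.txt && git commit -qam "wip"
echo "backoff" >> client.txt && git commit -qam "afkifrj"

# Squash the whole branch into one staged change on main...
git checkout -q main
git merge --squash feature/add-retry >/dev/null
# ...and commit it once. With GitHub's button, the reviewed PR
# title/body become this message instead.
git commit -qm "Add retry logic to the HTTP client (#123)"

git log --oneline   # main now has just two commits: initial + squash
```

The WIP noise stays on the (eventually deleted) feature branch; main only ever sees the curated message.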
It works until you migrate to a new issue tracker. In 5 years we've been through 3 of them; I saw that at FAANG and startups alike. Someone might dump the contents to JSON or just PDF for archival, but it's much easier to have the commit message carry the relevant info. Only the relevant parts: lots of small details can stay on the issue, and if someone really needs them they can search those archives.
I personally find this to be a substantially better pattern. That squashed commit also becomes the entire changeset - so from a code archeology perspective it becomes much easier to understand what and why. Especially if you have a team culture that values specific PRs that don’t include unrelated changes. I also find it thoroughly helpful to be able to see the PR discussions since the link is in the commit message.
I agree, much as it's a loss for git as a distributed system (though I think that ship sailed long ago regardless). As far as unrelated changes, I've been finding that another good LLM use-case. Like hey Claude pull this PR <link> and break it up into three new branches that individually incorporate changes A, B, and C, and will cleanly merge in that order.
One minor nuisance that can come up with GitHub in particular when using a short reference like #123 is that that link breaks if the repo is forked or merged into something else. For that reason, I try to give full-url references at least when I'm manually inserting them, and I wish GitHub would do the same. Or perhaps add some hidden metadata to the merge commit saying "hey btw that #123 refers to <https://github.com/fancy-org/omg-project/issues/123>"
Yep - we do exactly the same with Claude. In fact, part of our PR review automation with Claude includes checking whether the PR is tightly scoped or should be split apart. I’d say the review bot’s assessment that a PR should be broken up is accurate in about 80% of cases. It’s optional feedback but useful, especially when we get contributors outside our immediate team who maybe don’t know our PR norms and the kinds of things we typically aim for.
Yeah I usually default to just a straight up link or a markdown link. Mostly because I usually don’t know the exact number of a PR/ticket/issue - so it’s easy to just copy the URL once I’ve found it.
I've seen it be the concatenated individual git commit messages way too often. Just a full screen scroll of "my hands are writing letters" and "afkifrj". Still better than if we had those commits individually of course, but dear god.
The gold standard is rebased linear unsquashed history with literary commits, but I'll take merged and squashed PR commits with sensible commit messages without complaint.
I have a Samsung ML-1740 kicking around still that I just can't bear to part with; I've been meaning forever to RasPi-ify it, but it's one of those projects that feels like it's going to end up being a rabbit hole.
>but it's one of those projects that feels like it's going to end up being a rabbit hole.
I know the feeling. I started https://printserver.ink because I wanted to buy a retail print server and couldn't find one. I was expecting about half a year of work for everything, but I've been fixing printer-related bugs for 3 years already.
Ah nice. If I supply my own raspi zero 2W, can I buy just a software license from you? Doing the software integration is the part I'm least looking forward to. :(
Unfortunately the firmware supports only the Orange Pi Zero3. It's a hardware+software solution, because to make a reliable print server you have to control both and fix SoC/board-specific bugs.
Greg does not accept my patch and does not reply to me, presumably because I'm Russian and he doesn't want legal consequences.
Raspberry Pi unfortunately has quite a list of USB hub bugs and Wi-Fi module bugs (according to their GitHub and reports all over the Internet), and I'm not familiar with them and don't know how to fix them. That's why I don't make generic firmware or container images, and stick only to my board, which I know from start to end.
Well, my 9070 XT made the list; I've been quite happy with it — great performance without paying the Nvidia tax.
RIP my Radeon 7500 from high school though; that was always a budget card, and we all had them but wanted the 9700. Couldn't beat the box art from that era though: https://www.ebay.com/itm/206159283550
Disconcerting for sure, but from a business point of view you can understand where they're at; afaiui they're still losing money on basically every query and simultaneously under huge pressure to show that they can (a) deliver this product sustainably at (b) a price point that will be affordable to basically everyone (eg, similar market penetration to smartphones).
The constraints of (b) limit them from raising the price, so that means meeting (a) by making it worse, and maybe eventually doing a price discrimination play with premium tiers that are faster and smarter for 10x the cost. But anything done now that erodes the market's trust in their delivery makes that eventual premium tier a harder sell.
Yeah. I've been enjoying programming with Claude so much I started feeling the need to upgrade to Max. Then it turns out even big companies paying API premiums are getting an intentionally degraded and inferior model. I don't want to pay for Opus if I can't trust what it says.
This could also be a marketing strategy. Make your models perform worse towards the end of a model's cycle, so that the next model appears as if more progress has been made than there actually has been.
I really wonder about this. Is it so bad that they cannot even disclose it? Not even an optimistic lie in the ballpark of reality? It's not like they haven't been caught cooking the truth repeatedly.
I look at the output of Kimi and the costs of running inference on it that I can replicate, and it isn't that bad, although admittedly I don't have to worry anywhere near as much about scaling it, or about having to dedicate large amounts of compute to research and distillation on the back end. It's true that it's perhaps a step behind SotA vs January's Opus or current Codex, depending on what you do. But not by a lot. In fact it's leaps and bounds superior to the current subscription API experience. Together with GLM, Qwen, and Minimax, they are an amazing backstop just the way they are right now.
With all the layers of obfuscation it's hard to even know roughly how many input/output Opus tokens a Claude subscription pays for. They'll give you flippant arguments like "people were not looking at thinking so we're not showing it anymore" with a straight face. Yet podcasts still insist Anthropic are "winning the AI war" (??). It really makes me wonder, because by no metric can I see them providing either the best value or the best quality, and let's not even get started on the consumer experience.
My intuition is that things must be really bad so they're willing to pull the kind of moves they're pulling right now. They're speedrunning people into understanding how important it is to be able to run your own generative AI infrastructure for reliability, thus becoming a very fancy but trustless throwaway solution factory.
I wonder if OpenAI will turn the screws similarly if/when their pockets start to dry up at a certain pace.
tl;dr: they are trying hard to change S&P 500 inclusion rules so that they don't have to wait 12 months after going public, letting them list a mega-IPO ASAP and force index funds to buy a portion (presumably before exponential revenue growth settles and profits start tanking due to open source catching up). They know something that we don't.
btw, if they are public and part of the S&P 500, then potentially they'll be a candidate for a bailout.