AWS has similar RAM consumption. When I need to open more than one browser tab with AWS in the work VM, I close Signal first to make sure it doesn't crash and corrupt the message history. I think after you click through a few pages, one AWS tab was something like 1.4GB. (Edit: found it in my message history; yes, it was "20% of 7GB" = 1.4GB precisely.)
Does anyone else have the feeling they run into this sort of thing more often of late? Simple pages with just text on it that take gigabytes (AWS), or pages that look simple but it takes your browser everything it has to render it at what looks like 22 fps? (Reddit's new UI and various blogs I've come across.) Or the page runs smoothly but your CPU lifts off while the tab is in the foreground? (e.g. DeepL's translator)
Every time, I wonder if they had an LLM try to get some new feature or bugfix working and it made poor choices performance-wise, but it passes the unit tests, so the LLM thinks it's done, and it also looks good visually on their epic developer machines.
I think a big problem is that many web frameworks let you write these kinds of complex apps that just "work", but performance is often not part of the equation, so everything looks fine during basic testing but scales really badly.
Like, for example, the Claude/OpenAI web UIs: at first they would literally lag terribly because they used naive update mechanisms that re-rendered the entire conversation history every time the new response text was updated.
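A minimal sketch of the fix in React (component names are hypothetical; the real apps' code isn't public): if each streamed token updates state the whole list re-renders, but memoizing the history rows confines the work to the message actually being streamed, assuming the historical message objects keep referential identity.

```tsx
import { memo } from "react";

type Message = { id: number; text: string };

// Without memo, every parent re-render (one per streamed token)
// would re-render every historical message as well.
const MessageRow = memo(function MessageRow({ msg }: { msg: Message }) {
  return <li>{msg.text}</li>;
});

function Conversation({ history, draft }: { history: Message[]; draft: string }) {
  return (
    <ul>
      {history.map((m) => (
        // Stable object references let memo skip unchanged rows.
        <MessageRow key={m.id} msg={m} />
      ))}
      {/* Only this node's text changes as tokens stream in. */}
      <li>{draft}</li>
    </ul>
  );
}

export { Conversation };
```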
and with those console UIs, one thing that might be happening is that it's basically multiple webapps layered (per team/component/product) and they all load the same stuff multiple times etc...
The Grok Android app is terrible in that sense. Just typing a question at normal speed will make half of the characters not appear, due to whatever unoptimized shit the app does after each keystroke.
Sounds quite overengineered. CEOs have basically no idea what they're doing these days. If this were my company, I'd start by cutting 80% of staff and 80% of the code bloat.
The "very often" part is wild to me. You'd think being an engineer himself[0] he'd fix the root cause: the testing process, not work as an IC QA himself.
[0] He holds the title of Chief Engineer at SpaceX.
it's unironically just react lmao, virtually every popular react app has an insane number of accidental rerenders triggered by virtually everything, causing it to lag a lot
Vue uses signals for reactivity now and has for years. Alien Signals was created by a Vue contributor. Vue 3.6 (now in alpha/beta?) will ship a version that is essentially a Vue-flavored Svelte, with extremely fine-grained reactivity based on a custom compiler step.
One of the reasons Vue has such a loyal community is because the framework continues to improve performance without forcing you to adopt new syntax every 18 months because the framework authors got bored.
The React paradigm is just error-prone. It's not necessarily about how much you spend: well-paid engineers can still make mistakes that cause unnecessary re-renders.
If you look at older desktop GUI frameworks designed in a performance-oriented era, none of them use the React paradigm; they use property binding. A good example of getting this right is JavaFX, which lets you build up functional pipelines that map data to UI, but in a way that ensures only what's genuinely changed gets recomputed. Dependencies between properties are tracked explicitly. It's very hard to put the UI into a loop.
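A toy sketch of that explicit-dependency style in TypeScript (the Property/map names are made up for illustration, not JavaFX's or Vue's actual API): updates propagate only along declared edges, so nothing unrelated ever recomputes.

```ts
// Toy property-binding system with explicit dependency tracking:
// only values whose declared inputs changed get recomputed.
type Listener = () => void;

class Property<T> {
  private listeners: Listener[] = [];
  constructor(private value: T) {}
  get(): T {
    return this.value;
  }
  set(next: T): void {
    if (Object.is(next, this.value)) return; // no-op writes don't propagate
    this.value = next;
    this.listeners.forEach((l) => l());
  }
  subscribe(l: Listener): void {
    this.listeners.push(l);
  }
}

// map() builds a derived property driven only by its declared
// source, so unrelated state changes can't retrigger it.
function map<A, B>(src: Property<A>, fn: (a: A) => B): Property<B> {
  const out = new Property(fn(src.get()));
  src.subscribe(() => out.set(fn(src.get())));
  return out;
}

const firstName = new Property("Ada");
const greeting = map(firstName, (n) => `Hello, ${n}!`);
greeting.subscribe(() => console.log(greeting.get()));
firstName.set("Grace"); // logs "Hello, Grace!"; nothing else recomputes
```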
Property binding and proxies really didn't work well in JS at all until relatively recently, and even then, there's actually a much worse history of state-management bugs in apps that use those patterns. I've yet to use an Angular 1.x app, or even most modern Angular apps, without bugs resulting from improper state changes.
While more difficult, I think the unidirectional workflows of Redux/Flux patterns, when well managed, tend to work much better in that regard, but then you do suffer from the potential for redraws... That isn't the core of the DOM overhead, though; that usually comes down to deeply nested node structures combined with complex CSS and more than modest use of oversized images.
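For contrast, a minimal sketch of the unidirectional idea (plain TypeScript, no actual Redux; names are illustrative): every state change funnels through one pure reducer, which makes improper state changes much easier to audit.

```ts
// Minimal unidirectional store: state changes only via actions
// dispatched through one pure reducer, the single write path.
type State = { count: number };
type Action = { type: "increment" } | { type: "reset" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment":
      return { ...state, count: state.count + 1 };
    case "reset":
      return { count: 0 };
  }
}

function createStore(initial: State) {
  let state = initial;
  return {
    getState: () => state,
    dispatch(action: Action) {
      state = reducer(state, action); // the only place state mutates
    },
  };
}

const store = createStore({ count: 0 });
store.dispatch({ type: "increment" });
console.log(store.getState().count); // 1
```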
Yes, they do. OGs remember that Facebook circa 2012 had navigation take like 5-10 seconds.
Ben Horowitz recalled asking Zuck what his engineer onboarding process was, after Zuck complained to him about how long it took them to make changes to the code. He basically didn't have one.
If it's true that nobody is getting promoted for improving web app performance, that seems like an opportunity. Build an org that rewards web app performance gains, and (in theory) enjoy more users and more money.
They have no real competitors, so anything that makes the user even stickier and more likely to spend money (LinkedIn Premium or whatever LinkedIn sells to businesses) takes priority over any improvements.
It's a bit unfortunate that several .setHTML() calls can't be batched and executed together to minimize page redraws.
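You can approximate the batching by hand, though. A sketch using standard DOM APIs (the setHtmlBatched helper is hypothetical): buffer the writes and flush them in a single requestAnimationFrame callback so the browser does one layout/paint per frame.

```ts
// Buffer DOM writes and flush them together once per animation
// frame, instead of paying for layout after every call.
const pending = new Map<HTMLElement, string>();
let scheduled = false;

function setHtmlBatched(el: HTMLElement, html: string): void {
  pending.set(el, html); // later writes to the same element win
  if (!scheduled) {
    scheduled = true;
    requestAnimationFrame(() => {
      pending.forEach((markup, target) => {
        target.innerHTML = markup; // or target.setHTML(markup) where supported
      });
      pending.clear();
      scheduled = false;
    });
  }
}
```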
Well, they've started firing their lowest-tier devs, who churn a lot... combined with mass layoffs... and on the higher end, they're more interested in devs who memorized all the LeetCode challenges than in experienced devs/engineers with a history of delivering solid, well-performing applications.
Narcissism rises to the top; excess "enterprise" bloat seeps in at every level; too many sub-projects are disconnected in ways that are hard to "own" as a whole; and perverse incentives favor adding features over improving the user experience.
I think LinkedIn is built with Ember.js, not React, last I checked…
The problem with performance in web apps is often not "omg, too much rendering". It's actually processing and memory use. Chromium loves to eat as much RAM as possible, and the state-management world of web apps loves immutability. What happens when you create new state any time something changes, and V8 then needs to recompile an optimized structure for that state, coupled with thrashing the GC? You already know.
I hate the immutability trend in web apps. I get it, but the performance is dogshite. Most web apps I've worked on spend about 10% of their CPU time... garbage collecting, and the rest doing complicated deep state comparisons every time you hover over a button.
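For illustration, a sketch of where that time goes (names and numbers are made up): each event allocates a fresh state object, and the binding layer diffs old vs. new on every update, even for something as trivial as a hover.

```ts
// Immutable-update churn: every event allocates a new state object,
// and the binding layer then diffs old vs. new to see what changed.
// At 60 hover events/second that's 60 short-lived objects plus 60
// diffs for the GC and CPU to chew on.
type AppState = { items: number[]; hoveredId: number | null };

function setHovered(s: AppState, id: number | null): AppState {
  return { ...s, hoveredId: id }; // shallow copy: fresh object per event
}

function changedKeys(prev: AppState, next: AppState): string[] {
  // Naive diff of the kind many bindings run on every update.
  return (Object.keys(next) as (keyof AppState)[])
    .filter((k) => !Object.is(prev[k], next[k]))
    .map(String);
}

let state: AppState = { items: [1, 2, 3], hoveredId: null };
for (let frame = 0; frame < 60; frame++) {
  const next = setHovered(state, frame % 5);
  changedKeys(state, next); // work done per event, discarded after
  state = next; // previous state is now garbage
}
```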
I was researching laptops at BestBuy and every page took ages to load, was choppy when scrolling, caused my iPhone 13 mini to get uncomfortably hot in my hand and drained my battery fast. It wouldn’t be noticeably different if they were crypto-mining on my iPhone as I browsed their inventory.
Best Buy is actually one of the worst and slowest websites from any large retailer. I cannot believe how bad it is. It's like they set out to make it pretty and accidentally stepped in molasses.
The irony! My router died literally an hour ago, and I was on Best Buy's site to buy a new one, over a 5G connection. That was probably the worst shopping experience I've had in a while...
> Does anyone else have the feeling they run into this sort of thing more often of late? Simple pages with just text on it that take gigabytes (AWS), or pages that look simple but it takes your browser everything it has to render it at what looks like 22 fps?
It has to do with websites essentially baking in their own browser, written in JavaScript, to track as much user behavior as possible.
Spot on. It's why I quit adtech in 2015. Running realtime auctions server-side is one thing, but building what basically amounts to live-feed screen capture...
I do live-feed screen capture, and it doesn't really consume much and is barely noticeable. Running 100 live-feed screen captures is a different story, though.
My company started using Slack in 2015, and at that time I filed a bug report with Slack that their desktop app was using more memory than my IDE on a 1M+ LOC C++ project. I used to stop Slack to compile…
No, compared to everything else in those apps. I.e., if they are writing extremely bloated Electron apps, why would the native version be less slow and bloated? I mean, Electron's overhead is mostly fixed (it's still a lot, but it's possible to keep memory usage well below 1 GB, or even 500 MB, even for more complex applications).
A native app that compiles to machine code and uses shared system libraries is by definition going to take less memory and fewer resources than code + web browser + JavaScript VM + memory to keep the JIT'd bytecode.
Write a “Hello World” in C++ that pops up a dialog box compared to writing the same as an Electron app.
Yes, exactly, that's what I said. There is significant overhead but is it the only or the main reason why these apps are so slow and inefficient? It's perfectly easy to write slow and inefficient code in C++ as well...
Hit this exact wall with desktop wrappers. I was shipping an 800MB Electron binary just to orchestrate a local video processing pipeline.
Moved the backend to Tauri v2 and decoupled heavy dependencies (like ffmpeg) so they hydrate via Rust at launch. The macOS payload dropped to 30MB, and idle RAM settled under 80MB.
Skipping the default Chromium bundle saves an absurd amount of overhead.
I noticed that there's a developing trend of "who manages to use the most CSS filters" among web developers, and it was there even before LLMs. Now that most of the web is slop in one form or another, and LLMs seem to have been trained on the worst of the worst, every other website uses an obscene amount of CSS backdrop-filter blur, which slows down software renderers and systems with older GPUs to a crawl.
When it comes to DeepL specifically, I once opened their main page and left my laptop for an hour, only to come back to it being steaming hot. Turns out there's a video around the bottom of the page (the "DeepL AI Labs" section) that got stuck in a SEEKING state, repeatedly triggering a pile of NextJS/React crap which would seek the video back, causing the SEEKING event and thus itself to be triggered again.
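For what it's worth, a hypothetical reconstruction of that bug shape (not DeepL's actual code): a handler that reacts to "seeking" by seeking again retriggers its own event forever, so the page never goes idle.

```ts
// Self-triggering event loop: assigning currentTime runs the seek
// algorithm, which fires another "seeking" event, which re-enters
// this handler indefinitely.
const video = document.querySelector<HTMLVideoElement>("video")!;
const loopStart = 0; // hypothetical "snap back to start" behavior

video.addEventListener("seeking", () => {
  video.currentTime = loopStart; // fires "seeking" again: infinite loop
});
```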
I wish Google would add client-side resource use to Web Vitals and start demoting poorly performing pages. I'm afraid this isn't going to change otherwise; with the first complaints dating back to the mid-2010s, browsers and Electron apps hogging RAM are far from new, and yet web developers have only been getting increasingly disconnected from reality.
Ah, now I understand your question (and see others already answered). Yeah, I realized that possible confusion after writing it, but hoped it was clear enough after editing in the bit about this AWS problem being in a browser tab. You may have seen the initial version, or it may still have been too confusing. Whoops
The official name is the AWS management console. Or just the console.
The ‘dashboard’, the ‘interface’? Reminds me of coworkers who used to refer to desktop PC cases as the hard drive, or people who refer to the web as ‘Google’.
If you're talking about the AWS management UI, I haven't used it recently but can tell you that the Azure one is no better. One of the stupidest things I remember is that it somehow managed to reimplement a file upload form for one of their storage services such that it will attempt to read the whole file into memory before sending it to the server. For a storage service meant for very large files (dozens of gigabytes or more).
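A sketch of both versions with standard web APIs (the endpoint URL and query parameter are made up): the buffered variant pulls the whole file into RAM before sending, while the chunked variant never holds more than one slice at a time.

```ts
// The anti-pattern: read the entire file into memory first.
async function uploadBuffered(file: File): Promise<void> {
  const bytes = await file.arrayBuffer(); // dozens of GB resident in RAM
  await fetch("https://example.com/upload", { method: "PUT", body: bytes });
}

// Streaming alternative: upload fixed-size Blob slices; slice()
// doesn't read file contents, so memory stays bounded per chunk.
async function uploadChunked(file: File, chunkSize = 8 * 1024 * 1024): Promise<void> {
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    const chunk = file.slice(offset, offset + chunkSize);
    await fetch(`https://example.com/upload?offset=${offset}`, {
      method: "PUT",
      body: chunk, // browser streams the slice from disk
    });
  }
}
```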