Hacker News | birken's comments

I'd really like to see a regular poll on HN that keeps track of which AI coding agents are the most popular among this community, like the TIOBE Index for programming languages.

Hard to keep up with all the changes and it would be nice to see a high level view of what people are using and how that might be shifting over time.


Not this community's opinion on agents, but I've found it helpful to check the lmarena leaderboards occasionally. Your comment prompted me to take a look for the first time in a while. Kind of surprising to see models like MiniMax 2.1 above most of the OpenAI GPTs.

https://lmarena.ai/leaderboard/code

Also, I'm not sure if it's exactly the case, but I think you can look at the throughput of the models on OpenRouter and get an idea of how fast/expensive they are.

https://openrouter.ai/minimax/minimax-m2.1


I just started something like that, haven’t shared it widely yet, but here we go - happy if you participate: https://agentic-coding-survey.pages.dev/


Add vscode. Add a list of models, since many tools allow you to select which model you use.


Thanks for the feedback. I thought there are just too many models and versions to list them all. For now, if you select "other" you get a text field to add any model not listed, hope this helps.


You should add OpenAI Codex CLI.


Thanks for the feedback, I'll do that. For now, if you select "other" you get a text field to add any model not listed.


Any chance you'll add Antigravity and Jetbrains Junie? I've been using almost nothing but those for the last month. Antigravity at home, Junie at work.


Done; by popular demand I added Antigravity, Codex CLI, and Junie.


Thanks!

> Q5. For which tasks do you use AI assistance most?

This is really tough for me. I haven't done a single one of those mostly-manually over the last month.


Just pick your favorite one and stick with it. There is no point in keeping up, since we're in an endless hype cycle where one is ranked higher than the other one week, with them eventually catching up to each other.


> ...like the TIOBE Index for programming languages.

Why would you want a list with such godawful methodology? Here's [0] what the TIOBE folks have to say about their data analysis process:

  Since there are many questions about the way the TIOBE index is assembled, a special page is devoted to its definition. Basically the calculation comes down to counting hits for the search query
  
  +"<language> programming"
The only advantage this methodology has is that it's extremely cheap for the surveyor to use.
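To make concrete how little is going on there, here's a toy sketch of a TIOBE-style ranking: each language's rating is just its share of total search-engine hits for the query `+"<language> programming"`. The hit counts below are made-up numbers, not real data.

```python
# Toy TIOBE-style index: ratings are each language's percentage
# share of total search hits for +"<language> programming".
# hit_counts is fabricated data for illustration only.
hit_counts = {
    "Python": 1_200_000,
    "C": 900_000,
    "JavaScript": 750_000,
}

total = sum(hit_counts.values())
ratings = {
    lang: round(100 * hits / total, 2)  # percentage share, like the index
    for lang, hits in hit_counts.items()
}

# Print the "index", highest rating first
for lang, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{lang}: {rating}%")
```

That's the whole trick: count hits, normalize to percentages, sort.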

[0] <https://www.tiobe.com/tiobe-index/programminglanguages_defin...>


I have an agent skill that is currently in the top 10 or so of the skills.sh directory - in terms of that audience, it's about 80% claude code.

Also 75% darwin-arm64


I personally don't want to trawl through Twitter to find the current state-of-the-art, so I read Zvi Mowshowitz's newsletter:

https://thezvi.substack.com/

His newsletter put me onto using Opus 4.5 exclusively on Dec 1, a little over a week after it was released. That's pretty good for a few minutes of reading a week.


Christ, the latest post is about dating and uses an ai generated wojak meme..


When all of industry is trying to catch up with the features of one coding agent - it may be a signal to just use that one.


Sure, let's all ditch linux and macOS as well since they're not the most popular...


Question is, are people on HN procrastinating and commenting here because the agent isn't very good and they're avoiding having to write the code themselves, or is the agent so good that it's off writing code, and the people here are commenting out of boredom?


You're making it sound like before agents existed HN was a ghost town because everyone was too busy building ImportantThingTM by hand


Oh. Surely you know this forum didn't exist pre-ChatGPT. Everything in the archives was generated so it just looks that way.


>Question is, are people on HN procrastinating and commenting here because the agent isn't very good and they're avoiding having to write the code themselves

Can you help me envision what you're saying? It's async - you will have to wait whether it's good or not. And in theory, the better it is, the more time you'd have to comment here, right?


I'm saying if it's that bad, then it's pure procrastination


People have been procrastinating on HN since the beginning of time, before coding agents existed.


Correct me if I'm wrong, but before ChatGPT, there were fewer comments about vibecoding.


What kind of headcount do you estimate $1MM/year can reliably support?

That's like ~2 engineers at FAANG.


FAANG isn't the world.

Salaries for developers are well under $150k in most of the United States, and that is for senior engineers. Most startups are paying $90k-$140k for senior devs (I haven't done the math, but from my own experience, $100k-$120k is the general sweet spot). Larger companies pay a bit more, but move beyond that and you are talking management.


I'd argue a design system used by like half the world at this point should hire the best front end engineers at a high salary and that's ok. There are people doing jack shit making more.


They were hiring about two years ago: https://tailwindcss.com/blog/hiring-a-design-engineer-and-st...

A Design Engineer and Staff Software Engineer both for $275k


Well that explains it then, don't offer stupid salaries before you make stupid money...


> Salaries for developers are well under $150k in most of the United States, for example, and that is for senior engineers

As someone who has hired hundreds of SWEs over the last 12 years from 20+ states, I have to disagree.

$150k is on the lower end for base for a Sr. SWE, and well below the total comp someone would expect. You can make the argument that $150k base is reasonable, but even Sr. SWEs in the middle of the country are looking for closer to $180k-$200k OTE.


100k per month per person is over 1 million a year.

So 2 million per year barely gets you two people.


100k per year.


I am really curious about metro areas that are paying 100-120k for senior(in the real sense) devs. Could you please share some metro areas you are familiar with?


120k USD ~= 180k AUD, which is a rate I have _definitely_ seen advertised for Seniors in Sydney + Melbourne.


I'm in Brisbane, but salaries are wildly different between US and AU. The exchange rate is not a good approximation. We don't see many US$275K (AU$410K) remote jobs [1] advertised in Australia either.

[1] https://tailwindcss.com/blog/hiring-a-design-engineer-and-st...


Most metro areas in the Midwest, I think. Certainly the ones near me, at least.


Sure. Boston, NYC, Seattle, basically any city in the US you will find senior devs being hired at that price range.

You do realize not every company pays well right?


Most “senior devs” are actually bad.


There are plenty of software firms out there (including the one I work for) whose entire budget is less than $1MM, and who have a headcount of developers that's more than 2.

Not every software company is busy writing software to target you with ads.


Lots of great engineers will work for way less than a FAANG salary as long as it means not having to work for FAANG. $1m/year still won't get you all that much though.


Lots and lots of people work for much less or for free on whatever they like.

Problem is that doing the "boring" parts of open source project maintenance is not very exciting for many top-tier developers, so it should pay at least competitively for their experience or people will just burn out.

And while you can obviously fund a team of 20 on $1M/year outside of the US, whether said team will manage to keep up the level of quality is another question.


Realistically if you can work on a small and high profile project like tailwind you're gonna be snatched up by someone willing to pay you at or near FAANG levels


That's good. We can tell people that so they will submit us patches for free.

Maybe we could even have a neat website with a leaderboard of sorts where we honor top contributors like some kind of gamification.

I think we would really need about five highly opinionated people with good technical and people skills to volunteer as paid maintainers for tailwind or any oss project to succeed.


Blender pays their developers ~ $3M/year. [0]

I'm having a very hard time believing you need one third of that to maintain a library that provides "shorter names for standard CSS." Of course I might be underestimating Tailwind a lot.

[0] https://download.blender.org/foundation/Blender-Foundation-A...

[1] But given the unit is euros in this report, I guess the solution is to not hire developers in the US.


According to that document, they spent ~€1.5M (~$1.75M USD) on developer salaries. If we count up all the people in the "Development Team" section (other than the ones paid by grant, which I excluded from the number above), there are 22 full-time developers listed. That's ~$80k (USD) per developer in all-in costs, so the actual salary is probably lower than that. US News tells us[1] that the median US developer is getting ~$132k/year. To put that into a bit of perspective, the local gas station near me is paying staff $15/hour, which is ~$30k/year.

As a side note, what the heck is with all the griping about costs in this discussion? So what if it's "just a big CSS library". Don't we want people to be paid good salaries? I swear software developers are one of the only groups of people I've ever met who actively complain about being paid too much money.

[1]: https://careers.usnews.com/best-jobs/software-developer/sala...


That is truly incredible and an ode to what can be done with a relatively small budget. You’re right that Tailwind is nowhere near Blender’s complexity… but it’s also trying to be a business and not a foundation.


Tailwind (like most things) is way more complex than it first appears.

Sure the main thing was originally 'just' mapping `.p-4` to `padding: 1rem`. But it's also about grepping the code to see if `p-4` is used so it only builds needed classes. It also needs to work with things like their responsive and state classes so `md:p-4` or `hover:p-4` add the padding only on medium or larger screens, or when hovered etc.

All of which increased to support more and more css features and arbitrary values so `not-supports-[display:grid]:p-[5px]` generates the required code to check if grid is supported and add 5px padding or whatever other values you put in the [].

You can question whether that's really a sensible idea, but it is undeniably a pretty complex challenge. Not sure it compares to Blender; I imagine that has a lot more math involved, but probably fewer edge cases and weird "displays oddly in browser X" bugs.
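To give a feel for the variant mechanics described above, here's a rough sketch (emphatically not Tailwind's real code) of how a class like `md:hover:p-4` could be expanded: peel off the variant prefixes, look up the final utility's declaration, then wrap the rule in whatever media query or pseudo-class each variant implies. The `UTILITIES` and `VARIANTS` tables are stand-ins for illustration.

```python
# Sketch of variant expansion for a Tailwind-like utility class.
# Not the actual Tailwind implementation; tables are illustrative.
UTILITIES = {"p-4": "padding: 1rem"}
VARIANTS = {
    "md": lambda rule: f"@media (min-width: 768px) {{ {rule} }}",
    "hover": None,  # handled as a pseudo-class on the selector instead
}

def expand(cls: str) -> str:
    *variants, utility = cls.split(":")
    selector = "." + cls.replace(":", "\\:")  # ':' must be escaped in CSS class selectors
    if "hover" in variants:
        selector += ":hover"
    rule = f"{selector} {{ {UTILITIES[utility]}; }}"
    for v in variants:
        wrapper = VARIANTS.get(v)
        if callable(wrapper):
            rule = wrapper(rule)  # e.g. wrap in the md: media query
    return rule

print(expand("p-4"))
print(expand("md:hover:p-4"))
```

Even this toy version has to deal with selector escaping and wrapper ordering; multiply that by arbitrary values, `not-`/`supports-` variants, and class detection via scanning source files, and the real complexity adds up.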


Or like 10 senior engineers in mid sized companies in Europe.

I wish every engineer were paid FAANG money.


One million a year would easily buy you 10 experienced full-time engineers in most of Europe.


Tailwind is not a FAANG, they are glorified frontend CSS devs


Running one of the world’s leading UI libraries is far more impactful than anything 99% of FAANG engineers have or will ever work on.


Tailwind requires a compiler to work.


Huh, FAANG salary comes at FAANG level revenue / profitability generated. That salary is not some kind of human right.


That's barely two low level faang engineers after full load.


Failwind? Alewind? Nailwind? Galewind?

I’m struggling to figure out which letter in FAANG represents Tailwind. Not sure why they need to be paying FAANG salaries.


It is difficult to say this is what consumers want, when right now consumers are getting the best of both worlds: The ease of AI agents without the long-term negative consequences of destroying the publishers who created all the high quality training data in the first place.

I think in the long term the highest quality content creators are going to find ways to keep their information out of AI training data, and put it behind walled gardens.


The AI isn't "reading the web", though; it's reading the top hits in the search results, free-riding on the access that Google/Bing are granted in exchange for sending actual user traffic to sites. Many webmasters specifically opt their pages out of the search results (via robots.txt and/or "noindex" directives) when they believe the cost/benefit of the bot traffic isn't worth the user traffic they may get from being indexed.

One of my websites that gets a decent amount of traffic has pretty close to a 1-1 ratio of Googlebot accesses compared to real user traffic referred from Google. As a webmaster I'm happy with this and continue to allow Google to access the site.

If ChatGPT is giving my website a ratio of 100 bot accesses (or more) compared to 1 actual user sent to my site, I very much should have the right to decline their access.


> If ChatGPT is giving my website a ratio of 100 bot accesses (or more) compared to 1 actual user sent to my site

are you trying to collect ad revenue from the actual users? otherwise a chatbot reading your page because it found it by searching google and then relaying the info, with a link, to the user who asked for it seems reasonable


While yes, I am attempting to collect ad revenue from users, and yes, I don't want somebody competing with me and cutting me out the loop, a large part of it is controlling my content. I'm not arguing whether the AI chatbot has the legal right to access the page, I'm not a legal scholar. What I'm saying is that the leading search engines also have the equal rights to access whatever content they want, and yet they all give webmasters the following tools:

- Ability to prevent their crawlers from accessing URLs via robots.txt

- Ability to prevent a page from being indexed on the internet (noindex tag)

- Ability to remove existing pages that you don't want indexed (webmaster tools)

- Ability to remove an entire domain from the search engine (webmaster tools)
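The first of those conventions is even machine-readable, and Python's standard library can evaluate it. A minimal sketch using `urllib.robotparser` (the robots.txt rules here are made up for illustration; "GPTBot" is OpenAI's published crawler user agent):

```python
# Check whether a crawler is allowed to fetch a URL under a site's
# robots.txt. The rules below are example content, blocking one AI
# crawler while allowing everyone else.
import urllib.robotparser

rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# A well-behaved crawler checks this before fetching anything.
print(parser.can_fetch("GPTBot", "https://example.com/article"))
print(parser.can_fetch("Googlebot", "https://example.com/article"))
```

The whole system only works if crawlers voluntarily run a check like this, which is exactly the complaint here.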

It is really impolite for the AI chatbots to go around and flout all these existing conventions because they know that webmasters would restrict their access because it's much less beneficial than it is for existing search engines.

In the long run, all this is going to lead to is more anti-bot countermeasures, more content behind logins (which can have legally binding anti-AI access restrictions) and less new original content. The victim will be all humans who aren't using a chatbot to slightly benefit the ones who are.

And again, I'm not suggesting that AI chatbots should not be allowed to load webpages, just that webmasters should be able to opt out of it.


> While yes, I am attempting to collect ad revenue from users, and yes, I don't want somebody competing with me and cutting me out the loop, a large part of it is controlling my content.

> It is really impolite for the AI chatbots to go around and flout all these existing conventions because they know that webmasters would restrict their access because it's much less beneficial than it is for existing search engines.

I agree with you about the long run effects on the internet at large, but I still don't understand the horse you have in it personally. I read you as saying (1) it's less about ad revenue than content control, but (2) content control is based on analysis of benefits, i.e. ad revenue?


Well you have no rights when you expose a server to the internet. Other than copyright of course.


> Well you have no rights when you expose a server to the internet.

Technically you don’t, but there are still laws that affect what you can legally do when accessing the web. Beyond the copyright issues that have been outlined by people a lot more qualified than me, I think you could also make the point that AI crawlers actively cause direct and indirect financial harm.


A datacenter could consume a lot of water with evaporative cooling. I don't know how prevalent it is, but given how cheap and efficient evaporative cooling is, I'd guess datacenters use it a lot where possible (probably in combination with other cooling methods).


Coincidentally I just had a professionally done garage door spring replacement today, and I asked the repairman this question, and here is what he said:

1. The springs lift the door from the bottom, and from each side, which puts less load on the door itself as compared to if the entire weight were being lifted from the top middle every time.

2. The motors can be smaller, quieter and use less power

3. In case of power failure, the door is much more functional and safer the less apparent weight it has.

Also, the springs themselves are very unlikely to be dangerous (as long as you don't try to replace them yourself), because he said they almost always break when the door is in the closed state, since that is when they are under the most tension. Therefore, on the whole, the springs in practice pose no real safety risk, while greatly increasing the safety of the door in its normal operation and reducing wear and tear on the door. They also allow people to have heavier types of doors if they want them.


And then they'd have sold at the higher value, because the sale price is almost certainly higher than whatever the 409a price was.


It might not be a 90% discount but it still will be a >50% discount


for companies raising 9 figure later-stage rounds? that's not obvious to me

and relevant to this case, often the investor will do a higher valuation (artificially minting a unicorn etc) for optics/vanity reasons, which eats an additional 1+ years of future growth, eliminating the relevance of a discount here

and for folks who may not have followed the terms above: investors get preferred shares, with rights over these discounted common shares. These include things like veto rights over acquisitions, first money out ("if $200M raised, no one else sees any $ until that $200M is paid back"), and for high-valuation unicorn rounds, often something like a participation multiple ("guaranteed extra $100M profit, so no one sees anything till $300M paid"), high interest rates on convertible debt portions, etc. So beyond the obvious dilution hit of new investors, there are a lot of these gotchas that trade a bigger bank account for heightened exit value risks to employees.
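A toy waterfall with numbers like those above (a $200M 1x preference plus a hypothetical guaranteed $100M participation kicker) shows how fast common gets squeezed. All figures are illustrative, and this is simplified: real participating preferred also shares pro rata in whatever remains.

```python
# Toy liquidation waterfall (illustrative numbers, not any real deal).
# Investors with a 1x preference get their $200M back first; a
# guaranteed participation kicker takes another $100M; only what is
# left flows to common shareholders (employees, founders).
def common_payout(exit_value_m: int, preference_m: int = 200,
                  participation_m: int = 100) -> int:
    leftover = exit_value_m - preference_m - participation_m
    return max(leftover, 0)  # common never goes negative, just to zero

for exit_m in (250, 300, 500):
    print(f"${exit_m}M exit -> ${common_payout(exit_m)}M left for common")
```

So even a $300M exit on a round like that can leave literally nothing for the discounted common shares.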


The people who come up with 409a prices have every incentive to make it as low as possible provided it is somewhat defensible to the IRS.

I assure you they can get more creative than saying that the last preferred price was at $X, therefore our hands are tied and the common must be close to that. They can take into consideration the preferred preferences, the current state of the business, the time since the last round, etc. For example, the 409a value can keep going down and down if the value of the business is (defensibly) going down and down, regardless of the last fundraising round.


This is a thing I'd love to see data for - the strike price discount at time of acquisition for later-stage companies. These same companies in that megaround companies probably have stock on secondary markets, which might be a good proxy for some of this.

And totally agree wrt creative arguments being viable... Just not clear what ends up happening in practice. Ex: I can imagine a split between paper unicorns vs ones w revenue backing it up being closer to market, and those later ones often switching to RSUs. So genuine curiosity here.


And how did that go?


Self driving electric cars are much better than car shares as they'll have significantly higher usage rates and eliminate much of the need for parking spots in dense downtown areas.

If you believe car shares are part of an efficient transportation future then self-driving cars are part of it!


Didn’t Uber promise the same thing and end up just adding to traffic with all the deadhead trips?

