Useful context here is that the author wrote Pi, the coding agent framework used by OpenClaw and one of the most popular open-source coding agent frameworks overall.
> “Heard joke once: Man goes to doctor. Says he's depressed. Says life seems harsh and cruel. Says he feels all alone in a threatening world where what lies ahead is vague and uncertain. Doctor says, ‘Treatment is simple. Great clown Pagliacci is in town tonight. Go and see him. That should pick you up.’ Man bursts into tears. Says, ‘But doctor... I am Pagliacci.’”
That's a great shout, because I'm sure a lot of people would otherwise discredit this take as coming from just another anti-AI skeptic. But he probably has more experience working with LLMs and agents than most of us on this site, so his opinion holds more weight than most.
If you were going to dismiss an argument because of who it comes from rather than its content, that is a flaw in your thinking. The argument is correct, or it isn't, no matter who said it.
Your ability to evaluate whether the argument is correct is limited. In theory, the author and the correctness of the argument are unrelated; in practice, the author's experience with the topic does correlate with the quality of their argument, and it should influence how much attention you give that argument, especially a counterintuitive one.
Even further, not everything is a math proof, where everything has been standardized and is open (although understanding the proof is a whole other topic). Heck, take it one step lower - coding - where even though the source code is theoretically 100% transparent, claims often still aren't reproducible because of environment differences. Go one step lower to any kind of science where replication is expensive and/or hard, and one step lower still to personal experiences... And yeah, things can seem tough, can't they?
And even in the case of mathematical proofs, correctness tells you nothing about things like extensibility, taste, where future directions should go, or what it means philosophically - all of which we definitely do care about.
It's funny, because the people throwing fallacy accusations around everywhere don't realize they're using fallacies semi-selectively themselves, all while claiming a universality they don't actually practice (not that you have to, of course - I very much don't agree with that premise - but if you're the one saying it...).
Anyways, /rant. It's crazy how many people don't discuss these basic but subtle ideas. To be fair, I struggled with the exact same things when I was 15, and it doesn't seem like you get taught this kind of nuance until maybe the tail end of a rigorous bachelor's degree; personally, I only learned this stuff on my own through extensive trial and error and suffering.
That doesn't work for me. Knowing who is making the argument is important for understanding how credible the parts of their argument that derive from their personal experience are.
If someone anonymous says "Using coding agents carelessly produces junk results over time", that's a whole lot less interesting to me than hearing the same thing from someone with a proven track record of designing and implementing coding agents that other people use extensively.
> The argument is correct, or it isn't, no matter who said it.
Yes, but we all have insufficient intelligence and knowledge to fully evaluate all arguments in a reasonable timeframe.
Argument from authority is, indeed, a logical fallacy.
But that is not what is happening here. There is a huge difference between someone saying "Trust me, I'm an expert" and a third party saying "Oh, by the way, that guy has a metric shitton of relevant experience."
The former is used in lieu of a valid argument. The latter is used as a sanity check on all the things that you don't have time to verify yourself.
I think it's kind of like technical indicators: obviously they mean nothing, but because other people believe in them you have to take them into account. So when someone with authority says something assertively, many people's critical-thinking faculties go out the window.
Someone making an argument needs relevant experience/context to substantiate it. Just because the end opinion is "correct" doesn't mean they arrived at it in a reasonable way.
I think he's working at OpenAI now, so the priority would shift from an MVP that gets people excited to making it actually reliable for a billion people.