I use AIs for coding with moderate success, but the more I work with them, the more convinced I am that "intelligence on tap" is a pipe dream, especially in domains where logical thinking in novel (i.e., not-in-dataset) contexts is required.
Recently, I tasked one with studying a new Czech building-permit law in conjunction with some waste-disposal regulations, and the result was just tragic. The model (opus 4.6) simply could not stop drawing conclusions from obsolete regulations in its training data, even when given the full text of the new law. The usual "you are totally right" also applied, and its conclusions were, most of the time, obviously wrong even to a human with cursory knowledge of the subject.
I ended up studying the relevant regulations myself over the weekend.
I wonder what percentage of the job space truly depends on the current edge we have over machines.
I think it's reasonable to worry that, well before machines are more reliable than the average human (let alone a highly trained one), they can pose a significant disruption to the job market, which will send shockwaves throughout society.
That is why we need functioning states -- free markets won't save you in such a case. Though I have found this hard to explain, especially to people in the U.S., who put "regulation" on par with f-words :)
"The model (opus 4.6) just could not stop drawing conclusions from obsolete regulations in its training dataset"
To be fair, humans are also often like this. If some rule/law/model was deeply ingrained into them, they often cannot stop thinking in terms of that rule, even if they are clearly in a new context (like a new country).
But that is pretty much the same rule, just with the numbers slightly adjusted. What do you think would happen if they switched traffic from the right lane to the left?
Heh, that would surely be funny :) But most people at least know there is a new permit law, and if they are not sure, they know to seek expert guidance. The model, even with explicit notification, is unable to reflect on this fact. How is it supposed to be useful then?
Oh, most people would know in theory, for sure, but once they start driving, habit kicks in and they end up in the wrong lane pretty quickly.
At least that is what happened to me in Australia, and I only had a year of driving practice back then, but driving on the right side was already deeply ingrained and I had to be really conscious of what I was doing.
But to be clear, I am not arguing models have real understanding of anything -- I know they don't. My point is that humans can be similar in pretending to have understood something: if their core model was shaped differently, they will quickly fall back into old patterns.
Funny story: when I was younger, I trained a basic text-predictor deep learning model on all my conversations in a group chat I was in. It was surprisingly good at sounding like me, and sometimes I'd use it to generate text to submit to the chat.
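For anyone curious about the shape of the thing: a minimal sketch of the idea, using a plain word-level Markov chain instead of the deep model I actually trained (the real one was a neural net, but the "learn next-token statistics from chat logs, then sample" loop is the same):

```python
import random
from collections import defaultdict

def train(messages, order=2):
    """Build a word-level next-token table from chat messages.

    Keys are tuples of `order` consecutive words; values are lists of
    words observed to follow them (duplicates preserved, so sampling
    is frequency-weighted)."""
    model = defaultdict(list)
    for msg in messages:
        words = msg.split()
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, seed, length=20, rng=None):
    """Extend `seed` (a tuple of words) by sampling from the table."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        choices = model.get(tuple(out[-len(seed):]))
        if not choices:  # dead end: no continuation ever observed
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Toy "chat log" standing in for the real group-chat export:
chat = [
    "i think the build is broken again",
    "i think the tests are flaky today",
    "i think the build needs a clean rebuild",
]
model = train(chat)
print(generate(model, ("i", "think")))
```

Even this toy version produces sentences that sound vaguely like the corpus; the neural version just generalized better from the same data.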
I don't see what the value of this would be. Why would I want to automate talking to my friends? If I'm not interested in talking with them, I could simply not do it. It also carries the risk of not actually knowing what was talked about or said, which could come up in real life and lead to issues. If a "friend" started using a bot to talk to me, they would no longer be considered a friend. That would be the end.
I think you underestimate how many people already run their opinions and responses through LLMs, even if the LLM is not writing them wholesale. Intelligence is part of the social game, so appearing to have it matters to people. Friend groups are just social groups of a certain kind; they're not really removed from all this.
It was for fun, to see if it was possible and whether others could detect they were talking to a bot, you know, the hacker ethos and all. It's not meant to be taken seriously, although it looks like these days people unironically have LLM "friends."
About two years ago, I made up a reference to a nonexistent Python library and put code "using" it in just 5 GitHub repos. Several months later, the free ChatGPT picked it up. So IMO it works.
Real discussions with friends happen in group chats, without all the crap and noise.