Hacker News

They don’t wonder. They’d happily produce entire novels of (garbage) text if trained on gibberish. They wouldn’t be confused, and they wouldn’t try to puzzle out a meaning. There is none, and they work just fine anyway. The same goes for real language: there’s no meaning, to them (there’s not really a “to” either).

The most interesting thing about LLMs is probably how much relational information turns out to be encoded in large bodies of our writing, in ways that fancy statistical methods can access. LLMs aren’t thinking, or even in the same ballpark as thinking.
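The claim that plain statistical methods can pull relational information out of text can be illustrated with a toy distributional-semantics sketch (the corpus, window size, and word choices below are my own illustrative assumptions, not from the comment): build word vectors purely from co-occurrence counts, with no notion of meaning anywhere in the pipeline, and words used in similar contexts end up with similar vectors.

```python
# Toy sketch: co-occurrence counts alone encode relational structure.
# Hypothetical mini-corpus and window size chosen for illustration.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "the car drove on the road",
]

window = 1  # count only immediately adjacent words
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                vectors[w][words[j]] += 1

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

# "cat" and "dog" share contexts, so their count vectors align more
# closely than "cat" and "car" do -- with zero semantics involved.
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["car"]))
```

On this tiny corpus, "cat" and "dog" come out more similar to each other than either is to "car", purely because they appear in the same slots. Real LLMs operate at vastly greater scale and depth, but the underlying observation is the same: the statistics of usage carry a lot of structure.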
