
Changing the title was a good call.

The article has a good take on the "lie" problem. We already know about the hallucination problem, which remains serious. The "lie" problem is different: if you ask an LLM why it said or did something, it has no record of how it arrived at that result. So it treats the "why" as a new query and produces a plausible-sounding explanation. Since that explanation is generated without any reference to the internals of how the previous query was actually processed, it may be totally wrong. That seems to be the type of "lie" the author is worried about in this essay.

(Yes, humans do that too.)
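
To make that mechanism concrete, here is a minimal sketch in Python. `llm()` is a hypothetical stand-in for any chat-model API call; the point is that the follow-up "why" prompt contains only the visible transcript text, none of the internal computation that produced the first answer.

  # `llm(messages)` is a hypothetical stand-in for a chat-model API call
  # that maps a conversation (list of messages) to a reply string.
  def llm(messages: list[dict]) -> str:
      # In practice this would call a real model; a canned reply keeps
      # the sketch runnable.
      return "(model reply)"

  # First query: the model's answer comes from internal computation
  # (activations, attention patterns) that is discarded after the call.
  transcript = [{"role": "user", "content": "Which database should we use?"}]
  answer = llm(transcript)
  transcript.append({"role": "assistant", "content": answer})

  # Follow-up "why" query: the only context the model receives is the
  # visible text above. It has no access to how the previous answer was
  # actually computed, so it generates a plausible rationale from
  # scratch -- which may not match the real cause at all.
  transcript.append({"role": "user", "content": "Why did you recommend that?"})
  explanation = llm(transcript)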
