No, the lesson of the quote is not anthropomorphizing LLMs. It is not the LLM that "intends"; it is the people who design the systems and those who make or provide the training data. In the LLM systems used today, the RLHF process in particular is used to steer toward plausible, confident, and authoritative-sounding output, with little to no priority given to correctness or truth.