
It's still far too easy to send an LLM off on a tangent of incoherent rambling, which opens you up to the LLM making written statements to customers that you really don't want.

I recently asked some LLMs "How many gallons in a mile?" and got some very verbose answers, which turned into feats of short-story writing when I refined the question to "How many gallons of milk in a mile?"



Only because the models have seemingly been trained just to generate text that matches a prompt, i.e. prompt completion, rather than to retrieve, parse, and organise knowledge.

If part of the training were to use only knowledge sourced from a vector DB, with the model's own trained knowledge allowed only for grammar, phrasing, and rewriting the retrieved information, then I think it would do a lot better.

It doesn't seem like many models are trained on prompts like "Question Q" -> "[no data] I'm sorry, but I don't know that" being accepted as an answer during training.
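The grounding idea above can be sketched in a few lines. This is a minimal toy, not a real RAG pipeline: the "embedding" is just a bag-of-words counter, the store is an in-memory list, and the names (`answer_from_store`, `SIMILARITY_THRESHOLD`) are all made up for illustration. The point is only the shape of the logic: no sufficiently similar document in the store means an explicit refusal rather than a generated guess.

```python
import math
from collections import Counter

SIMILARITY_THRESHOLD = 0.3  # below this, treat the query as "no data"

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in for the vector DB contents.
DOCS = [
    "a gallon is a unit of volume",
    "a mile is a unit of distance",
]

def answer_from_store(query):
    """Answer only from stored documents; refuse if nothing matches."""
    q = embed(query)
    best_doc, best_score = None, 0.0
    for doc in DOCS:
        score = cosine(q, embed(doc))
        if score > best_score:
            best_doc, best_score = doc, score
    if best_score < SIMILARITY_THRESHOLD:
        return "I'm sorry, but I don't know that."
    # A real system would hand best_doc to the LLM for phrasing only.
    return best_doc

print(answer_from_store("what is a gallon"))   # retrieved fact
print(answer_from_store("trip to mars today")) # refusal, no hallucination
```

In a real setup the refusal branch is where the "[no data] I'm sorry" training examples would kick in, so the model learns that an empty retrieval result is itself a valid answer.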

This would help immensely, not just for chatbots but for personal use too. I don't want my LLM assistant to invent a trip to Mars when I ask it "What do I have to do today?" and my calendar happens to be empty.


I just tried the latter with gab’s AI and it was excellent.



