
Hallucinations can never be fixed. LLMs 'hallucinate' because that is literally the only thing they can do: produce some output given some input. The output is then measured and judged by a human, who classifies it as 'correct' or 'incorrect'. In the latter case it gets labelled a 'hallucination', as if the model did something wrong. It did nothing wrong; it worked exactly as it was programmed to.
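
A minimal toy sketch of the point, assuming nothing about any real model (the vocabulary, probabilities, and function names below are all made up for illustration): an autoregressive sampler only ever maps a context to a next-token distribution and samples from it. The loop is identical whether the result later gets judged correct or a 'hallucination'; that label is applied afterwards, from outside.

    import random

    # Hypothetical toy vocabulary, purely for illustration.
    VOCAB = ["Paris", "Lyon", "is", "the", "capital", "of", "France", "."]

    def next_token_distribution(context):
        # Stand-in for the model's forward pass: returns P(token | context).
        # A real LLM computes this with a trained network; the toy version
        # just produces arbitrary probabilities that sum to 1.
        weights = [random.random() for _ in VOCAB]
        total = sum(weights)
        return [w / total for w in weights]

    def generate(prompt, n_tokens=5):
        context = list(prompt)
        for _ in range(n_tokens):
            probs = next_token_distribution(context)
            token = random.choices(VOCAB, weights=probs, k=1)[0]
            context.append(token)
        return " ".join(context)

    output = generate(["The", "capital", "of", "France", "is"])
    # The sampling procedure above does not know or care whether `output`
    # ends up factually right or wrong; a human decides that after the fact.
    print(output)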

