The model predicts a 99% chance that the next word is "alive" and a 1% chance that it's "dead".
The LLM calculates probabilities. How does it actually choose the word? It throws a weighted die: it literally picks one at random (albeit from a custom probability distribution).
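To make the "weighted die" concrete, here's a minimal sketch of that sampling step. The vocabulary and the 99%/1% numbers are made up to mirror the example above, not pulled from any real model:

```python
import random

# Toy illustration of next-token sampling (not any specific model's API):
# the model assigns a probability to each candidate token, then one token
# is drawn at random, weighted by those probabilities.
next_token_probs = {"alive": 0.99, "dead": 0.01}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# random.choices performs the weighted-die roll: "alive" comes up ~99%
# of the time, "dead" ~1% of the time.
chosen = random.choices(tokens, weights=weights, k=1)[0]
print(chosen)
```

Run it enough times and "dead" will eventually come out, which is the whole point: even a heavily skewed distribution still gets sampled.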
So tell me, how can you eliminate hallucinations from something that is literally designed to pick stuff at random?
Hallucinations will never be removed from these types of LLMs. Hallucinations are fundamental to how they work, in the sense that even the "good" outputs are hallucinations picked at random from a probability distribution.
Any company that says they can control hallucinations, in any way, is flat out lying.
Check what, exactly? The model already did the best it could. It can't check its own work. And if I have to find the answer myself from some other source just to confirm it, then the model is useless.
I just tried GPT-4 with "is the queen alive?" and it came back with "hold on, checking Bing", followed by "Queen Elizabeth II passed away..." That kind of thing.