An AI’s difficult relationship with the truth is called “hallucinating.” In extremely simple terms: these machines are great at discovering patterns in information, but in their attempt to extrapolate and create, they occasionally get it wrong. They effectively “hallucinate” a new reality, one that doesn’t match the real one. It’s a tricky problem, and just about everyone working on AI right now is aware of it.
One former Google researcher claimed it could be fixed within the next year (though he lamented that outcome), and Microsoft has a tool for some of its users that’s supposed to help detect hallucinations. Google’s head of Search, Liz Reid, told The Verge the company is aware of the challenge, too. “There’s a balance between creativity and factuality” with any language model, she told my colleague David Pierce. “We’re really going to skew it toward the factuality side.”
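Reid’s “balance between creativity and factuality” has a concrete analogue in how language models pick their next word. The article doesn’t describe how Google tunes this, but the standard dial in any language model is sampling temperature: scale the model’s raw scores before turning them into probabilities. The sketch below is a toy illustration, not Google’s implementation; the tokens, logit values, and the sample_with_temperature helper are all made up for demonstration.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from `logits` after temperature scaling.

    Low temperature sharpens the distribution, so the model sticks to its
    single most likely (usually more factual) continuation; high temperature
    flattens it, buying "creativity" at the cost of more room to hallucinate.
    """
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max before exp() for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Inverse-CDF sampling: walk the cumulative distribution until we pass r.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Hypothetical logits for the next token after "The capital of France is":
tokens = ["Paris", "Lyon", "Nice", "Mars"]
logits = [5.0, 1.0, 0.5, -2.0]

for temperature in (0.2, 1.0, 2.0):
    rng = random.Random(42)
    picks = [tokens[sample_with_temperature(logits, temperature, rng)]
             for _ in range(1000)]
    print(f"temperature={temperature}: {picks.count('Paris') / 10:.1f}% answered 'Paris'")
```

Run it and the low-temperature sampler answers “Paris” essentially every time, while at temperature 2.0 roughly a fifth of the samples drift to “Lyon,” “Nice,” or “Mars”: the statistical seed of a hallucination, and the reason skewing “toward the factuality side” is a trade-off rather than a fix.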
But notice how Reid said there was a balance? That’s because a lot of AI researchers don’t actually think hallucinations can be solved. A study out of the National University of Singapore suggested that hallucinations are inevitable for all large language models. Just as no person is 100 percent right all the time, neither are these computers.
Read more:
Cranz, A. (2024, May 15). We have to stop ignoring AI’s hallucination problem. The Verge. https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong