• computergeek125@lemmy.world

    LLMs have a tendency to hallucinate: https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

    As someone else stated, the AI can’t reason. It doesn’t understand what a unicorn is. It can’t think “a unicorn has a single horn, so a nonexistent two-headed unicorn would have two horns”. Somewhere along the line it’ll probably mix in a deer or a moose that has two horns, because the number two statistically matches the number of horns per head.

    Last year, two lawyers in separate cases, using different LLMs, submitted hallucinated case citations. It would have been trivially simple for them to drop the case numbers into a proper legal search engine, but neither did. This is a similar issue: the LLM prioritizes what you want to hear, so it does what it’s designed to do and generates text related to your question. As in the unicorn example, it has no reasoning step that says “any legal research should be confirmed against an actual legal database before filing”, the way a human would. It’s just scribbling words on the page that look like other similar words it has seen. It can make case notes look real as heck because it has seen other case notes, but that’s all it’s doing. (Please excuse the political news story, but it’s relevant.)
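
    To make that “confirm citations” step concrete, here’s a rough Python sketch of the check a human (or a small wrapper script) could run before filing anything. The lookup_citation function is a hypothetical placeholder, not a real API, and the example citation string is only a format illustration; wire it up to whatever legal database or search service you actually have access to.

    ```python
    # Sketch only: verify every citation an LLM produces against a real source
    # before trusting it. lookup_citation() is a hypothetical stand-in for a
    # query to an actual legal database or search engine.

    def lookup_citation(citation: str) -> bool:
        """Placeholder: return True only if the citation resolves to a real case."""
        raise NotImplementedError("wire this up to a real legal search service")

    def vet_llm_citations(citations: list[str]) -> list[str]:
        """Return the citations that could NOT be confirmed, for human review."""
        return [c for c in citations if not lookup_citation(c)]

    if __name__ == "__main__":
        draft = ["123 F.3d 456 (9th Cir. 1997)"]  # made-up example, format only
        unverified = vet_llm_citations(draft)
        if unverified:
            print("Do not file; could not confirm:", unverified)
    ```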

    And it’s not limited to unicorns or case notes. I found this reddit post while researching a feature of a software package (Nextcloud) several months ago. In the post, the OP is looking for an option to pause the desktop client from the command line. Someone responds with a ChatGPT answer, which is thoroughly hallucinated: not only does such an option not appear in the documentation, there’s also an open bug report asking the devs to add the feature. Both are easy things for a reasoning human to check, but the AI just responds with what you want to hear: something that looks like documentation.
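
    That second check, searching the project’s issue tracker, is a one-request job. Here’s a minimal Python sketch using GitHub’s public issue-search API; nextcloud/desktop is the desktop client’s actual repo, but the query wording is my guess at how the feature request might be phrased, so adjust it as needed.

    ```python
    # Sketch: ask the Nextcloud desktop client's issue tracker whether a
    # "pause from the command line" feature exists or is still just a request.
    # Uses GitHub's public search API (unauthenticated, rate-limited).
    import json
    import urllib.parse
    import urllib.request

    query = "repo:nextcloud/desktop is:issue pause in:title sync"
    url = "https://api.github.com/search/issues?q=" + urllib.parse.quote(query)

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    print(f"{data['total_count']} matching issues, for example:")
    for issue in data["items"][:5]:
        print("-", issue["title"], issue["html_url"])
    ```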

    I’ve also seen ChatGPT tell my friend to use PowerShell commands that don’t exist, and he had to tell the model twice to generate something new because it kept arriving at the same conclusion.
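
    A quick way to catch that kind of hallucination is to ask the shell itself whether a suggested command exists before running anything. Here’s a small Python sketch that shells out to PowerShell’s Get-Command; it assumes a powershell binary is on the PATH (on PowerShell 7+ systems the binary may be pwsh instead), and Invoke-MagicFix is a made-up name used only to show the failure case.

    ```python
    # Sketch: before running an LLM-suggested PowerShell command, check that
    # the cmdlet actually exists. Get-Command prints details for known
    # commands and nothing (with -ErrorAction SilentlyContinue) for unknown ones.
    import subprocess

    def cmdlet_exists(name: str, shell: str = "powershell") -> bool:
        """Return True if PowerShell recognizes `name` as a command.

        Use shell="pwsh" on systems that only have PowerShell 7+.
        """
        result = subprocess.run(
            [shell, "-NoProfile", "-Command",
             f"Get-Command {name} -ErrorAction SilentlyContinue"],
            capture_output=True, text=True,
        )
        return bool(result.stdout.strip())

    if __name__ == "__main__":
        for suggested in ["Get-ChildItem", "Invoke-MagicFix"]:  # second one is made up
            status = "exists" if cmdlet_exists(suggested) else "not a real command"
            print(f"{suggested}: {status}")
    ```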