• Hoimo@ani.social
    3 days ago

    This is probably down to a lack of training data: the model is effectively drawing on only one example, and that example happened to contain a mistake.

    Even if that one example were flawless, the output of an LLM is influenced by all of its input. 99.999% of that input is irrelevant to your situation, so of course it degrades the output.

    What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge. You don’t need that haystack ground down to dust to give you a needle-shaped piece of crap with slightly more iron than average.