• csfirecracker@lemmyf.uk · 9 months ago

    This demonstrates, in a way a layman can grasp, some of the shortcomings of LLMs as a whole, I think.

    • FaceDeer@kbin.social · 9 months ago

      It’s only a “shortcoming” if you aren’t aware of how these LLMs function and are using them for something they’re not good at (in this case, information retrieval). If instead you want the model to make stuff up, what was previously an undesirable hallucination becomes desirable creativity.
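
      To make that concrete, here’s a minimal sketch using the Hugging Face transformers library and GPT-2 purely as stand-ins (the model and settings are illustrative, not from the article): the same model, with the same weights, can be pushed toward conservative or inventive output just by changing how the next token is sampled.

      ```python
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      prompt = "The capital of Australia is"
      inputs = tokenizer(prompt, return_tensors="pt")

      # Greedy decoding: always take the most probable next token.
      # Output sticks closely to patterns seen in the training data.
      conservative = model.generate(**inputs, max_new_tokens=20, do_sample=False)

      # High-temperature sampling: flatten the distribution so less likely
      # tokens get picked more often. The same mechanism behind unwanted
      # "hallucinations" is what produces variety when you actually want it.
      creative = model.generate(
          **inputs, max_new_tokens=20, do_sample=True, temperature=1.5, top_k=50
      )

      print(tokenizer.decode(conservative[0], skip_special_tokens=True))
      print(tokenizer.decode(creative[0], skip_special_tokens=True))
      ```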

      This also helps illustrate the flaws in the “they’re just plagiarism machines” argument. LLMs come up with stuff that definitely wasn’t in their training data.