• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
    1 year ago

    And this seems like the biggest limitation for the LLM approach. The model just knows that a certain set of tokens tends to follow another set of tokens.

    It has no understanding of what the tokens represent. So it does a great job of producing sentences that look meaningful, but any actual meaning in them is purely incidental.
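The point above can be illustrated with a toy sketch (this is not how a real LLM works internally, just a minimal analogy): a bigram model that only records which token tends to follow which. It can emit grammatical-looking sequences purely from co-occurrence counts, with no representation of meaning at all.

```python
from collections import defaultdict, Counter

# Hypothetical toy corpus, purely for illustration.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which token follows which -- the only "knowledge" the model has.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def next_token(token):
    # Pick the statistically most frequent successor of `token`.
    return follows[token].most_common(1)[0][0]

# Generate a "sentence" from nothing but successor statistics.
token = "the"
out = [token]
for _ in range(5):
    token = next_token(token)
    out.append(token)

print(" ".join(out))
```

Every adjacent pair in the output is a pair the model has seen before, so the result looks like English, yet the model has no idea what a "cat" or a "mat" is. Real LLMs use vastly richer statistics over longer contexts, but the comment's point is that the relationship between tokens is still statistical rather than semantic.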