• lepinkainen@lemmy.world
    11 days ago

    Nope. There are studies with vector databases showing that even the language doesn’t matter: the words start grouping together automatically based on relevance just by the way the math works.

    In theory you could try inventing a fake language so weird that it doesn’t match anything existing, but at that point you might as well just start encrypting your stuff.
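The grouping-by-relevance effect is easy to illustrate with plain cosine similarity. This is only a toy sketch: the vectors below are made up by hand, where a real multilingual embedding model would produce them, but it shows the shape of the math that puts "dog" and Spanish "perro" near each other while an unrelated word lands elsewhere:

```python
import math

def cosine(a, b):
    # cosine similarity: dot product divided by the product of magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hand-invented stand-ins for real embeddings. In an actual multilingual
# model, translations of the same concept end up close together.
emb = {
    "dog":        [0.90, 0.10, 0.00],
    "perro":      [0.85, 0.15, 0.05],
    "carburetor": [0.10, 0.20, 0.95],
}

print(cosine(emb["dog"], emb["perro"]))       # high: same concept, different language
print(cosine(emb["dog"], emb["carburetor"]))  # low: unrelated concept
```

The language of the token never enters the computation; only the position in the vector space does.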

    • chicken@lemmy.dbzer0.com
      11 days ago

      the words start grouping together automatically based on relevance just by the way the math works

      Sure, but isn’t it still the words that are grouping together? The guy in the OP video seems to be claiming that the fact that he used certain words does not matter, which does not make sense to me, since these algorithms’ depth of understanding of what is being said is still somewhat shallow.

      I would guess it should be possible to engineer a sentence that communicates a particular message but is phrased in such a way that it targets a location in vector space not associated with that message (until the other parts of their system make that association).
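That idea amounts to a threshold check against some "topic" region in vector space. A hypothetical sketch, where the vectors, the topic centroid, and the flagging threshold are all invented for illustration:

```python
import math

def cosine(a, b):
    # cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Invented embeddings: a centroid for the flagged topic, a plainly
# worded sentence, and a reworded variant engineered to drift out of
# that region while (to a human) keeping the same meaning.
topic_centroid = [1.00, 0.00, 0.00]
plain_version  = [0.95, 0.20, 0.10]
reworded       = [0.40, 0.70, 0.60]

THRESHOLD = 0.8  # made-up flagging threshold

print(cosine(plain_version, topic_centroid) > THRESHOLD)  # True: flagged
print(cosine(reworded, topic_centroid) > THRESHOLD)       # False: slips past
```

Whether such a rewording survives "the other parts of their system" later re-associating it is exactly the open question in the comment above.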

      • AdrianTheFrog@lemmy.world
        10 days ago

        If you can give ChatGPT the transcript and it can say “yes, that’s about ____”, then it’s certainly possible for them to do the same. I would expect anything trained specifically for that to only get better from there, although obviously they’re not going to throw ChatGPT-sized compute at it.

        • chicken@lemmy.dbzer0.com
          10 days ago

          although obviously they’re not going to throw ChatGPT-sized compute at it.

          I’m not entirely sure what, if any, more fundamental distinctions between embeddings and LLMs exist, but smaller LLMs really struggle with comprehension when things are phrased in an unexpected way, and embeddings use comparatively very few resources.

          Maybe a circumvention training tool could work like this: a writing game where the goal is to produce text about a topic such that the embedding fails to associate it with that topic, but a more powerful LLM succeeds (the idea being that a human would probably be able to tell as well).

          The biggest advantage these systems have is probably just that people get no direct feedback about how their work is being interpreted.
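The scoring rule of that game could look something like this. `embedding_flags()` and `strong_llm_understands()` are stand-ins invented for the sketch; a real version would call an actual embedding classifier and a larger LLM:

```python
def embedding_flags(text: str, topic: str) -> bool:
    # Stand-in: pretend the cheap embedding only catches obvious keyword use.
    return topic in text.lower()

def strong_llm_understands(text: str, topic: str) -> bool:
    # Stand-in: pretend the big model also catches an agreed-upon euphemism.
    return topic in text.lower() or "the usual thing" in text.lower()

def player_wins(text: str, topic: str) -> bool:
    # The player wins when the embedding misses the topic
    # but the stronger model still understands it.
    return strong_llm_understands(text, topic) and not embedding_flags(text, topic)

print(player_wins("let's discuss the protest downtown", "protest"))
print(player_wins("let's discuss the usual thing downtown", "protest"))
```

The first attempt loses because the embedding catches the plain wording; the second wins because only the stronger model recognizes the euphemism, which is the direct feedback loop the comment says people normally never get.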