• surewhynotlem@lemmy.world
    6 months ago

    people are able to explain themselves

    Can they, though? Sure, they can come up with a reasonable-sounding justification, but how many people truly know themselves well enough for that justification to be accurate? Is it any more accurate than asking GPT to explain itself?

    • logicbomb@lemmy.world
      6 months ago

      I did say that people and AI would have similar poor results at explaining themselves. So we agree on that.

      The one thing I’ll add is that certain people performing certain tasks can be excellent at explaining themselves, and if a specific LLM exists that can do that, I’m not aware of it. I say LLM specifically because I want to ensure it’s an AI with some capacity for generalized knowledge. I wouldn’t be surprised if there are very specialized AIs that have been trained only to explain one narrow thing.

      I guess I’m in a mood to be reminded of old science fiction stories, because this reminds me of one where certain people were trained to memorize situations so they could testify about them later. I initially want to say it’s a hugely famous novel like Stranger in a Strange Land, but I could easily be wrong. In any case, the example the book gave was a person describing a house: rather than saying the house was white, they described it as white on the side facing them. The point was that they would state things as precisely as possible, so there was no way they could be even partially wrong.

      Anyways, that seems tangentially related at best, but the underlying connection is that people, with the right training and motivation, can be very mentally disciplined, which is unlike any AI that I know of, and probably also very unlike this comment.

    • andxz@lemmy.world
      6 months ago

      At least to me, the exciting part is that we’re getting to a point where this is a legitimate question - regarding both us and our emerging AIs.

      I wouldn’t be surprised at all if an AI could explain its own behaviour before we understand our own minds well enough to match that - there’s a lot we don’t know about our own decision-making processes.