First of all, the take that LLMs are just parrots that can't think for themselves is dumb. They can, in a limited way! And they are an impressive step compared to what we had before them.

Secondly, there is the take that LLMs are dumb and make mistakes that take more work to correct than doing the work yourself from the start. That is something I often hear from programmers. That might be true for now!

But the important question is how they will develop! And now my take, which I have not seen anywhere else, even though it is quite obvious imo.

For me, the most impressive thing about LLMs is not how smart they are. The impressive thing is how much knowledge they have, how they can access and work with that knowledge, and that they can do this with a neural network of only a few billion parameters. The major flaw at the moment is their inability to know what they don't know and what they can't answer. They hallucinate instead of answering a question with "I don't know." or "I am not sure about this." The other flaw is how they learn: it takes a shit ton of data, a lot of time and a lot of computing power, and more importantly, they don't learn from interactions. They learn from static data.

This is similar to what the company DeepMind did with their chess and Go engines (also neural networks). They trained those engines on a shit ton of games played by humans, and the engines became really good that way. But the second generation of their NN game engines did not look at any games played before. They only knew the rules of chess/Go and then started to learn by playing against themselves. It took only a few days and they could beat their predecessors, which had needed a huge number of human games to learn from.
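To make that self-play idea concrete, here is a minimal sketch in Python. It is purely illustrative: the "policy" is just a lookup table of move preferences instead of a neural network, and the game is a tiny toy (Nim: take 1-3 stones from a pile, whoever takes the last stone wins) instead of chess or Go. The point is the loop structure: play against yourself, then reinforce the winner's moves.

```python
import random

def legal_moves(pile):
    """The only knowledge given to the learner: the rules of the game."""
    return [m for m in (1, 2, 3) if m <= pile]

def self_play_episode(policy):
    """Play one game of the policy against itself and record each side's moves."""
    pile, player, history = 10, 0, {0: [], 1: []}
    while pile > 0:
        moves = legal_moves(pile)
        weights = [policy.get((pile, m), 1.0) for m in moves]
        move = random.choices(moves, weights)[0]
        history[player].append((pile, move))
        pile -= move
        winner = player          # whoever moves last (takes the last stone) wins
        player = 1 - player
    return history, winner

def train(episodes=20_000):
    policy = {}                  # (pile, move) -> preference weight
    for _ in range(episodes):
        history, winner = self_play_episode(policy)
        for pile, move in history[winner]:       # reinforce the winner's moves
            policy[(pile, move)] = min(policy.get((pile, move), 1.0) * 1.05, 1e6)
        for pile, move in history[1 - winner]:   # dampen the loser's moves
            policy[(pile, move)] = max(policy.get((pile, move), 1.0) * 0.95, 1e-6)
    return policy
```

The real systems replace the lookup table with a deep network and the crude reweighting with gradient updates, but the idea is the same: all of the training data comes from the system playing itself.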

So that is my take: the breakthrough will come when LLMs start to learn while interacting with humans, but more importantly with themselves. Teach them the rules (that is, the language) and then let them talk, or more precisely, let them play a game of asking and answering. It is more complicated than it sounds; how do you evaluate the winner of this game, for example? But it can be done.
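Here is a rough sketch of how such an asking-and-answering game between two models could be wired up. Nothing in it is an existing API: generate() stands in for however you would query a model, and score_answer() is exactly the open problem mentioned above, deciding who won a round.

```python
def generate(model, prompt):
    """Stand-in for a call to some LLM; assumed here, not a real library function."""
    raise NotImplementedError

def score_answer(question, answer):
    """The hard, unsolved part: judge how good an answer is."""
    raise NotImplementedError

def play_round(asker, answerer, topic):
    """One round: one model asks a question, the other answers, the answer is scored."""
    question = generate(asker, f"Ask a hard question about {topic}.")
    answer = generate(answerer, question)
    return question, answer, score_answer(question, answer)

def self_play(model_a, model_b, topics):
    """Let two models take turns asking and answering, collecting the scored rounds."""
    transcript = []
    for topic in topics:
        transcript.append(play_round(model_a, model_b, topic))
        transcript.append(play_round(model_b, model_a, topic))  # swap the roles
    return transcript  # the scored transcript would then have to drive a training update
```

The loop itself is trivial; the interesting (and missing) pieces are the scoring function and the training update that would let the models actually improve from the transcript.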

And this is where AGI will come from in the future. It is only a question of how big these NNs need to be to become really smart and how much time they need to train. But this is also when AI can get dangerous: when they interact with themselves and learn from that without outside control.

The main problem right now is that they are slow, as you can see when you talk to them. And they need a lot of data, or in this case a lot of interactions, to learn. But they will surely get better at both in the near future.

What do you think? Would love to hear some feedback. Thanks for reading!

  • orclev@lemmy.world · 10 points · 4 months ago

    That’s not how this works. That’s not how any of this works.

    LLMs can't "talk to each other" because they don't think; they're more like a really complicated echo chamber. You yell your prompt into it, it bounces around, and when the echo comes back you have your result. You could feed the output of one LLM into the input of another, but after a few rounds of bouncing back and forth you'd just get garbage out. Furthermore, an LLM can't learn from its queries, as the queries are missing all the metadata necessary to build the model.

    • niva@discuss.tchncs.deOP · +1/−1 · 4 months ago

      Well, LLMs don't learn from any interaction at the moment. They are trained, and after that one can interact with them, but they don't learn anymore. You can fine-tune the model with recorded interactions later, but it does not learn directly. So what I am saying is: if this changes and they keep learning from interactions, as we do, there will be a breakthrough. I don't understand why you are saying "that's not how it works" when I am clearly talking about how it might work in the future.
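      Roughly, the difference I mean could be sketched like this (every name here is a placeholder for illustration, not a real library call):

      ```python
      interaction_log = []

      def fine_tune(model, examples):
          """Placeholder for an offline training run over a batch of recorded examples."""
          pass

      def online_update(model, prompt, reply, feedback):
          """Placeholder for a small training step taken right after a single exchange."""
          pass

      def chat_today(model, prompt):
          """How it works now: the model answers, but its weights never change."""
          reply = model.generate(prompt)            # assumed model interface
          interaction_log.append((prompt, reply))   # only the log grows, not the model
          return reply

      def retrain_later(model):
          """The current workaround: occasionally fine-tune on the recorded interactions."""
          fine_tune(model, interaction_log)

      def chat_and_learn(model, prompt, feedback_fn):
          """What I am talking about: learn a little from every single interaction."""
          reply = model.generate(prompt)
          online_update(model, prompt, reply, feedback_fn(prompt, reply))
          return reply
      ```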

      I also don't understand why you get upvoted for this while I get downvoted just for posting my thoughts about LLMs. To be clear, it is totally fine to disagree with my thoughts, but why downvote them?

      • orclev@lemmy.world · 2 points · 4 months ago

        Because you very clearly don't understand how LLMs work and are describing something that's impossible. If you did have something that worked like that, it wouldn't be an LLM; it would be something fundamentally different and closer to a true AI. There are no true AIs in existence currently, and just trying to train an LLM on its own inputs won't change that; it would just make the output worse by introducing noise.

        • niva@discuss.tchncs.deOP · 1 point · edited · 4 months ago

          LLMs are neural networks! Yes, they are trained on meaningful text to predict the next word, but they are still NNs. And after they are trained on human-generated text, they can also be further trained on other sources and in other ways. The question is how an interaction between LLMs should be evaluated: when has an LLM found one good word, or a series of good words? I have not described this, and I am also not sure what would be a good way to evaluate it.
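          One possible (purely illustrative) way to evaluate such an interaction would be to let a third model act as a judge and compare two candidate answers; judge_generate() below is a placeholder, not an existing API:

          ```python
          def judge_generate(prompt):
              """Stand-in for a call to a separate judge model."""
              raise NotImplementedError

          def pick_winner(question, answer_a, answer_b):
              """Ask the judge which of two answers to the same question is better."""
              verdict = judge_generate(
                  f"Question: {question}\n"
                  f"Answer A: {answer_a}\n"
                  f"Answer B: {answer_b}\n"
                  "Which answer is better? Reply with exactly 'A' or 'B'."
              )
              return "A" if verdict.strip().upper().startswith("A") else "B"
          ```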

          Anyway, I am sad now. I was looking forward to having some interesting discussions about LLMs. But all I get is downvotes and comments like yours that tell me I am an idiot without telling me why.

          Maybe I did not articulate my thoughts well enough. But it feels like people want to misinterpret what I'm saying.