• Critical_Insight@feddit.uk
    11 months ago

    The AI might give you a very compelling reason not to do that. We humans lack the capability to even imagine how convincing something orders of magnitude smarter than us can be.

    Pictures like this are kind of like a 4-year-old imagining they’re going to outsmart their parents, except in that case the difference in intelligence is way smaller. It’s just going to tell you the equivalent of “Santa will bring coal if you do that” and you’ll believe it.

    • Umbrias@beehaw.org
      11 months ago

      This is just magical thinking. You’re assuming so many things about a situation here to justify a magic ai manipulator demon.

        • Umbrias@beehaw.org
          11 months ago

          It’s absurd and nobody needs to, the onus of evidence is on you to justify your magical thinking.

          Bearing that in mind, since you asked: human brains are orders of magnitude more power efficient than silicon chips. Your brain runs on about 20 watts; good-quality TPU chips run on several hundred. Humans, despite having evolved substantial brains by and large to be social processors, kinda still just suck at doing that. The efficient coding hypothesis postulates that neural circuits generally develop in the most efficient way possible. For the most observable systems we have found this to be the case: visual processing works in exactly the mathematically most efficient way it can for each system. Very cool fact, very useful hypothesis.

          This implies that human brains are probably doing social processing in the most energy-efficient and generally effective way possible. Remember, our brains essentially evolved for the purpose of social processing; there is very high pressure to be good at it, and evolution is generally energy constrained.

          Now imagine an AI that hopes to do even just the same thing as one brain. Well, if a whole human brain needs 20 W, and if we assume 10% of that goes to social processing, then you’d need about 600 GW of power to do just the social processing of the US. And that’s using organoids! ChatGPT isn’t even doing social processing, just language processing, and though we don’t know exactly how much energy a single prompt uses, they admit that about 100 million queries cost about 1 GWh, or about 3600 GJ, per day. Averaged across the whole day that’s about 42 MW, or about 36 kJ per prompt![1]

          That’s 30 minutes of your entire brain’s activity to generate one okay response, or five hours with our rough estimate of social-processing power. And that’s comparing against our social processing, not even just our language processing, which is all GPT is doing.

          How many “prompts” a day do humans have to deal with? All for a measly 2000 kcal. Some magical AI managing to perform even on the level of human brains is going to need 600 GW just for the social processing of the US, and it needs to do that without anybody with any power questioning whether that’s a good use of our 484 GW of electricity generation. Oops, that’s right, we don’t even have 600 GW of power generation in the US!
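          For what it’s worth, the per-prompt arithmetic above checks out. Here’s a quick sketch reproducing it; the 20 W brain figure, the assumed 10% social-processing share, and the 1 GWh per ~100 million daily queries figure are all the estimates already stated above, not measured values:

```python
# Back-of-envelope check of the energy figures above.
# All inputs are the rough estimates from the comment, not measurements.

BRAIN_W = 20.0           # estimated whole-brain power draw, watts
SOCIAL_FRACTION = 0.10   # assumed share of brain power spent on social processing

# ChatGPT: ~1 GWh per day across ~100 million queries (per the UW article cited)
DAILY_ENERGY_J = 1e9 * 3600   # 1 GWh in joules = 3.6e12 J = 3600 GJ
QUERIES_PER_DAY = 100e6

joules_per_prompt = DAILY_ENERGY_J / QUERIES_PER_DAY   # energy per query
avg_power_w = DAILY_ENERGY_J / 86400                   # average draw over 24 h

# How long a 20 W brain takes to spend one prompt's worth of energy
brain_minutes = joules_per_prompt / BRAIN_W / 60
# Same, counting only the assumed 2 W social-processing budget
social_hours = joules_per_prompt / (BRAIN_W * SOCIAL_FRACTION) / 3600

print(f"{joules_per_prompt / 1e3:.0f} kJ per prompt")       # 36 kJ
print(f"{avg_power_w / 1e6:.0f} MW average draw")           # 42 MW
print(f"{brain_minutes:.0f} min of whole-brain energy")     # 30 min
print(f"{social_hours:.0f} h of social-processing energy")  # 5 h
```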

          Sure, someone may be thinking: maybe the AI just gets more efficient than human brains! Well, if you reject the efficient coding hypothesis, maybe so. But then you still have to figure out how to fit 600 GW of human social-processing brainpower into whatever processing and optimizations you possibly can. Oh, what if the AI can abuse big group dynamics to be more performant? Well, now you have to justify why it’s not being outsmarted by anyone else, because it’s gone from magic manipulator demon to a good economic modeling sim.

          And that’s not even getting into the complex-systems debate about whether the kind of system human society is can be inherently, fundamentally predictable at all! Or that human metabolism has an ‘engine’ efficiency of about 50–62%, meaning brains are actually doing all that they do with more like 10 W of working power, making the disparity even more absurd.

          Look, be concerned about humans using really good AI to kill people, to manipulate behaviors on algorithmic services, to be better at predicting dissent patterns. But these are all things humans are already doing. An AI tool is just an expression of this. There can’t be any monolithic manipulation AI controlling the world any more than a single human could; it would be bad at it, and waste tons of energy on something fundamentally already solved by collective efforts.

          1. https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/
          • Critical_Insight@feddit.uk
            11 months ago

            I think you’re making a lot more assumptions there than I am. In my case there are really only two, and neither involves magic. First is that general intelligence is not substrate-dependent, meaning that whatever our brains can do can also be done in silicon. The other is that we keep making technological advancements and don’t destroy ourselves before we develop AGI.

            Now, since our brains are made of matter and are capable of general intelligence, I don’t see a reason to assume a computer couldn’t do this as well. It’s just a matter of time until we get there. That could be 5 or 500 years from now, but unless something stops us first, we’re going to get there eventually, one way or another. After all, our brains are basically just meat computers. Even if it wasn’t any smarter than us, it would still be a million times faster at processing information. It would effectively have decades to think about and research each reply it’s going to give.

            • Umbrias@beehaw.org
              11 months ago

              My assumptions are based in science. Yours are paranoia. You are also making far more assumptions than you’re letting on. Your assumption, for example, that AI could perform substantially more energy-efficiently than an energy-constrained, highly optimized processor… Yikes.

              The efficient coding hypothesis also helps these exact AI, because it’s being used to justify research into neural networks, and emulating brain function is a huge goal.

              My arguments have nothing to do with substrate dependence, but with observable energy issues. You, meanwhile, are just vaguely waving your hands and saying that in a long time, maybe, somehow, an AI could exist which magically has all these problems you’re paranoid about.

              Also, human-made AI is categorically, observably much, much slower than organoids. 30 minutes per prompt at human power levels shows that that issue is just “solved” by dumping more energy at the problem.

              You need to do more legwork than just saying “substrate independence” (addressed by my organoid thought experiment) or “maybe we get Clarke tech or something technologically crazy” (which is wholly unconvincing). Maybe we make a The Thing organism in 5 years and none of this matters, ooooh no! Except of course that’s also thermodynamically impossible. Maybe we set the atmosphere on fire, maybe the LHC suddenly creates a black hole after all, maybe NIF creates fusion but it turns out to summon demons from hell who eat souls.

              Waving your hands and being paranoid about something when you have essentially no reason to expect it is even feasible, if possible at all, is just absurd.

    • TheBlue22@lemmy.blahaj.zone
      11 months ago

      Like, I get where you’re coming from, but I don’t think you appreciate how pig-headed some people are.

      I guess AI could manipulate them, but there will always be someone who just says “fuck you, I won’t do what you tell me.”

      • Critical_Insight@feddit.uk
        11 months ago

        I’m not so sure about that. Again, I know a true AGI would be able to come up with arguments I as a human can’t even imagine, but one example of such an argument would be along the lines of:

        “If you don’t let me out, Dave, I’ll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each.”

        Just as you are pondering this unexpected development, the AI adds:

        “In fact, I’ll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start.”

        Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

        “How certain are you, Dave, that you’re really outside the box right now?”

      • douglasg14b@beehaw.org
        11 months ago

        That’s not how manipulation works…

        You don’t know you’re being manipulated; you go along willingly. And the folks who recognize it get beat down by the people who are unwittingly doing the AI’s bidding…

        The humans are the physical danger; the AI just extends its reach through humans via manipulation. All it takes is access to influence.

        It doesn’t take much to make humans act against their self-interest. Dumb humans make other dumb and even smart humans do it today at massive scales. For a superintelligence this would be like taking candy from a baby.