Despite its name, the infrastructure used by the “cloud” accounts for more global greenhouse gas emissions than commercial flights. In 2018, for instance, the 5bn YouTube views of the viral song Despacito used the same amount of energy it would take to heat 40,000 US homes annually.

Large language models such as ChatGPT are some of the most energy-guzzling technologies of all. Research suggests, for instance, that about 700,000 litres of water could have been used to cool the machines that trained GPT-3 at Microsoft’s data facilities.

Additionally, as these companies aim to reduce their reliance on fossil fuels, they may opt to base their datacentres in regions with cheaper electricity, such as the southern US, potentially exacerbating water consumption issues in drier parts of the world.

Furthermore, while minerals such as lithium and cobalt are most commonly associated with batteries in the motor sector, they are also crucial for the batteries used in datacentres. The extraction process often involves significant water usage and can lead to pollution, undermining water security. The extraction of these minerals is also often linked to human rights violations and poor labour standards. Trying to achieve one climate goal, limiting our dependence on fossil fuels, can compromise another: ensuring everyone has a safe and accessible water supply.

Moreover, when significant energy resources are allocated to tech-related endeavours, it can lead to energy shortages for essential needs such as residential power supply. Recent data from the UK shows that the country’s outdated electricity network is holding back affordable housing projects.

In other words, policy needs to be designed not to pick sectors or technologies as “winners”, but to pick the willing by providing support that is conditional on companies moving in the right direction. Making disclosure of environmental practices and impacts a condition for government support could ensure greater transparency and accountability.

  • QuadratureSurfer@lemmy.world · 6 months ago

    This article may as well be trying to argue that we’re wasting resources by using “cloud gaming” or even by gaming on your own PC.

    • Balder@lemmy.world · 6 months ago

      Yeah, it is a bit weak on the arguments, as it doesn’t seem to talk about trade-offs?

    • blargerer@kbin.social · 6 months ago

      Gaming actually provides a real benefit for people, and resources spent on it mostly provide that benefit linearly (yes, some people are addicted, etc., but people need enriching activities, and gaming can be such an activity in moderation).

      AI doesn’t provide much benefit yet, outside of very narrow uses, and its usefulness is mostly predicated on continued growth in its abilities. The problem is that pretrained transformers have stopped seeing linear growth from injections of resources, so either the people in charge admit it’s all a sham, or they pour non-linear amounts of resources into it, hoping to fake growing ability long enough to reach an actual new breakthrough.

      • otp@sh.itjust.works · 6 months ago

        AI doesn’t provide much benefit yet

        Lol

        I don’t understand how you can argue that gaming provides a real benefit, but AI doesn’t.

        If gaming’s benefit is entertainment, why not acknowledge that AI can be used for the same purpose?

        There are other benefits as well – LLMs can be useful study tools, and can help with some aspects of coding (e.g., boilerplate/template code, troubleshooting, etc.).
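
        As one small sketch of the boilerplate use case (assuming the openai Python client, v1+, with an API key set in the environment; the model name and prompt are just placeholders):

            from openai import OpenAI

            client = OpenAI()

            # Ask the model for routine boilerplate; the output still needs
            # human review before it goes anywhere near a real codebase.
            resp = client.chat.completions.create(
                model="gpt-4o",  # placeholder model name
                messages=[{
                    "role": "user",
                    "content": "Write a Python dataclass for a User with id, name, and email fields.",
                }],
            )
            print(resp.choices[0].message.content)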

        If you don’t know what they can be used for, that doesn’t mean they don’t have a use.

        • technocrit@lemmy.dbzer0.com · 6 months ago

          If gaming’s benefit is entertainment, why not acknowledge that AI can be used for the same purpose?

          Ah yes the multi-billion dollar industry of people reading garbage summaries. Endless entertainment.

          • otp@sh.itjust.works · 6 months ago

            Ah yes the multi-billion dollar industry of people reading garbage summaries. Endless entertainment.

            See, I’m not even sure if you’re criticizing LLMs or modern journalism…lmao

        • sinedpick@awful.systems · 6 months ago

          LLMs help with coding? In any meaningful way? That’s a dead giveaway that you’ve never actually produced and released any real software.

      • QuadratureSurfer@lemmy.world · 6 months ago

        I’m going to assume that when you say “AI” you’re referring to LLMs like ChatGPT. Otherwise I can easily point to tons of benefits that AI models provide to a wide variety of industries (and that are already in use today).

        Even then, if we restrict your statement to LLMs, who are you to say that I can’t use an LLM as a dungeon master for a quick round of DnD? That has about as much purpose as gaming does, so it’s providing a real benefit for people in that respect.

        Beyond gaming, LLMs can also be used for brainstorming ideas, summarizing documents, and even helping to generate code in every major programming language. There are very real benefits here, and they are already being used in this way.

        And as far as resources are concerned, newer models are being released all the time that are better and more efficient than the last. Most recently, Llama 3 was released (just last month), so I’m not sure how you’ve jumped to the conclusion that we’ve hit some sort of limit in the resource efficiency of these models (and that’s ignoring the advances being made at the hardware level).

        Because of Llama 3, we’re essentially able to have something like our own personal GLaDOS right now: https://www.reddit.com/r/LocalLLaMA/comments/1csnexs/local_glados_now_running_on_windows_11_rtx_2060/

        https://github.com/dnhkng/GlaDOS
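
        If you want to try something like that yourself, a rough sketch of talking to a local Llama 3 (assuming Ollama is installed with the llama3 model pulled, plus the ollama Python package):

            import ollama  # talks to a local Ollama server; nothing leaves your machine

            # Chat with a locally hosted Llama 3 model.
            reply = ollama.chat(
                model="llama3",
                messages=[{"role": "user", "content": "Greet me in the style of GLaDOS."}],
            )
            print(reply["message"]["content"])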

        • andrew_bidlaw@sh.itjust.works · 6 months ago

          It isn’t resource efficient, simple as that. Machine learning isn’t new; it has indeed been used for decades in one form or another. But here’s the thing: when you train a model to do one task well, you can estimate the training time and the quality of its data analysis, say, for automating the prices you charge for your hotel apartments to maximize sales and profits. When you don’t even know what it can do, when you don’t use even a bit of its potential, and when your training material is whatever you dared to scrape and resources aren’t a question, well, you dance and jump over the fire in the bank’s vault. LLMs of the ChatGPT variety don’t have a purpose or a problem to solve; we come up with those after the fact, and although it’s thrilling to explore what else they can do, it’s a giant waste*. Remember blockchain and how everyone was trying to put it somewhere? LLMs are the same. There are niche uses that will evolve or stay as they are, completely out of the picture, while the hyped-up examples will grow old and die off unless they find their place. And, currently, there’s no application in which I can bet my life on an LLM’s output. Cheers to you if you’ve found where to put it to work, as I haven’t, and I’ve grown irritated at seeing this buzzword everywhere.

          * What I find most annoying about them is that they are natural monopolies, owing to the resources you need to train them to the Bard/Bing level. If they get inserted into every field within a decade, the LLM providers would have power over everything. Russia’s Kandinsky AI stopped showing Putin and the war in a bad light, for example; OpenAI’s chatbot may soon refuse to draw Sam Altman getting pegged by a shy time-traveler, Mikuru Asahina; and what if there are other non-obvious cases where the provider of a service just decides to exclude X from the output, like flags or mentions of Palestine or Israel? If you aren’t big enough to train a model for your own needs, you come under their reign.

          • QuadratureSurfer@lemmy.world · 6 months ago

            Ok, first off, I’m a big fan of learning new expressions: where they come from and what they mean (how they came about, etc.). Could you please explain this one?

            well, you dance and jump over the fire in the bank’s vault.

            And back to the original topic:

            It isn’t resource efficient, simple as that.

            It’s not that simple at all; it depends on your use case for whatever model you’re talking about:

            For example, I could spend hours working in Photoshop to create some image to use as my avatar on a website. Or I could take a few minutes generating a bunch of images through Stable Diffusion and then pick out one I like. Not only have I saved time on the task, but I have used less electricity.
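
            As a sketch of that workflow with the diffusers library (the model ID and prompt are just examples, and this assumes a CUDA GPU):

                import torch
                from diffusers import StableDiffusionPipeline

                # Load an example Stable Diffusion checkpoint and generate a
                # batch of candidate avatars to pick from.
                pipe = StableDiffusionPipeline.from_pretrained(
                    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
                ).to("cuda")

                images = pipe("portrait avatar, digital art",
                              num_images_per_prompt=4).images
                for i, img in enumerate(images):
                    img.save(f"avatar_{i}.png")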

            In another example, I could spend time/electricity watching a video over and over again, trying to translate what someone said from one language to another, or I could use Whisper to translate and transcribe what was said in a matter of seconds.
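
            Again as a sketch, with the open-source whisper package (the file name is a placeholder):

                import whisper

                # Load a small Whisper model, then translate foreign-language
                # speech into English text in one call.
                model = whisper.load_model("base")
                result = model.transcribe("video_audio.mp3", task="translate")
                print(result["text"])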

            On the other hand, there are absolutely use cases where using some ML model is incredibly wasteful. Take, for example, a rain sensor on your car. Now, you could set up some AI model with a camera and computer vision to detect when to turn on your windshield wipers. But why do that when you could use a little sensor that shoots a small laser at the window and activates the wipers when it detects a difference in the energy that’s normally reflected back? The dedicated sensor with a low-power laser will use far less energy and be way more efficient for this use case.
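
            The sensor logic really is about that simple; a hypothetical sketch (the sensor function and the values are made up for illustration):

                # Hypothetical rain sensor: the values and sensor API are invented.
                DRY_REFLECTANCE = 0.95  # fraction of the laser a dry windshield reflects back
                RAIN_THRESHOLD = 0.80   # water scatters the beam, so less light returns

                def read_reflectance() -> float:
                    return 0.62  # stub; a real sensor would sample its photodiode here

                if read_reflectance() < RAIN_THRESHOLD:
                    print("activate wipers")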

            Cheers to you if you’ve found where to put it to work, as I haven’t, and I’ve grown irritated at seeing this buzzword everywhere.

            Makes sense; so many companies are jumping on this as a buzzword when they really need to stop and think about whether it’s necessary to implement in the first place. Personally, I have found them great as an assistant for programming as well as for brainstorming ideas, or at least for helping to point me in a good direction when I am looking into something new. I treat their output as if someone were trying to remember something off the top of their head: anything coming from an LLM should be double-checked and verified before committing to it.

            And I absolutely agree with your final paragraph; that’s why I typically use local models running on my own hardware for coding/image generation/translation/transcription/etc. There are a lot of open-source models out there that anyone can retrain for more specific tasks. And we need to be careful, because these larger corporations are trying to stifle that kind of competition with their lobbying efforts.

          • afraid_of_zombies@lemmy.world · 6 months ago

            That is a good argument: they are natural monopolies due to the resources they need to be competitive.

            Now, do we apply this elsewhere in life? Is anyone calling for Boeing, Microsoft, Amazon, or Facebook to be broken up?

            • andrew_bidlaw@sh.itjust.works · 6 months ago

              We are missing out big time by not breaking them into pieces, yes. No argument. Something is wrong given that we didn’t start that process a long time ago.

        • technocrit@lemmy.dbzer0.com · 6 months ago

          Otherwise I can easily point to tons of benefits that AI models provide to a wide variety of industries

          Go ahead and point. I’m going to assume when you say “AI” that you mean almost anything except actual intelligence.

          • QuadratureSurfer@lemmy.world · 6 months ago

            I think you’re confusing “AI” with “AGI”.

            “AI” doesn’t mean what it used to, and as the term is used today it encompasses a very wide range of tech, including machine learning models:

            Speech to text (STT), text to speech (TTS), generative AI for text (LLMs), images (Midjourney/Stable Diffusion), and audio (Suno); upscaling; computer vision (object detection, etc.).

            But since you’re looking for AGI there’s nothing specific to really point at since this doesn’t exist.

            Edit: typo

            • technocrit@lemmy.dbzer0.com · 6 months ago

              Speech to text (STT), text to speech (TTS), generative AI for text (LLMs), images (Midjourney/Stable Diffusion), and audio (Suno); upscaling; computer vision (object detection, etc.).

              Yes, this is exactly what I meant. Anything except actual intelligence. Do bosses from video games count?

              I think it’s smart to shift the conversation away from AI to ML, but that’s part of my point. There is a huge gulf between ML and AGI that “AI” purports to fill, but it doesn’t. “AI” is precisely that hype.

              If “AI doesn’t mean what it used to”, what does it mean now? What are the scientific criteria for this classification? Or is it just a profitable buzzword that can be attached to almost anything?

              But since you’re looking for AGI there’s nothing specific to really point at since this doesn’t exist.

              Yes, it doesn’t exist.

              • QuadratureSurfer@lemmy.world · 6 months ago

                Edit: Ok, it really doesn’t help when you edit your comment to add clarification based on my reply, as well as additional remarks.


                I mean, that’s kind of the whole point of why I was trying to nail down what the other user meant when they said “AI doesn’t provide much benefit yet”.

                The definition of “AI” today is way too broad for anyone to make statements like that now.

                And to make sure I understand your question, are you asking me to provide you with the definition of “AI”? Or are you asking for the definition of “AGI”?

                Do bosses from video games count?

                Count under the broad definition of “AI”? Yes, when we talk about bosses from video games, we talk about “AI” for NPCs. And no, this should not be lumped in with machine learning models unless the game devs created a model for controlling that NPC’s behaviour.

                In either case, current NPC AI logic should not be classified as AGI by any means (which should be implied, since AGI does not exist as far as we know).

          • AIhasUse@lemmy.world · 6 months ago

            You read too many headlines and not enough papers. There is a massive list of advancements that AI has brought about. Hell, there is even a massive list of advancements that you personally benefit from daily. You might not realize it, but you are constantly benefiting from super-efficient methods of matrix multiplication that AI has discovered. You benefit from drugs that have been discovered by AI. Guess what has made Google the top search engine for 20 years? AI efficiency gains. The list goes on and on…
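
            For context on the matrix-multiplication point (presumably a reference to AlphaTensor-style algorithm discovery), a sketch of the classic Strassen 2×2 scheme, the kind of identity those searches rediscover and extend; it trades 8 multiplications for 7, which compounds when applied recursively to large matrices:

                import numpy as np

                def strassen_2x2(A, B):
                    # Strassen's scheme: 7 multiplications instead of the naive 8.
                    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
                    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
                    m1 = (a + d) * (e + h)
                    m2 = (c + d) * e
                    m3 = a * (f - h)
                    m4 = d * (g - e)
                    m5 = (a + b) * h
                    m6 = (c - a) * (e + f)
                    m7 = (b - d) * (g + h)
                    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                                     [m2 + m4, m1 - m2 + m3 + m6]])

                A = np.array([[1, 2], [3, 4]])
                B = np.array([[5, 6], [7, 8]])
                assert np.array_equal(strassen_2x2(A, B), A @ B)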

            • slackassassin@sh.itjust.works · 6 months ago

              People in this thread think AI is just the funny screenshot they saw on social media, and conclude that they are smart and AI is dumb.

              • AIhasUse@lemmy.world · 6 months ago

                Absolutely. I am surprised; I would expect more from people who end up at a site like this.

      • dan@upvote.au · 6 months ago

        Do you never play games with bots? Those are AI.