• PeriodicallyPedantic@lemmy.ca · edited · 3 months ago

    Not only the pollution.

    It has triggered an economic race to the bottom for any industry that can incorporate it. Employers will be forced to replace more workers with AI to keep prices competitive. And that is a lot of industries, especially if AI continues its growth.
    The result is a lot of unemployment, which means an economic slowdown due to a lack of discretionary spending, which is a feedback loop.

    There are only 3 outcomes I can imagine:

    1. AI fizzles out. It can’t maintain its advancement enough to impress execs.
    2. An unimaginable wealth disparity and probably a return to something like feudalism.
    3. Social revolution where AI is taken out of the hands of owners and placed into the hands of workers. This would require changes we’d consider radically socialist now, like UBI and strong af social safety nets.

    The second seems more likely than the third, and I consider that more or less the destruction of humanity.

    • _NoName_@lemmy.ml · 3 months ago

      Miles is chill in my book. I appreciate what he is tackling, and hope he continues.

      It seems that there are much worse issues with AI systems happening right now. I think those issues should take precedence over the alignment problem.

      Some of the issues are bad enough right now that AI development and use should be banned for a limited time frame (at least 5 years) while we figure out more ethical ways of doing it. The fact that we aren’t doing that is a massive failure of our already constantly-fucking-up governments.

  • kibiz0r@midwest.social · 3 months ago

    It’s wild how we went from…

    “Crypto is an energy hog and its main use case is a convoluted pyramid scheme”

    “Bro trust me bro, there are legit use cases and energy consumption has already been reduced in several prototype implementations”

    …to…

    “AI is an energy hog and its main use case is a convoluted labor exploitation scheme”

    “Bro trust me bro, there are legit use cases and energy consumption has already been reduced in several prototype implementations”

    • SleezyDizasta@lemmy.world · 3 months ago

      They’re not really comparable. Crypto and blockchain were solutions looking for problems to solve. Innovative and cool? Sure, but they never had a wide-scale use. AI has been around for a while; it just recently got rebranded as artificial intelligence, since the same technologies were called algorithms a few years ago. And they basically run the internet and the global economy. Hospitals, schools, corporations, governments, militaries, etc. all use them. Maybe certain uses of AI are dumb, but trying to pretend that the thing as a whole doesn’t have genuine uses, or rather doesn’t already have them, is just dumb.

      • JackbyDev@programming.dev · 3 months ago

        I feel like you’re being incredibly generous with the usage of AI here. I feel as though the post and comment above refer to LLM/image generation AI. Those “types of ‘AI’” certainly don’t run all those things.

        • SleezyDizasta@lemmy.world · 3 months ago

          The term AI is very vague, because intelligence is an inherently subjective concept. If we’re defining AI as something that has consciousness, then it doesn’t exist; but if we’re defining it as a task that a computer can do on its own, then virtually everything that is automated is run by AI.

          Even generative AI models have been around for a while. For example, a lot of the news articles you read, especially routine ones like weather reports, aren’t written by actual people; they’re AI-generated. Another example would be scientific simulations, which use AI to generate a bunch of possible scenarios based on given parameters. Yet another example would be the gaming industry: what do you think generates Minecraft worlds? The point here is that AI has been around for a while and is already being used everywhere. What we’re seeing with ChatGPT and these other new models is that they are now being released for public access. It’s like the democratization of AI, and a lot of good and bad things are bound to come of it. We’re at the infancy stage of this now, but just like with the World Wide Web before it, these technologies are going to fundamentally change how we do many things from now on.

          We can’t fight technology; that’s a losing battle. These AIs are here and they’re here to stay. So strap in and enjoy the ride.

          • JackbyDev@programming.dev · 3 months ago

            I think you misunderstood me, I’m not trying to make some point about “LLMs aren’t ‘real AI’” or even what is and is not AI. I’m just saying the post is talking about that type of AI specifically and I wouldn’t say those types are controlling that much of the world.

  • Frostbeard@lemmy.world · 3 months ago

    I don’t like using relative numbers to illustrate the increase. 48% can be minuscule or enormous depending on last year’s emissions.

    While I don’t think this increase is minuscule, the relative figure still introduces unnecessary ambiguity.

    • elrik@lemmy.world · 3 months ago

      The relative number here might be more useful as long as it’s understood that Google already has significant emissions. It’s also sufficient to convey that they’re headed in the wrong direction relative to their goal of net zero. A number like 14.3 million tCO₂e isn’t as clear IMO.
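      For what it’s worth, the two framings are easy to convert between. A quick sketch, assuming the reported 14.3 million tCO₂e for 2023 and the 48% rise over the 2019 baseline:

```python
# Convert Google's reported relative increase into absolute terms.
# Assumes: 2023 emissions of 14.3 million tCO2e, reported as a 48%
# increase over the 2019 baseline (figures from the thread's article).
emissions_2023 = 14.3  # million tCO2e
increase = 0.48        # 48% growth since the baseline year

baseline_2019 = emissions_2023 / (1 + increase)   # ~9.7 MtCO2e
absolute_growth = emissions_2023 - baseline_2019  # ~4.6 MtCO2e

print(f"2019 baseline:   {baseline_2019:.1f} MtCO2e")
print(f"absolute growth: {absolute_growth:.1f} MtCO2e")
```

      So the 48% figure corresponds to roughly 4.6 million extra tonnes per year, which is why having both numbers side by side removes the ambiguity.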

    • kyle@lemm.ee · edited · 3 months ago

      It’s supposed to represent Ben Kenobi from the original Star Wars, I think. Or more generally, a wizard-y sage robe.

      Edit: it’s also just a meme, with its own understood meaning.

  • NutWrench@lemmy.world · edited · 3 months ago

    Stupid AI will destroy humanity. But the important thing to remember is that for a brief, shining moment, profit will be made.

  • didnt1able@sh.itjust.works · 3 months ago

    The way it’s done at this current moment is in no way sustainable. Once we start seeing better dedicated hardware for doing AI on client-side devices, we can remove the need for massive GPU farms. AI is cool, but it’s like driving a tank to the grocery store. We need the Prius of AI.

  • Queen HawlSera@lemm.ee · 3 months ago

    Personally, I think AI systems will kill us dead simply by having no idea what to do: dodgy old coots thinking machines are magic and know everything, when in reality machines can barely approximate what we tell them to do and base their decisions on that terrible approximation.

    • ChapulinColorado@lemmy.world · 3 months ago

      Machines will do exactly what you tell them to do, and that is the cause of many software bugs. That’s kind of the problem: no matter how elegant the algorithm, fuzzy goes in, fuzzy comes out. It was clear this very basic principle wasn’t even considered when Google started telling people to eat rocks and glue. You can’t patch special cases out when they’re so poorly understood.

    • Umbrias@beehaw.org · 3 months ago

      In what sense does a small community working with open-weight (note: rarely if ever open-source) LLMs have any mitigating impact on the rampant carbon emissions for the sake of bullshit generators?

      • Daxtron2@startrek.website · 3 months ago

        Not a small community by any means. It is inherently opposed to the unnecessarily large and wasteful models of corporations. But when people just lump it all under “AI”, the actually useful local models are the ones most likely to get harmed, while Google, Meta, and the other megacorps will be able to operate with impunity.

        • Umbrias@beehaw.org · 3 months ago

          The people doing the majority of the lumping, and it’s not even close, are the corporations themselves. The shorthand exists. Machine learning is doing fine. Intentionally misinterpreting a message to incidentally defend the actions of the corporations doing the damage you’re opposed to ain’t it.

    • daniskarma@lemmy.dbzer0.com · 3 months ago

      Nowadays you can actually get a semi-decent chatbot working on an N100 that consumes next to nothing, even at full load.

        • daniskarma@lemmy.dbzer0.com · edited · 3 months ago

          Someone needs to tell Google that AI-powered search is not working right now, and that they’d better wait a few years before trying to implement it at massive scale successfully.

          Other AI fields are working really well. But instant AI answers in search engines for general use are not at a stage where they should be as widely deployed as Google (or Microsoft) is pushing them right now.

      • interdimensionalmeme@lemmy.ml · 3 months ago

        The problem is the concentration of power, Sam “regulate me daddy” Altman’s plan is to get the government to create a web of regulation that makes it so only the big tech giants have access to the uncensored models.

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 3 months ago

    The root problem is capitalism, though; if it wasn’t AI, it would be some other idiotic scheme like cryptocurrency wasting energy instead. The problem is with the system as opposed to the technology.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 3 months ago

        Nah, human ideology is much broader than a single economic system. The fact that people who live under capitalism can’t understand this just shows the power of indoctrination.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 3 months ago

            What you’re saying is that you’re not self aware enough to realize that you have an ideology. Everyone has a world view that they develop to understand how the world works, and every world view necessarily represents a simplification of reality. Forming abstractions is how our minds deal with complexity.

              • Tryptaminev@lemm.ee · 3 months ago

                Do you think people should be treated with respect? Do you think there should be consideration for your condition so you are not excluded from certain events, activities, and opportunities?

                These are matters of ideology. If you say yes, it is ideological in the same way as when you say no. There is no inherent objective truth to these value questions.

                Same for the economy. It doesn’t matter whether you think growth should be the main objective, or that equal opportunity should be the focus, or sustainability, or other things. You will have to make a value judgement, and the sum of these values represents your ideology.

                • Samvega@lemmy.blahaj.zone · 3 months ago

                  > There is no inherent objective truth to these value questions.

                  I disagree. These values are based on objective observations.

        • Wxnzxn@lemmy.ml · 3 months ago

          If you think that sounds like “Žižekian nonsense”, then you obviously don’t understand what Žižek argues, because he clearly doesn’t say anything silly like “human ideology” (or “Žižekianism”, for that matter). The article you posted also does wonders tearing down Žižek as an abominable human being, while not truly engaging with his ideas. It is pretty worthless, takes things deliberately out of context, and, after rigorously defining him as a persona non grata, invests no proper effort to do what actual communists like Marx and Lenin did: acknowledge that even enemies like that can contribute to understanding and offer things to learn from, and then work at doing so.

          Does he sometimes spew bullshit? Absolutely. Does he believe in “human ideology” or spout anticommunism on a worse level than The Black Book of Communism, as the article wants to imply? Only if you deliberately misread and misinterpret him.

            • Wxnzxn@lemmy.ml · 3 months ago

              Yeah, look, I did read the article, and the article, unlike the person who might very well have done that in their work, did not do that. All I see is the same flipping of materialist analysis into an ideological dogma that becomes ahistoric, trying to repeat itself instead of following material developments towards communism. From a quick look at your links, there’s even a lot I agree with, especially in criticising the French intellectuals. It still reads like a polemic removed from reality, one that values its own farts more than understanding and working towards change, but it has value.

              The article you linked in the beginning, though, does nothing but try to opportunistically recruit people away from one ideologue (which Žižek can definitely be called) to another idealist “team” that tries to redirect proletarian material interests and analysis. You seem to think it’s a contest of who can quote “great people” the best and who can be the most orthodox, which treats it all like a religion instead of a material movement to change the world and the mode of production.

              In the end, I fear, we will be on opposite sides of the river, each seeing “their idealist perversions” across from “our materialist analysis”, but I at least won’t cross the river to your side any time soon.

              • davel [he/him]@lemmy.ml · 3 months ago

                Okay, Holden Caulfield, best of luck with your own personal, non-phony, left-libertarian revolution.

                • Wxnzxn@lemmy.ml · 3 months ago

                  Nice burn, you even brought in the “libertarian”. At least be consistent: if I am a Žižekian heretic, I’m not an individualist libertarian who’s afraid of authority; I am of course a liberal anticommunist reactionary who won’t acknowledge the achievements of “really existing socialism”. You strike me as someone who would have written a hit piece on Marx for profiting from British imperialism and his capitalist buddy Engels, citing the letters and his drinking habits to make clear that he was an immature mind, and then joined some utopian socialist fringe group.

        • Samvega@lemmy.blahaj.zone · edited · 3 months ago

          I’m open to trying a non-Capitalist system, but I’m pretty sure hierarchical bullshit will happen and the majority will end up being exploited.

          Whether anyone else is open to it before humans extinguish themselves, I don’t know.

    • kibiz0r@midwest.social · 3 months ago

      Right, but the technology has the system’s philosophy baked into it. All inventions encourage a certain way of seeing the world. It’s not a coincidence that agriculture yields land ownership, mass production yields wage labor, or in this case fuzzy plagiarism machines yield a transhuman death cult.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 3 months ago

        Sure, technology is a product of the culture and it in turn influences how the culture develops, there’s a dialectical relationship there.

        • kibiz0r@midwest.social · 3 months ago

          So why take the heat off of AI, as if profiting from mass plagiarism is different when it has an API instead of flesh and bone?

  • bassomitron@lemmy.world · 3 months ago

    But what if we use AI in robots and have them go out with giant vacuums to suck up all the bad gasses?

    My climate change solution consultation services are available for hire anytime.

      • Tryptaminev@lemm.ee · 3 months ago

        Don’t worry, they will figure out that without humans releasing gasses they have no purpose, so they will cull most of the human population but keep just enough to justify their existence to manage it.

        Although you don’t need AI to figure that one out. Just look at the relationships between the US intelligence and military and “terrorist groups”.

        • ChickenLadyLovesLife@lemmy.world · 3 months ago

          > Don’t worry, they will figure out that without humans releasing gasses they have no purpose, so they will cull most of the human population but keep just enough to justify their existence to manage it.

          Unfortunately this statement also applies to the 1%. And the “just enough” will get smaller and smaller as AI and automation replace humans.

    • pastel_de_airfryer@lemmy.eco.br · 3 months ago

      Careful! Last time I sarcastically posted a stupid AI idea, within minutes a bunch of venture capitalists tracked me down, broke down my door, and threw money at me nonstop for hours.