Well, that’s awesome.

  • Flying Squid@lemmy.world · ↑139 ↓1 · 3 months ago

    “Generative AI has polluted the data,” she wrote. “I don’t think anyone has reliable information about post-2021 language usage by humans.”

    That is fucking horrifying.

    • Zikeji@programming.dev · ↑49 · 3 months ago

      Yeah, the generative AI pollution feels a lot like the whole low-background steel thing: since the nuclear tests it’s been impossible for new steel not to be slightly radioactive, which means if they need uncontaminated steel they get it from ships that sank before those tests.

      • Tamo240@programming.dev · ↑13 · 3 months ago

        This is the exact metaphor I’ve been using when talking to people about the issue. Did we both get it from somewhere I can’t remember, or is it just perfect?

        • Zikeji@programming.dev · ↑7 · 3 months ago

          It’s the first thing I thought of when the articles about the generative AI polluting itself started coming out.

      • 2pt_perversion@lemmy.world · ↑7 · edited · 3 months ago

        Luckily radiation levels have pretty much dropped back to pre-war levels now so new steel can be low-background as well. It was possible to make new low-background steel from 1945 onward too it just would have been more expensive than salvaging pre-war ships. I like the analogy though, it fits.

        • Clinicallydepressedpoochie@lemmy.world · ↑1 · 3 months ago

          Isn’t it more or less the same with the upper atmosphere and humans? I remember something about a radioactive tracer being used that wouldn’t be present if it weren’t for nuclear testing, etc.

  • cybervseas@lemmy.world · ↑69 · 3 months ago

    That makes sense. Way too many web search results look and feel like they weren’t written by a human lately. It’s gotten even more difficult for me to figure out what’s trustworthy and what isn’t.

    • TimLovesTech (AuDHD)(he/him)@badatbeing.social · ↑29 · 3 months ago

      Yep, and the fact that they continue to feed these same results back to the AI is eventually going to make them lose their shit. I saw it mentioned in an article or video (can’t remember which now) that when an AI starts taking AI-created output as input, it starts hallucinating, almost like schizophrenia.
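A toy sketch of that feedback loop (the vocabulary and every number here are made up for illustration; this is not how any real model is trained): each “generation” re-estimates word frequencies purely from samples of the previous generation’s output, so a rare word that misses a single sampling round vanishes for good and the distribution keeps narrowing.

```python
import random
from collections import Counter

random.seed(0)

# Toy "language": 50 words with a Zipf-like long tail of rare words.
words = [f"w{i}" for i in range(50)]
weights = [1.0 / (i + 1) for i in range(50)]
total = sum(weights)
dist = {w: p / total for w, p in zip(words, weights)}

def retrain_on_own_output(dist, n=500):
    """Sample n 'generated' words, then re-estimate frequencies from them."""
    sample = random.choices(list(dist), weights=list(dist.values()), k=n)
    counts = Counter(sample)
    return {w: c / n for w, c in counts.items()}

gens = [dist]
for _ in range(10):
    gens.append(retrain_on_own_output(gens[-1]))

# Any word that fails to appear in one generation can never come back,
# so the vocabulary can only shrink over successive generations.
print(len(gens[0]), len(gens[-1]))
```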

    • snooggums@lemmy.world · ↑10 · 3 months ago

      When the first three results look like high schoolers copied with slight wording changes from the same source and they are all written in an extremely passive tone, my assumption is AI. Questions on things like cooking temps are the worst in my experience, and I assume that is something which is easy to automate.

      • cybervseas@lemmy.world · ↑5 · 3 months ago

        I was looking for tips on cutting acrylic sheets and everything I found seemed untrustworthy. Bad advice there could be hazardous.

        • NuXCOM_90Percent@lemmy.zip · ↑8 · 3 months ago

          That I feel is a case of people yearning for a day that never existed.

          Like, every GenX/Older Millennial who had a modem too early in life has stories about The Anarchist’s Cookbook. And the thing you learn REAL fast is that people would edit and share MUCH more dangerous versions (and considering what the source was to begin with…). I remember being part of the mod staff for a couple DC++ hubs where we would check versions and tell anyone with a(n overly) dangerous edit to delete that shit or be banned.

          Fast forward a couple decades and I needed to do a temporary repair on my car before I could get some “body” damage fixed (like two hours of effort but needed a part). Every attempt at searching, even on reddit, would talk about how you should use flexseal or the good duct tape or whatever. Only lucked out because I found one blog post that talked about how using any of those methods would guarantee you rip off the paint and drastically increase the cost of repairs and to instead use automotive masking tape unless you REALLY needed to drive in a heavy downpour.

          Same with doing house work. Youtube is immensely useful for that. But there is a reason so many “maker” channels have “React to life hack” videos. Because if you don’t know what you are doing? Some whackjob using clever editing to make it look like they built a duct adapter out of elmer’s glue and an actual repair video are indistinguishable (especially after youtube hid the dislikes…). And that can range from wasting your time to outright fire hazards or frozen pipes.

          The reality is that people have always been shits. And it REALLY fucking sucks when the LLMs designed to parse that, invariably, become shits too. But this has been a problem since people discovered SEO in the first place. Volume has gone up but the problem is not new.

          And… late stage capitalism. But I find myself REALLY liking Kagi (libertarian tech bro CEO aside…) simply because it reduces the impact of my search history on results while also letting me manually emphasize some sites or outright block any that piss me off. Still get the SEO blogspam but a lot less.

          • snooggums@lemmy.world · ↑1 · 3 months ago

            I remember The Anarchist’s Cookbook and knowing it was terrible at the time. One dumb friend was able to prove it while luckily avoiding serious injury!

            But what I also learned back then was critical thinking, as a lot of early websites were just as terrible; it was just a bit easier to tell they were terrible because they did not have any supporting information like references or examples. Today it is fairly easy to dismiss YouTube videos where the person is enthusiastic or doesn’t show the thing from start to finish. The best auto repair videos were some guy with a handheld camera (probably a phone) walking through the process and explaining what they were doing and why. If they struggle a bit, even better! My favorite woodworking channel explains everything in a calm and clear way, shows the process, and explains the ins and outs and why they might have done it differently in the past!

            The worst ones are someone enthusiastic showing five second clips and not mentioning anything about safety or how to know if you are doing it wrong. They are entertainment personalities and not a source of knowledge!

    • Sludgehammer@lemmy.world · ↑4 · edited · 3 months ago

      Yeah, when I was looking for information about Tears of the Kingdom, around 90% of my search results were AI slop. I think I was looking for info about how weapon durability and fusion worked, and I kept getting a badly reworded version of the explanation of fusion from the gameplay teaser.

      Actually… that reminded me of another TotK search I did, I was looking for where to farm some variety of lizalfos tails and kept getting AI articles that confused BotW locations with TotK. Amusingly, I eventually tried Google’s chatbot out of exasperation and it actually proved more accurate than my search results.

  • Asafum@feddit.nl · ↑45 ↓1 · 3 months ago

    Tie this in with the obvious oil pollution, and now Musk’s radio-transmission pollution… Fucking corporations get to pollute the world in every way imaginable to chase a buck, and we’re left having to cope with their waste…

    Fucking bullshit society we made for ourselves…

    • some_guy@lemmy.sdf.org (OP) · ↑4 · 3 months ago

      Fucking bullshit society we made for ourselves…

      Yes, but more accurately, those who came before us made for us. Not that we’re doing a bang-up job at reversing the trend.

  • Neuromancer49@midwest.social · ↑25 · 3 months ago

    Devastating loss for the science community. I used this database in my PhD, and didn’t expect it to shut down ever.

  • NuXCOM_90Percent@lemmy.zip · ↑23 ↓8 · 3 months ago

    Having read the article:

    I agree that the approach is no longer viable but I strongly disagree with the rationale. It boils down to three key aspects:

    1. Wordfreq works by scraping the “open web”. As a result, it is being inundated with massive amounts of gpt spam articles. This is problematic in that it is not “natural language” between people but… those articles never were. If you think anyone talks like the average SEO recipe blog then… more on that later.
    2. Sites are increasingly locking down access to scraping their text. This… I actually think is really good. I strongly dislike that the locking down means “so that only people who pay us can train off of you”, but I have always disliked the idea that people just train models off of social media with no consent whatsoever.
    3. Funding for NLP research is basically dead. No arguments there and I have similar rants from different perspectives. But… that is when you learn how to call what you do AI to get back your old funding.
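For what it’s worth, the scrape-and-count approach in point 1 can be sketched in a few lines (the page strings below are hypothetical stand-ins for crawled text, not wordfreq’s actual pipeline), and the sketch makes it obvious why spam volume skews the estimates:

```python
import re
from collections import Counter

# Hypothetical stand-ins for pages scraped from the "open web":
# one ordinary sentence and two SEO/LLM-flavored ones.
pages = [
    "The quick brown fox jumps over the lazy dog",
    "In this article we will delve into the best practices",
    "Delve into the ultimate guide as we delve deeper",
]

tokens = []
for page in pages:
    tokens.extend(re.findall(r"[a-z']+", page.lower()))

freq = Counter(tokens)
total = sum(freq.values())

# Relative frequencies are just counts over the whole crawl, so whatever
# dominates the crawl dominates the "language" the counts describe.
print(freq.most_common(2), total)
```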

    But the bigger part, which I strongly disagree with, is the idea that this is not the language of a post-2021 society, with points like:

    “Including this slop in the data skews the word frequencies.”

    But… look up “so-cal-ification” and how many people have some “valley girl” idioms and cadence to their normal speech because that is what we grew up on. Like, I say “like” a lot to chain thoughts together and am under no illusions that came from TV. Same with how you can generally spot someone who grew up reading SFF based on how they use some semi-obscure words and are almost guaranteed to mispronounce them.

    Because it is the same logic as “literally there is no word that means literally anymore”. Yeah, it is true. Yeah, it is annoying. But language evolves and it doesn’t always evolve in ways that make sense.

    Or, just look at how many people immediately started using the phrase “enshittification” every chance they got. Or who learned about the Ship of Theseus and apply it every chance they get.

    Like (there it is again!), a great example is cell phones. Reality TV popularized the idea of putting your phone on speaker, holding it in the palm of your hand, and talking into it. That is fucking obnoxious and has made the world a worse place. But part of that was necessity (in reality TV it is so that the audience gets both perspectives; in real life it is because of shit like the iPhone having a generation or two that would drop calls if you held it like a god damned phone) and then it is just that feedback loop. Cell phone companies design their phones to look good on TV when held that way, and people who watch TV start doing that because all the cool people do it. And so forth.

    AI has already begun to change language and it will continue to do so in the future. That is just reality and it is no different than radio and especially television leading to many regional dialects being outright wiped out.

    • conciselyverbose@sh.itjust.works · ↑26 · edited · 3 months ago

      The problem is that LLMs aren’t human speech and any dataset that includes them cannot be an accurate representation of human speech.

      It’s not “LLMs convinced humans to use ‘delve’ a lot”. It’s “this dataset is muddy as hell because a huge proportion of it is randomly generated noise”.

      • NuXCOM_90Percent@lemmy.zip · ↑3 ↓10 · 3 months ago

        What is “human speech”? Again, so many people (around the world) have picked up idioms and speaking cadences based on the media they consume. A great example is that two of my best friends are from the UK but have been in the US long enough that their families make fun of them. Yet their kid actually pronounces it “al-you-min-ee-uhm” even though they both say “al-ooh-min-um”. Why? Because he watches a cartoon where they pronounce it the British way.

        And I already referenced socal-ification which is heavily based on screenwriters and actors who live in LA. Again, do we not speak “human speech” because it was artificially influenced?

        Like, yeah, LLMs are “tainted” with the word “delve” (which I am pretty sure comes from youtube scripts anyway but…). So are people. There is a lot of value in researching the WHY a given word or idiom becomes so popular but, at the end of the day… people be saying “delve” a lot.

    • some_guy@lemmy.sdf.org (OP) · ↑3 · 3 months ago

      Cell phone companies design their phones to look good on TV when held that way and people who watch TV start doing that because all the cool people do it. And so forth.

      I strongly disagree with this. They’re designed to look good no matter what. TV is an afterthought in the design of smartphones. But what do I know… I only worked on one of those projects.

      Language evolves, yes, and here’s another chance to recommend an incredible book for language nerds: Highly Irregular: Why Tough, Through, and Dough Don’t Rhyme and Other Oddities of the English Language

      But “enshittification” refers to a very specific cultural trend. The Ship of Theseus is someone trying to sound smart. These are not the same thing, even if some asshole tries to sound smart talking about the former. Others who are industry enthusiasts use it as shorthand for a very specific larger conversation.

  • AbouBenAdhem@lemmy.world · ↑7 · 3 months ago

    AI language patterns are polluting the data, but are they influencing language usage by humans as well? We should delve into that.

  • xia@lemmy.sdf.org · ↑4 · 3 months ago

    Taking a step back, I wonder… we are reading this stuff now, so it affects us too. What if we have already stepped into a linguistic death spiral, a telephone game where each generation gets rehashed garbage from the last?

  • Optional@lemmy.world · ↑2 ↓5 · 3 months ago

    Cripes what a dumb way to do it.

    “Study of How Dogs Interact With Cheese Called Off After Dogs Eat the Cheese”