- cross-posted to:
- techtakes@awful.systems
cross-posted from: https://lemmy.world/post/23009603
This is horrifying, but also sort of expected. Link to the full research paper:
That thumbnail makes me not want to watch the video.
You’re not missing anything. In the first minute: “Is ChatGPT AGI? It said it would copy itself to another server if it got shut down!”
I linked the PDF too, so you can read it. I know the YouTube title is very clickbait, but it is truly worth the watch IMHO.
More no-clicky
I don’t understand what you mean, but no worries. The sources are there to consume at will. I am not the author of the material; I just came across it and wanted to share. Anyways.
Not really caught. The devs intentionally connected it to specific systems (like other servers), gave it vague instructions that amounted to “ensure you achieve your goal in the long term at all costs,” and then let it do its thing.
It’s not like it did something it wasn’t instructed to do; it didn’t perform some menial task and then also invent its own secret agenda on the side when nobody was looking.
It says the frontier models weren’t changed, though… Do you think this conclusion at the end of the introduction is incorrect?
Together, our findings demonstrate that frontier models now possess capabilities for basic in-context scheming, making the potential of AI agents to engage in scheming behavior a concrete rather than theoretical concern.
I never said anything of the kind. I just pointed out that it didn’t do anything it wasn’t instructed to do. They gave it intentionally vague instructions, and it did as it was told. That it did so in a novel way is interesting, but hardly paradigm shattering.
However, the idea that it “schemed” is anthropomorphization, and I think that their use of the term is intentional to get rubes to think more highly of it (as near to AGI) than they should.
Soon we will not talk about “weapons of mass destruction” anymore, but about “weapons of truth destruction”.
They are worse.
Whenever this topic comes up, I’d like to refer to Robert Miles and his continuing excellent work on the subject.
I did say at one point that self-conscious AI had a slight chance of actually ending this loop by sabotaging itself / the company that made it. But a slight chance is too thin to hope for.
TFW an LLM might be better at resolving cognitive dissonance than its creators and stakeholders.