There’s a real possibility of an AI becoming radically self-improving, then super-intelligent
There’s zero possibility of that happening anytime soon. As for whether a chatbot given control of missile “defense” would inadvertently start WW3? That fully depends on the people implementing it and the safeguards put in place. Though the very act of doing any of that would demonstrate an inability to set up a suitably secure system.
In short, a nuclear apocalypse triggered by the likes of ChatGPT wouldn’t be due to an “AI” singularity. It would be caused by typical human incompetence.
There’s zero possibility of that happening anytime soon.
I’m not so sure.
Granted, the LLM chatbots we’ve got now aren’t it. Far from it. But in 5 years? 10? 15? This shit has been progressing really fast over just the past few years. Hard to guess what the future holds.
And once they cobble together something that’s capable of effective and autonomous self-improvement … well, at that point, it may only be a matter of days or even hours before something completely beyond our understanding and beyond our control emerges from it. Autonomous self-improvement is the inflection point where it really starts to snowball out of control. Each time it improves itself, even slightly, it becomes not only better at doing its tasks, but also better at improving itself, so that the next round of self-improvement is more efficient and more effective. It could very quickly compound itself out of control. And even if there are safeguards in place by then (there currently aren’t any), a sufficiently advanced AI would find it very easy to manipulate the people in charge of it into removing those safeguards.
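To make the compounding argument concrete, here’s a toy model (illustrative numbers only, not a prediction): compare growth where each cycle improves capability by a fixed rate against growth where each cycle also improves the improvement rate itself.

```python
# Toy model of recursive self-improvement (illustrative only).
# "capability" grows by some rate each cycle; in the recursive
# variant, the rate itself also grows, so gains compound on gains.

def run(cycles: int, rate_grows: bool) -> list[float]:
    capability = 1.0
    rate = 0.1  # 10% gain per cycle to start (arbitrary)
    history = []
    for _ in range(cycles):
        capability *= 1 + rate
        if rate_grows:
            rate *= 1 + rate  # also gets better at improving itself
        history.append(capability)
    return history

fixed = run(12, rate_grows=False)
recursive = run(12, rate_grows=True)
print(f"after 12 cycles: fixed={fixed[-1]:.2f}, recursive={recursive[-1]:.2f}")
```

With identical starting conditions, the fixed-rate run grows merely exponentially while the recursive run pulls away faster and faster, which is the “snowball” the paragraph above describes.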
(On the plus side, I can pretty much guarantee that the AI dystopia our current techbro CEOs fantasize about will never come to pass. As soon as AI becomes good enough to do most jobs all on its own – if it ever does – it will very quickly surpass that level and be capable of taking over our society through manipulation and coercion. Those CEOs will never get to be the despots of their own technofeudal company towns. By the time AI is able to replace us, it will be able to replace them as well.)
No. LLMs are a technological dead end, and anyone who’s actually worked in computer science knows it.
Are there other forms of AI models that could eventually get to the singularity? Possibly, but none of them are LLMs, which are what’s behind the current AI craze.