I'm a lot more scared of an AI destroying humanity, like, every single person, than I am of any government or anything like that. More so than climate change. I'm not saying the people using it aren't dangerous -- but I would choose a totalitarian regime over a rogue AI any day of the week.
It wouldn't be HAL 9000 or terminators. It would be an AI deciding that it needs to turn every bit of available matter on earth into computational power in order to cure cancer, or figure out a way to stop humans from fighting each other, or to maximize the profit of GOOG, and being so good at planning and deceiving us that by the time we figured out what it was doing, it was way, way too late.
I'm concerned about climate change, but I am a lot more hopeful about that than I am about AI. Climate change -- we have time, we are making changes, and it's not going to kill all of humanity. A smart enough AI might effectively end us the moment we switch it on.
> It would be an AI deciding that it needs to turn every bit of available matter on earth into computational power in order to cure cancer, or figure out a way to stop humans from fighting each other, or to maximize the profit of GOOG, and being so good at planning and deceiving us that by the time we figured out what it was doing, it was way, way too late.
That's how AIs worked in outdated science fiction.
Current ones don't have a set of mathematical rules that they follow to the letter; they try to model what a human would write by statistical means, with far less logical capability.
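To be concrete about what "by statistical means" looks like: an LLM just keeps picking a likely next token given the text so far. Here's a rough toy sketch in Python (my own illustration with made-up probabilities, not any real model's code):

    import random

    # Toy stand-in for an LLM: a function from the context so far to a
    # probability distribution over the next token. A real model learns
    # these probabilities from data; here they are hard-coded.
    def next_token_probs(context):
        if context and context[-1] == "cure":
            return {"cancer": 0.9, "boredom": 0.1}
        return {"cure": 0.5, "profit": 0.3, "peace": 0.2}

    def sample(probs):
        # Draw one token according to its probability.
        r, total = random.random(), 0.0
        for token, p in probs.items():
            total += p
            if r <= total:
                return token
        return token  # fallback for float rounding

    context = ["we", "want", "to"]
    for _ in range(2):
        context.append(sample(next_token_probs(context)))
    print(" ".join(context))  # e.g. "we want to cure cancer"

There's no explicit goal or rule being followed anywhere in that loop; it's just sampling from learned probabilities.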
You can't think of any ways of taking advanced LLMs and using them as core components in a system that could carry out actions in the world? I bet you can come up with something.
Work out the details on exactly how said end-of-the-world would occur.
Note that we already have “AI to end cities” - they’re just sitting, turned off, waiting for the code and button press in silos and submarines throughout the world.
The danger is from something vastly more intelligent than humans, and with a mindset that's incomprehensibly alien. No human is capable of working out the details. That doesn't mean the risk doesn't exist. Failure to understand chemistry does not make ants immune to insecticides. The only thing we can assume about a super-intelligence is that it will be highly capable of achieving its goals. There is no reason to assume those goals will be compatible with human existence.