Wouldn't this put AI doomerism in the same category as nuclear war doomerism? E.g. a thing that many experts think logically could happen and would be very bad but hasn't happened yet?
I'm unaware of any empirical demonstration of the feasibility of the singularity hypothesis. Annihilation by nuclear or biological warfare, on the other hand, we have ample empirical pretext for.
We have ample empirical pretext to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around the use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, and so on. Those are demonstrably real, but also obviously very different in kind from generalized AI doomsday.
That’s an excellent example of why AI doomerism is bogus in a way that nuclear war fears weren’t.
Nuclear war had a very simple mechanistic concept behind it.
Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).
Nuclear war experts were actually experts in a system whose outcome you could model to a very high degree of accuracy.
There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.
You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damage and injury.
Yes, it’s plausible a lone genius could manufacture something horrible in their garage and let rip. But this is in the domain of ’fictional what-ifs’.
Nobody factors in the fact that, in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a mega-vaccine and just print it out at your local health center's biofab. Megabomb? Possible even today, but that's why stuff like uranium is tightly controlled. Etc., etc.). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.
AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns up and solid mechanistic principles can be referred to, then engage the policy wonks.
> There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.
You don't seem actually familiar with doomer talking points. The classic metaphor is that you might not be able to say how, specifically, Magnus Carlsen will beat you at chess, even if you start the game with him down a pawn, while nonetheless knowing he probably will. Predicting the exact path isn't necessary to predict the outcome.
The main way doomers think ASI might kill everyone is via the medium of communicating with people and convincing them to do things, mostly seemingly harmless or sensible things.
It's also worth noting that doomers are not (normally) concerned about LLMs (at least, not any in the pipeline); they're concerned about:
* the fact that we don't know how to ensure any intelligence we construct actually shares our goals in a manner that will persist outside the training domain (this also applies to humans, funnily enough: you can try instilling values into them with school or parenting, but despite them sharing our mind design they still do unintended things...). And indeed, optimization processes (such as evolution) have produced optimization processes (such as human cultures) that don't share the original one's "goals" (hence the invention of contraception and almost every developed country having below-replacement fertility).
* the fact that recent history has had the smartest creatures (humans) taking almost complete control of the biosphere, with the less intelligent creatures living or dying at the whims of the smarter ones.