Hacker News

> Why aren't there autonomous vehicles already?

There are a lot of reasons. Not in order of importance, just in the order they come to mind:

1. While Real World Interactions (robotics, autonomous driving, factory automation,…) are somewhat parallelizable with Purely Digital AGI (games, text, videos, programming,…), it is much easier to do AGI first and Real World Interactions second. This is why you see the Big Brains and the Big Money going to Anthropic/DeepMind/OpenAI. If you have AGI, you have Waymo. So predictably, OpenAI/DeepMind/Anthropic will go faster than Waymo.

2. The source of the difficulty gap is easy to understand. It is hard to parallelize and scale experiments in the real world. It is trivial in the digital world; it just takes More Money. AlphaZero is an AI engine that played tens of millions of games against itself, eventually reaching superhuman capability in chess and Go. Good luck doing that with robotics/cars.
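To make the "trivially parallel" claim concrete, here is a toy sketch (not AlphaZero's actual code; the game is a random stand-in): simulated self-play is embarrassingly parallel, so scaling it up is mostly a question of adding workers or machines.

```python
# Toy illustration: digital self-play parallelizes trivially.
# `play_one_game` is a hypothetical stand-in for a real game engine.
import random
from concurrent.futures import ThreadPoolExecutor

def play_one_game(seed: int) -> int:
    """Simulate one self-play game; returns a toy result (+1, -1, or 0)."""
    rng = random.Random(seed)
    return rng.choice([1, -1, 0])

def self_play(num_games: int, workers: int = 8) -> list[int]:
    # To play more games, add workers (or machines): no cars, no roads,
    # no physical time. This is the scaling a robot fleet cannot buy.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(play_one_game, range(num_games)))
```

The real-world equivalent of `workers=8` is eight physical cars driving in real time, which is exactly where the cost asymmetry comes from.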

3. "I learned to drive faster": It is unknown how many bits of priors evolution has put in the human brain (we don’t even know how genes encode priors, which is a fascinating question). It is certainly not zero. Evolution did the hard work of parallelizing/scaling the "learning to interact with the world" problem before you were even born. Hell, most of the work on this problem was probably already completed by the start of the mammalian line. No wonder you find this easy and Waymo finds it hard. It is not that the problem is inherently easy and "how bad must AIs be to fail at this simple problem?" It is that you are custom-built for it.

4. We have higher standards for AI than for humans, and regulation reflects that.

> con: losing your job. pro: the best health care you can imagine. the best education for your kids. etc etc

The con is that humanity is going to lose pretty much any influence on the future. "Losing your job" is a pretty bad way of picturing it.

It is a frustrating topic. Let me try to explain the stakes to you in a few words, starting with this image:

https://en.wikipedia.org/wiki/German_revolution_of_1918%E2%8...

It’s a communist militia in Berlin in the early stages of the Weimar Republic. The specifics don’t matter; you don’t have to judge who was right or wrong. I could have taken a picture of the proto-Nazis, or the SPD, or anyone really. The story is the same.

Why are those humans here? In the cold, in a potentially dangerous situation? What’s going on in their heads?

"This is an important moment. I have to be the Best Person I can be, take the Best Actions I can take. If I am right, and I do this right, my actions will help build a better future. They will help my family. My neighbor. My community. The World. My actions and my choices here Matter. I Matter."

Those two words, "I Matter", are, I believe, a fundamental requirement of what it is to be human. To my great surprise, there are people who actively disagree with that: "Mattering does not matter very much." Maybe you are one of those, I don’t know, I don’t know you. Those people should indeed accept and welcome the AGI. No human will matter anymore, but who cares? Great healthcare, great education, great entertainment.

But AGI being way better in all cognitive domains (Business/Economy, Policy/Politics/Governance, Science, Arts,…) means exactly this: humans will no longer have any place in those domains, and this "I Matter" feeling will be lost forever.

EDIT: I forgot a point:

> and I'm not sure 'trend predictions' work that well

It’s not trend prediction. It’s engineering. Roadblocks have been identified. Solutions to those roadblocks have been identified. Now we are just in the "implement those solutions" phase. Whether those solutions are sufficient to go all the way to AGI is a bit more speculative, but the odds are clearly on the "yes" side.



> Those two words, "I Matter", are, I believe, a fundamental requirement of what it is to be human. To my great surprise, there are people who actively disagree with that: "Mattering does not matter very much." Maybe you are one of those, I don’t know, I don’t know you. Those people should indeed accept and welcome the AGI. No human will matter anymore, but who cares? Great healthcare, great education, great entertainment.

Yes, there will be a crisis of meaning; in fact, in most secular societies there already is one (how much meaning can you derive from preparing a balance sheet or handling customer support tickets?). Some societies will deal much better with unemployment, mostly religious societies. If we can create societies of abundance (where you get most services pretty much for free due to AI), I think we will solve the crisis of meaning with family, friends, hobbies, and really good next-generation TV and computer games.

In the grand scheme of things, most of us understand we don't matter at all (at least I don't think I matter in any significant way, nor does humanity as a whole, imo), but we do need a reason to get up in the morning, somewhere to go, and a way to interact with society. We matter to our families and close friends, if that makes you feel better.


> We have higher standards for AI than humans, and regulation reflect that.

This is the most likely reason that we won’t see widespread adoption of autonomous vehicles in the next 30 years.

Around 120 people die in automobile accidents every day, and people shrug. But if 12 people died in a year in accidents caused by autonomous vehicles (yes, I changed units), they would quickly be banned.
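The unit change above hides how lopsided the comparison is. Working through the arithmetic (using the comment's own illustrative figures, not an official statistic):

```python
# Rough arithmetic behind the double standard described above.
human_deaths_per_day = 120
human_deaths_per_year = human_deaths_per_day * 365  # 43,800 per year

av_deaths_per_year = 12  # the hypothetical ban-triggering toll

# Tolerance gap: how many times deadlier human drivers are allowed
# to be before anyone shrugs vs. what would get AVs banned.
gap = human_deaths_per_year / av_deaths_per_year
print(gap)  # 3650.0
```

In other words, under these figures the bar for autonomous vehicles is roughly 3,650 times stricter than the one human drivers are held to (ignoring differences in miles driven, which would refine but not erase the gap).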




