Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)
But it should drive cars? Operate strike drones? Manage infrastructure like power grids and the water supply? Forecast tsunamis?
Too little too late, Sam. 
Yes on everything but drone strikes.
A computer would be better than a human in those scenarios. Especially driving cars, which humans are absolutely awful at.
So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver, who’s now suddenly in charge of a life-and-death situation?
I’m not sure why you think that’s how they would work.
Well, it’s simple: who do you think should make the life-or-death decision?
The computer, of course.
A properly designed autonomous vehicle would be polling data from hundreds of sensors hundreds or thousands of times per second. A human’s reaction time is around 0.2 seconds, which is a hell of a long time in a crash scenario.
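To put a rough number on that 0.2-second gap, here’s a quick stopping-distance comparison. The speed, braking deceleration, and the ~5 ms machine latency are illustrative assumptions, not measured figures:

```python
# Rough stopping-distance comparison: human driver vs. autonomous system.
# All numbers below are illustrative assumptions for the sake of argument.

def stopping_distance(speed_ms, reaction_s, decel_ms2):
    """Distance covered during the reaction delay, plus braking distance."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

speed = 50 / 3.6   # 50 km/h in m/s (~13.9 m/s) -- assumed city speed
decel = 8.0        # hard braking, m/s^2 -- assumed

human = stopping_distance(speed, 0.2, decel)      # 0.2 s human reaction time
computer = stopping_distance(speed, 0.005, decel) # ~5 ms assumed sensing/decision latency

print(f"human:      {human:.1f} m")      # ~14.8 m
print(f"computer:   {computer:.1f} m")   # ~12.1 m
print(f"difference: {human - computer:.1f} m")
```

Under these assumptions the car that reacts in milliseconds stops roughly 2.7 m shorter at city speed, which can be the entire difference between a near miss and an impact.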
It has a far better chance of a ‘life’ outcome than a human who’s either unaware of the impending crash, or in fight-or-flight mode and making (likely wrong) reactions on instinct.
Again, humans are absolutely terrible at operating giant hunks of metal that go fast. If every car on the road were autonomous, crashes would be extremely rare.
Are there any pedestrians in your perfectly flowing grid?
Again, a computer can react faster than a human, which means the car can detect a pedestrian and start braking before a human driver even notices them.