Articles like this, and frankly even the statistics on the safety of autonomous vehicles, are a red herring distracting us from where the focus of this technology's development needs to be. Consider: if the same life-saving autopilot technology Elon Musk is pushing were rolled out as a backup rather than a primary control, and turned on by default on every Tesla, far more lives would be saved than with either full-time autopilot or a full-time human in control.
That's right: keep the driver engaged. We know how the EULA plays out, whereby people swear they'll pay attention when autopilot is enabled. They won't; they can't; it's not how the human brain works. Rather, autopilot should function as a backup to the human driver in the near term while the technology matures. If it senses a pending collision, it kicks in. Simple. A computer won't disengage just because it's not in control; it will always be vigilant as a backup to the human. But a human will never be a reliable backup to a computer. We aren't built that way.
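To make that concrete, here's a minimal sketch of the arbitration loop in Python. Every name in it (sensors.time_to_collision, vehicle.apply_emergency_braking, and so on) is a hypothetical placeholder, not any vendor's actual API, and a real system would need far more than a single time-to-collision threshold:

    # Minimal sketch, assuming a hypothetical sensor/vehicle interface:
    # the human is always the primary driver, and the computer acts only
    # when a collision looks imminent.

    TTC_THRESHOLD_S = 1.5  # intervene if a collision is predicted this close

    def backup_control_loop(sensors, vehicle):
        while vehicle.is_running():
            ttc = sensors.time_to_collision()  # seconds, or None if the path is clear
            if ttc is not None and ttc < TTC_THRESHOLD_S:
                # The backup kicks in only for a pending collision.
                vehicle.apply_emergency_braking()
                vehicle.sound_alarm()
            # Otherwise the computer takes no control action at all;
            # it just keeps watching.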
But you might say: "Oh, Mr. Smarty Pants, if the driver knows there's an autopilot backup, wouldn't the driver just let it take over?" Well, there's an easy solution to that. Just like a computer can be taught to drive a car, it can also be taught to sense when the driver isn't paying attention. It can kick off the radio, turn off the AC, sound an alarm, or even just pull the car safely to the side of the road.
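As a rough illustration of that escalation ladder (the method names and timing thresholds below are invented for the example, not any real vehicle's API):

    # Hypothetical escalation, in the order named above:
    # radio, then AC, then alarm, then pull over.

    def enforce_attention(vehicle, seconds_inattentive):
        if seconds_inattentive > 3:
            vehicle.cut_radio()
        if seconds_inattentive > 5:
            vehicle.cut_climate_control()
        if seconds_inattentive > 8:
            vehicle.sound_alarm()
        if seconds_inattentive > 12:
            vehicle.pull_over_safely()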
The little experiment currently being played with everyone's lives must be better managed. As a security researcher who's uncovered thousands of bugs over the last two decades, I can say without question that the code Tesla or anyone else in this space is producing is not of sufficient quality for a life-critical system. Where are the independent lab certifications? Where's the university research? They're not there; it's too early in the game, and that's why people need to set their egos aside and do the right thing. Computers, at this stage of the game, are for backup, not primary control.
Why not just do as you suggest ("kick off the radio, turn off the AC, sound an alarm, or even just pull the car safely to the side of the road") if the driver isn't paying attention?
You can still do all that you suggest while providing semi-autonomous driving capabilities.