Self-driving cars, once the exclusive domain of science fiction, are now very much a reality. Foremost among their predicted benefits is a much higher level of road safety. Unfortunately, that promise hasn't panned out yet. Apparently, cars drive themselves a bit too well.
It’s a hilarious problem: when cars perform their tasks flawlessly, human drivers don’t know how to deal with it. For manufacturers, this creates a strange dilemma. Should they program their vehicles to break traffic laws under specific circumstances?
Companies like Uber have also been investing in the development and adoption of driverless vehicles, employing hackers and robotics specialists to try to solve the complex situational decision-making that automated vehicles require.
Raj Rajkumar of Pittsburgh’s General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab describes the issue as a subject of “constant debate.” When you program a car to obey the speed limit, you’re telling it to do something that most people don’t.
In a largely driverless future, the precision of a car’s artificial intelligence will be a boon, and IHS Automotive predicts that future may arrive much sooner than we think. But until 2050, car manufacturers and automated driving services may be forced to give our robotic counterparts a bit of a lead foot.
Follow Nate Church @Get2Church on Twitter for the latest news in gaming and technology, and snarky opinions on both.