Neural Networks Give Self-Driving Cars Their Own Brains | Koeppel Direct

The future of self-driving cars may be in the hands, and minds, of machines.

Although the concept of the self-driving car has been around for a while, it wasn’t until recently that machine learning was applied to the problem. What took months to teach a car in the past now only takes days, bringing the self-driving car closer and closer to commercial production for the auto industry.

Machine Learning, Blessing or Curse?

Google first employed machine learning on its self-driving car in order to teach the vehicle how to properly identify pedestrians, slowly incorporating the tech into different aspects of the software.

However, newer self-driving car startups like Google veteran Chris Urmson’s Aurora Innovation have been able to catch up quickly to companies long involved in this research because they don’t have to go back and retrofit machine learning into older, hand-coded software. Instead, their cars’ programming is built with machine learning from the ground up, allowing for possibilities that are both impressive and intimidating.

Machine learning works by feeding an immense amount of data into an algorithm, which ultimately informs the decisions the software will make. The problem with this approach is that the calculations inside these neural networks are so complex that the software can sometimes behave unpredictably, and untangling why is far harder than explaining a human driver’s irrational behavior.
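The feed-data-in, get-decisions-out loop described above can be sketched with a toy example. Everything here is illustrative, not from any real self-driving stack: a single-neuron perceptron learns a hypothetical "brake or don't brake" rule from a handful of made-up (obstacle proximity, closing speed) samples, then applies it to a new input.

```python
# Minimal sketch of training on labeled data, then letting the learned
# weights drive decisions. All features, labels, and numbers are invented.

def train(samples, labels, epochs=50, lr=0.1):
    """Perceptron training: adjust weights toward the labeled examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def decide(w, b, x):
    """The 'decision' the software makes, determined entirely by the data it saw."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Features: (obstacle proximity, closing speed); label 1 = brake.
data = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]
w, b = train(data, labels)
print(decide(w, b, (0.85, 0.9)))  # a near, fast-closing obstacle
```

The unpredictability the article describes follows from this structure: the behavior lives in learned numbers like `w` and `b`, not in human-readable rules, and a real network has millions of them.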

Who’s at Fault in a Self-Driving Accident?

One of the questions raised by the Virginia Department of Transportation’s Transportation Research Council is this: in an accident between a car controlled by a neural network and one driven by, say, a human, can you explain why the car made the decision that led to the crash, and ensure it won’t happen again? It’s a key question on everyone’s mind.

Without a way to understand what’s really going on inside that neural network, something completely unforeseen could occur. Aurora’s Drew Bagnell, however, has one simple answer.

“How do you get confidence that something works? You test it.”

And, as it turns out, you also hire a few specialists who spend their days trying to trick the algorithm into doing strange things so they can figure out how to prevent them in the future.
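One common form this "trick the algorithm" testing takes is perturbation probing: nudge an input slightly and flag any nudge that flips the model’s decision, since those are the fragile spots worth investigating. The sketch below is a hypothetical illustration of that idea, not Aurora’s method; the stand-in model and its threshold are invented.

```python
# Illustrative probe for decision flips near a given input.
# model() is a stand-in for a trained network; its rule is made up.
import itertools

def model(x):
    # Pretend decision: brake (1) iff total "risk" exceeds a threshold.
    return 1 if x[0] + x[1] > 1.0 else 0

def fuzz(model, x, step=0.05):
    """Try small perturbations of x; return those that flip the decision."""
    base = model(x)
    flips = []
    for dx, dy in itertools.product((-step, 0, step), repeat=2):
        probe = (x[0] + dx, x[1] + dy)
        if model(probe) != base:
            flips.append(probe)
    return flips

# An input sitting just under the decision boundary: several tiny
# perturbations flip it, which a test team would then investigate.
print(fuzz(model, (0.48, 0.48)))
```

Inputs that flip under tiny perturbations are exactly the "strange things" such specialists hunt for, so the failure can be reproduced and trained away before it shows up on the road.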
