Autonomous cars, like this one from Google, will rely on deep learning to teach themselves how to drive (Google)

Teaching an autonomous car to drive with a set of instructions and millions of lines of code will only get it so far. Eventually we will have to remove the stabilisers and let these cars learn for themselves through artificial intelligence and deep learning.

Deep learning is where self-driving cars learn the ways of the road, and how humans interact from behind the wheel, without the computer being given a series of pre-written demands and rules to stick to. It will also take autonomous systems beyond their current state, where a lack of visual guidance from road markings causes problems.

One company heavily involved in this is Nvidia. Better known as a computer graphics card manufacturer, Nvidia has an autonomous vehicle division that works with more than 80 manufacturers and has successfully taught a car to drive itself with deep learning technology.

Instead of following written instructions, the car learned by analysing footage taken from a front-facing camera which was synced with the angle of the steering wheel. It wasn't just taught how to drive; it was taught how to learn to drive.
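Nvidia's published description of this approach is an end-to-end convolutional network that maps raw camera pixels straight to a steering command. The sketch below, in PyTorch, is a minimal illustration of that idea rather than Nvidia's actual code; the layer sizes, the names and the 66x200 frame size are assumptions loosely borrowed from its publicly described PilotNet work.

```python
# Minimal behavioural-cloning sketch (illustrative, not Nvidia's code):
# pair each front-camera frame with the recorded steering angle and let
# a convolutional network learn the mapping end to end.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Regresses a steering angle from a single front-camera frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 3 * 20, 100), nn.ReLU(),  # 64x3x20 is the conv output for a 66x200 input
            nn.Linear(100, 1),                       # single output: the steering angle
        )

    def forward(self, frames):
        return self.head(self.features(frames))

model = SteeringNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on a dummy batch: in reality each frame/angle pair
# would come from logged human driving.
frames = torch.randn(8, 3, 66, 200)   # 8 RGB frames, 66x200 pixels
angles = torch.randn(8, 1)            # recorded steering-wheel angles
loss = loss_fn(model(frames), angles)
loss.backward()
optimiser.step()
```

There is no hand-written rule about lanes or corners anywhere in this setup; whatever the network learns about the road, it learns from the correlation between pixels and the human driver's steering.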

Danny Shapiro, senior director of Nvidia's automotive division, explained to IBTimes UK: "We're using deep learning to help a car understand and recognise specific objects. An end-to-end deep learning system where we taught the car to learn just by driving. We didn't have to train it. This is a huge breakthrough and has got the attention of a lot of folks."

Shapiro admits the system "isn't ready for prime time just yet", but says the concept has been proven: "through very minimal amounts of training, the car was able to understand its environment."

Tesla's Autopilot feature is autonomous, but relies on reading road markings to work properly (Reuters)

Such training will go a long way to overcoming the issues autonomous cars currently face. Although it is undeniably impressive, Tesla's self-driving Autopilot feature relies heavily on lane markings painted on the road and, as a backup, the vehicle in front. Without these, the car hands control back to the driver with just a second's warning.

"If you get a car which has been trained on lane markings," Shapiro continues, "and it gets to a piece of new road where there aren't any markings, the system would fail. So instead we trained it on a whole range of different roads and weather conditions and the result is it didn't rely on any one piece of critical information...that's just the start of a whole new way of approaching autonomous driving."

Artificial intelligence becomes critical when every potential outcome cannot be written into an autonomous vehicle's code. In obscure situations the computer must think for itself and draw on previous experience to make the right decision.

Uber's self-driving Ford Fusion is stacked with cameras and sensors, but a driver is present during testing (Uber)

While the technology is already proving itself, a significant hurdle remains in convincing the general public that autonomous cars are safe. And safety isn't necessarily synonymous with accuracy and efficiency. As Shapiro points out: "A computer-driven car can be extremely precise, but no human is going to feel comfortable in that if it's changing lanes and leaving centimetres as a margin for error. Even though the car knows exactly where the other vehicle is... nobody in or around that car will feel safe."
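The gap Shapiro describes between precision and perceived safety is something a planner can encode directly: a manoeuvre is gated not on the smallest gap the controller could physically manage, but on a much larger comfort margin. The snippet below is a deliberately simplified, hypothetical check, and the threshold values are made up for illustration.

```python
# Hypothetical comfort gate for a lane change. The car may "know exactly
# where the other vehicle is", but the planner still rejects manoeuvres
# a passenger would find alarming.
PHYSICAL_MIN_GAP_M = 0.05   # what a centimetre-precise controller could get away with
COMFORT_MIN_GAP_M = 1.50    # what a human passenger will tolerate

def lane_change_allowed(lateral_gap_m: float) -> bool:
    """Allow the manoeuvre only when it is both feasible and comfortable."""
    return lateral_gap_m >= max(PHYSICAL_MIN_GAP_M, COMFORT_MIN_GAP_M)

print(lane_change_allowed(0.10))  # False: precise, but nobody feels safe
print(lane_change_allowed(2.00))  # True
```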

But the car can't be too soft in its approach to other road users. If everyone knows an autonomous car will always give way and always seek safety over making progress, it will likely become the target of aggressive drivers and pranksters who know that pulling out in front of it, or walking in its path, will result in it stopping and giving way every time.

Volvo, which plans to test self-driving cars in London from 2017, has already been testing its autonomous vehicles in Sweden (Reuters)

Again, this is where deep learning can help produce autonomous cars that drive like humans, but make far fewer mistakes and can't be tired, drunk or distracted.

Finally, Shapiro addressed the trolley problem which, applied to autonomous cars, describes a no-win situation where the car either continues on course and kills five people in the road, or actively decides to swerve, killing one person on the pavement. Some believe this situation will require software to be written telling the car to kill.

That may happen, but Shapiro wants us to see the bigger picture after a self-driving car has an accident. "The expectation is that these cars will be 100% perfect or foolproof. But I think eventually there's going to be a bad outcome and it'll get a lot of publicity. But as a society we'll have to say: we have eliminated tens of thousands of fatalities worldwide; are we going to tolerate a couple of incidents which were really unavoidable?

"What's more significant is that we're going to be reducing those dilemmas at an astounding rate."