Concepedia

Publication | Open Access

Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car

Citations: 211
References: 0
Year: 2017

Abstract

As part of a complete software stack for autonomous driving, NVIDIA has created a neural-network-based system, known as PilotNet, which outputs steering angles given images of the road ahead. PilotNet is trained using road images paired with the steering angles generated by a human driving a data-collection car. It derives the necessary domain knowledge by observing human drivers. This eliminates the need for human engineers to anticipate what is important in an image and foresee all the necessary rules for safe driving. Road tests demonstrated that PilotNet can successfully perform lane keeping in a wide variety of driving conditions, regardless of whether lane markings are present or not.

The goal of the work described here is to explain what PilotNet learns and how it makes its decisions. To this end we developed a method for determining which elements in the road image most influence PilotNet's steering decision. Results show that PilotNet indeed learns to recognize relevant objects on the road.

In addition to learning the obvious features such as lane markings, edges of roads, and other cars, PilotNet learns more subtle features that would be hard to anticipate and program by engineers, for example, bushes lining the edge of the road and atypical vehicle classes.
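The abstract does not specify the visualization method itself (the paper describes its own technique based on the network's internal feature maps). As a rough illustration of the general idea of locating the image regions that most influence a steering output, the sketch below shows a simple occlusion-sensitivity approach: hide each image region in turn and measure how much the model's predicted steering angle changes. The `toy_model` here is a hypothetical stand-in, not PilotNet.

```python
import numpy as np

def occlusion_saliency(model, image, patch=8, stride=8):
    """Slide an occluding patch over the image and record how much the
    model's scalar output (e.g. a steering angle) changes when each
    region is hidden. Regions whose occlusion changes the output most
    are the ones the model relies on."""
    base = model(image)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            # Replace the region with the image mean (a neutral value).
            occluded[y:y + patch, x:x + patch] = image.mean()
            delta = abs(model(occluded) - base)
            saliency[y:y + patch, x:x + patch] += delta
            counts[y:y + patch, x:x + patch] += 1
    return saliency / np.maximum(counts, 1)

# Hypothetical stand-in "model": steering output proportional to the
# brightness of the left half of the frame (for illustration only).
def toy_model(img):
    return float(img[:, : img.shape[1] // 2].mean())

img = np.zeros((32, 32))
img[:, :16] = 1.0  # bright left half drives the toy model's output
sal = occlusion_saliency(toy_model, img)
# Nearly all of the saliency mass falls on the left half, the region
# the toy model actually uses.
```

This is only a sketch under the stated assumptions; it is not the attribution method developed in the paper, which works directly with PilotNet's learned feature maps rather than input occlusion.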