An Intelligent Autopilot System that Learns to Drive – AI Project

For the past decade, self-driving algorithms have drawn growing research effort from both industry and academia, much of it built around low-cost vehicle-mounted cameras. Several levels of driving automation have been defined. At Level 0 there is no automation: the car is controlled entirely by a human driver. Levels 1 and 2 are driver-assistance systems in which a human still drives, but a few functions, such as braking and stability control, are automated. Level 3 vehicles are autonomous but require a human driver to monitor the system and intervene when needed. Level 4 vehicles are fully autonomous within a restricted operating environment, i.e. not all driving situations are covered. Level 5 vehicles are fully autonomous in all conditions, with performance at least equal to that of a human driver. We are still far from Level 5 self-driving vehicles, but Level 3/4 vehicles should become a reality in the near future.

Research breakthroughs in machine learning and computer vision, together with low-cost vehicle-mounted cameras that can either independently deliver actionable information or supplement other sensors, are the key reasons for the technical achievements in this field. Many vision-based driver-assistance features are already widely supported in modern cars, including pedestrian/bicycle detection, collision avoidance based on the distance to the vehicle ahead, and lane-departure warning. In this project, however, we focus on "An Intelligent Autopilot System that Learns to Drive", i.e. autonomous steering, a comparatively unexplored task at the intersection of machine learning and computer vision.

An autonomous car, also known as a robotic car, a driverless car, or a self-driving car, is capable of sensing its environment and navigating without human input. It perceives its surroundings by combining a variety of techniques, including laser light (LiDAR), radar, odometry, GPS, and computer vision. Advanced control systems interpret this sensory information to determine suitable navigation routes, as well as relevant signage and obstacles.

Interest in self-driving vehicles is rising, driven by breakthroughs in deep learning, where deep neural networks are trained to perform tasks that usually require human intervention. CNNs learn to recognize patterns and features in images, which makes them especially useful in computer vision tasks such as image classification and object detection.

In this Autopilot – AI Project, we implement a Convolutional Neural Network (CNN) that maps raw pixels from collected images directly to steering commands for a self-driving vehicle. With minimal human training input, and with or without lane markings, the network learns the features needed to steer on the road. The inspiration is taken from the Udacity self-driving car project and from NVIDIA's End to End Learning for Self-Driving Cars, which uses convolutional networks to predict the steering angle from the road image. The dataset provided by Udacity was used for testing and for preparing the data.
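The pixels-to-steering mapping can be sketched as a small Keras model in the spirit of NVIDIA's architecture. This is a minimal sketch under assumptions: the input size (66 × 200 × 3, as in the NVIDIA paper) and layer widths are illustrative, not the exact network used in this project.

```python
# Sketch of an NVIDIA-style pixels-to-steering-angle CNN in Keras.
# Input size and layer widths are illustrative assumptions.
from tensorflow.keras import layers, models

def build_steering_model(input_shape=(66, 200, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Lambda(lambda x: x / 127.5 - 1.0),   # normalize pixels to [-1, 1]
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dropout(0.5),                        # regularization, as noted in the summary
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(10, activation="relu"),
        layers.Dense(1),                            # single output: the steering angle
    ])
    model.compile(optimizer="adam", loss="mse")     # regression on the steering angle
    return model

model = build_steering_model()
```

Because the output is a single continuous value rather than a class label, the model is trained as a regressor with mean-squared-error loss against the recorded steering angles.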

BLOCK DIAGRAM

Figure 01: Block Diagram

HIGH-LEVEL SYSTEM ARCHITECTURE

Figure 02: High Level System Architecture

MODEL PREDICTION

Figure 03: Model Prediction

CODE REQUIREMENTS

Install Conda for Python to manage all machine-learning dependencies, then install the project requirements:

>>> pip install -r requirements.txt

DATASET

You can get the dataset here and extract it into the repository folder.

Python Implementation
  • a. Inspiration: Udacity self-driving car and End to End Learning for Self-Driving Cars by NVIDIA
  • b. Network Used: Convolutional Neural Network

Note: If you face any problem, kindly raise an issue.

PROCEDURE

  • a. The first step is to run LoadData.py (for V1) or LoadData_V2.py (for V2), which reads the dataset from the folder and stores it in a pickle file.
  • b. Once the data is ready, run TrainModel.py (for V1) or Train_pilot.py (for V2), which loads the data from the pickle file and begins the training process.
  • c. To test on video, run DriveApp.py (for V1) or AutopilotApp_V2.py (for V2).
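The load step above can be sketched roughly as follows. The file name, array shapes, and label format are assumptions for illustration; the actual LoadData scripts may differ.

```python
# Rough sketch of the LoadData step: pair image frames with steering labels
# and store everything in a pickle file for the training script to load.
# Paths, image size, and label format are illustrative assumptions.
import pickle
import numpy as np

def save_dataset(features, labels, path="features_labels.pkl"):
    """Serialize (features, labels) so the training script can load them quickly."""
    with open(path, "wb") as f:
        pickle.dump({"features": features, "labels": labels}, f)

def load_dataset(path="features_labels.pkl"):
    with open(path, "rb") as f:
        data = pickle.load(f)
    return data["features"], data["labels"]

# Synthetic stand-in data: 8 grayscale frames and 8 steering angles.
features = np.random.rand(8, 100, 100).astype(np.float32)
labels = np.random.uniform(-1.0, 1.0, size=8).astype(np.float32)

save_dataset(features, labels)
X, y = load_dataset()
```

Pickling the prepared arrays once means the (slow) image-reading pass does not have to be repeated on every training run.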

EXPERIMENTAL SETUP

The machine configuration for our experiment was as follows:

Hardware
  • a. RAM: 16 GB
  • b. Operating System: OS X – 10.13.3
  • c. Hard disk Size: 1 TB
Software
  • a. Python
  • b. Unity 3D
  • c. Keras (Tensorflow Backend)
  • d. Anaconda
  • e. OpenCV

PROJECT SUMMARY

This code predicts the steering angle of a self-driving car. The inspiration is taken from the Udacity self-driving car module as well as the End to End Learning for Self-Driving Cars module from NVIDIA.

  • a. Dataset: Udacity
  • b. Inspiration: Udacity self-driving car and End to End Learning for Self-Driving Cars by NVIDIA
  • c. Techniques Used: CNN
  • d. Pixel Size: 2500
  • e. Computer Vision Library: OpenCV
  • f. Regularization Used: Dropout

CONCLUSION

In this project, "An Intelligent Autopilot System that Learns to Drive", we were able to use a Convolutional Neural Network (CNN) to effectively predict steering angles, and in doing so gained insight into the internals of CNNs and how they can be tuned. We also found that CNN techniques can be applied to semantic abstraction, lane-marking detection, path planning, and control. A limited amount of training data, from less than a hundred hours of driving, was adequate to train the vehicle to operate in varied conditions (sunny, cloudy, and rainy) on suburban roads, local roads, and highways.

REFERENCES

  • a. Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, Karol Zieba. End to End Learning for Self-Driving Cars. arXiv:1604.07316, 2016.
  • b. Behavioral Cloning Project
  • c. This implementation also took a lot of inspiration from Sully Chen's GitHub repository: https://github.com/SullyChen/Autopilot-TensorFlow

Credit: Sunil Ghimire, Akshay Bahadur, and Raghav Patnecha

☺ Thanks for your time ☺

What do you think of "An Intelligent Autopilot System that Learns to Drive"? Let us know by leaving a comment below. Appreciation, suggestions, and questions are all welcome.

4 thoughts on "An Intelligent Autopilot System that Learns to Drive – AI Project"

  • February 28, 2021 at 8:48 pm

    Trying to create something like this. Which framework did you use to create the GUI?

    • March 1, 2021 at 2:22 pm

      Hey Jimo,

      Hope you are doing well. We recommend you focus on Python's Tkinter GUI.

  • March 2, 2021 at 7:49 am

    Hello Sunil. May I know what V1 and V2 are for?

    • March 21, 2021 at 1:48 pm

      Hello Aqil Falah,

      V1 is for the Udacity dataset, based on the Udacity simulator.
      V2 is for the NVIDIA dataset, based on real-world driving.

      Thanks.
