This summer I worked with Professor Rahul Mangharam in the School of Engineering & Applied Science on autonomous 1/10th-scale vehicles. My research involved applying vision-based reinforcement learning methods to enable camera-only autonomous driving on the F1/10 racecars.
I began by familiarizing myself with state-of-the-art deep reinforcement learning methods and implementing them from scratch at github.com/botforge/simplementation. Initially, these methods focused on autonomous agents solving ‘game’-like scenarios (Atari Pong, Breakout, and Flappy Bird). I became intimately familiar with OpenAI Gym and with the process of converting a research paper into high-performance code. I also greatly improved my understanding of deep learning libraries and how to set them up to iterate on experiments efficiently and in a controlled manner.
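Every one of those implementations is built on the same Gym-style agent-environment loop. A minimal sketch of that loop is below; the `ToyEnv` class is a hypothetical stand-in that only mirrors the Gym `reset`/`step` interface, not one of the actual Atari or Flappy Bird environments used in the project.

```python
class ToyEnv:
    """Hypothetical 1-D 'reach the goal' task mirroring the classic
    OpenAI Gym interface: reset() -> obs, step(a) -> (obs, r, done, info)."""

    def __init__(self, goal=5):
        self.goal = goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # action: 0 = move left, 1 = move right
        self.pos += 1 if action == 1 else -1
        done = self.pos >= self.goal
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}

def run_episode(env, policy, max_steps=100):
    """The standard interaction loop every RL implementation is built on."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(obs)
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

env = ToyEnv()
# A trivial 'always move right' policy reaches the goal deterministically.
print(run_episode(env, policy=lambda obs: 1))  # prints 1.0
```

Keeping the environment behind this uniform interface is what made it possible to swap games in and out while reusing the same agent code.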
Later in the summer, my goal became better defined: to effectively apply imitation learning to the F1/10 vehicles for end-to-end autonomous driving, from camera image pixels to steering commands. With this goal in mind, my mentor and I narrowed my research queries and read very recent work on the subject.
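At its core, this flavor of imitation learning (behavioral cloning) is supervised regression from observations to the expert's actions. The sketch below illustrates that idea under stated assumptions: the "camera frames" and "expert steering angles" are synthetic random data, and a linear model trained by gradient descent stands in for the convolutional network a real pipeline would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: flattened grayscale "frames" (X) and the
# expert's steering commands (y). Hypothetical sizes for illustration.
n_samples, n_pixels = 256, 64
X = rng.normal(size=(n_samples, n_pixels))
true_w = rng.normal(size=n_pixels) / np.sqrt(n_pixels)
y = X @ true_w  # pretend these are recorded expert steering angles

w = np.zeros(n_pixels)  # policy parameters (linear 'network')
lr = 0.5
for _ in range(300):
    pred = X @ w                          # policy's steering predictions
    grad = X.T @ (pred - y) / n_samples   # gradient of 0.5 * MSE loss
    w -= lr * grad                        # supervised step toward the expert

mse = float(np.mean((X @ w - y) ** 2))
print(f"final imitation loss: {mse:.6f}")
```

The real task swaps the linear map for a CNN and the synthetic pairs for logged driving data, but the training loop, a regression loss between predicted and expert steering, has the same shape.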
After significant experimentation, we concluded that testing every model on the live robots did not allow for quick iteration in a controlled environment. I therefore set out to build a photorealistic simulator environment in which to train the autonomous vehicles. I learned Unreal Engine from scratch and modeled the entire second-floor corridor of Levine Hall with similar lighting, textures, and reflections. This enabled quick iteration and convergence to a suitable model whose parameters translated well to real life.