
Self-Driving Vehicle


    WINLAB Summer Internship 2021

    Group Members: Zhuohuan Li, Sandeep Alankar, Anthony Siu, Adas Bankauskas, Malav Majmudar, Abia Mallick, Lohith Bodipati, Aayush Agnihotri

    Project Website

    https://sa14544.wixsite.com/self-driving-vehicle

    Project Objective

    The goal of this project is to build a fully-functional self-driving car. The project includes development of ~1/14 scale vehicles for use as a remote self-driving car testing platform, as well as a virtual simulation environment which will model both the physical vehicles and the testbed environment. Robot Operating System (ROS) will be used for both halves of the project, with the simulation running in Gazebo.

    There are several objectives for this project:

    • Design and implementation of additional sensors for existing vehicles to allow for remote experimentation
    • Incorporation of ROS control into existing car software
    • Use of AI/machine learning algorithms for self-driving behavior
    • Building the actual vehicle at WINLAB and testing its autonomy in a real environment

    Week 1 Activities

    • Created ORBIT/COSMOS accounts and became familiar with reserving nodes and performing basic Linux operations
    • Read about how to run ROS and Gazebo Simulator on local machines
    • Week 1 Presentation

    Week 2 Activities

    • Worked on the ROS tutorial with PuTTY
      • Reserved nodes on the intersection testbed and retrieved/plotted data from turtlesim
    • Learned about remote graphical access and how to X forward using correct ssh commands
      • Created X image for ROS simulations
    • Learned how to duplicate a PuTTY session with tmux and practiced basic tmux shortcuts
    • Week 2 Presentation

    Week 3 Activities

    • Finished ROS tutorial with PuTTY
    • Working on the Gazebo Simulator tutorial
      • Learned how to add new models, build/modify robots, improve model appearance with meshes, etc.
    • Created project page on ORBIT wiki and added objective and weekly summaries
    • Published project website with links to weekly presentations
    • Week 3 Presentation

    Week 4 Activities

    • Continued working on Gazebo Simulator tutorial
      • Learned how to implement DEMs, create populations of models, build multi-leveled and multi-layered simulation environment, etc.
    • Configured Chrome remote desktop to access Gazebo from local machines
      • Loaded self-driving image onto nodes and learned how to properly save node image
    • Week 4 Presentation

    Week 5 Activities

    • Finished the Gazebo tutorial
      • Learned how to record and playback simulations, apply force and/or torque to models, connect to Player, use physics engines to achieve desired behavior, etc.
    • Accessed Gazebo code from previous year's GitLab repository
      • Looked over algorithms and simulations built for vehicles with Ackermann steering
    • Week 5 Presentation
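
The previous year's Ackermann-steering code is not reproduced here, but the underlying geometry is standard: on an Ackermann-steered vehicle the inner front wheel must turn more sharply than the outer one so that both wheels trace circles around the same center. A minimal sketch of that relationship (the dimensions and function name below are illustrative, not taken from the repository):

```python
import math

def ackermann_angles(wheelbase, track, turn_radius):
    """Inner/outer front-wheel steering angles (radians) for an
    Ackermann-steered vehicle turning with the given radius,
    measured to the midpoint of the rear axle."""
    inner = math.atan(wheelbase / (turn_radius - track / 2))
    outer = math.atan(wheelbase / (turn_radius + track / 2))
    return inner, outer

# Example with rough ~1/14 scale dimensions (assumed, not measured):
inner, outer = ackermann_angles(wheelbase=0.3, track=0.2, turn_radius=1.0)
print(math.degrees(inner), math.degrees(outer))  # inner turns more sharply
```

The asymmetry between the two angles is what distinguishes Ackermann steering from the rigid-steering models used previously.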

    Week 6 Activities

    • Visited WINLAB
      • Examined the physical model's hardware and compared it with previous rigid-steering models
    • Created a catkin workspace and controlled the Gazebo robot through terminal commands
    • Tested different neural network architectures to maximize steer prediction accuracy
    • Week 6 Presentation
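
One simple way to compare network architectures on steering prediction is accuracy within an angular tolerance. A stdlib-only sketch of that metric (the function, tolerance, and sample values are illustrative; the project's actual evaluation may have differed):

```python
def steering_accuracy(predicted, actual, tolerance=0.05):
    """Fraction of predictions within `tolerance` radians of the
    true steering angle - one simple score for ranking models."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= tolerance)
    return hits / len(actual)

# Hypothetical predictions vs. ground-truth steering angles (radians):
acc = steering_accuracy([0.10, -0.22, 0.01], [0.12, -0.30, 0.00])
print(acc)
```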

    Week 7 Activities

    • Accessed four RealSense cameras positioned around city intersection
    • Created 3D image from each perspective and started to experiment with combining images by creating and transforming individual point clouds
    • Connected to Pioneer 3-DX robot remotely from laptops
    • Installed RosAria and used catkin workspace to build and run the RosAria node
    • Week 7 Presentation
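
Combining the four per-camera views into one 3D scene amounts to applying a rigid transform (rotation plus translation) to each camera's point cloud so that all points share a common world frame. A stdlib-only sketch of that idea (the actual project used RealSense point clouds; the camera pose and function below are hypothetical):

```python
import math

def transform_points(points, yaw, tx, ty, tz):
    """Rotate points about the z-axis by `yaw` radians, then translate -
    a stand-in for the rigid transform that maps one camera's point
    cloud into a shared world frame."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

# One point seen by a camera facing back across the intersection
# (the pose is made up for illustration):
cloud_cam1 = [(1.0, 0.0, 0.5)]
merged = transform_points(cloud_cam1, yaw=math.pi, tx=2.0, ty=0.0, tz=0.0)
print(merged)
```

Repeating this for each camera, with its own calibrated pose, yields point clouds that can simply be concatenated.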

    Week 8 Activities

    • Presented operation of Pioneer robot to Rutgers film crew at WINLAB
    • Connected to Pioneer 3-DX and ran appropriate scripts to steer robot through model intersection
    • Configured wireless connection and drove Pioneer wirelessly alongside smaller mobile robots
    • Working on creating a direct connection between Pioneer node and RealSense to view continuous RealSense camera feed while driving
    • Week 8 Presentation

    Week 9 Activities

    • Switched to RosAria, now using ROS nodes to connect to and wirelessly drive the robot
    • Configured ROS nodes such that training data consisting of both imaging and robot commands can be recorded simultaneously
    • Began collecting data through the rosbag node
    • Explored the Gazebo simulation further, expanding on the models and the world
    • Week 9 Presentation
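
Because images and drive commands are recorded on separate topics, the two streams must later be paired by timestamp to form (image, steering) training examples. A sketch of nearest-timestamp matching, one common way to do this (the function and the sample timestamps, in milliseconds, are illustrative, not the project's exact code):

```python
import bisect

def pair_by_timestamp(image_stamps, cmd_stamps):
    """For each image timestamp, return the index of the command
    message whose timestamp is closest. Assumes both lists are
    sorted, as they would be when read sequentially from a bag."""
    pairs = []
    for t in image_stamps:
        i = bisect.bisect_left(cmd_stamps, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(cmd_stamps)]
        pairs.append(min(candidates, key=lambda j: abs(cmd_stamps[j] - t)))
    return pairs

images = [100, 210, 330]        # camera frames at roughly 10 Hz
commands = [50, 150, 250, 350]  # steering commands
pairs = pair_by_timestamp(images, commands)
print(pairs)
```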

    Week 10 Activities

    • Debugged Python files to handle the Vector3 messages used to represent steering data
    • Trained robot on circular track to learn how to properly steer to avoid going off-road
    • Subscribed to control and image topics and recorded training using rosbag
    • Converted bag files to image (.npz) files that were fed into a neural network with 4 convolutional layers
    • Final Presentation
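
When stacking convolutional layers, a useful sanity check is to trace the feature-map size layer by layer with the standard output-size formula. A small sketch (the input size and the kernel/stride choices are assumptions for illustration, not the project's exact network):

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of one convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

# Trace a 64x64 input through 4 conv layers (hypothetical shapes):
size = 64
for kernel, stride in [(5, 2), (5, 2), (3, 2), (3, 1)]:
    size = conv_out(size, kernel, stride)
    print(size)
```

Checking these sizes up front catches layer configurations that would shrink the feature map to nothing before the final layer.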
    Last modified on Aug 5, 2021, 3:33:29 AM