[[TOC(Other/Summer/2020/SmartIntersection/*, depth=1, heading=Smart Intersection)]]

= Smart Intersection - daily traffic flow =

**WINLAB Summer Internship 2020**

**Group Members: Bryan Zhu, Kevin Zhang, Nicholas Meegan**

== Project Website ==
https://bzz3ru.wixsite.com/smartintersection

== Gitlab Repositories ==
**!DeepStream and YOLOv3 Application:** https://gitlab.orbit-lab.org/si2020-smartintersection/smart-intersection-ds-yolov3-app

**OpenCV - Add Bounding Boxes to Video:** https://gitlab.orbit-lab.org/si2020-smartintersection/add-bounding-boxes

== Project Objective ==
The goal of this project is to create a method for estimating vehicle count/traffic flow statistics for one intersection in New York City. As an example, record videos of the northbound traffic on Amsterdam Avenue as vehicles enter the 120th St./Amsterdam Ave. intersection. Using the YOLOv3 deep learning model, detect and count vehicles as they approach/enter the intersection from the south, making sure that there is no double-counting. Use 180-second video fragments (approximately two traffic light cycles), and repeat up to half a dozen times a day, for a number of workweek/weekend days at the same times each day. Compare the vehicle count (traffic flow) as a function of the time of day. Utilize NVIDIA !DeepStream deployed on COSMOS GPU compute servers to run the model. The method should be generalizable/expandable to any direction of vehicle movement when appropriate camera views are available.
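The counting step can stay simple once detection and tracking are in place. As a rough illustration of the no-double-counting requirement (not the project code), the sketch below assumes each detection already carries a stable track ID, as the !DeepStream tracker provides, and counts a vehicle only the first time its bounding-box center crosses a virtual counting line near the intersection; the `LineCrossCounter` name, the label strings, and the line placement are illustrative assumptions.

{{{
#!cpp
// Illustrative sketch only: count each tracked vehicle once as it crosses a
// virtual line near the intersection. Assumes upstream detection + tracking
// (e.g. YOLOv3 + the DeepStream tracker) supplies a stable track_id per box.
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Detection {
    uint64_t track_id;   // stable ID assigned by the tracker
    std::string label;   // e.g. "car", "truck", "bus"
    float cx, cy;        // bounding-box center in pixels
};

class LineCrossCounter {
public:
    explicit LineCrossCounter(float line_y) : line_y_(line_y) {}

    // Feed one frame's detections; returns the running vehicle count.
    std::size_t update(const std::vector<Detection>& dets) {
        for (const auto& d : dets) {
            if (d.label != "car" && d.label != "truck" && d.label != "bus") continue;
            auto it = last_y_.find(d.track_id);
            bool crossed = (it != last_y_.end()) &&
                           (it->second < line_y_) && (d.cy >= line_y_);
            last_y_[d.track_id] = d.cy;
            // Count each track ID at most once, even if it lingers at the line.
            if (crossed && counted_.insert(d.track_id).second) ++count_;
        }
        return count_;
    }

private:
    float line_y_;                               // y-coordinate of the counting line
    std::unordered_map<uint64_t, float> last_y_; // previous center y per track
    std::unordered_set<uint64_t> counted_;       // track IDs already counted
    std::size_t count_ = 0;
};
}}}

Counting unique track IDs, rather than raw per-frame detections, is what keeps a vehicle that waits through a red light (or creeps back and forth near the line) from being counted more than once; in a !DeepStream pipeline this kind of logic could sit, for example, in a pad probe on the inference/tracker element or in a downstream consumer of the published output.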
== Week 1 Activities ==
 * Get an ORBIT/COSMOS account and become familiar with the testbed procedures
 * Learn about YOLOv3 deep learning models for object detection
 * Read about NVIDIA !DeepStream
 * Explore the image (set of computing tools) available on COSMOS, which uses !DeepStream and can deploy YOLOv3
 * Brainstorm about vehicle counting/traffic flow estimation methodology

----

**Week 1 Weekly Meeting Presentation:** https://docs.google.com/presentation/d/1Sf9hzpo3WQsEPwbhKfic2xWCskH1EViD-3SNb_foouA/edit?usp=sharing

== Week 2 Activities ==
 * Understand the concepts of object detection in 3D point clouds
 * Gain an understanding of NVIDIA's !DeepStream SDK
 * Get comfortable deploying YOLOv3 on the COSMOS testbed
 * Use existing datasets to experiment with !DeepStream and YOLOv3

----

**Week 2 Weekly Meeting Presentation:** https://docs.google.com/presentation/d/1Cl8MbsSU3ZAq5lpRuE0eVBSwnIX4jTUAci5uAgP7Vt8/edit?usp=sharing

**Week 2 Team Meeting Presentation:** https://docs.google.com/presentation/d/1O2yCze4fmVOAFGCi0u6WTZq8VTFhLc_J448skeygguw/edit?usp=sharing

== Week 3 Activities ==
 * Investigate existing RGB-D (RGB + depth map) object detectors whose models can immediately be put to use for inference
 * Look into existing 3D point cloud object detection implementations
 * Learn how to run !DeepStream's YOLOv3 implementation
 * Investigate !DeepStream Python bindings for use with YOLO

----

**Week 3 Weekly Meeting Presentation:** https://docs.google.com/presentation/d/13vqiw0kkyT0_XPzPv3NiowIvxc22SapfxM_ZbAKn1Cc/edit?usp=sharing

**Week 3 Team Meeting Presentation:** https://docs.google.com/presentation/d/1jwq6h05mw1vHt6_C1Br4MM_0LQDZRlIEoSvEasJ5Rsg/edit?usp=sharing

== Week 4 Activities ==
 * Investigate YOLOv4 and its use with TensorRT
 * Look into output/data processing based on the outputs from !DeepStream
 * Look into the !DeepStream tracker as a base to build on
 * Build a presentation slide set to inform the intern class about !DeepStream and YOLOv3

''**!DeepStream and YOLOv3 Overview + Demonstration Presentation Slides:**'' https://docs.google.com/presentation/d/1HxFIeoxCXxvbDuS0BVnAreUocIz04w508vAwigs4EFs/edit?usp=sharing

**Video Recording of the !DeepStream and YOLOv3 Overview + Demonstration Presentation:** https://drive.google.com/file/d/13fkoHQgZHS0HY7QQ2-tXj8-ZI1u2jp4N/view

----

**Week 4 Weekly Meeting Presentation:** https://docs.google.com/presentation/d/1R50VqBbzwy0204_ZUZyb323N7cR4mwhgdUcYBD7yOGE/edit?usp=sharing

**Week 4 Team Meeting Presentation:** https://docs.google.com/presentation/d/1fVNlO-hJczEXf4CQadozM4927N1Ghm42NzTR0tRnLDw/edit?usp=sharing

== Week 5 Activities ==
 * Keep trying to get YOLOv4 running as a !DeepStream app
 * Augment !DeepStream YOLO inference output with bounding box class confidence scores
 * Begin setup of a pub/sub system for inference output
 * Investigate ways of recombining inference output with the input video stream (NTP/OpenCV)

----

**Week 5 Weekly Meeting Presentation:** https://docs.google.com/presentation/d/1lUB_HH3MoxlQUo5O9BN-lFMwreRaF0qPwgySyKkmjIc/edit?usp=sharing

**Week 5 Team Meeting Presentation:** https://docs.google.com/presentation/d/16W6Lp8ouqKu9JPEil2aFbliPtWatrIM56krBP5P6zMc/edit?usp=sharing

== Week 6 Activities ==
 * Implement a publisher within the !DeepStream app using the high-level C bindings for ZeroMQ provided by CZMQ
 * Attempt to run the !DeepStream app on a live video stream
 * Investigate ways to sync video frames and inferred bounding boxes on different machines synced via NTP (Network Time Protocol)
 * Continue working with OpenCV in order to add bounding box information to the input video stream

----

**Week 6 Weekly Meeting Presentation:** https://docs.google.com/presentation/d/1ZGjBbafobl9CBx4RCXkqRufrvhkJglMhrGW5I7ZSo-w/edit?usp=sharing

**Week 6 Team Meeting Presentation:** https://docs.google.com/presentation/d/1YTUnloztvLmBBdHV84G-aswjYQllsdTe1CvybC8uxu0/edit?usp=sharing

== Week 7 Activities ==
 * Start implementing the subscriber class (download the ZeroMQ library (ZMQPP) + baseline/barebones necessities)
 * Continue developing the publisher class in ZeroMQ
 * Continue working on adding bounding boxes to the video stream through the use of multi-threading and synchronization schemes (mutexes, condition variables); a rough sketch of this subscriber/overlay path appears at the end of this page

----

**Week 7 Weekly Meeting Presentation:** https://docs.google.com/presentation/d/1u7kZZMSVsy16dc8yXuwwEzmmHIA-N70FDFVEi0LD8Wg/edit?usp=sharing

**Week 7 Team Meeting Presentation:** https://docs.google.com/presentation/d/1TjGrt2WbLwWloTiuHCarV99iBq0aVN0XHchZYNPv9Us/edit?usp=sharing

== Week 8 Activities ==
 * Implement any remaining features of the Smart Intersection project
 * Finish integrating all code together
 * Prepare the final presentation and final poster

----

**Project Review Presentation:** https://docs.google.com/presentation/d/18aSVqdheWwhezxHjDkGKT2suJGrfbWYOz5Blk1VmH28/edit?usp=sharing

**Open House Final Presentation:** https://docs.google.com/presentation/d/14Wl3pyiSItGkN8Fnk_KW3rxL6CMDaFzip4rZUzOoKlM/edit?usp=sharing
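As a closing illustration of the inference-output path assembled in Weeks 5-7 (a publisher inside the !DeepStream app, ZeroMQ transport, and an OpenCV overlay on the subscriber side), below is a minimal single-threaded subscriber sketch, not the project code. The endpoint, the comma-separated message layout, and the one-message-per-frame pairing are assumptions made for illustration; the actual application used separate threads with mutexes and condition variables to pair frames with bounding boxes.

{{{
#!cpp
// Illustrative sketch only: subscribe to bounding boxes published over ZeroMQ
// (zmqpp) and draw them on video frames with OpenCV. The message format, the
// endpoint, and one-box-per-frame pairing are assumptions, not the project's
// actual protocol.
#include <opencv2/opencv.hpp>
#include <zmqpp/zmqpp.hpp>
#include <sstream>
#include <string>

int main() {
    zmqpp::context ctx;
    zmqpp::socket sub(ctx, zmqpp::socket_type::subscribe);
    sub.connect("tcp://127.0.0.1:5556");    // placeholder endpoint
    sub.subscribe("");                      // accept every topic

    cv::VideoCapture cap("input.mp4");      // placeholder input video
    cv::Mat frame;

    while (cap.read(frame)) {
        zmqpp::message msg;
        sub.receive(msg);                   // blocks until a message arrives
        std::string payload;
        msg >> payload;

        // Parse "frame_id,label,left,top,width,height" and draw the box.
        std::istringstream ss(payload);
        std::string frame_id, label, l, t, w, h;
        std::getline(ss, frame_id, ',');
        std::getline(ss, label, ',');
        std::getline(ss, l, ',');
        std::getline(ss, t, ',');
        std::getline(ss, w, ',');
        std::getline(ss, h, ',');
        cv::Rect box(std::stoi(l), std::stoi(t), std::stoi(w), std::stoi(h));
        cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
        cv::putText(frame, label, box.tl(), cv::FONT_HERSHEY_SIMPLEX,
                    0.6, cv::Scalar(0, 255, 0), 2);

        cv::imshow("annotated", frame);
        if (cv::waitKey(1) == 27) break;    // stop on Esc
    }
    return 0;
}
}}}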