Rutgers Navigator, 2011 IGVC

The Navigator is a three-wheeled, differential-drive robotics research platform that was designed, built, and programmed entirely by the Rutgers University IEEE Student Branch to compete in the 2011 Intelligent Ground Vehicle Competition. I was responsible for designing the Navigator's camera system and writing the computer vision software that processes its output.

Textured CAD render of the Navigator
Photo of the Navigator

All of the content on this page is discussed in more depth in our official 2011 IGVC Design Report and in my capstone design final report.

Camera System

Stereo and wheel cameras superimposed over a ghost CAD render of the Navigator

A total of five PlayStation Eye cameras are mounted in custom polycarbonate cases and attached to the Navigator's aluminum frame. Originally intended as a peripheral for the PlayStation 3 game console, these cameras can be used as inexpensive USB webcams with a disproportionate number of high-end features.

Three of these cameras form a custom trinocular vision system that is mounted to the front of the Navigator. All three of these cameras are hardware-synchronized to share the same clock, which enables accurate stereo reconstruction while moving at high speeds. Synchronized frames from these cameras are captured at 15 Hz with a resolution of 640 × 480 and all compression disabled. The remaining two cameras are mounted above the robot's front two wheels, asynchronously capture images at 15 Hz, and use a lower resolution of 320 × 240. Despite their proximity to the ground, these five cameras have a composite field of view of nearly 170°.
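As a rough illustration of the capture configuration, the sketch below shows how a single PlayStation Eye could be opened and set to the requested resolution and frame rate with OpenCV. The device index is a placeholder, and the hardware synchronization of the stereo cameras lives in the camera driver rather than in user code like this.

```python
import cv2

# Minimal sketch: open one PlayStation Eye as a generic capture device and
# request the settings described above (640x480 at 15 Hz). The device index
# is a placeholder and the driver may not honor every request.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 15)

ok, frame = cap.read()
if ok:
    print("captured a frame with shape", frame.shape)
cap.release()
```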

Stereo Obstacle Detection

Selecting the optimal baseline for a stereo system is a balance of two opposing, but equally important, factors: field of view and maximum range. Decreasing the baseline increases the shared field of view of the two cameras at the cost of a shorter maximum range. Conversely, increasing the baseline decreases the shared field of view, but yields a longer maximum range and better depth precision at every visible distance. Using a trinocular stereo system instead of a standard binocular system gives the software the advantages of two baselines with the addition of only one camera.
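To make the trade-off concrete, the standard pinhole stereo model gives depth Z = fB/d, so a one-pixel disparity error translates into a depth error of roughly Z²/(fB). The short sketch below evaluates that error for both baselines used here; the focal length is an assumed value, not the Navigator's actual calibration.

```python
import numpy as np

# Worked example of the baseline trade-off using the pinhole stereo model:
# Z = f * B / d, so a disparity error of delta_d pixels yields a depth error
# of roughly Z**2 * delta_d / (f * B).
f_px = 540.0                              # focal length in pixels (assumed)
delta_d = 1.0                             # one pixel of disparity quantization
depths = np.array([1.0, 2.0, 4.0, 8.0])   # query depths in meters

for baseline in (0.10, 0.20):             # narrow and wide baselines in meters
    err = depths ** 2 * delta_d / (f_px * baseline)
    print(f"B = {baseline:.2f} m: depth error {np.round(err, 3)} m at {depths} m")
```

Doubling the baseline halves the depth error at every distance, which is exactly why the wide pair is preferred for far-away points.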

Source image
Narrow-baseline disparity map
Wide-baseline disparity map

The narrow pair has a baseline of approximately 10 cm and uses the left and middle cameras; the wide pair has a baseline of approximately 20 cm and uses the left and right cameras. By using the narrow baseline for nearby points and the wide baseline for more distant points, the trinocular stereo system combines the small minimum range of the narrow baseline with the better accuracy and longer maximum range of the wide baseline. In practice, this is implemented by calibrating the two baselines independently and processing the two stereo pairs in parallel.
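A minimal sketch of that per-pair processing, using OpenCV's semi-global block matcher as a stand-in for whatever correspondence algorithm is actually used, might look like the following. The images are assumed to be rectified with each pair's own calibration, and the matcher parameters are illustrative rather than the Navigator's.

```python
import cv2
import numpy as np

def disparity(left_gray, other_gray, num_disparities=64, block_size=9):
    """Compute a disparity map for one rectified stereo pair."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disparities,   # must be divisible by 16
        blockSize=block_size,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16
    return sgbm.compute(left_gray, other_gray).astype(np.float32) / 16.0

# Narrow pair: left + middle cameras; wide pair: left + right cameras.
# disp_narrow = disparity(left, middle)
# disp_wide   = disparity(left, right)
```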

Once images from both baselines have been processed for point correspondences, the resulting point clouds are merged. Assuming that the ground is the dominant plane in the composite point cloud, the Navigator fits a planar model to the data using RANSAC. This model is smoothed by a low-pass filter and is subject to several heuristics that detect when the plane has been incorrectly fit to a planar obstacle. Points that are sufficiently far from the fitted ground plane are passed through a statistical outlier filter, and the points that remain are assumed to be obstacles.
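The core of that step is a RANSAC plane fit over the merged cloud followed by a distance test against the fitted plane. The sketch below shows one straightforward way to implement both pieces with NumPy; the thresholds and iteration count are assumptions, and the low-pass smoothing and plane-sanity heuristics are omitted.

```python
import numpy as np

def fit_ground_plane(points, iters=200, inlier_dist=0.05, seed=0):
    """RANSAC fit of a plane (normal, d) to an (N, 3) point cloud."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)
        inliers = np.count_nonzero(dist < inlier_dist)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

def obstacle_points(points, plane, height_thresh=0.15):
    """Return points farther than height_thresh from the fitted plane."""
    normal, d = plane
    return points[np.abs(points @ normal + d) > height_thresh]
```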

Lane Tracking

Not only must the Navigator see road obstacles, it must also be capable of detecting the white painted lines that define the edges of the Autonomous Challenge obstacle course. Analysis of the 2010 IGVC results shows that accurate line detection is crucial to performing well: a mere 2 of 28 teams lasted the full duration of the Autonomous Challenge without being disqualified. To avoid suffering the same fate, the Navigator simultaneously searches for lines in images from three cameras: the front-mounted stereo camera, the left wheel camera, and the right wheel camera. By running the entire processing pipeline at 10-15 Hz, the Navigator is able to detect lines within a 180° field of view with less than 100 ms of latency.

Source image
Matched pulse-width filter response
Non-maximal suppression output

Lines in each of the three images are independently identified using a three-stage algorithm. Immediately upon receiving a new frame from the camera, the original color image undergoes a color space transformation that emphasizes white regions of the image. The resulting monochromatic image is then searched for regions of high intensity that match the expected width of the line using a matched pulse-width filter. The output of the matched pulse-width filter is reduced to a set of candidate line points through non-maximal suppression and the local maxima from all three images are projected onto the ground plane in a common coordinate frame.
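The sketch below compresses the three stages into one function for a single image. The specific color transform, pulse width, and suppression window are choices made for the example rather than the values used on the Navigator, and the final projection onto the ground plane is omitted.

```python
import cv2
import numpy as np

def detect_line_points(bgr, pulse_width=11):
    # 1. Color transform that emphasizes white paint: brightness minus
    #    saturation, so bright, unsaturated pixels score highest.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    whiteness = hsv[..., 2] - hsv[..., 1]

    # 2. Matched pulse-width filter along each row: responds strongly to a
    #    bright band roughly pulse_width pixels wide flanked by darker ground.
    kernel = -np.ones(3 * pulse_width, dtype=np.float32)
    kernel[pulse_width:2 * pulse_width] = 2.0
    kernel /= np.abs(kernel).sum()
    response = cv2.filter2D(whiteness, -1, kernel.reshape(1, -1))

    # 3. Non-maximal suppression: keep only horizontal local maxima that
    #    exceed a (hypothetical) response threshold.
    dilated = cv2.dilate(response, np.ones((1, pulse_width), np.uint8))
    maxima = (response >= dilated) & (response > 10.0)
    return np.column_stack(np.nonzero(maxima))   # candidate (row, col) points
```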

Acknowledgements

The Rutgers IGVC team would like to extend our gratitude to everyone who has helped us along the way this year. We could not have finished the robot without the generous support of our sponsors: Dr. Stuart Shalat of the Advanced Robotics Environmental and Assessment Lab (EOHSI), Optima Batteries, Novatel, Omnistar, GitHub, IEEE, 80/20, Inc., the Knotts Company, the Rutgers Engineering Governing Council, Dr. Michael Littman of the Rutgers Laboratory for Real-Life Reinforcement Learning (RL3), and Dr. Dimitris Metaxas of the Rutgers Computational Biomedicine Imaging and Modeling Center (CBIM). In addition to our sponsors, we would like to thank Dr. Kristin Dana for answering all of our questions and Dr. Predrag Spasojevic for serving as the Rutgers IEEE Student Branch's faculty advisor. Finally, we would like to thank Joe Lippencott, Steve Orbine, John Scafidi, and the departments of Electrical, Industrial, and Mechanical Engineering for their continued support.

Also, thanks to all of the IEEE Student Branch members who contributed to the Navigator: Adam Stambler, Peter Vasilnak, Cody Schaffer, Elie Rosen, Nitish Thatte, Rohith Dronadula, and Siva Yedithi.