MIT system helps self-driving cars see around corners by watching shadows

MIT has announced a new system that allows autonomous vehicles (AVs) to see around corners. The system senses tiny changes in shadows on the ground to determine whether a moving object is approaching from around the corner. MIT says that AVs could one day use it to quickly avoid a potential collision with a vehicle or pedestrian emerging from behind a building or from between parked cars.

MIT says that robots of the future could also use the system to navigate hallways inside a building. In tests of sensing and stopping for an approaching vehicle, researchers say the car-based system beat traditional LIDAR: LIDAR can only detect objects in its direct line of sight, and the shadow-based system responded more than half a second faster.

Half a second may not seem like much of an improvement, but fractions of a second matter when it comes to avoiding a collision with a pedestrian or another car. The extra early warning the MIT system provides allows the AV to adjust its path or slow down. So far, the new system has only been tested indoors.

The system, dubbed ShadowCam, uses computer-vision techniques to detect and classify changes in shadows on the ground. ShadowCam analyzes sequences of video frames from a camera aimed at a specific area, such as the floor in front of a corner. Frame-to-frame changes in lighting may indicate that something is moving toward or away from the camera.
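The core idea of frame-to-frame change detection over a targeted patch of floor can be illustrated with a minimal sketch. This is not MIT's implementation; the region-of-interest coordinates, the intensity threshold, and the `shadow_motion` helper below are all illustrative assumptions.

```python
# Sketch of frame-to-frame change detection over a fixed region of
# interest (ROI), in the spirit of ShadowCam's approach. The threshold
# and ROI coordinates are illustrative, not details from MIT's system.

def roi_mean(frame, top, left, height, width):
    """Average pixel intensity inside a rectangular ROI of a 2-D frame."""
    total = 0
    for r in range(top, top + height):
        for c in range(left, left + width):
            total += frame[r][c]
    return total / (height * width)

def shadow_motion(frames, top=0, left=0, height=2, width=2, threshold=5.0):
    """Flag motion whenever the ROI's mean intensity shifts between frames."""
    means = [roi_mean(f, top, left, height, width) for f in frames]
    return [abs(b - a) > threshold for a, b in zip(means, means[1:])]

# A static scene, then a darkening "shadow" entering the ROI:
still = [[100, 100], [100, 100]]
shaded = [[80, 80], [80, 80]]
print(shadow_motion([still, still, shaded]))  # [False, True]
```

A real pipeline would run this on registered, stabilized frames so that camera motion is not mistaken for a moving shadow.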

The system combines image registration with a visual-odometry technique, similar to one used on the Mars rovers, that estimates a camera's motion in real time by analyzing the pose and geometry of sequences of images. The technique also amplifies subtle color changes in the images to boost the signal-to-noise ratio. The team plans to continue developing the system so that it works reliably both indoors and outdoors.
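The amplification step can be sketched as scaling each pixel's deviation from its temporal mean across the frame sequence, so that faint shadow changes stand out above noise. This is a hedged illustration of the general idea; the `gain` value and the `amplify_changes` helper are assumptions, not parameters from MIT's paper.

```python
# Illustrative amplification of weak temporal changes: each pixel's
# deviation from its per-pixel mean across the sequence is multiplied
# by a gain, making faint shadow movement easier to threshold. The
# gain factor is an assumption for illustration only.

def amplify_changes(frames, gain=10.0):
    """Return frames with per-pixel temporal deviations scaled by `gain`."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    # Per-pixel mean intensity across the whole frame sequence.
    mean = [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]
    return [[[mean[r][c] + gain * (f[r][c] - mean[r][c])
              for c in range(cols)]
             for r in range(rows)]
            for f in frames]

# A single pixel dips by 2 units in the last frame; after amplification
# the dip is 10x larger and easy to detect.
seq = [[[100.0]], [[100.0]], [[97.0]]]
out = amplify_changes(seq)
print(out[2][0][0])  # 79.0 (mean 99.0, deviation -2.0 scaled to -20.0)
```

In practice this kind of amplification only helps after registration, since any residual camera motion would also be amplified.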