Microsoft, MIT help self-driving cars learn from AI 'blind spots'

Researchers from MIT and Microsoft have developed a system that helps identify lapses in artificial intelligence knowledge in autonomous cars and robots. These lapses, referred to as "blind spots," occur when there are significant differences between a system's training examples and what a human would actually do in a given situation. A driverless car, for example, might fail to tell the difference between a large white car and an ambulance with its sirens on, and thus fail to pull over.

The new model developed by MIT and Microsoft has the AI system compare a human's real-world actions with what it would have done in the same situation. Alternatively, in a real-time setting, a human overseeing the AI can correct mistakes as they happen, or just before. Based on how closely its actions match the human's, the system then adjusts its behavior and identifies situations where it needs more understanding.
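In outline, that comparison amounts to logging, for each situation, how often the trained system's chosen action diverges from the human's, and flagging situations where disagreement is frequent. The Python sketch below is a minimal illustration of that idea only; the names (agent_policy, human_demos, BLIND_SPOT_THRESHOLD) and the simple disagreement-rate rule are illustrative assumptions, not the researchers' actual algorithm, which aggregates noisy human feedback with more sophisticated machine learning.

```python
# Illustrative sketch of blind-spot discovery via human comparison.
# All names and the threshold rule are assumptions for illustration,
# not the MIT/Microsoft paper's actual method.
from collections import defaultdict

BLIND_SPOT_THRESHOLD = 0.5  # flag states where the agent disagrees with
                            # the human in more than half of the examples

def find_blind_spots(agent_policy, human_demos):
    """Compare the agent's chosen action against human actions per state.

    agent_policy: callable state -> action the trained system would take
    human_demos:  iterable of (state, human_action) pairs gathered from
                  demonstrations or real-time corrections
    """
    disagreements = defaultdict(int)
    visits = defaultdict(int)
    for state, human_action in human_demos:
        visits[state] += 1
        if agent_policy(state) != human_action:
            disagreements[state] += 1
    # States where the agent's behavior often diverges from the human's
    # are candidate "blind spots" that need more training or oversight.
    return {
        state for state in visits
        if disagreements[state] / visits[state] > BLIND_SPOT_THRESHOLD
    }

# Toy usage: the agent treats an ambulance like any large white car.
demos = [
    ("ambulance_sirens", "pull_over"),
    ("ambulance_sirens", "pull_over"),
    ("large_white_car", "keep_lane"),
]
policy = lambda state: "keep_lane"      # stand-in for a trained policy
print(find_blind_spots(policy, demos))  # {'ambulance_sirens'}
```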

"The model helps autonomous systems better know what they don't know," writes research author Ramya Ramakrishnan. "Many times, when these systems are deployed, their trained simulations don't match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors."

The model isn't quite ready for public rollout, but the researchers have been testing it with video games, in which a simulated human corrects the actions of an in-game character. The next logical step, however, is to use it with real autonomous cars and their testing systems.