Open-source robot Qbo continues its march toward autonomy, with a new video demo showing the Linux-based DIY ‘bot learning to recognize people and objects. Developers The Corpora have also previewed a cloud-based object-learning system, which will allow Qbo units to crowdshare their data and thus recognize objects first identified by other Qbo ‘bots elsewhere in the world.
In the first part of the demo below, Qbo is shown first using a face-detection system to spot a person nearby, then a skin-color filter to track that person’s movement around the room. The second demo relies on the robot’s stereoscopic vision, with Qbo defaulting to focus on whatever object is nearest to it.
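The Corpora hasn’t published the details of its filter, but skin-color tracking is a well-worn vision trick: threshold each pixel against a rule for “skin-like” color, then track the center of mass of the resulting mask. A minimal sketch, assuming the classic RGB threshold rule (not necessarily what Qbo actually uses):

```python
import numpy as np

def skin_mask(frame):
    """Boolean mask of likely skin pixels in an RGB frame.

    Uses a classic RGB threshold heuristic (an assumption -- Qbo's
    actual filter is unpublished): skin pixels tend to have R > 95,
    G > 40, B > 20, with red dominant and a decent spread between
    the brightest and darkest channels.
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    spread = frame.max(axis=-1).astype(int) - frame.min(axis=-1).astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (spread > 15) & (r > g) & (r > b)

def track_point(mask):
    """Center of mass of the mask -- the point the robot would follow."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # nothing skin-like in view
    return (xs.mean(), ys.mean())

# Tiny synthetic frame: a skin-toned patch on a black background
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[2:5, 3:6] = (200, 120, 90)
mask = skin_mask(frame)
target = track_point(mask)  # → (4.0, 3.0), the patch center
```

Running this per frame gives a cheap, rotation-tolerant way to keep a camera pointed at a person once a face detector has confirmed one is present.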
Interestingly, once Qbo has identified an object, it can subsequently spot it despite changes in color, scale or orientation, or even when the object is partially blocked from the robot’s sight. Repeated viewing speeds up subsequent identification, and thanks to the cloud system those recurring encounters needn’t be with the same Qbo unit. Specifically, The Corpora has implemented a “bag of words” model coupled with SURF feature descriptors, breaking the view of an object up into multiple local patches.
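In a bag-of-visual-words pipeline, each local descriptor (here, a SURF vector) is quantized to its nearest “visual word” in a vocabulary learned offline, and the object is summarized as a histogram of word counts. Because the histogram ignores where on the object each patch came from, it survives changes in scale, orientation and partial occlusion. A minimal sketch of the quantization step, with NumPy standing in for a real SURF extractor (the vocabulary and descriptors below are illustrative, not from Qbo):

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize local feature descriptors against a visual vocabulary
    and return a normalized word-count histogram.

    descriptors: (n, d) array, one row per keypoint descriptor
    vocabulary:  (k, d) array of cluster centers, typically learned
                 offline via k-means over many training images
    """
    # Squared Euclidean distance from every descriptor to every word
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()   # normalize: invariant to keypoint count

# Toy 2-word vocabulary and three fake "SURF" descriptors
vocab = np.array([[0.0, 0.0], [10.0, 10.0]])
desc = np.array([[0.1, 0.0], [9.0, 10.0], [10.0, 9.0]])
hist = bow_histogram(desc, vocab)  # → [1/3, 2/3]
```

Recognition then reduces to comparing histograms (e.g. by cosine similarity) against stored signatures, which is also what makes the cloud sharing practical: a compact histogram, not raw imagery, is what Qbo units would need to exchange.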
Qbo flips between the two modes via voice prompts, part of the team’s intent to make controlling the robot something that needn’t be done with a keyboard, mouse and display. Still no exact timescale for when we’ll be able to buy – or build – one of our own, unfortunately.