Researchers at MIT, who are continually working on advances in robotics, are now showing off a new robot that combines vision and touch to teach itself to play Jenga. Jenga is a game of wooden blocks stacked in layers; the goal is to push blocks out of the tower without making it collapse.
MIT’s robot has a soft-pronged gripper, a force-sensing wrist cuff, and an external camera, which together let it see and feel the tower and the individual blocks within it. The bot pushes gently against a block while assessing visual and tactile feedback from the camera and the cuff, comparing those forces to measurements taken previously.
From that comparison, the robot judges in real time whether the block it is pressing on can be removed without toppling the tower. The MIT team says the demonstration achieves something that was hard to attain before: a robot that quickly learns the best way to perform a task from both visual cues and physical interactions.
The tactile learning system MIT developed can be used for more than playing games. Researchers say tasks that need careful physical interaction, such as separating recyclable objects from landfill waste or assembling consumer products, could be performed with the system. Future robots using this technology could, for instance, assemble smartphones.
The robotic arm MIT used for this project is an industry-standard ABB IRB 120 unit. For each block the bot pushes out of the tower, it records visual and force measurements along with whether the attempt succeeded. The robot was trained on about 300 attempts, with similar measurements grouped into clusters that represent distinct block behaviors.
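The article does not detail how the clustering works, so the following is only a minimal, hypothetical sketch of the idea in pure Python. The two features (resistance force, block displacement), the simulator, and the k-means grouping are all invented for illustration: push attempts are clustered, each cluster is labeled by the success rate of the attempts it contains, and a new gentle probe is classified by its nearest cluster.

```python
import math
import random

random.seed(0)

# Hypothetical 2-D features per push: (resistance force in N, displacement in mm).
# "Loose" blocks slide far under little force; "stuck" blocks barely move.
def simulate_push(loose):
    if loose:
        return (random.gauss(0.5, 0.1), random.gauss(8.0, 1.0))
    return (random.gauss(3.0, 0.3), random.gauss(0.5, 0.2))

# Alternate loose/stuck attempts; a loose block counts as a successful removal.
attempts = [(simulate_push(i % 2 == 0), i % 2 == 0) for i in range(300)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(points, k=2, iters=20):
    # Deterministic init from the first k points keeps the sketch reproducible.
    centroids = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

centroids = kmeans([m for m, _ in attempts])

def nearest(m):
    return min(range(len(centroids)), key=lambda i: dist(m, centroids[i]))

# Label each cluster by the success rate of the attempts assigned to it.
stats = {i: [0, 0] for i in range(len(centroids))}
for m, ok in attempts:
    c = nearest(m)
    stats[c][0] += ok
    stats[c][1] += 1
safe = {i for i, (wins, total) in stats.items() if wins / total > 0.5}

def removable(measurement):
    """Predict whether a gentle probe suggests the block can be pulled out."""
    return nearest(measurement) in safe

print(removable((0.4, 7.5)))  # loose-feeling probe -> True
print(removable((3.2, 0.3)))  # stuck-feeling probe -> False
```

In the same spirit as the training described above, each of the robot's roughly 300 recorded pushes would fall into one of these behavior clusters, and the cluster's history tells the robot whether a similar-feeling block is worth pulling.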