MIT tactile sensing carpet estimates human poses without a camera
Researchers at MIT have created a new tactile sensing carpet that can estimate human poses without the need for a camera. The team says it is a step toward self-powered personalized healthcare, smart homes, and gaming. They note that most people's daily activities involve physical contact with the ground, whether walking, exercising, or resting.
Those interactions carry information that can help in understanding people's movements. Previous research on the subject has used single RGB cameras, wearable omnidirectional cameras, and standard off-the-shelf webcams, but camera-based systems suffer from occlusion and raise privacy concerns. The MIT researchers needed cameras only to build the dataset their system was trained on, and only to capture the movement of the person performing the activity.
After training, the system can identify a person's 3D pose simply by having them step onto the carpet and perform an action. The deep neural network the team created uses the tactile information to determine whether the person is doing sit-ups, stretching, or performing another action.
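To make the data flow concrete, the sketch below classifies a recording of 2D pressure frames into an activity label. Note this is only an illustration: the MIT system uses a deep neural network, whereas this toy stand-in uses a nearest-centroid rule, and the grid size, frame counts, and footprint patterns are all hypothetical.

```python
import numpy as np

def frame_features(frames: np.ndarray) -> np.ndarray:
    """Average the pressure frames over time and flatten to a feature vector."""
    return frames.mean(axis=0).ravel()

def train_centroids(recordings, labels):
    """Compute one mean feature vector (centroid) per activity label."""
    centroids = {}
    for label in set(labels):
        feats = [frame_features(r) for r, l in zip(recordings, labels) if l == label]
        centroids[label] = np.mean(feats, axis=0)
    return centroids

def classify(frames, centroids):
    """Assign the activity whose centroid is closest to the recording's features."""
    f = frame_features(frames)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))

rng = np.random.default_rng(0)

def fake(pattern):
    """Fake 10-frame recording on a 6x4 sensor grid with a pressed region."""
    base = np.zeros((10, 6, 4))
    base[(slice(None),) + tuple(pattern)] = 1.0
    return base + 0.05 * rng.standard_normal(base.shape)

# Hypothetical footprints: "situp" presses the centre rows, "pushup" the corners.
situps = [fake((slice(2, 4), slice(None))) for _ in range(3)]
pushups = [fake(([0, 0, 5, 5], [0, 3, 0, 3])) for _ in range(3)]
centroids = train_centroids(situps + pushups, ["situp"] * 3 + ["pushup"] * 3)
print(classify(fake((slice(2, 4), slice(None))), centroids))  # prints "situp"
```

A real pose-estimation model would regress 3D joint positions from the frame sequence rather than pick from a fixed label set; this sketch only shows the shape of the tactile input.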
The carpet is low-cost and scalable, consisting of commercial pressure-sensitive film and conductive thread. It measures 36 feet by 2 feet and contains over 9,000 sensors. Each sensor converts pressure into an electrical signal through physical contact between the carpet and a person's feet, limbs, and torso.
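One common way a pressure-sensitive film is read out is as a piezoresistive element: its resistance falls as pressure rises, and a voltage divider turns that resistance into a voltage an ADC can sample for each grid cell. The sketch below models that readout; the resistance model and all constants are illustrative assumptions, not the MIT carpet's actual electronics.

```python
def sensor_resistance(pressure_kpa, r_unloaded=100_000.0, k=1.0):
    """Toy piezoresistive model: resistance (ohms) decreases as pressure rises."""
    return r_unloaded / (1.0 + k * pressure_kpa)

def divider_voltage(r_sensor, r_fixed=10_000.0, v_supply=3.3):
    """Voltage divider: higher pressure -> lower resistance -> higher voltage."""
    return v_supply * r_fixed / (r_fixed + r_sensor)

def read_frame(pressure_map):
    """Scan the sensor grid cell by cell into a frame of voltages."""
    return [[divider_voltage(sensor_resistance(p)) for p in row]
            for row in pressure_map]

# A 3x3 patch of the grid with a 40 kPa footprint in the centre (made-up values).
frame = read_frame([[0, 0, 0], [0, 40, 0], [0, 0, 0]])
print([[round(v, 2) for v in row] for row in frame])
```

In a grid of this size, the rows and columns of conductive thread would typically be multiplexed so each crossing point is sampled in turn rather than wired individually.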
Applications for the system include computerized exercise regimens, such as showing a video of someone doing push-ups when the user gets on the carpet and begins doing push-ups. This could help people training alone learn proper form and technique and avoid injuries during exercise. The researchers say that if used solely for exercise, the carpet could count the number of reps and estimate the number of calories burned.
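Rep counting and calorie estimation could plausibly be layered on top of the per-frame pressure sums. The sketch below counts reps via threshold crossings with hysteresis and applies the standard MET formula for calories; the thresholds, the MET value of 8.0 for push-ups, and the signal itself are illustrative assumptions, not figures from the MIT work.

```python
def count_reps(signal, high=0.7, low=0.3):
    """Count rising crossings of `high`, re-arming only after the signal
    drops below `low` so small jitter is not counted as extra reps."""
    reps, armed = 0, True
    for v in signal:
        if armed and v >= high:
            reps += 1
            armed = False
        elif v <= low:
            armed = True
    return reps

def calories_burned(met, weight_kg, minutes):
    """Standard MET estimate: kcal = MET * body weight (kg) * duration (hours)."""
    return met * weight_kg * minutes / 60.0

# Simulated per-frame pressure sums (normalized) for five push-ups.
signal = [0.1, 0.8, 0.2, 0.9, 0.1, 0.85, 0.15, 0.9, 0.2, 0.95, 0.1]
print(count_reps(signal))                     # prints 5
print(round(calories_burned(8.0, 70, 5), 1))  # 5 min at MET 8, 70 kg -> 46.7
```

The hysteresis pair (high/low thresholds) is what keeps a noisy signal hovering near a single threshold from registering as many spurious reps.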