Sign language recognition was one of the more surprising potential use-cases floated for the Kinect beyond simple motion gaming in the run-up to the peripheral's release, and a team of researchers at Georgia Tech's College of Computing is hard at work making it a reality. They're looking to use Kinect in their ongoing CopyCat project, a video game intended to teach young people American Sign Language (ASL).
Previous versions of CopyCat used a single camera and relied on tracking colored gloves embedded with accelerometers to interpret ASL movements. However, thanks to Kinect’s virtual skeleton recognition, the team can do away with that.
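To give a rough sense of how skeleton-joint tracks can drive gesture recognition, here is a toy sketch that classifies a sequence of 2-D wrist positions by comparing it against stored templates with dynamic time warping. This is purely illustrative, not the CopyCat team's actual method; the gestures, data, and function names are invented for the example.

```python
# Toy gesture classifier over skeleton-joint tracks (illustrative only;
# not the CopyCat system's algorithm).

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sequences of (x, y) points."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two aligned points
            d = ((a[i-1][0] - b[j-1][0]) ** 2 + (a[i-1][1] - b[j-1][1]) ** 2) ** 0.5
            cost[i][j] = d + min(cost[i-1][j], cost[i][j-1], cost[i-1][j-1])
    return cost[n][m]

def classify(track, templates):
    """Return the label of the template closest to the observed track."""
    return min(templates, key=lambda label: dtw_distance(track, templates[label]))

# Invented templates: an upward sweep vs. a sideways sweep of the wrist.
templates = {
    "up":   [(0, 0), (0, 1), (0, 2), (0, 3)],
    "side": [(0, 0), (1, 0), (2, 0), (3, 0)],
}

# A noisy, slightly shorter upward sweep still matches the "up" template.
print(classify([(0, 0), (0.1, 1), (0, 2.1)], templates))
```

The appeal of skeleton-based input is visible even in this sketch: once the sensor hands you joint coordinates directly, there is no glove or marker hardware to calibrate, only sequences of points to compare.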
In early testing, with a limited vocabulary, the Kinect-powered system was able to achieve 98.8-percent accuracy in tracking signed sentences. There's still plenty of work required, however; the team is looking to expand the system's vocabulary as well as support longer sentences, more complex ASL constructs, and signer-independent recognition. They'll also look at recognizing hand-shape features, rather than just arm movements, for finer-grained recognition. It remains to be seen how Microsoft will react, given that the company already holds a patent on Kinect sign language support.