There are more ways to use your Kinect than just playing games. Some folks over at MIT have created an actuated surface that can be controlled with hand gestures, a system made possible by OpenCV, a Kinect, and no doubt plenty of brainpower.
Dubbed Recompose, the system was created by a team including Anthony DeVincenzi, David Lakatos, Matthew Blackshaw, Daniel Leithinger, and Hiroshi Ishii at the MIT Media Lab. Its surface consists of 120 individual pins that can be actuated to different heights in response to different hand gestures.
The team describes it more eloquently:
Our system builds upon the Relief table, developed by Leithinger. The table consists of an array of 120 individually addressable pins, whose height can be actuated and read back simultaneously, thus allowing the user to utilize them as both input and output. Building upon this system, we have furthered the design by placing a depth camera above the tabletop surface. By gaining access to the depth information we are able to detect basic gestures from the user. In order to provide visual feedback related to user interaction, a projector is mounted above the table and calibrated to be coincident with the depth camera. Computer vision is utilized to determine and recognize the position, orientation, and height of hands and fingers, in order to detect gestural input.
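To make the depth-camera step concrete, here is a minimal sketch of how a hand could be detected above a tabletop from a single Kinect-style depth frame. This is an illustration, not the team's actual code: the constants (`TABLE_DEPTH_MM`, `HAND_MARGIN_MM`), the function name, and the frame format are all assumptions, and a real system (like the OpenCV pipeline the post mentions) would track contours, fingers, and orientation as well.

```python
import numpy as np

# Hypothetical constants: distance (mm) from the camera down to the table
# surface, and the margin above the table at which a pixel counts as a hand.
TABLE_DEPTH_MM = 1200
HAND_MARGIN_MM = 50

def detect_hand(depth_frame):
    """Return ((row, col) centroid, height above table in mm) for pixels
    nearer than the table plane, or None if nothing is above the table.

    depth_frame: 2D numpy array of per-pixel distances in mm
    (0 = no reading), as a Kinect-style depth camera would provide.
    """
    valid = depth_frame > 0
    # Pixels significantly closer to the camera than the table surface.
    above = valid & (depth_frame < TABLE_DEPTH_MM - HAND_MARGIN_MM)
    if not above.any():
        return None
    rows, cols = np.nonzero(above)
    centroid = (rows.mean(), cols.mean())
    # Height of the closest point above the table plane.
    height = TABLE_DEPTH_MM - depth_frame[above].min()
    return centroid, int(height)

# Toy frame: flat table at 1200 mm with a "hand" patch 300 mm above it.
frame = np.full((8, 8), 1200, dtype=np.int32)
frame[2:4, 5:7] = 900
print(detect_hand(frame))  # centroid near (2.5, 5.5), height 300 mm
```

From a centroid and height like this, gestures such as pressing down or sweeping across the pins could be recognized by tracking how the values change from frame to frame.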
[Via Creative Applications]