The Virtual Kinect Environment is an application launcher that you control with your entire body. Gestures of your upper body and the position of your feet steer the virtual room depicted on the screen. The applications are projected onto the walls of this virtual room. Using intuitive gestures, you can rotate the room, tilt your view, and pull the applications towards you or push them away again. With your feet you can start new applications, which can then also be controlled with your entire body.
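As a rough illustration of how such gestures might drive the room, the sketch below maps recognized gestures to room actions (rotate, tilt, pull/push, launch). The `Gesture` and `Room` names and the dispatch logic are hypothetical and are not taken from the project's actual code.

```cpp
// Hypothetical sketch: mapping recognized gestures to actions on the virtual room.
// All names and numbers are illustrative only, not the project's real types.
#include <iostream>

enum class Gesture { RotateLeft, RotateRight, TiltUp, TiltDown, Pull, Push, StepOnTile };

struct Room {
    float yaw = 0.0f, pitch = 0.0f, wallDistance = 5.0f;

    void apply(Gesture g) {
        switch (g) {
            case Gesture::RotateLeft:  yaw   -= 15.0f; break;  // rotate the room
            case Gesture::RotateRight: yaw   += 15.0f; break;
            case Gesture::TiltUp:      pitch += 10.0f; break;  // tilt the view
            case Gesture::TiltDown:    pitch -= 10.0f; break;
            case Gesture::Pull:        wallDistance -= 1.0f; break;  // pull applications closer
            case Gesture::Push:        wallDistance += 1.0f; break;  // push them away again
            case Gesture::StepOnTile:                                // feet start applications
                std::cout << "launching application on the active wall\n";
                break;
        }
    }
};

int main() {
    Room room;
    room.apply(Gesture::RotateLeft);
    room.apply(Gesture::Pull);
    std::cout << "yaw=" << room.yaw << " wallDistance=" << room.wallDistance << "\n";
}
```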
All you need is a computer, preferably a projector, and an Xbox Kinect camera. The Kinect uses an infrared projector and an infrared camera to create a depth image, which the Microsoft Kinect SDK analyzes to track the skeleton of the person in front of the sensor. Our code analyzes the skeleton joints to recognize the gestures being made. We have programmed a 3D room in which the gestures are interpreted and applications can be launched. Using encoded messages, the 3D room can pass gestures on to applications, so that these, too, can be controlled with the user's entire body. All code is written in C++, using Visual Studio 2010 for development.
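As a rough idea of how joint positions can be turned into a gesture, the sketch below classifies a simple push/pull pose of the right hand relative to the shoulder. The `Joint` struct, the thresholds, and the static single-frame classification are assumptions for illustration; the project's actual gesture code and the Kinect SDK's skeleton types differ.

```cpp
// Minimal sketch of interpreting skeleton joints as a gesture.
// The Joint struct and the distance thresholds are assumptions for illustration;
// the Kinect SDK provides comparable 3D positions for each tracked joint.
#include <iostream>

struct Joint { float x, y, z; };   // joint position in meters, camera space

// A "push" is read here as the right hand extended well in front of the shoulder,
// a "pull" as the hand drawn back close to the body (a simplification of a
// real gesture, which would be tracked over several frames).
enum class HandGesture { None, Push, Pull };

HandGesture classify(const Joint& rightHand, const Joint& rightShoulder) {
    float reach = rightShoulder.z - rightHand.z;   // how far the hand is in front of the shoulder
    if (reach > 0.45f) return HandGesture::Push;
    if (reach < 0.10f) return HandGesture::Pull;
    return HandGesture::None;
}

int main() {
    Joint shoulder{0.10f, 0.40f, 2.20f};
    Joint hand{0.15f, 0.35f, 1.65f};               // hand stretched out towards the camera
    std::cout << (classify(hand, shoulder) == HandGesture::Push ? "push" : "other") << "\n";
}
```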
The current program is a proof of concept. It shows that control of the room is very intuitive and that the gestures feel natural. The software can be extended in several directions: more gestures could be implemented, more functionality could be added to the room, and a clear API could be developed so that it becomes easy to build applications that can be started from within the room (a possible shape of such an API is sketched below). Many other enhancements are conceivable.
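Such an API does not exist yet; the following is only one hypothetical shape it could take, with all names invented for illustration.

```cpp
// Purely hypothetical sketch of an API for room-launchable applications;
// no such interface exists in the current code and all names are invented.
#include <iostream>
#include <string>

// An application started from the room would receive the same encoded
// gesture messages that the room itself reacts to.
class RoomApplication {
public:
    virtual ~RoomApplication() = default;
    virtual std::string name() const = 0;            // title shown on the wall
    virtual void onLaunch() = 0;                     // called when a foot starts the app
    virtual void onGesture(int encodedGesture) = 0;  // forwarded body gestures
};

// Minimal example implementation.
class PhotoViewer : public RoomApplication {
public:
    std::string name() const override { return "Photo viewer"; }
    void onLaunch() override { std::cout << "photo viewer launched\n"; }
    void onGesture(int encodedGesture) override {
        std::cout << "received gesture code " << encodedGesture << "\n";
    }
};

int main() {
    PhotoViewer app;
    app.onLaunch();
    app.onGesture(42);
}
```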
We envision that the Virtual Kinect Environment can be used in many areas, for example playing games, giving presentations, navigating through different types of media, and many other purposes. Using the Virtual Kinect Environment is easy to learn, intuitive and, most importantly, a lot of fun.