Thursday, March 24, 2011

Simplifying Input with OpenNI and PrimeSense

I got back from break very refreshed and ready to dive back into this project. I started work on integrating OpenNI to do the hand tracking for my application (as per my design), and discovered that I had grossly underestimated the power of the OpenNI libraries. I had to reorganize my software design significantly, but I think the new design is much simpler and rests on only two third-party libraries now: OpenNI and DirectX (technically three, since I'm still using pthreads).

I've been able to cut fdlib out completely, which I've wanted since the beginning, as it is closed source and proprietary (though it was good for getting quick results). OpenNI has a skeletal tracking system very similar to the Microsoft SDK's (I'm assuming), which gives me not only head position but also rotation, and that will make the head-tracking display much more accurate and intuitive for the user. I've also replaced libfreenect with alternative drivers from PrimeSense that work with OpenNI's data node architecture, so that OpenNI can pull the information from the device instead of my having to feed the information to it. Overall this significantly simplifies my program flow, and will make it much easier to troubleshoot input issues.
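For the curious, here's roughly what reading the head joint looks like with OpenNI 1.x's C++ wrapper. This is a sketch from memory, not my actual code: it won't run without the OpenNI SDK, the PrimeSense drivers, and a calibrated user, and I've elided the setup and calibration callbacks.

```cpp
#include <XnCppWrapper.h>

// Sketch: pull the head joint's position and orientation each frame.
// Assumes a context initialized elsewhere (e.g. InitFromXmlFile with a
// config describing the PrimeSense sensor node), a UserGenerator created
// on it, and a user that has already completed skeleton calibration.
xn::Context context;
xn::UserGenerator userGen;

void readHead(XnUserID user)
{
    XnSkeletonJointTransformation head;
    userGen.GetSkeletonCap().GetSkeletonJoint(user, XN_SKEL_HEAD, head);
    // head.position.position is an XnPoint3D in real-world millimeters;
    // head.orientation.orientation is a 3x3 rotation matrix --
    // position *and* rotation, which is exactly what I need.
}
```

The nice part is that OpenNI owns the device pipeline here; I just query the latest skeleton state instead of shuttling raw frames around myself.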

Here are a couple of screenshots showing OpenNI in action:

Skeletal tracking being shown. Note the placement of the head joint is exactly where I will be positioning my camera to create the virtual window effect.
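The virtual window effect itself boils down to an off-axis projection driven by that head position: as the head moves, the view frustum skews so the screen behaves like a window frame. A minimal sketch of the math (the screen-centered coordinate convention and function name are mine, for illustration; in practice the resulting extents would feed an off-center projection matrix in DirectX):

```cpp
#include <cassert>
#include <cmath>

// Near-plane extents of an off-axis view frustum.
struct Frustum { float l, r, b, t; };

// ex, ey, ez: eye (head) position relative to the screen center, ez > 0,
// same units as the physical screen size W x H; n is the near plane.
Frustum offAxisFrustum(float ex, float ey, float ez, float W, float H, float n)
{
    float s = n / ez; // project the screen rectangle onto the near plane
    return { (-W / 2 - ex) * s, (W / 2 - ex) * s,
             (-H / 2 - ey) * s, (H / 2 - ey) * s };
}
```

With the eye centered the frustum is symmetric; move the head right and the frustum skews left, which is what sells the window illusion.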


OpenNI's hand tracking. The white dot is showing where OpenNI is perceiving my palm to be.
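Unlike the skeleton, OpenNI's hand tracking is callback-driven: you register handlers and OpenNI pushes palm positions at you. Another sketch from memory (OpenNI 1.x; the create/destroy handlers are elided, and this assumes a hand has been picked up, e.g. via a gesture):

```cpp
#include <XnCppWrapper.h>

// Sketch: OpenNI reports tracked palm positions through callbacks.
void XN_CALLBACK_TYPE onHandUpdate(xn::HandsGenerator& gen, XnUserID id,
                                   const XnPoint3D* pos, XnFloat fTime,
                                   void* pCookie)
{
    // pos is the palm position in real-world millimeters --
    // the white dot in the screenshot above.
}

void setupHands(xn::Context& context)
{
    xn::HandsGenerator hands;
    hands.Create(context);
    XnCallbackHandle handle;
    hands.RegisterHandCallbacks(NULL /*create*/, onHandUpdate,
                                NULL /*destroy*/, NULL, handle);
}
```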

I've also been working on the system for adding and manipulating objects in the world, but that sadly doesn't have shiny screenshots at the moment.

Self-Review

As part of this blog post I've been asked to perform a self-review of my progress thus far.

I've been making steady progress in setting up the Kinect as an input device. I've had several setbacks and redesigns, but on the whole I've succeeded in getting the information I need out of the Kinect. I now have everything working the way I want, and I'm confident that I shouldn't have any more major problems (famous last words I know) in terms of the Kinect.

My progress on the actual application mechanics, however, has been lagging because of all the focus I've given to getting the Kinect to work. I really need to focus on getting some of the basic mechanics in during the next two weeks, because I'd like to have a simple playable demo by the beta review period, and currently all I have is unused input.

I'm pretty confident that I'll be able to deliver a final product that I'm proud of, but there is definitely plenty of work yet to be done. In retrospect, I think it would have been smarter to build the application without the Kinect input first and then work on integrating the Kinect, rather than the other way around; that way, every time I got more functionality out of the Kinect I would have had video to show and a demo to play.

tl;dr I think that I've made some decent progress, but things are going a little more slowly than I would like, and I'm going to have to really ramp it up for the Beta and Final reviews.
