Thursday, March 31, 2011

Adding Game Objects

This week I've been spending my time implementing some simple objects in the game in preparation for my beta review next Tuesday. For the beta I will only be using simple primitive shapes (cube, sphere, and cylinder), but I'll eventually be adding .obj support so that users can load custom objects.
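In case it's useful to anyone, the primitives themselves are only a few calls with the D3DX mesh helpers. A minimal sketch, assuming a Direct3D 9 renderer (the function and parameter values here are just for illustration):

```cpp
#include <d3dx9.h>

// Minimal sketch: build the three beta primitives with the D3DX helpers.
// Assumes Direct3D 9; 'device' stands in for the app's IDirect3DDevice9*.
void createPrimitives(IDirect3DDevice9* device,
                      ID3DXMesh** cube, ID3DXMesh** sphere, ID3DXMesh** cylinder)
{
    D3DXCreateBox(device, 1.0f, 1.0f, 1.0f, cube, NULL);            // width, height, depth
    D3DXCreateSphere(device, 0.5f, 20, 20, sphere, NULL);           // radius, slices, stacks
    D3DXCreateCylinder(device, 0.5f, 0.5f, 1.0f, 20, 1, cylinder, NULL);
}

// Drawing is then just mesh->DrawSubset(0) per object, per frame.
```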

Here are some screen caps of the objects. I've set up some generic lighting to give them a little dimension, but shaders/textures will be a post-beta feature.

A couple of cubes
Cubes are joined by their friend the sphere
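The generic lighting is nothing fancy, just the fixed-function pipeline. A minimal sketch of the idea, again assuming D3D9 (the specific direction and ambient values are hypothetical):

```cpp
// Minimal sketch of the generic lighting: one fixed-function directional
// light plus a touch of ambient (D3D9 assumed, no shaders involved).
void setupLighting(IDirect3DDevice9* device)
{
    D3DLIGHT9 light;
    ZeroMemory(&light, sizeof(light));
    light.Type      = D3DLIGHT_DIRECTIONAL;
    light.Diffuse.r = light.Diffuse.g = light.Diffuse.b = 1.0f;
    light.Direction = D3DXVECTOR3(-0.5f, -1.0f, 0.5f);  // down and into the scene

    device->SetLight(0, &light);
    device->LightEnable(0, TRUE);
    device->SetRenderState(D3DRS_LIGHTING, TRUE);
    device->SetRenderState(D3DRS_AMBIENT, D3DCOLOR_XRGB(40, 40, 40));
}
```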

I've also been working with the OpenNI/PrimeSense frameworks to get input ready for the beta. I'm working on getting the interactions to the point where they are exactly what I want for the final version (i.e. intuitive grabbing and moving/rotating). If I can't get that up and running in time, I'm also considering a gesture-based interface, where the user makes hand gestures to change the mode of the program and allow different types of manipulation.
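For context, OpenNI's hand tracker is callback driven: you register create/update/destroy handlers and it pushes palm positions to you as it tracks. A minimal sketch of the pattern using the OpenNI 1.x C++ wrapper (error checking omitted; the handler names are mine):

```cpp
#include <XnCppWrapper.h>

// Hand-point lifecycle callbacks (names are mine, not OpenNI's).
void XN_CALLBACK_TYPE OnHandCreate(xn::HandsGenerator& gen, XnUserID id,
                                   const XnPoint3D* pos, XnFloat time, void* cookie)
{
    // A new hand point has started being tracked at *pos.
}

void XN_CALLBACK_TYPE OnHandUpdate(xn::HandsGenerator& gen, XnUserID id,
                                   const XnPoint3D* pos, XnFloat time, void* cookie)
{
    // pos->X/Y/Z is the current palm position in real-world millimeters;
    // this is what would drive grabbing and moving in the app.
}

void XN_CALLBACK_TYPE OnHandDestroy(xn::HandsGenerator& gen, XnUserID id,
                                    XnFloat time, void* cookie)
{
    // Tracking of this hand point was lost.
}

// Setup; in a real app the generator would outlive this function.
void setupHandTracking(xn::Context& context)
{
    xn::HandsGenerator hands;
    hands.Create(context);

    XnCallbackHandle handle;
    hands.RegisterHandCallbacks(OnHandCreate, OnHandUpdate, OnHandDestroy,
                                NULL, handle);

    // Tracking actually begins once StartTracking() is given an initial
    // point, typically from a recognized gesture (e.g. "Wave" or "Click").
    context.StartGeneratingAll();
}
```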

Thursday, March 24, 2011

Simplifying Input with OpenNI and PrimeSense

I got back from break very refreshed and ready to dive back into this project. I started work on integrating OpenNI to do the hand tracking for my application (as per my design) and discovered that I had been grossly underestimating the power of the OpenNI libraries. I had to reorganize my software design significantly, but I think the new design is much simpler and is based on only two third-party libraries now: OpenNI and DirectX (technically three, since I'm still using pthreads).

I've been able to cut out fdlib completely, which I've wanted since the beginning, as it is closed-source and proprietary (though it was good for getting quick results). OpenNI has a skeletal tracking system very similar to the Microsoft SDK's (I'm assuming), which gives me not only head position but also rotation, and that will make the head-tracking display much more accurate and intuitive for the user. I've also replaced libfreenect with alternative drivers from PrimeSense that work with OpenNI's data node architecture, so OpenNI can pull the information from the device instead of my having to feed the information to it. Overall this significantly simplifies my program flow and will make it much easier to troubleshoot input issues.
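To give a sense of what that looks like in code, here's a minimal sketch of the per-frame head query with the OpenNI 1.x C++ wrapper (the user-calibration callback plumbing is omitted):

```cpp
// Sketch: per-frame head pose query (OpenNI 1.x). Assumes 'user' has already
// passed skeleton calibration; the calibration callbacks are omitted here.
void readHeadPose(xn::Context& context, xn::UserGenerator& userGen, XnUserID user)
{
    context.WaitAndUpdateAll();  // OpenNI pulls the next frame from the driver

    XnSkeletonJointPosition headPos;
    XnSkeletonJointOrientation headRot;
    userGen.GetSkeletonCap().GetSkeletonJointPosition(user, XN_SKEL_HEAD, headPos);
    userGen.GetSkeletonCap().GetSkeletonJointOrientation(user, XN_SKEL_HEAD, headRot);

    // headPos.position is in millimeters; headRot.orientation is a 3x3
    // rotation matrix. Both carry an fConfidence worth checking first.
    if (headPos.fConfidence > 0.5f)
    {
        // drive the head-tracked camera from position + orientation here
    }
}
```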

Here are a couple of screenshots showing OpenNI in action:

Skeletal tracking in action. Note that the head joint is placed exactly where I will be positioning my camera to create the virtual window effect.


OpenNI's hand tracking. The white dot shows where OpenNI perceives my palm to be.

I've also been working on the system for adding and manipulating objects in the world, but that sadly does not have shiny screenshots at the moment.

Self-Review

As part of this blog post I've been asked to perform a self-review of my progress thus far.

I've been making steady progress in setting up the Kinect as an input device. I've had several setbacks and redesigns, but on the whole I've succeeded in getting the information I need out of the Kinect. I now have everything working the way I want, and I'm confident that I shouldn't have any more major problems (famous last words, I know) on the Kinect side.

My progress on the actual application mechanics, however, has been lagging because of all the focus I've given to getting the Kinect working. I really need to concentrate on getting some of the basic mechanics in over the next two weeks, because I would like to have a simple playable demo by the beta review, and currently all I have is unused input.

I'm pretty confident that I'll be able to deliver a final product I'm proud of, but there is definitely plenty of work yet to be done. In retrospect, I think it would have been smarter to build the application without the Kinect input first and then work on integrating the Kinect, rather than the other way around; that way, every time I got more functionality out of the Kinect I would have had a demo to play and video to show.

tl;dr I think that I've made some decent progress, but things are going a little more slowly than I would like, and I'm going to have to really ramp it up for the Beta and Final reviews.

Thursday, March 3, 2011

Alpha Review

This past Friday was our Alpha Review. For the review I compiled a 4-minute video which outlined my project, examined my approach, and displayed the results I have so far. The video is embedded below, but it does not have any audio, as I narrated it in person.



I didn't say too much that hasn't already been said on my blog, but the last segment, in which I show some of my initial results with the Kinect, is notable. I've got simple head-tracking working, and I'm using it to render a rotating cube in 3D. I recorded that segment of the video with my cellphone held in front of my face (hence the rather poor video quality).
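For anyone curious how the demo works, it's mostly a camera trick: map the tracked head position into screen units, then build an off-center frustum so the display behaves like a window onto the scene. A rough sketch of the math with D3DX (all the constants here are hypothetical tuning values, not what's in my code):

```cpp
// Rough sketch of the "virtual window" camera. 'eye' is the tracked head
// position mapped into screen units: x/y relative to the screen center,
// z = distance from the eye to the screen plane.
void buildHeadTrackedCamera(const D3DXVECTOR3& eye, D3DXMATRIX* view, D3DXMATRIX* proj)
{
    const float w = 0.26f, h = 0.16f;    // half-width/height of the display, meters
    const float zn = 0.1f, zf = 100.0f;  // near/far planes

    // Shift the frustum opposite the eye so the screen plane stays put.
    D3DXMatrixPerspectiveOffCenterLH(proj,
        zn * (-w - eye.x) / eye.z,   // left
        zn * ( w - eye.x) / eye.z,   // right
        zn * (-h - eye.y) / eye.z,   // bottom
        zn * ( h - eye.y) / eye.z,   // top
        zn, zf);

    // Look straight at the screen plane (z = 0) from the eye position.
    D3DXVECTOR3 eyePos(eye.x, eye.y, -eye.z);  // LH: +z points into the screen
    D3DXVECTOR3 at(eye.x, eye.y, 0.0f);
    D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
    D3DXMatrixLookAtLH(view, &eyePos, &at, &up);
}
```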

My application framework is fully set up. The next step will be adding the OpenNI libraries and using them to process the data from the Kinect, instead of the simple face-recognition library I used to demo head tracking for my alpha review.

This coming week is spring break (yay!) so my next update won't be for two weeks. By that point I'll hopefully have OpenNI fully integrated and tracking the user's head and hands.