Project progress has been pretty slow this week because of midterms and papers. Also, DXUT (the DirectX analogue of GLUT) decided it didn't want to work nicely with my laptop, so I've been spending some precious time troubleshooting it.
I went to an excellent GRASP lecture by Zhengyou Zhang from Microsoft Research. The talk, titled "Human Activity Understanding with Depth Sensors," covered how much of the functionality in the Kinect SDK works. The researchers at Microsoft can derive skeletons for up to 4 people in front of the Kinect in about 2 ms per frame. This gives me hope that the image-processing portion of my application won't be the performance drain I've been fearing.
The talk was also interesting because it confirmed that even with access to the Kinect SDK, I would still have to do most of the work I'm currently planning: the SDK doesn't provide the level of detail about hands that I require. It would fully take care of head tracking and give me a rough idea of where the hands are, but I'd still need more precision to create a good user experience.
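To make the "rough hand position, then refine" idea concrete, here's a minimal sketch of one way it could work, assuming a depth image in millimeters and a rough hand coordinate from a skeleton tracker. The function name `refine_hand_position` and the window/tolerance parameters are my own illustrative choices, not anything from the Kinect SDK or the talk.

```python
import numpy as np

def refine_hand_position(depth, rough_xy, window=30, tol=80):
    """Refine a rough hand estimate by taking the centroid of depth
    pixels near the hand's depth within a local search window.

    depth    -- 2D array of depth values in millimeters
    rough_xy -- (x, y) rough hand position, e.g. from a skeleton tracker
    window   -- half-size of the search window in pixels
    tol      -- depth tolerance in mm for pixels counted as "hand"
    """
    x, y = rough_xy
    h, w = depth.shape
    # Clamp the search window to the image bounds.
    x0, x1 = max(0, x - window), min(w, x + window)
    y0, y1 = max(0, y - window), min(h, y + window)
    patch = depth[y0:y1, x0:x1]

    hand_depth = int(depth[y, x])
    # Keep only pixels whose depth is close to the rough hand depth,
    # which separates the hand from the background behind it.
    mask = np.abs(patch.astype(np.int32) - hand_depth) < tol
    if not mask.any():
        return rough_xy  # nothing nearby; keep the rough estimate

    ys, xs = np.nonzero(mask)
    # Centroid of the masked pixels, back in full-image coordinates.
    return (x0 + int(xs.mean()), y0 + int(ys.mean()))
```

This is only a depth-thresholded centroid, so it would snap a coarse skeleton joint onto the hand blob itself; a real implementation would need more (fingertip detection, temporal smoothing), but it shows why a rough joint position is a useful starting point rather than a finished answer.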
The Alpha Review is next Friday, and I'm aiming to have head tracking fully working by then.