In labs, people have been experimenting with computer vision for a long time, like being able to track a body. We can go back almost 20 years and find examples of that at the MIT Media Lab, with interfaces that track someone's arm or hand. It's really only with the release of the Sony EyeToy that this became something more public, and now, with the Kinect, it's become massively public. So that's an example of the technology changing. And the tools around it have changed too, so that instead of needing a computer science degree to build a prototype that has face detection, you can now use an existing codebase in Processing or other tools and get there in a few weeks.
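To make that point concrete, here is roughly what a face-detection prototype looks like in Processing today. This is a minimal sketch, assuming the OpenCV for Processing library (gab.opencv) and the Processing video library are installed; it is an illustration of the kind of existing codebase the speaker is describing, not a specific project of theirs.

```java
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  // Grab webcam frames at half size to keep detection fast.
  video = new Capture(this, 320, 240);
  opencv = new OpenCV(this, 320, 240);
  // Load the bundled frontal-face Haar cascade.
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  scale(2);  // Draw the half-size video at full window size.
  opencv.loadImage(video);
  image(video, 0, 0);

  // Run the cascade and outline each detected face.
  noFill();
  stroke(0, 255, 0);
  strokeWeight(2);
  for (Rectangle face : opencv.detect()) {
    rect(face.x, face.y, face.width, face.height);
  }
}

void captureEvent(Capture c) {
  c.read();
}
```

All of the actual computer vision happens in two calls, loadCascade() and detect(); everything else is ordinary Processing boilerplate, which is exactly the shift being described: the hard part now ships as a library.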