I have been collaborating with LEVYdance, pushing my computer vision and interactive video programming skills to a new level.
I've come a long way in handling video input to get footage that tracks well: using IR to avoid video feedback, and combining different kinds of video processing to suit my tracking purposes. I take the IR video and apply background subtraction or frame differencing, which lets me locate the dancer and quantify their movement. From there I generate video within Max/Jitter driven by the dancers' movement, and I also export their location to Processing, where drawing algorithms that change over time swirl around and stick to the dancers.
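The frame-differencing step can be sketched in a few lines. This is not the actual Max/Jitter patch, just a minimal illustration in plain Python: the threshold value, frame format, and function name are assumptions. Comparing consecutive grayscale frames gives both a motion amount (how much changed) and a centroid (roughly where the dancer is).

```python
THRESHOLD = 30  # assumed: minimum brightness change to count as motion

def frame_difference(prev, curr, threshold=THRESHOLD):
    """Compare two grayscale frames (2D lists of 0-255 values).

    Returns the amount of motion (number of changed pixels) and the
    centroid of the changed region -- a rough stand-in for locating
    the dancer and quantifying their movement.
    """
    changed = [
        (x, y)
        for y, (row_p, row_c) in enumerate(zip(prev, curr))
        for x, (p, c) in enumerate(zip(row_p, row_c))
        if abs(c - p) > threshold
    ]
    if not changed:
        return 0, None
    cx = sum(x for x, _ in changed) / len(changed)
    cy = sum(y for _, y in changed) / len(changed)
    return len(changed), (cx, cy)

# Two tiny 4x4 frames: the "dancer" appears in the lower-right corner.
prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[2][3] = 200
curr[3][3] = 200

motion, centroid = frame_difference(prev, curr)
print(motion, centroid)  # → 2 (3.0, 2.5)
```

Background subtraction works the same way, except `prev` is a fixed reference frame of the empty stage rather than the previous frame.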
I'm now at the point of pinning down and debugging exactly the tech the project needs, and expanding everything I've worked out into the piece itself. At the same time I'm fleshing out the actual video content to be detailed, shifting, beautiful, and conceptually interwoven with the choreography and the text.
Chunky Move's Mortal Engine