theorising through (dance) practice
there are several different types of motion tracking technology: magnetic, gyroscopic and optical (passive or active). My personal preference would be for a high-resolution, small-sensor gyroscopic system that would allow me to track the position and orientation of small bones such as the phalanges. At the moment i'm using a passive optical system with markers like this:
However, i really need to track the fingers for my motion transcriptions, and optical marker misidentification resulting from marker occlusion makes this difficult. For example, when the fingers and thumb touch, all markers can be seen:
clench the fist and the markers are occluded (hidden):
as the tracking is passive, the system does not always map the markers correctly when they reappear. With an active optical system (LEDs instead of retroreflective markers) each marker is uniquely identified, partially resolving this problem. However, given the large number of markers i want to use (100+), an active system is not practical (identifying all the markers would cause the system to 'lag' behind 'real-time' tracking). SIMM solving in real-time already causes me problems, as the live representation is slightly behind the action, but i can turn this solving off and post-process solve from the same data. With active tracking i would be forced to work through the system lag.
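To make the relabelling problem concrete, here's a minimal sketch (in Python, with made-up marker names and an arbitrary distance threshold — not how EVaRT actually works) of the kind of nearest-neighbour guess a passive system has to make when unlabelled markers reappear:

```python
import math

def relabel(last_known, reappeared, max_dist=0.05):
    """Greedy nearest-neighbour relabelling: give each reappearing
    (unlabelled) marker the label of the closest last-known position.
    Positions are (x, y, z) tuples; units and threshold are illustrative."""
    labels = {}
    taken = set()
    for pos in reappeared:
        best, best_d = None, max_dist
        for name, known in last_known.items():
            if name in taken:
                continue
            d = math.dist(pos, known)   # straight-line distance
            if d < best_d:
                best, best_d = name, d
        if best is not None:
            labels[tuple(pos)] = best
            taken.add(best)
    return labels
```

with the fist clenched, neighbouring finger markers sit within `max_dist` of one another, so a greedy match like this can easily swap labels — which is exactly the misidentification described above.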
i'm also having issues with the EVaRT software, which makes it clear to me how far current technology is from satisfying our needs for accurate, simple dance documentation. i have a movement phrase (improv) that is 1:41 minutes long at 200 Hz (20305 frames) with 61 markers. Even with a four-processor machine, editing (system processing, not the edit tasks themselves) takes an age. The page file balloons [screen shot] until the application grinds to a halt and i have to shut it down and restart. The only solutions seem to be a) turning off the undo buffer, or b) saving after each edit and clearing the undo buffer; neither option makes for a fast workflow.
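A back-of-envelope check (assuming 3 coordinates per marker and 4-byte floats — EVaRT's actual internal storage will differ) shows the raw take itself is tiny, which makes the page-file ballooning all the stranger:

```python
# Rough size of the clip described above. Assumptions (mine, not EVaRT's):
# 3 coordinates per marker, stored as 4-byte floats.
frames = 20305          # as reported by the software
rate_hz = 200
markers = 61

duration_s = frames / rate_hz              # ~101.5 s, i.e. about 1:41
raw_bytes = frames * markers * 3 * 4       # frames x markers x xyz x float

print(f"duration: {duration_s:.1f} s")
print(f"raw data: {raw_bytes / 1e6:.1f} MB")
```

roughly 15 MB of raw marker positions — which suggests that whatever the undo buffer is keeping per edit is far heavier than the marker data itself.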
Fortunately, SIMM solving in post-process mode (to generate the HTR data) is not quite so slow: the 1:41 clip described above takes around 2:25 to solve. And if you were wondering what the motion paths look like for that clip ... welcome to the noodle dance:
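For what it's worth, that works out to the solver running at only around 1.4x the clip's length (a quick check using the times quoted above; the exact seconds are approximate):

```python
# Post-process SIMM solve speed for the clip above:
# ~1:41.5 of capture, solved in roughly 2:25.
clip_s = 20305 / 200       # 101.525 s of movement (20305 frames at 200 Hz)
solve_s = 2 * 60 + 25      # ~145 s to solve
ratio = solve_s / clip_s

print(f"solve time = {ratio:.2f}x clip length")
```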
© splines in space
(matthew gough) 2005