Gough, M. (2005). Towards Computer Generated Choreography: Epikinetic Composition. In Proceedings of the Hothaus seminar series. Birmingham: Vivid.
Towards Computer Generated Choreography: Epikinetic Composition.
Whilst dance and technology practitioners embrace computer software for the production of dance works, there is a general desire to retain human authorship in the conceptual and creative process. Artificially creative technologies developed for algorithmic art and music have yet to make an impact on the dance-making process other than to provide visual or aural accompaniment. We believe not only that the choreographic process can be simulated and automated, but that such a simulation would present a viable alternative to more traditional methods of dance composition. After examining existing methods of algorithmic choreography we present an alternative method (epikinetics) that facilitates both algorithmic choreography and autonomous dancing avatars.
Algorithmic options
In 1995 Lansdown called for 'more use [to be] made of the algorithmic approach to choreography' in order to find new 'narratives' and methods of composition [1]. Whilst methods of algorithmic composition have been developed for music [2] and the visual arts [3], issues such as modelling unique, expressive performance [4] have stalled the development of similar systems for choreography. Yet dance is essentially a motive response to stimuli, consisting of 'movement invention' [5] (concept: ideas / stimulation; movement realisation) and the human body. These elements can be simulated using a combination of neurological and biomechanical models. For the purposes of modelling we consider choreography (performance and creation) to be a three-part process: conception, ideas (choreography / composition) and the moment of movement (improvisation).
Existing methods of algorithmic choreography model one or two of these processes rather than all three. They include choreographic 'sketch book' software, rule-based performance technologies, human implementations of algorithmic and cognitive models, and virtual dancers, each discussed below.
Choreographic software often takes the form of a multimedia 'sketch book', such as the Interactive Choreographic Sketchbook [6] or Limelight [7]. This group of applications allows the choreographer to enter images, music, dance notation, animation, motion capture data and other stimuli into a timeline-based 'stage' and develop the choreography through rapid manipulation and pre-visualisation. Although such programs use a variety of algorithms to 'creatively' filter and process stimuli, they are incapable of autonomous composition and improvisation due to the relatively 'static' format of the data entered. The challenges of animating dance notation [8] and of rapidly capturing, re-targeting and warping motion capture data [9] severely limit the possibility of sweeping conceptual shifts through generative or emergent means.
A possible alternative to the sketch book concept is a notation-based 'pen and pad' model with an animated 3D preview and Tanzkurven editing [10], which would allow increased interpretive and transformative freedom through the use of non-deterministic, context-sensitive genetic algorithms. This structure would allow movement scores to be generated from a notational lexis with minimal human intervention. A non-deterministic, syntactical, notation-based approach could also be applied to performance technologies to facilitate real-time autonomous composition during a performance. The presence of algorithmic composition and improvisation in rule-based performance technologies such as E-merge [11] and ChoreoGraph [12] would complement the real-time choreography these 'emergent' systems already provide. Without this capability the advantage of performance technologies over generative dance (accumulation, chance procedures) is minimal, due to a reliance on human composition and improvisation. Choreographic narratives remarkably similar to those generated by performance technologies can be found in traditional methods of composition and in human implementations of algorithmic and cognitive models.
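As a concrete (and purely illustrative) example of generating movement scores from a notational lexis, the sketch below evolves symbol sequences with a simple genetic algorithm. The lexis, the 'variety' fitness measure and all parameters are assumptions made for the example, not features of the systems named above.

```python
# Minimal genetic-algorithm sketch: movement scores evolved from a small,
# hypothetical notation lexis. Symbols and fitness are illustrative only.
import random

LEXIS = ["plie", "tendu", "turn", "jump", "fall", "reach", "pause"]

def random_score(length=8):
    return [random.choice(LEXIS) for _ in range(length)]

def fitness(score):
    # Reward variety and penalise immediate repetition (a stand-in measure).
    variety = len(set(score)) / len(score)
    repeats = sum(1 for a, b in zip(score, score[1:]) if a == b)
    return variety - 0.1 * repeats

def mutate(score, rate=0.2):
    return [random.choice(LEXIS) if random.random() < rate else s for s in score]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=50, pop_size=20):
    population = [random_score() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print(" ".join(evolve()))
```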
Davide Terlingo has experimented with 'fractal' choreography in 'Nonlinear Generators' [13], a practical application of genetic algorithms for composition. Terlingo extends a set of 10 static positions and connecting movements through deterministic, context-free algorithmic composition to generate novel movement sequences. However, the dependence on human creativity results in a movement narrative influenced by the performer rather than by the algorithmic procedure. Similarly, Hagendoorn's application of motor control, perception and cognition to dance improvisation [14] has also resulted in techniques and narratives similar to existing methods. The emphasis on organisation in Hagendoorn's and Terlingo's techniques (at the expense of movement invention) limits the possibilities for a computer-based algorithmic implementation, as they provide no mechanism for generating movement.
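For illustration of the general idea, the sketch below extends a small set of static positions through deterministic, context-free rewriting. The positions and rewrite rules are invented for the example and are not Terlingo's material.

```python
# Illustrative sketch: deterministic, context-free extension of ten static
# positions into longer sequences via connecting movements.
POSITIONS = [f"P{i}" for i in range(1, 11)]   # ten static positions

# Each position deterministically rewrites to itself, a connecting move,
# and a fixed target position.
RULES = {p: [p, f"link({p}->{POSITIONS[(i + 3) % 10]})", POSITIONS[(i + 3) % 10]]
         for i, p in enumerate(POSITIONS)}

def expand(sequence, depth):
    """Apply the rewrite rules 'depth' times; context-free, so each symbol
    is expanded independently of its neighbours."""
    for _ in range(depth):
        sequence = [item for symbol in sequence
                    for item in RULES.get(symbol, [symbol])]
    return sequence

if __name__ == "__main__":
    print(expand(["P1", "P5"], depth=2))
```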
One approach to removing human movement bias (and allowing new narratives) is the use of virtual dancers. Virtual dancers can be used to pre-visualise choreography generated from keyframe animation and motion capture data. Although these techniques generate 'fixed' movement material, various 'noise' or 'motion texture' functions (layering pseudo-random movement onto existing motion data) can be used to synthesise improvisation. Several methods of linking discrete segments of motion data (re-targeting, warping, chaotic selection, motion matching) can also be used to simulate compositional autonomy, but they do not facilitate a fully autonomous system.
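A minimal sketch of a 'motion texture' function follows, assuming joint angles stored per frame: smooth pseudo-random offsets are layered onto otherwise fixed motion data. Joint names and amplitudes are assumptions for illustration only.

```python
# 'Motion texture' sketch: pseudo-random noise layered onto fixed joint-angle
# data to synthesise improvisation.
import math
import random

def noisy_motion(frames, amplitude=3.0, seed=7):
    """Add smooth pseudo-random offsets (in degrees) to each joint channel."""
    rng = random.Random(seed)
    phases = {joint: rng.uniform(0, 2 * math.pi) for joint in frames[0]}
    textured = []
    for t, frame in enumerate(frames):
        textured.append({
            joint: angle + amplitude * math.sin(0.25 * t + phases[joint])
            for joint, angle in frame.items()
        })
    return textured

if __name__ == "__main__":
    # 'Fixed' captured motion: a flat elbow/knee curve over 8 frames.
    fixed = [{"elbow": 45.0, "knee": 90.0} for _ in range(8)]
    for frame in noisy_motion(fixed):
        print({j: round(a, 1) for j, a in frame.items()})
```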
To achieve compositional and improvisational autonomy for virtual dancers, an algorithmic method of movement simulation is required. Nakata's 'automatic choreography' [15] uses an algorithmic approach to generate 'life-like' movement without the need for motion capture data. However, his implementation of a somatically coordinated structure (limited degrees of freedom to enable motive control) is unsuitable for improvised movement generation. Despite this limitation, algorithmically generated movement offers a method by which autonomous algorithmic choreography can be achieved.
Epikinetics
epi - "on, at, close upon (in space or time)"
kinetic - "moving, putting in motion"
Epikinetic motion simulation is related to Epigenetic Robotics (motion as genotype) but deals exclusively with the generation of raw, unstable movement rather than controlled motion. This raw movement is then processed by a series of algorithmic modules that shape the movement and present it as animation or notation. Generating unstable, rather than stable, movement provides a high level of motive agility and responsiveness. Similar techniques can be found in advanced aeronautics, where adaptive motion control algorithms are used with unstable platforms (such as the F-117 'Nighthawk') to maintain level flight and keep the aircraft 'in the air'. It is through such adaptive algorithms, with differing methods of intervention, that autonomous algorithmic choreography can create distinct narrative forms.
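By way of analogy with the adaptive control described above, the sketch below first generates deliberately unstable joint impulses and then damps them back into a workable range at each step. Joint names, limits and the gain are assumptions made for the example, not values used in chorea.

```python
# Sketch of the epikinetic idea under stated assumptions: raw, unstable joint
# impulses are generated first, then an adaptive step repeatedly folds them
# back towards a workable range, analogous to fly-by-wire stabilisation.
import random

LIMITS = {"hip": (-45.0, 120.0), "knee": (0.0, 150.0)}  # assumed ranges (degrees)

def raw_impulse(rng):
    """Deliberately unstable: impulses may far exceed what the body allows."""
    return {joint: rng.uniform(-360.0, 360.0) for joint in LIMITS}

def stabilise(angles, impulse, gain=0.3):
    """Adaptive step: pull each joint part-way towards the impulse, then
    clamp the result inside the joint's assumed range."""
    shaped = {}
    for joint, angle in angles.items():
        lo, hi = LIMITS[joint]
        target = angle + gain * impulse[joint]
        shaped[joint] = min(hi, max(lo, target))
    return shaped

if __name__ == "__main__":
    rng = random.Random(0)
    pose = {"hip": 0.0, "knee": 10.0}
    for step in range(5):
        pose = stabilise(pose, raw_impulse(rng))
        print(step, {j: round(a, 1) for j, a in pose.items()})
```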
An epikinetic system models the Somatic Nervous System (SNS), including effectors (muscles), receptors (nerves), and control systems such as the Premotor Cortex, Basal Ganglia and Cerebellum. Our system (chorea) consists of six principal modules: vitus, the hierarchical skeleton / dynamic environment, zanshin, koan, magnesium and stimuli.
vitus
The vitus module is the core of chorea's functionality. An inversion of Nakata's coordinated whole, it replicates the 'moment of movement' in multiple locations as an 'uncoordinated, interrelated whole'. An epikinetic algorithm generates movement data by processing each joint and bone of the hierarchical skeleton concurrently, but individually, in real time. These uncoordinated motor images [16] are then passed on to the hierarchical skeleton and dynamic environment, where they are bound by the rules of human physics and biomechanics (the related whole).
This method of producing motor images without first conceiving the motion to be generated provides a high level of improvisational fidelity and might be considered a simulation of neurological movement disorders such as chorea. The development of our movement disorder influenced method is distinct from recent choreographic research into Ataxia, the inability to coordinate movement [17].
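As a rough illustration of the 'uncoordinated, interrelated whole', the sketch below generates a motor image for each joint independently and concurrently, leaving binding by skeleton and physics to a later stage. The joint list and the contents of a motor image are assumptions made for the example, not details of chorea.

```python
# Sketch: each joint produces its own motor image with no reference to the
# other joints; coordination happens only when the images are later bound.
from concurrent.futures import ThreadPoolExecutor
import random

JOINTS = ["head", "spine", "l_shoulder", "r_shoulder", "l_hip", "r_hip"]

def motor_image(joint):
    """One joint's intention to move, generated independently of the others."""
    rng = random.Random()  # independent randomness per call
    return {"joint": joint,
            "rotation": [rng.uniform(-90, 90) for _ in range(3)],  # x, y, z
            "effort": rng.random()}

def generate_frame():
    # Each joint is processed concurrently but individually.
    with ThreadPoolExecutor(max_workers=len(JOINTS)) as pool:
        return list(pool.map(motor_image, JOINTS))

if __name__ == "__main__":
    for image in generate_frame():
        print(image["joint"], [round(r, 1) for r in image["rotation"]])
```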
Hierarchical skeleton / dynamic environment
The hierarchical skeleton / dynamic environment is used to process the motor images from the vitus module. Unlike existing approaches to movement simulation, the skeleton does not 'drive' the motion but responds to the motor images. Rather than simulating the mechanical properties of motion, we simulate the impulse, or 'moment of movement'. This shift in perspective is reflected in the way the skeleton deals with biomechanically impossible motor images: rather than stopping the movement at the human limit, it rearranges the surrounding limbs to accommodate the desired motion. Such adaptive reconfiguring can be found in numerous dance styles, including salsa and contact improvisation.
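A minimal sketch of this accommodation idea follows, assuming a simplified joint hierarchy and invented rotation limits: when a motor image exceeds a joint's range, the excess rotation is passed up to the parent joint rather than being clipped.

```python
# Sketch of adaptive accommodation: overflow rotation travels up the hierarchy
# instead of stopping at the joint's biomechanical limit.
PARENT = {"wrist": "elbow", "elbow": "shoulder", "shoulder": "spine", "spine": None}
LIMIT = {"wrist": 70.0, "elbow": 140.0, "shoulder": 170.0, "spine": 30.0}

def accommodate(joint, requested, pose):
    """Apply 'requested' degrees at 'joint'; any excess moves the parent."""
    while joint is not None and abs(requested) > 1e-6:
        allowed = max(-LIMIT[joint], min(LIMIT[joint], requested))
        pose[joint] = pose.get(joint, 0.0) + allowed
        requested -= allowed           # excess travels up the hierarchy
        joint = PARENT[joint]
    return pose

if __name__ == "__main__":
    # A motor image asking the wrist for 200 degrees: the wrist takes 70,
    # the elbow absorbs the remaining 130.
    print(accommodate("wrist", 200.0, {}))
```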
The motor images are processed by a variety of existing methods for animation and movement simulation. Whilst some of these methods bear little relation to dance practice, other solutions have direct parallels.
By using a range of motion synthesis techniques it is possible to reveal the full range of movement styles available to the human body. Processed motor images are exported as motion data in the form of animation (rendered motion capture data) or notation to enable human performances of the choreography.
zanshin
All events occurring in the performance are stored in the zanshin (awareness) module for the duration of the performance. This allows the koan and magnesium modules to 'perceive' events occurring within the performance and react with new motive responses. Data entered into the stimuli module also passes through the zanshin module. Only knowledge of the current performance is retained; the absence of a long-term memory allows greater improvisational freedom.
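The following sketch shows one plausible shape for such a short-term store; the interface and event format are assumptions for illustration, not chorea's actual design.

```python
# Sketch of a zanshin-style store: events persist only for the current
# performance, so other modules can 'perceive' them but nothing is kept
# between works.
class Zanshin:
    def __init__(self):
        self.events = []

    def record(self, source, event):
        # Every event (motor image, stimulus, contact) is stored in order.
        self.events.append((source, event))

    def perceive(self, source=None):
        # Other modules query recent events, optionally filtered by source.
        return [e for s, e in self.events if source is None or s == source]

    def end_performance(self):
        # No long-term memory: the store is emptied when the work ends.
        self.events.clear()

if __name__ == "__main__":
    memory = Zanshin()
    memory.record("magnesium", "contact: left hand / partner shoulder")
    memory.record("stimuli", "music: tempo change")
    print(memory.perceive("stimuli"))
    memory.end_performance()
    print(memory.perceive())   # [] -- nothing survives the performance
```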
koan
Algorithmic composition and choreography are generated by the koan module, which has the ability to override motor images from both vitus and magnesium. Koan utilises a range of formal composition techniques (canon, mirroring, unison etc.) and is responsible for generating motive responses to internal and external stimuli such as music and limb arrangement. Koan defines its own concept(s) for the dance work, ideas for the realisation of those concepts, and motor images for processing by the hierarchical skeleton.
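As an illustration of the formal devices named above, the sketch below applies canon, mirroring and unison to a short phrase of joint rotations. The phrase, the two-dancer setup and the data format are assumptions made for the example.

```python
# Sketch of formal composition devices applied to a phrase of joint rotations.
def canon(phrase, delay=2):
    """Second dancer repeats the phrase 'delay' beats behind the first."""
    rest = [{}] * delay
    return {"dancer_1": phrase, "dancer_2": rest + phrase}

def mirror(phrase):
    """Reflect the phrase in the sagittal plane by negating rotations."""
    return [{joint: -angle for joint, angle in beat.items()} for beat in phrase]

def unison(phrase, dancers=2):
    """All dancers perform the same material at the same time."""
    return {f"dancer_{i + 1}": phrase for i in range(dancers)}

if __name__ == "__main__":
    phrase = [{"arm": 30.0}, {"arm": 60.0}, {"arm": 90.0}]
    print(canon(phrase))
    print(mirror(phrase))
    print(unison(phrase))
```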
magnesium
Points of contact that occur during the performance are monitored by magnesium. This module provides an artificially intelligent method for contact improvisation along with more general touching and lifting tasks. Magnesium generates motor images and performs adaptive motion control through a recursive process (based on the principles of contact improvisation) whilst maintaining compositional freedom.
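A minimal sketch of a recursive contact response follows, assuming a one-dimensional model of the distance between two contact points; it illustrates the general principle rather than magnesium's actual algorithm.

```python
# Sketch: a recursive adjustment keeps a monitored point of contact within a
# small tolerance. Distances are abstract units; the single axis is a
# deliberate simplification.
def maintain_contact(own_pos, partner_pos, tolerance=0.05, step=0.5, depth=0):
    """Recursively close part of the gap to the partner until contact holds."""
    gap = partner_pos - own_pos
    if abs(gap) <= tolerance or depth >= 20:
        return own_pos
    # Adaptive motion control: move a fraction of the gap, then re-check.
    return maintain_contact(own_pos + step * gap, partner_pos,
                            tolerance, step, depth + 1)

if __name__ == "__main__":
    # The partner's shoulder drifts away; the module follows until contact holds.
    print(round(maintain_contact(own_pos=0.0, partner_pos=1.0), 3))
```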
stimuli
The stimuli module may be used to specifically influence the algorithmic choreography of koan. Data entered is flagged with a level of importance as it passes through to zanshin to become a motive response. Besides stimuli such as music and images, the inclusion of motion capture data is also facilitated; this allows chorea to 'see' the dancer in motion before creating a choreography specific to them.
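One plausible way to flag incoming data with importance levels is sketched below; the stimulus types, weights and file names are invented for illustration.

```python
# Sketch: incoming material is tagged with an importance level before it
# reaches zanshin, so koan can weight its motive responses.
IMPORTANCE = {"motion_capture": 3, "music": 2, "image": 1}

def flag(stimulus_type, payload):
    return {"type": stimulus_type,
            "payload": payload,
            "importance": IMPORTANCE.get(stimulus_type, 0)}

if __name__ == "__main__":
    inbox = [flag("music", "score.wav"),
             flag("motion_capture", "dancer_a.bvh"),
             flag("image", "storyboard.png")]
    # zanshin would receive these ordered by importance.
    for item in sorted(inbox, key=lambda s: s["importance"], reverse=True):
        print(item["importance"], item["type"], item["payload"])
```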
Algorithmic receptivity
To achieve computer generated choreography (autonomous rather than human-assisted), the entire process, from conception and ideas (choreography / composition) to the moment of movement (improvisation), needs to utilise algorithmic methods. The ability to create movement without the need for existing movement data (chorea reuses motor images) is an essential requirement for any choreographic software designed to automate and/or stimulate the choreographic process.
Epikinetic systems deal exclusively with motive responses and are unencumbered by a cognitive or emotive understanding of the stimuli. As with work generated by chance procedures, the resultant movement is disassociated from the stimuli; thus 'information - contemporary receptivity - movement' [18] becomes 'information - algorithmic receptivity - movement'. Although any emotional content has been removed, the choreography remains embedded in physical reality, allowing it to be mounted on 'real' dancers, and via motion capture the computer can 'coach' the dancers' performance. The loss of human ownership in both the choreographic and interpretive processes should allow the deployment of new narrative forms and physical techniques. These developments may eventually become a part of general dance practice rather than remaining isolated in specific performance works.
Towards the future
Systems such as chorea will not replace dancers and choreographers; autonomous choreography and virtual dancers can and will exist alongside their physical counterparts, illuminating new narratives and forms beyond our conceptually biased imagination. A loss of authorship to the computer should only strengthen artistic resolve and refresh creative energies. The simulated sensorimotor system and artificial body intelligence of chorea reveal the capabilities of the human body rather than the limitations to which we confine ourselves. If we are to develop new narratives and forms we must let our bodies do the talking; through epikinetics this dialogue is technologically mediated.
[...] ways to connect [movement] can be algorithmically redefined infinitely. Since we're no longer restricted to the prescribed classical methods of connection, we're open to an extraordinary leap in connection, which is just a matter of defining connective space. ... Where I'd start is with the score. What's been missing so far is an intelligent kind of notation, one that would let us generate dances from a vast number of varied inputs. Not traditional notation, but a new kind mediated by the computer.
William Forsythe [19]
With thanks to Bernard Easterford and Anna Jattkowski-Hudson
[1] Lansdown, J. (1995) Computer-Generated Choreography Revisited. In A. Robertson (ed.) Proceedings of the 4D Dynamics Conference (pp 89-99). Leicester: De Montfort University.
[2] Cope, D. (2001). Virtual music: computer synthesis of musical style. Cambridge, Mass.: MIT Press.
[3] Verostko, R. (1990) Epigenetic Painting: Software as Genotype. Leonardo 23 (1), pp. 17-23.
[4] Camurri, A., Mazzarino, B., and Volpe, G. (2004) Expressive interfaces. Cognition, Technology & Work 6 (1), pp. 15-22. Heidelberg: Springer-Verlag
[5] Rainer, Y. (1968) A Quasi Survey of Some 'Minimalist' Tendencies in the Quantitatively Minimal Dance Activity midst the Plethora, or An Analysis of Trio A. In Battcock, G. (ed.) Minimal Art: A Critical Anthology (pp 263-73). New York: E. P. Dutton.
[6] deLahunta, S. (2004) Interactive Choreographic Sketchbook. Retrieved October 2004 from http://www.sdela.dds.nl/sfd/icsketch.html
[9] Zordan, V. B., and Van Der Horst, N. C. (2003) Mapping optical motion capture data to skeletal motion using a physical model. In Proceedings of the 2003 ACM SIGGRAPH / Eurographics Symposium on Computer Animation (pp 245-250). Aire-la-Ville: Eurographics Association.
[10] Nakata, T. (2003) Digital Tanzkurven. Retrieved October 2004 from http://staff.aist.go.jp/toru-nakata/tanz/tanzkurv.html
[12] deLahunta, S. (2002) Duplex / ChoreoGraph: in conversation with Barriedale Operahouse. Retrieved October 2004 from http://www.sdela.dds.nl/sfd/frankfin.html
[15] Nakata, T. (2002) Generation of whole-body expressive movement based on somatical theories. In Proceedings of the Second International Workshop on Epigenetic Robotics (pp 105-114).
[16] Gallagher, S. (2001) From Action to Interaction: An Interview with Marc Jeannerod. Retrieved October 2004 from the Institut des Sciences Cognitives web site.
[18] Franko, M. (1995). Dancing modernism/performing politics. Bloomington: Indiana University Press
[19] Kaiser, P. (1998) Dance Geometry: A conversation with William Forsythe. http://www.openendedgroup.com/ideas/pdf/forsythe.pdf