Kinect related dance projects

From CSclasswiki

--Inky 11:39, 7 February 2012 (EST)


How Kinect is Used in the World of Dance

2/10/2012

--Inky 21:18, 10 February 2012 (EST)


Fidelity is an interactive dance/video project by artist and designer Rodrigo Carvalho, created in collaboration with choreographer Natalia Brownlie and sound designer Miguel Neto, and first shown in spring 2011 at the Gap Gallery in Barcelona. The performance associates a dancer’s body with 3D visuals to produce a digitized physical choreography. The combination of these three elements works very well, as the performance plays on perspective and depth perception to showcase the movement of the dancer’s body. Carvalho used the 1024KinectFun mod (made by our friends at 1024 architecture) to produce the real-time visuals and 3D textures.


DANCING WITH SWARMING PARTICLES is an interactive installation and performance that explores the relationship between a physical user/performer and a virtual performer, the “avatar,” which has the physical characteristics of morphing, flocking particles. Built in the game development tool Unity 3, the project uses the Kinect and the DIY motion-tracking software OSCeleton to translate the movements of the performer into the swarming particles. The performer must first harness the chaotic flock; as the performance becomes more vigorous, the disparate particles bend to the movements of the user. Eventually, the avatar and the performer, in this case Tamar Regev, become unified.
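As a rough illustration of the pipeline described above: OSCeleton broadcasts the Kinect's tracked skeleton as OSC messages of the form "/joint name userId x y z", which a host application can use to steer particles toward a joint. The sketch below is an assumption about how such steering math might look (the network layer is stubbed out; it is not the project's actual Unity code).

```python
def parse_joint(args):
    """Unpack an OSCeleton-style /joint payload: (name, user_id, x, y, z)."""
    name, user, x, y, z = args
    return name, int(user), (float(x), float(y), float(z))

def steer(particle, target, strength=0.1):
    """Move a particle a fraction of the way toward a tracked joint."""
    return tuple(p + strength * (t - p) for p, t in zip(particle, target))

# One joint message arrives; each frame, a particle drifts toward it.
name, user, target = parse_joint(("r_hand", 1, 0.5, 0.5, 2.0))
p = (0.0, 0.0, 0.0)
for _ in range(10):
    p = steer(p, target)
```

Repeating the `steer` step every frame for every particle produces the "bending to the movements of the user" effect: the flock lags chaotically behind the skeleton, then converges as the joint positions stabilize.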




The piece Divide by Zero is, in its creators’ words, “set in an environment where connections between personal unconscious content and digital interfaces are made possible.” It’s a minimalist work that places the dancer against a reactive background, starting with a white shadow that builds and morphs as she moves, changing form as the dance progresses. Along with the physical form of the female dancer, there is the digital display of responsive graphics, which in effect becomes her partner, both mimicking and responding to her movements as any good dance partner would. Art direction and interactive design by Hellicar&Lewis. Full source code is available on GitHub. (I am not sure whether this particular piece used the Kinect to create the digital interaction and display; however, it shares the same idea of creating real-time interaction between the performer and the computer.)



--Inky 15:58, 18 February 2012 (EST)

A live choreographic interactive AV performance from ExLex (under development)

Dancer: Anna Rubi
Visual concept: Matyas Kalman
Quartz Composer programming: Tamas Herceg, Matyas Kalman
Camera: Daniel Besnyo, Matyas Kalman
Editing: Daniel Besnyo






Kinect Graffiti: 2011 / Augmented dancing experimentation. While using a Kinect to track the human skeleton, we're mapping a video layer over a moving body.

Using: Kinect + QC + MadMapper + MaxForLive.



F&N Kinect Singapore Dance Delight: Dancers are transformed into colourful bubbles which react to and replicate the dancers' moves. As the dancers groove to the music, sounds are triggered: depending on which screen region a dancer moves to, different pitches are produced, leading to a unique mish-mash of creative sounds and movements. A first in many ways, the result is an unforgettable experience for the dancers and the audience.
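The region-to-pitch idea can be sketched very simply: split the screen into vertical bands and assign each band a note. The region count and the scale below are illustrative assumptions, not details from the actual installation.

```python
SCALE = [60, 62, 64, 65, 67]  # five MIDI note numbers, one per screen region

def region_of(x_norm, regions=5):
    """Map a normalized x position (0..1) to a screen-region index."""
    return min(int(x_norm * regions), regions - 1)

def pitch_for(x_norm):
    """Pick the MIDI pitch triggered by the region the dancer occupies."""
    return SCALE[region_of(x_norm)]
```

Feeding the dancer's tracked x coordinate into `pitch_for` each time they cross a region boundary would yield the position-dependent pitches the description mentions.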




"At the B-Seite festival 2011 we established advanced progressive workshops. Our task is to bring well skilled artists in different disciplines together. The workshops should lead to an installation or performance."

[...] Final performance realized with vvvv, Kinect & Triplehead2Go ...

THE KINECT DANCER @ KUBUS, Zeitraumexit, Mannheim.




Versus - First Teaser: Stereoscopic Realtime Dance Performance by 1n0ut, with Nanina Kotlowski

http://1n0ut.com

Experimental performance with Kinect and Adaptive Learning Algorithms



--Inky 21:04, 10 March 2012 (EST)
Research on Voice Recognition: http://www.keyboardmods.com/2011/10/kinect-speech-recognition-in-linux.html Source code download: http://dl.dropbox.com/u/11217419/srec_kinect.tgz



--Inky 00:26, 11 March 2012 (EST)
Motione: Research done in the Arts, Media and Engineering (AME) department of Arizona State University. In collaboration with choreographer Bill T. Jones and the Trisha Brown Dance Company, the AME research team tracked and visualized the dancers' movements and created real-time visual and audio art that was staged along with the dance.
Link: http://ame2.asu.edu/motione/research5.html

--Inky 01:30, 11 March 2012 (EST)

Dance - Technology works by well-known media artist, Paul Kaiser:



1. BIPED is an extended digital animation created to serve as the visual décor for a 45-minute dance of the same name choreographed by Merce Cunningham and performed by his company. The sequences of animation vary from 10 seconds to 4 minutes, totaling 27 minutes; they run discontinuously through the performance.
The movements are largely derived from motion-captured phrases from the choreography, which drive abstracted images of hand-drawn dancers moving through spare and evocative spaces.
LINK to the video excerpt: http://openendedgroup.com/index.php/artworks/biped/


2. After Ghostcatching (2010) is as much about touching with the hand as it is about seeing with the eye. A disembodied dancer is rendered as a moving hand-drawn sketch — and that sketch moves in a projected 3d space that can seem so close as to let the viewer reach out and touch it.
Though the work’s imagery comes entirely from a computer simulation, it bears an unmistakable human trace — that of dancer Bill T. Jones, abstracted from his physical body via a process of optical motion capture that preserves his movement but not his likeness.
LINK to the video excerpt: http://openendedgroup.com/index.php/artworks/after-ghostcatching/


3. Hand-drawn Spaces is a virtual dance installation by Merce Cunningham, Paul Kaiser, and Shelley Eshkar that presents a mental landscape in which motion-captured hand-drawn figures perform intricate choreography in 3D. Created in 1998, it was recently designated a “masterwork” by the NEA, which provided funds for its full restoration. In Hand-drawn Spaces, the virtual dancers appear as life-size drawings emerging from the darkness and moving in an apparently limitless three-dimensional space. Though the dancers are visible on three screens, they move through a much larger virtual area, and so travel in and out of projected image, often traversing the spectators’ space. The spatial sound-score by Ron Kuivila evokes their positions in space, making their presences felt even when not seen.
LINK to the video excerpt: http://openendedgroup.com/index.php/artworks/hand-drawn-spaces-1998/


4. how long does the subject linger on the edge of the volume… consists of projected imagery responding intelligently in real time to the motion-captured live performance of the Trisha Brown Dance Company. The triangle agent, shown and diagrammed in the clip to the right, tries to move in one direction by choosing to connect to points of motion on the stage. The beginning of each operation is drawn diagrammatically by one or more extending lines, and the net result of the operation is indicated by a new annotation that persists after the operation is complete.
The creature is a physically simulated body in an environment with gravity and ground. If the creature is unbalanced, it falls over, dragging its annotations with it, until it finds a new equilibrium and continues.
LINK to the video excerpt: http://openendedgroup.com/index.php/artworks/how-long/


5. The Choreographic Language Agent project is a small software environment, developed in Field (http://openendedgroup.com/index.php/software/), for exploring variations in choreographic instruction. The Choreographic Language Agent enables the creation of grammars that point two ways — towards simple versions of human language and towards choreographic grammars of dance that are particular to a given choreographer (in the initial case, Wayne McGregor of Random Dance, www.randomdance.org). This tool posits a new form of dance notation — one which aids the choreographer in generating dance movements rather than in recording existing movements.
Rather than attempting to produce a general model of movement, choreography, and meaning, this tool focuses on the individual and even idiosyncratic methods of a given language movement system. The model takes as its point of departure the minimalist point-line vocabulary of Loops instead of a sophisticated, anatomically correct joint hierarchy, the idea being to rapidly sketch movement explorations at all levels (limb, body, stage-space).
Given a sentence written in a language known to this tool, the agent can interpret this sentence to produce a short animation of its body. Then we can perform pseudo-linguistic operations on the language level, thus generating sequences, superpositions, and modulations. After studying sets of sentences, the choreographer can determine the sets of conditions under which an agent can autonomously deploy this language to fashion a multi-agent piece of choreography.
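A toy interpreter in the spirit of the Choreographic Language Agent might look like the sketch below: a sentence in a tiny made-up vocabulary is translated into movement instructions for a point-line body, and a pseudo-linguistic operation combines two sentences. The vocabulary, instruction format, and `superpose` operation are all illustrative assumptions, not Field's actual API.

```python
# Hypothetical word-to-movement vocabulary: each word maps to a
# (body part, displacement) instruction for a point-line figure.
VOCAB = {
    "extend": ("limb", +1.0),
    "fold":   ("limb", -1.0),
    "rise":   ("body", +0.5),
    "sink":   ("body", -0.5),
}

def interpret(sentence):
    """Translate a sentence into a sequence of movement instructions."""
    return [VOCAB[word] for word in sentence.split() if word in VOCAB]

def superpose(a, b):
    """Pseudo-linguistic operation: pair two sentences' moves in parallel."""
    return list(zip(interpret(a), interpret(b)))
```

Operations like `superpose` correspond to the sequences, superpositions, and modulations described above: instead of editing animations directly, the choreographer manipulates sentences, and the agent re-interprets them into movement.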