CSC400-Kinect-Based Choreography

From CSclasswiki




http://tinyurl.com/inky2012

This is the main page for In Kyung Lee's (Inky) Kinect-Based Choreography Independent-Study page, Spring 2012.






Survey of Dance & Technology Work

Poster for CCSCNE 12

Movie Screening at Dance New Amsterdam, NYC

DNA NYC 022012.png













Meetings

1/31/12

A Starting Point: --Thiebaut 15:12, 31 January 2012 (EST)

  • Figure out the dimensions for the poster. Check the CCSCNE 12 site (check out this page)
  • Figure out a way to print the poster on 8 1/2 x 11" sheets for testing
  • Inky will define the work to be done on the IS. The current thought is for
    • an interactive project (define possible ways to interact)
    • voice recognition: the user says words or sentences that alter the choreography. Open questions:
      • What words? What will the system do if a word is not recognized?
      • Could the system ask the user to define, as a dance sentence, what the word means?
      • Could there be several different dance movies associated with a given word?
      • How will the system organize the information so that when a word is recognized the program knows which movie to play?
    • create random choreography elements (define what they can be)
    • start doing some research on voice recognition with the Kinect. Are there programs out there that already do this? Is it covered in the Kinect book? Are there sample programs we can try?
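The word-to-movie organization discussed above could take many shapes; one minimal sketch is a lookup table from recognized words to lists of clip files, picking one at random when a word has several clips and returning nothing when the word is unknown (so the caller can ask the user to define it as a dance sentence). All names and file names here are hypothetical, not part of the actual project code.

```java
import java.util.*;

// Hypothetical sketch: map each recognized word to one or more dance
// clips, pick one at random when several exist, and signal unknown
// words with null so the system can prompt the user.
public class ChoreographyLookup {
    private final Map<String, List<String>> clipsByWord = new HashMap<>();
    private final Random random = new Random();

    public void addClip(String word, String clipFile) {
        clipsByWord.computeIfAbsent(word.toLowerCase(), k -> new ArrayList<>())
                   .add(clipFile);
    }

    // Returns a clip file for the word, or null if the word is not
    // recognized (the caller can then ask for a dance-sentence definition).
    public String clipFor(String word) {
        List<String> clips = clipsByWord.get(word.toLowerCase());
        if (clips == null || clips.isEmpty()) return null;
        return clips.get(random.nextInt(clips.size()));
    }
}
```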


2/7/12

  • Note: we should make sure that there is a survey of how the Kinect, or Kinect-like cameras/sensors, are being used in the world of dance today. --Thiebaut 10:12, 7 February 2012 (EST)



2/14/12

  • Edit the YouTube movie, and add
    • title
    • credits
    • URL

2/21/12

  • Edit the Java program
  • Make it output a new movie that differs from the original, i.e. same length, but without some fixed fixtures (such as the floor).
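One way the fixed fixtures could be detected (a sketch only; the actual Java program may work differently) is to treat any point whose raw depth stays nearly constant across all frames as background, since the floor and walls do not move while the dancer does. The frame/array layout and class name below are assumptions for illustration.

```java
// Hypothetical sketch: flag points whose raw depth varies by no more
// than `tolerance` across every frame; those are static fixtures
// (floor, walls) that can be dropped from the output movie.
public class FixtureFilter {
    // frames[f][i] = raw depth of point i in frame f.
    public static boolean[] findStatic(int[][] frames, int tolerance) {
        int n = frames[0].length;
        boolean[] isStatic = new boolean[n];
        for (int i = 0; i < n; i++) {
            int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
            for (int[] frame : frames) {
                min = Math.min(min, frame[i]);
                max = Math.max(max, frame[i]);
            }
            isStatic[i] = (max - min) <= tolerance;
        }
        return isStatic;
    }
}
```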


3/13/12

  • Decided to keep a new log of features the Kinect tool should sport.


3/20/12

  • Demonstration of new GUI window
KinectWindowWithGUI.png


3/21/12

  • Investigation of using pixels from a still image and mapping them onto Kinect points. See here for more details.

1st Trial

CSC400Kinect fire.png



2nd Trial

  • The squares of pixels are reduced to a 3x3 size, and the background image is made to rotate, as if on a cylinder. For the fire image, this gives a more dynamic flare to the flames.
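The cylinder-style rotation boils down to sampling the background image with a horizontal offset that wraps around the image width, so columns scroll off one edge and reappear on the other. A minimal sketch of that wrapping arithmetic (the method name is illustrative, not from the project code):

```java
// Hypothetical sketch of the cylindrical-rotation sampling: for a pixel
// column x and a per-frame scroll offset, return the wrapped source
// column in an image of the given width. Works for negative offsets too.
public class CylinderScroll {
    public static int wrappedColumn(int x, int offset, int width) {
        return ((x + offset) % width + width) % width;
    }
}
```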



3rd Trial

  • Commented out the background( 0 ) call at the beginning of draw() and added
  filter( DILATE );
  filter( BLUR );
Check out the code here.



4th Trial

  • Tried different filters:
 filter( ERODE );
 filter( BLUR, 1 );



  • Using a picture of a flower instead of fire:
FlowerImageForKinect.png



  • Using a picture of a gradient:
GradientImageForKinect.png



5th Trial

  • Using a fast blur algorithm (code here).
    • with the gradient image...



    • with the fire image...
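The core trick behind most fast box-blur algorithms is a running sum: as the blur window slides one pixel, one value enters and one leaves, so each output pixel costs O(1) rather than O(radius). This sketch shows a single horizontal pass on one row of gray values (a full 2D blur runs this over rows, then columns); it is an illustration of the general technique, not the specific code linked above.

```java
// Hypothetical sketch of a fast 1D box blur using a sliding running sum.
// Edge pixels are handled by clamping indices to the row bounds.
public class FastBlur {
    public static int[] boxBlurRow(int[] row, int radius) {
        int n = row.length;
        int[] out = new int[n];
        int window = 2 * radius + 1;
        // Prime the running sum with the first window.
        int sum = 0;
        for (int j = -radius; j <= radius; j++) {
            sum += row[clamp(j, n)];
        }
        out[0] = sum / window;
        // Slide the window: add the entering pixel, drop the leaving one.
        for (int i = 1; i < n; i++) {
            sum += row[clamp(i + radius, n)] - row[clamp(i - radius - 1, n)];
            out[i] = sum / window;
        }
        return out;
    }

    private static int clamp(int j, int n) {
        return Math.min(n - 1, Math.max(0, j));
    }
}
```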



Tagging Points

CSC400Kinect fire.png
CSC400Kinect greenfire.png
  • Modified the merging of Kinect movies so that points are tagged with the Id of their source file.
  • This permits selecting a different overlay for the points when playing a Kinect movie.
  • The example below shows the use of two overlay files for a movie generated by merging 2 original Kinect movies.



Ideas

  • Keep the index of the source Kinect file for each point of a kinect_nnnn.out file. This way each point can have its own color. This is possible because each point is currently kept in a 32-bit integer and uses only 11/12 bits for the Kinect raw distance info.
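The packing described above could look like the following sketch: the raw distance occupies the low 11 bits of each 32-bit point, leaving the upper bits free for a per-point source-file index. The exact bit layout here is an assumption for illustration, not the project's actual format.

```java
// Hypothetical sketch of packing a source-file index alongside the
// 11-bit Kinect raw distance in one 32-bit point.
public class PointPacking {
    private static final int DEPTH_BITS = 11;
    private static final int DEPTH_MASK = (1 << DEPTH_BITS) - 1; // 0x7FF

    public static int pack(int rawDepth, int sourceIndex) {
        return (sourceIndex << DEPTH_BITS) | (rawDepth & DEPTH_MASK);
    }

    public static int depth(int packed)  { return packed & DEPTH_MASK; }
    public static int source(int packed) { return packed >>> DEPTH_BITS; }
}
```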

Misc Links & Resources