Meetings/Assignments -- CSC400-Kinect F2011



  • First meeting with Kris, Lindsey, Amy

Assignment for next week

  • Explore the Kinect page on the classwiki. Watch *every* movie.
  • Figure out what options are available for installing Kinect on a computer.
    • What's available for Windows
    • What's available for Mac
    • What's available for Ubuntu
  • start working on installing Kinect on Mac in 343 and on DT's Shuttle PC.


  • Kinect SDK installed on Windows 7 Shuttle! Tried a few sample applications and downloaded speech recognition drivers. Seems to work well.
  • Kris mentioned that Kinect won't work in a virtual Windows box. DT will set up everything needed to back up the MacPro in FH343 to Time Capsule, and once it's set up (likely 9/27 evening), you can install Boot Camp.

Assignment for this coming week

  • Play with the Kinect Beta SDK
  • See if some simple open-source programs are available for interacting with Kinect.
  • We need to build a repository of recipes and categorize them by
    • the language they're written in (C++, Java, C, other)
    • the kinect feature they use (skeleton information, pixels + depth, speech)
    • the type of user interaction they provide
    • For each one, figure out, if possible, how it should be opened, compiled, and executed, and add this to the documentation/log page.
  • Create a list of good sources of information on the Kinect.
  • Update the Kinect page that has the videos with your new discoveries (and let me know about cool stuff!)
  • Set up backup for Windows + Mac (done DT 9/27)


  • Meeting with Kris today
  • Kris demoed C# application on Shuttle running in Visual Studio taken from QuickStart tutorial.
  • Explored the C# code. Fairly easy to decipher, but not obvious how to use the information.

Assignment for next week

  • Continue watching the QuickStart videos (see link on Log page for more info)
  • Generate the different examples covered in the videos.
  • Important: we want to be able to understand how to get the skeleton information and recognize movement.
  • Mini projects that we want to be able to generate code for:
    • color each person differently
    • mask one person out
    • mirror the image
    • follow just one hand, or both hands
    • follow just the head.
    • Remember that at some point we'll want to control the mouse with these movements, so we want to be able to map movements of body parts to (X, Y) coordinates in the screen coordinate system (0 to 640 for X, 0 to 480 for Y). The same goes for left and right clicks.
  • If we stay with C#, we want to know how to interface C# to Java.
  • Keep in mind that we may want to play with the open source SDK that was released somewhere...
  • Start a new wiki page with mini recipes with solutions to the mini projects listed above.
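The body-part-to-screen mapping mentioned above can be sketched as a simple linear scaling. This is only a sketch under an assumption: it takes joint coordinates already normalized to [-1, 1], whereas the real Kinect SDK reports skeleton positions in meters in camera space, so a calibration step would be needed in practice. The class and method names are made up for illustration.

```java
// Minimal sketch: map a skeleton joint position onto the 640x480 screen.
// ASSUMPTION: jointX/jointY are normalized to [-1, 1]; the actual Kinect
// skeleton space is in meters, so real code would calibrate first.
public class JointToScreen {
    static final int SCREEN_W = 640;
    static final int SCREEN_H = 480;

    // Returns {x, y} screen coordinates. Y is flipped because skeleton Y
    // grows upward while screen Y grows downward.
    static int[] mapToScreen(double jointX, double jointY) {
        int x = (int) Math.round((jointX + 1.0) / 2.0 * (SCREEN_W - 1));
        int y = (int) Math.round((1.0 - jointY) / 2.0 * (SCREEN_H - 1));
        // Clamp in case the joint wanders outside the assumed range.
        x = Math.max(0, Math.min(SCREEN_W - 1, x));
        y = Math.max(0, Math.min(SCREEN_H - 1, y));
        return new int[] { x, y };
    }

    public static void main(String[] args) {
        int[] center = mapToScreen(0.0, 0.0);
        System.out.println(center[0] + "," + center[1]); // prints "320,240"
    }
}
```

The same mapping would drive the mouse cursor later; the click gestures would be a separate recognition step on top of it.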

Backup Exercise

  • The PC is set up to back up files on the second drive every night.
  • Create a dummy file one given day.
  • The day after, delete it and figure out how you can use the backup system (Microsoft's default backup system) for restoring the file.

Hardware Assignment

  • This is for Kris. Find a 2 GB or 4 GB RAM module for the PC, using the documentation for the M2NPV-VM/S motherboard, and the photos below:


Something to look at:



--Thiebaut 23:30, 14 October 2011 (EDT)

  • Good meeting today
  • Played with a demo of Kinect Earth Move. Could be a demo to keep for visitors or for meetings with high school students.
  • Investigated Kris's discovery of KinectTCP. Looks very interesting. It's supposed to work with a client remotely connected to a server attached to a Kinect. That's not really important for us, but the package shows how to interface various programming languages to it. Not sure about the loss in bandwidth due to the server/client connection, but worth checking out.
  • The KinectTCP documentation is fairly well written.
  • There are two Java applications that come with KinectTCP that we should test out: one that tests the client/server connection, and one that uses a loop and displays the bitmap image, the depth image, and the skeleton in real time.

Assignment for next week


  • Install Dropbox in your accounts as a way to save important files (and hold Eclipse's workspace, for example)
  • Learn how to create Java projects in Eclipse. It might not be obvious how to get Eclipse to work with the KinectTCP client. Let me know if you need help figuring this out.
  • Run the two demo programs that are available on the KinectTCP site.
  • See if the skeleton info can be manipulated.
  • The next meeting will be the week after next, or earlier for quick updates.


--Thiebaut 10:02, 25 October 2011 (EDT)

  • Meeting today with Kris and exploration of KinectTCP java code.
  • Working on the quadcore research Mac, running Windows 7.
  • We can go from about 6 frames/second to 13 frames/second by removing a statement that stores the depth information to file.
  • The demo program displays all 4 frames of information provided by the Kinect: RGB pixels, depth, XYZDepth, and skeleton.
  • We only need the skeleton, although it could be a nice demo to superimpose the skeleton on the depth image, or on the RGB image.

Assignment for next week

  • Modify the demo program so that it outputs to the console some simple sentences corresponding to simple movements of the skeleton.
  • The first trials could be
    • output "far apart 1" when Skeleton 1 moves its arms far away from its body.
    • the trick will be to avoid repeating "far apart" when the hands remain far apart for, say, more than a second. We want to detect dynamic movements, not static poses. So, maybe, the way to do this is to categorize positions, keep the current one in an internal variable, and output the new state only when this variable changes.
    • output "close together 1" when Skeleton 1 moves its hands together.
  • That is challenging enough for this week, and may require more time than a week...
  • The next step after that will be to use the depth information along with the skeleton to see if we can recognize hands going toward the Kinect, away from the body...
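The state-change idea above can be sketched in a few lines: classify the hands' separation into discrete states each frame, and print a sentence only when the state changes, so a static pose is not reported repeatedly. The thresholds (in meters) and the names here are illustrative assumptions, not part of the KinectTCP API.

```java
// Sketch of change-only reporting for hand separation.
// ASSUMPTION: handDistance is the distance between the two hand joints
// in meters; the 0.15 / 0.80 thresholds are placeholders to tune.
public class HandStateTracker {
    enum HandState { CLOSE_TOGETHER, NEUTRAL, FAR_APART }

    private HandState lastState = HandState.NEUTRAL;
    private final int skeletonId;

    HandStateTracker(int skeletonId) { this.skeletonId = skeletonId; }

    static HandState classify(double handDistance) {
        if (handDistance < 0.15) return HandState.CLOSE_TOGETHER;
        if (handDistance > 0.80) return HandState.FAR_APART;
        return HandState.NEUTRAL;
    }

    // Call once per frame; returns a sentence on a state change, or
    // null while the pose stays the same (static pose: stay quiet).
    String update(double handDistance) {
        HandState state = classify(handDistance);
        if (state == lastState) return null;
        lastState = state;
        switch (state) {
            case CLOSE_TOGETHER: return "close together " + skeletonId;
            case FAR_APART:      return "far apart " + skeletonId;
            default:             return null; // entering neutral: no message
        }
    }

    public static void main(String[] args) {
        HandStateTracker tracker = new HandStateTracker(1);
        for (double d : new double[] { 0.5, 0.9, 0.9, 0.9, 0.5, 0.1 }) {
            String msg = tracker.update(d);
            if (msg != null) System.out.println(msg);
        }
        // prints "far apart 1" once, then "close together 1"
    }
}
```

In the real program, `update` would be called from the KinectTCP display loop with the distance computed from the two hand joints of each tracked skeleton.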


--Thiebaut 16:47, 10 November 2011 (EST)

  • Getting ready for demo on Saturday
    • For the pumpkin-head program
      • See if you can find additional images. Say Justin Bieber, or animal heads, or whatever you think will be funny.
      • Can we also add big hands, or some image where the hands are located?
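Placing an image over the head or hands mostly comes down to centering the image on a joint's screen position. A minimal sketch, assuming the joint has already been mapped into the 640x480 screen space; the image dimensions are hypothetical:

```java
// Sketch: compute where to draw an overlay image (pumpkin head, big
// hands) so it is centered on a joint's screen position.
public class OverlayPlacement {
    // Returns {x, y} of the image's top-left corner for a drawing call
    // such as AWT's Graphics.drawImage(img, x, y, null).
    static int[] topLeftFor(int jointX, int jointY, int imgW, int imgH) {
        return new int[] { jointX - imgW / 2, jointY - imgH / 2 };
    }

    public static void main(String[] args) {
        // A hypothetical 100x120 head image over a head joint at (320, 80).
        int[] corner = topLeftFor(320, 80, 100, 120);
        System.out.println(corner[0] + "," + corner[1]); // prints "270,20"
    }
}
```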

  • My current plan for Saturday is to have Joe's demo first, then ours, repeated several times with different groups of students. So it looks (after talking to Joe) like we only have to entertain each group for 10 minutes or so.
  • My plan is to
    • show them the infra-red dots with an infra-red camera
    • show them some programming with you showing what the camera generates (RGB image, skeleton, depth image)
    • show them how to put an image on top of the head or hands. They can see what they look like by standing in front of Kinect.
    • show a YouTube movie
    • have them play in front of Kinect with the world on the other computer or with falling shapes.


--Thiebaut 15:44, 16 November 2011 (EST)

  • We are going to continue in demo mode and create a demo we can run continuously on one of the monitors in the hallway.
  • The idea is to use the example of EVOLUCE (see their Web site) and create a very simple interface that will allow people walking by to interact with Kinect.
  • The trick is to find an organization around the current demos (wings, cat head, pumpkin head, etc.) that allows us to add a user interface that recognizes simple motions.
  • Kris and I are in thinking mode for how to organize such a system...


--Thiebaut 22:50, 21 November 2011 (EST)

  • Kris and I worked out an outline of an algorithm for the demo...

[Images: Kinect112111a.jpg, Kinect112111b.jpg]