Cinemacraft: Virtual Minecraft Presence Using OPERAcraft
Presentation slide deck in PDF describing the project, the current development status, and future work. (1.651Mb)
Presentation slide deck in PowerPoint describing the project, the current development status, and future work. (3.460Mb)
Final project report on Cinemacraft (in Microsoft Word format). Includes User's Manual, Developer's Manual, and Future Work. (6.377Mb)
Final project report on Cinemacraft (in PDF format). Includes User's Manual, Developer's Manual, and Future Work. (3.197Mb)
Cinemacraft is an interactive system built on OPERAcraft, a Minecraft modification developed at Virginia Tech. The adapted system lets users see their mirror image, as captured by Kinect sensors, in the form of a Minecraft avatar. OPERAcraft, the foundation of the project, was designed to engage K-12 students by letting them create and perform virtual operas in Minecraft. With its extended functionality, Cinemacraft aims to change how real-time productions are produced, filmed, and viewed.

The system uses Kinect motion-sensing devices to track user movement and extract the associated data. The processed data is sent through the Pd-L2Ork middleware to Cinemacraft, where it is translated into avatar movement displayed on screen, producing a realistic reflection of the user as an avatar in the Minecraft world. Within the display limitations of Minecraft, the avatar can replicate the user's skeletal and facial movements; movements of minor extremities such as hands or feet cannot be recreated because Minecraft avatars lack elbows, knees, ankles, and wrists.

For skeletal movement, three-dimensional points corresponding to specific joints of the user are retrieved from the Kinect device and converted into three-dimensional vectors. Using geometry, the angles of movement around each axis (X, Y, and Z) are determined for each body region (arms, legs, etc.). Facial expressions are computed by mapping eyebrow and mouth movements within certain thresholds to specific expressions (mouth smiling, mouth frowning, eyebrows furrowed, etc.).
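The joint-to-angle conversion and the threshold-based expression mapping described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the function names, the coordinate convention (pitch about X, yaw about Y), and the threshold values are all assumptions.

```python
import math

def limb_angles(shoulder, hand):
    """Angles of a limb from two 3D Kinect joint positions.

    The joint pair (shoulder, hand) and the pitch/yaw convention are
    illustrative assumptions; Cinemacraft's real mapping may differ.
    """
    dx, dy, dz = (hand[i] - shoulder[i] for i in range(3))
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # rotation about the X axis
    yaw = math.degrees(math.atan2(dx, dz))                    # rotation about the Y axis
    return pitch, yaw

def mouth_expression(corner_offset, smile_thresh=0.3, frown_thresh=-0.3):
    """Map a normalized mouth-corner offset to a named expression.

    The threshold values are hypothetical placeholders for the
    "certain thresholds" the report mentions.
    """
    if corner_offset >= smile_thresh:
        return "smiling"
    if corner_offset <= frown_thresh:
        return "frowning"
    return "neutral"
```

For example, a hand directly above the shoulder, `limb_angles((0, 0, 0), (0, 1, 0))`, yields a pitch of 90 degrees and a yaw of 0, which would raise the avatar's arm straight up.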