Wednesday, July 03, 2013
A look at "Animatic", a digital puppetry system developed by Luis Leite (see previous post) in 2006 using 3D Studio Max and Macromedia Director, as part of his research thesis, Marionetas Virtuais.
Thursday, May 23, 2013
This is a new demo reel for Argentinian digital puppeteer Mario Mey that shows off his digital characters performing in Spanish at various live events (his character Pinokio 3D was mentioned here back in 2010). He creates and performs his "Marionetas Digitales" (digital puppets) using Blender 3D and PureData, a real-time graphical dataflow programming environment for audio, video, and graphics.
You can see Mario at work and get a look at his production process in this video, though note that it was recorded in Spanish.
Thursday, May 16, 2013
Faceshift is software that promises "markerless motion capture at every desk". It works with consumer-level cameras like the Kinect to track and analyze the facial expressions of a performer and uses them to animate a virtual character in real-time. It also offers the option of recording a performance so that it can be edited and polished in post-production.
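At its core, this kind of face-driven animation usually boils down to the tracker reporting a set of expression weights each frame, which are then blended into a character's "blendshape" targets. Here's a minimal sketch of that blending step in Python; the mesh data, target names, and weights are all hypothetical toy values, not Faceshift's actual API or data:

```python
import numpy as np

# Hypothetical data: a neutral face mesh and two blendshape targets,
# each stored as a small set of 3D vertex positions.
neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
targets = {
    "smile":   np.array([[0.0, 0.2, 0.0], [1.0, 0.3, 0.0], [0.0, 1.0, 0.0]]),
    "brow_up": np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.3, 0.0]]),
}

def apply_blendshapes(neutral, targets, weights):
    """Blend tracked expression weights (0..1) into a posed mesh:
    pose = neutral + sum_i w_i * (target_i - neutral)."""
    pose = neutral.copy()
    for name, w in weights.items():
        pose += w * (targets[name] - neutral)
    return pose

# Weights as a face tracker might report them for one frame.
frame_weights = {"smile": 0.8, "brow_up": 0.5}
posed = apply_blendshapes(neutral, targets, frame_weights)
print(posed)
```

Run once per frame with fresh weights from the tracker, this is enough to make a character's face follow a performer's live.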
There are lots of potential applications for this kind of software in game and film production and, of course, digital puppetry applications!
You can learn more at www.faceshift.com.
Tuesday, May 14, 2013
Hakanaï is one of the more unconventional examples of a digital puppetry performance I've discovered (although, is there anything truly "conventional" about any form of digital puppetry?). Its creators describe it as a "haiku dance performance taking place in a cube of moving images projected live by a digital performer".
The performance features a live dancer whose movements are tracked in real-time and used as the basis for an interactive, digitally animated environment projected around them:
It was created by the French company Adrien M / Claire B using their proprietary software eMotion. Here's more from their description of the project:
...Performed by an artist as a “digital score”, it is generated and interpreted live. The dancer’s body enters into a dialogue with the moving images in motion. These simple and abstract black and white shapes behave according to physical rules that the senses recognise and to mathematical models created from the observation of nature.
The audience experiences the performance in several stages. They first discover the exterior of the installation. As the dancer arrives, they gather around to watch the performance. When the choreography has ended, the audience can then take some time to wander amongst the moving images.
Through a minimalist transposition, this piece is based on images drawn from the imaginary realm of dreams, their structure and their substance. The box in turn represents: the bedroom where, once the barrier of sleep is passed, walls dissolve and a whole new inner space unfolds; the cage, of which one must relentlessly test the limits; the radical otherness, as a place of combat with an intangible enemy; the space where the impossible has become possible, where all the physical points of reference and certitudes have been shaken.
Through the encounter of gesture and image, two worlds intertwine. The synchronicity between the real and the virtual dissolves and the boundary that was keeping them separate disappears, forming a unique space filled with a high oneiric charge.
Very cool, no? You can learn more from the video's description on Vimeo.
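The "physical rules that the senses recognise" idea — simple forces like gravity and repulsion applied to projected shapes, with the tracked dancer as an input — can be sketched in a few lines. This is a toy simulation under assumed names and values, not eMotion (which is proprietary):

```python
import random

class Particle:
    """One projected shape: position and velocity in 2D screen space."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0

def step(particles, dancer_x, dancer_y, dt=1/60, gravity=9.8, push=50.0):
    """Advance one frame: gravity pulls the shapes down, while the
    tracked dancer position repels nearby shapes (inverse-square)."""
    for p in particles:
        dx, dy = p.x - dancer_x, p.y - dancer_y
        dist2 = dx * dx + dy * dy + 1e-6   # avoid divide-by-zero
        p.vx += push * dx / dist2 * dt
        p.vy += push * dy / dist2 * dt - gravity * dt
        p.x += p.vx * dt
        p.y += p.vy * dt

particles = [Particle(random.uniform(0, 10), random.uniform(5, 10))
             for _ in range(100)]
for _ in range(60):                        # one second at 60 fps
    step(particles, dancer_x=5.0, dancer_y=0.0)
```

In a real installation the `dancer_x`/`dancer_y` inputs would come from a motion-tracking system each frame, and the particle positions would feed the projector instead of staying in memory.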
Sunday, March 31, 2013
Activision unveiled some new real-time rendering technology for human characters at the Game Developers Conference last week. This is the result of several years of research into creating photorealistic human characters for video games. Although the animation itself is a bit off and suffers from the infamous "Uncanny Valley" effect, on a purely technical level this is pretty impressive.
From the video's description on YouTube:
This animated character is being rendered in real-time on current video card hardware, using standard bone animation. The rendering techniques, as well as the animation pipeline are being presented at GDC 2013, "Next Generation Character Rendering" on March 27.
The original high resolution data was acquired from Light Stage Facial Scanning and Performance Capture by USC Institute for Creative Technologies, then converted to a 70 bones rig, while preserving the high frequency detail in diffuse, normal and displacement composite maps.
It is being rendered in a DirectX11 environment, using advanced techniques to faithfully represent the character's skin and eyes.
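The "standard bone animation" and 70-bone rig mentioned above typically mean linear blend skinning: each posed vertex is a weighted average of that vertex transformed by each bone that influences it. A minimal sketch with toy data (the rig, weights, and matrices here are invented for illustration, not Activision's actual pipeline):

```python
import numpy as np

def skin_vertices(rest_verts, weights, bone_mats):
    """Linear blend skinning.
    rest_verts: (V, 3) rest-pose positions
    weights:    (V, B) per-vertex bone weights, rows summing to 1
    bone_mats:  (B, 4, 4) per-bone transform matrices."""
    # Homogeneous coordinates so 4x4 matrices can translate as well as rotate.
    homo = np.hstack([rest_verts, np.ones((len(rest_verts), 1))])   # (V, 4)
    # Transform every vertex by every bone: (B, V, 4)
    per_bone = np.einsum("bij,vj->bvi", bone_mats, homo)
    # Weighted blend across bones, then drop the homogeneous coordinate.
    return np.einsum("vb,bvi->vi", weights, per_bone)[:, :3]

# Toy rig: two bones, three vertices. Bone 0 is identity; bone 1
# translates by +1 on x, as a jaw or brow bone might during a pose.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # per-vertex weights
mats = np.stack([np.eye(4), np.eye(4)])
mats[1, 0, 3] = 1.0  # bone 1: translate x by +1
posed = skin_vertices(rest, w, mats)
print(posed)
```

The high-frequency facial detail in the Activision demo comes from the diffuse, normal and displacement maps layered on top of this coarse bone deformation, not from the bones themselves.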
More technical details can be found here.
Via Cartoon Brew.
Wednesday, March 27, 2013
A nice example of a digital shadow puppet, made by Luis Leite using Kinect and Unity 3D. To animate the puppet, a human body is tracked in real-time using the Kinect sensor, with one hand controlling the head and the other controlling the tail. The physical movement of the performer's body is remapped onto the virtual shadow puppet using Inverse Kinematics via Unity's Mecanim animation system.
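The remapping step described above amounts to taking each tracked hand position in the performer's capture volume and converting it to an IK target in the puppet's space. A minimal sketch, with hypothetical coordinate ranges (the actual setup in Unity/Mecanim would differ):

```python
def remap(value, src_min, src_max, dst_min, dst_max):
    """Linearly remap one tracked coordinate from the performer's
    capture volume into the puppet's screen space."""
    t = (value - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

def hands_to_ik_targets(left_hand, right_hand):
    """Map tracked hand positions (metres, sensor space) to 2D IK
    targets for the puppet's head and tail (screen units).
    Assumed ranges: hands move within x in [-1, 1], y in [0, 2]."""
    head = (remap(left_hand[0], -1.0, 1.0, 0.0, 100.0),
            remap(left_hand[1], 0.0, 2.0, 0.0, 100.0))
    tail = (remap(right_hand[0], -1.0, 1.0, 0.0, 100.0),
            remap(right_hand[1], 0.0, 2.0, 0.0, 100.0))
    return head, tail

# One tracked frame: left hand raised, right hand held low.
head_target, tail_target = hands_to_ik_targets((0.0, 1.5), (0.5, 0.5))
print(head_target, tail_target)
```

The IK solver then takes over: given the head and tail targets, it works out the joint rotations along the puppet's spine so the end points reach them, which is what Mecanim handles in the actual piece.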
Luis was also responsible for a digital puppet mentioned in a post about Kinect-based digital puppetry on Machin-X two years ago.