Computer facial animation

Computer facial animation is primarily an area of computer graphics that encapsulates methods and techniques for generating and animating images or models of a character's face. The character can be a human, a humanoid, an animal, or a fantasy creature. Because of its subject and output type, it is also related to many other scientific and artistic fields, from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication, together with advances in computer graphics hardware and software, has generated considerable scientific, technological, and artistic interest in computer facial animation.

Although the development of computer graphics methods for facial animation started in the early 1970s, the major achievements in this field are more recent and have occurred since the late 1980s.

The body of work around computer facial animation can be divided into two main areas: techniques to generate animation data, and methods to apply such data to a character. Techniques such as motion capture and keyframing belong to the first group, while morph target animation (more commonly known as blendshape animation) and skeletal animation belong to the second. Facial animation has become well known and popular through animated feature films and computer games, but its applications extend to many other areas, including communication, education, scientific simulation, and agent-based systems (for example, online customer service representatives). With recent advances in the computational power of personal and mobile devices, facial animation has transitioned from appearing only in pre-rendered content to being created at runtime.

History

Human facial expression has been the subject of scientific investigation for more than one hundred years. The study of facial movements and expressions began from a biological point of view. After some earlier investigations, for example by John Bulwer in the late 1640s, Charles Darwin's book The Expression of the Emotions in Man and Animals can be considered a major departure point for modern research in behavioural biology.

Computer-based facial expression modelling and animation is not a new endeavour. The earliest work on computer-based facial representation was done in the early 1970s. The first three-dimensional facial animation was created by Parke in 1972. In 1973, Gillenson developed an interactive system to assemble and edit line-drawn facial images. In 1974, Parke developed a parameterized three-dimensional facial model.

One of the most important attempts to describe facial movements was the Facial Action Coding System (FACS). Originally developed by Carl-Herman Hjortsjö [1] in the 1960s and updated by Ekman and Friesen in 1978, FACS defines 46 basic facial Action Units (AUs). A major group of these Action Units represents primitive movements of facial muscles in actions such as raising the brows, winking, and talking. Eight AUs describe rigid three-dimensional head movements (i.e. turning and tilting left and right, and moving up, down, forward, and backward). FACS has been used successfully both for describing the desired movements of synthetic faces and for tracking facial activity.
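
For illustration, the following minimal Python sketch (a demonstration assumption, not part of FACS or of any particular animation system) represents an expression as a set of Action Unit activations; the AU numbers and descriptions follow the published FACS scheme, while the function and variable names are hypothetical:

 # A few well-known FACS Action Units (AU number -> description).
 ACTION_UNITS = {
     1: "Inner brow raiser",
     2: "Outer brow raiser",
     4: "Brow lowerer",
     6: "Cheek raiser",
     12: "Lip corner puller",
 }

 def make_expression(activations):
     """Return an expression as a mapping of AU number to intensity in [0, 1]."""
     for au, intensity in activations.items():
         if au not in ACTION_UNITS:
             raise ValueError("unknown Action Unit: %d" % au)
         if not 0.0 <= intensity <= 1.0:
             raise ValueError("intensity out of range for AU %d" % au)
     return dict(activations)

 # A smile is commonly coded as AU 6 + AU 12 (cheek raiser + lip corner puller).
 smile = make_expression({6: 0.7, 12: 0.9})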

The early 1980s saw the development of the first physically based, muscle-controlled face model by Platt and the development of techniques for facial caricature by Brennan. In 1985, the animated short film Tony de Peltrie was a landmark for facial animation: it marked the first time that computer facial expression and speech animation were a fundamental part of telling the story.

The late 1980s saw the development of a new muscle-based model by Waters, the development of an abstract muscle action model by Magnenat-Thalmann and colleagues, and approaches to automatic speech synchronization by Lewis and Hill. The 1990s saw increasing activity in the development of facial animation techniques and the use of computer facial animation as a key storytelling component, as illustrated in animated films such as Toy Story (1995), Antz (1998), and Shrek and Monsters, Inc. (both 2001), and in computer games such as The Sims. Casper (1995), a milestone of the decade, was the first film in which a lead character was produced exclusively using digital facial animation.

The sophistication of the films increased after 2000. In The Matrix Reloaded and The Matrix Revolutions, dense optical flow from several high-definition cameras was used to capture realistic facial movement at every point on the face. The Polar Express used a large Vicon system to capture upwards of 150 points. Although these systems are largely automated, a great deal of manual clean-up effort is still needed to make the data usable. Another milestone in facial animation was reached by The Lord of the Rings, where a character-specific shape-based system was developed. Mark Sagar pioneered the use of FACS in entertainment facial animation, and FACS-based systems developed by Sagar were used on Monster House, King Kong, and other films.

Techniques

Generating facial animation data

The generation of facial animation data can be approached in several ways:

  1. marker-based motion capture, tracking points or markers on the face of a performer;
  2. markerless motion capture, using different types of cameras;
  3. audio-driven techniques; and
  4. keyframe animation (see the sketch below).
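
As a concrete illustration of the keyframing approach, the following minimal Python sketch (hypothetical names and data, not drawn from any specific production system) linearly interpolates a keyframed facial parameter between key times:

 import bisect

 def sample_keyframes(keyframes, t):
     """Linearly interpolate a parameter at time t from sorted (time, value) pairs."""
     times = [time for time, _ in keyframes]
     i = bisect.bisect_right(times, t)
     if i == 0:
         return keyframes[0][1]   # before the first key: hold the first value
     if i == len(keyframes):
         return keyframes[-1][1]  # after the last key: hold the last value
     (t0, v0), (t1, v1) = keyframes[i - 1], keyframes[i]
     alpha = (t - t0) / (t1 - t0)
     return v0 + alpha * (v1 - v0)

 # Key a "jaw open" parameter: closed at 0 ms, fully open at 200 ms, closed at 400 ms.
 jaw_open = [(0, 0.0), (200, 1.0), (400, 0.0)]
 print(sample_keyframes(jaw_open, 100))  # 0.5, halfway through opening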

Applying facial animation to a character

The main techniques used to apply facial animation to a character are:

  1. morph target (blendshape) animation (illustrated below);
  2. bone-driven animation;
  3. texture-based animation (2D or 3D); and
  4. physiological models.
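
Morph target (blendshape) animation can be summarized as adding a weighted sum of per-vertex offsets to a neutral mesh. The following Python sketch, with assumed array shapes and hypothetical names, illustrates the idea:

 import numpy as np

 def blend(neutral, targets, weights):
     """Blend morph targets.

     neutral: (V, 3) array of neutral-pose vertex positions.
     targets: dict of name -> (V, 3) array of vertex positions for that target.
     weights: dict of name -> blend weight, typically in [0, 1].
     """
     result = neutral.copy()
     for name, w in weights.items():
         result += w * (targets[name] - neutral)  # add the weighted offset
     return result

 # Toy mesh with two vertices, and a single "smile" target.
 neutral = np.zeros((2, 3))
 targets = {"smile": np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])}
 animated = blend(neutral, targets, {"smile": 0.5})  # halfway to a full smile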

[Figure: screenshot from "Kara", an animated short by Quantic Dream]

Face animation languages

Many face animation languages are used to describe the content of facial animation. They can be input to compatible "player" software, which then creates the requested actions. Face animation languages are closely related to other multimedia presentation languages such as SMIL and VRML. Owing to the popularity and effectiveness of XML as a data representation mechanism, most face animation languages are XML-based. For instance, this is a sample from the Virtual Human Markup Language (VHML):

 <vhml>
   <person disposition="angry">
     First I speak with an angry voice and look very angry,
     <surprised intensity="50">
       but suddenly I change to look more surprised.
     </surprised>
   </person>
 </vhml>

More advanced languages allow decision-making, event handling, and parallel and sequential actions. The following is an example from the Face Modeling Language (FML):

 <fml>
   <act>
     <par>
       <hdmv type="yaw" value="15" begin="0" end="2000" />
       <expr type="joy" value="-60" begin="0" end="2000" />
     </par>
     <excl event_name="kbd" event_value="" repeat="kbd;F3_up" >
       <hdmv type="yaw" value="40" begin="0" end="2000" event_value="F1_up" />
       <hdmv type="yaw" value="-40" begin="0" end="2000" event_value="F2_up" />
     </excl>
   </act>
 </fml>

References

  1. Hjortsjö, C.-H. (1969). Man's Face and Mimic Language.
  2. Ding, H.; Hong, Y. (2003). "NURBS curve controlled modeling for facial animation". Computers and Graphics. 27(3): 373–385.
  3. Lucero, J. C.; Munhall, K. G. (1999). "A model of facial biomechanics for speech production". Journal of the Acoustical Society of America: 2834–2842. doi:10.1121/1.428108. PMID 10573899.
