FCITR MAGAZINE
Assoc.Prof.Dr. Ahmad Hoirul Basori
Faculty of Computing and Information Technology Rabigh
King Abdulaziz University
Kingdom of Saudi Arabia
FACIAL ANIMATION
Realistic emotional expressions for virtual humans, such as being scared, sweating and blushing, are a difficult task to accomplish in 3D games. Furthermore, animating the human face still presents interesting challenges because of its familiarity: the face is the part we use to recognize individuals. Facial modelling and facial animation are both important in developing realistic computer facial animation, since the model and the animation technique together drive the result. Some challenges observed in this research include the complex geometry of facial skin, lighting, and high-quality texturing. The facial action coding system (FACS) is employed to describe and generate facial expressions. It breaks facial actions down into minor units known as action units (AUs); facial expressions are generated by combining specific independent action units. In addition, a realistic virtual human also requires a high-definition 3D laser scan to produce good-quality emotional facial expressions such as sadness, anger, happiness and fear. Realistic facial animation is required to bring humans into full mental immersion inside virtual reality or serious games. There are several approaches to producing facial animation, such as:

• Facial Action Coding System (FACS)
• MPEG-4 (Moving Picture Experts Group-4)
• Pseudo-Muscle Based
• Blend-Shape Interpolation
• Facial Rigging
• GUI Approach
• Emotional Facial Expression
• Future Facial Animation
FACIAL ACTION CODING SYSTEM (FACS)
FACS is a system that measures and describes facial behaviors in terms of the underlying facial muscles. Paul Ekman and Wallace Friesen developed it in 1978. FACS is a well-known standard that shows the way each facial muscle changes the facial appearance. It is derived from an analysis of facial anatomy that describes the conduct of the human face's muscles, including the movements of the tongue and jaw. By exploring facial anatomy, we can conclude that changes in facial expression are caused by facial actions, and FACS works from facial actions to understand facial behaviors. Facial action units (AUs) are defined in accordance with these actions, and every AU can involve numerous facial muscles. FACS divides the human face into 46 action units. Every unit embodies an individual muscle action, or a group of muscles, that characterizes a single facial position. The principle is that every AU is the smallest unit that cannot be reduced into smaller action units. Through accurately sorting the various AUs on the face, FACS is able to mimic all the facial muscle movements. Facial expressions are generated by combining action units, and different combinations produce different facial expressions. For example, joining AU1 (inner brow raiser), AU4 (brow lowerer), AU15 (lip corner depressor), and AU23 (lip tightener) generates a sad expression.
Figure 1. Illustration of Action Units (AUs) in the Facial Action Coding System for the human face.
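As a minimal sketch of this idea, the snippet below represents an expression as a set of AUs with intensities. The AU numbers and names follow the sadness example above; the intensity values and the data layout itself are illustrative, not part of FACS.

```python
# Minimal sketch: a facial expression as a combination of FACS action
# units (AUs). AU numbers/names follow the sadness example in the text;
# the intensities and data structure are illustrative.

AU_NAMES = {
    1: "inner brow raiser",
    4: "brow lowerer",
    15: "lip corner depressor",
    23: "lip tightener",
}

# An expression is a combination of independent AUs with intensities
# (0.0 = relaxed, 1.0 = maximum activation).
SADNESS = {1: 0.8, 4: 0.6, 15: 0.9, 23: 0.4}

def describe(expression):
    """List the active AUs that together produce the expression."""
    for au, weight in sorted(expression.items()):
        print(f"AU{au} ({AU_NAMES[au]}): intensity {weight:.1f}")

describe(SADNESS)
```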
MPEG-4 (MOVING PICTURE EXPERTS GROUP-4)
MPEG-4 is an ISO standard for multimedia (MPEG-4, 1997). It was first released in 1999, and since then many research areas have concentrated on this standard because it covers a wide range of video and audio as well as 3D graphics. MPEG-4 is the only such standard that deals with facial animation, and it has therefore been used as a basis for the development of new methods. MPEG-4 Facial Animation (FA) outlines the many parameters of a talking face in a standardized way. It defines Face Definition Parameters and Facial Animation Parameters for encoding facial actions. The head is described by 84 feature points (FPs); every feature point describes the shape of the area that corresponds to it in a standard face model. Hence, the feature points can be used to define animation parameters on any face that conforms to this standard, even when switching between different models. In the current standard there are 68 universal Facial Animation Parameters (FAPs) for the face. FAP values are expressed in Facial Animation Parameter Units (FAPUs), which are defined from distances between facial features on the neutral face. A FAPU is therefore not universal: it is exclusive to the 3D face model to which it is applied. When a standard FAP, the corresponding FPs and the corresponding FAPU are available, the values can be adjusted and applied to a new model, freely exchanging animation between face models. The feature points can be combined to produce a face (using any graphics method), and that face can then be animated by low-level commands in FAPs. The Facial Animation Parameters fall into two categories: the first two FAPs, which can represent facial expressions by themselves, are high-level parameters, while the remaining ones are low-level parameters.
Figure 2. MPEG-4 feature points
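The sketch below illustrates how FAPUs make FAP values model-independent, under the assumption of a Mouth-Nose Separation (MNS) FAPU computed as the neutral-face distance divided by 1024, as in the MPEG-4 FA standard. The feature-point coordinates and the FAP value are made-up placeholders.

```python
# Sketch: one encoded FAP value, scaled by each model's own FAPU,
# yields model-appropriate displacements on differently sized faces.
# The coordinates and FAP value below are placeholders.

def mns_fapu(nose_y, mouth_y):
    """Mouth-Nose Separation FAPU: neutral-face distance / 1024."""
    return abs(nose_y - mouth_y) / 1024.0

# Placeholder neutral-face measurements for two different face models.
small_face = mns_fapu(nose_y=0.42, mouth_y=0.30)
large_face = mns_fapu(nose_y=0.84, mouth_y=0.60)

# A vertical lip FAP is encoded once, in FAPU units; each model scales
# it by its own FAPU, so the same stream animates both faces.
fap_value = 120
print("small face displacement:", fap_value * small_face)
print("large face displacement:", fap_value * large_face)
```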
PSEUDO-MUSCLE BASED APPROACH
The pseudo-muscle-based technique uses muscle elements whose deformation is controlled by mathematical operators rather than by physical simulation. The muscle-based technique, in contrast, is built from masses and springs to produce muscle animation (see Figure 3 for an illustration of muscle-based rendering).
Figure 3. Muscle Based Rendering
Other researchers enhance muscle-driven facial expression using a parameterized method. Waters' work is based on FACS theory and uses action units to drive the muscle movement, as shown in Figure 4.
Figure 4. Parameterized Muscle Model
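As a hedged sketch of this kind of parameterized muscle, the snippet below implements a simplified Waters-style linear muscle: vertices inside a cone of influence are pulled toward the bone attachment, with angular and radial falloff. The specific falloff functions and parameters are illustrative choices, not Waters' exact formulation.

```python
# Simplified sketch of a Waters-style linear (vector) muscle of the
# kind shown in Figure 4. The cosine falloffs are one common choice;
# Waters' full model adds a separate radial falloff zone, omitted here.
import numpy as np

def apply_linear_muscle(vertices, head, tail, contraction, omega=np.pi / 4):
    """Pull vertices toward the muscle head (the bone attachment).

    head/tail: attachment and skin-insertion points of the muscle,
    contraction: activation in [0, 1], omega: half-angle of the cone
    of influence around the muscle axis.
    """
    axis = tail - head
    reach = np.linalg.norm(axis)
    axis = axis / reach
    deformed = vertices.astype(float).copy()
    for i, p in enumerate(vertices):
        v = p - head
        dist = np.linalg.norm(v)
        if dist == 0 or dist > reach:
            continue  # outside the muscle's reach
        alpha = np.arccos(np.clip(np.dot(v / dist, axis), -1.0, 1.0))
        if alpha > omega:
            continue  # outside the angular cone of influence
        angular = np.cos(alpha * np.pi / (2 * omega))  # strongest on-axis
        radial = np.cos(dist / reach * np.pi / 2)      # fades toward the tail
        deformed[i] = p + contraction * angular * radial * (head - p)
    return deformed

verts = np.array([[0.2, 0.1, 0.0], [0.5, 0.0, 0.0]])
print(apply_linear_muscle(verts, head=np.zeros(3),
                          tail=np.array([1.0, 0.0, 0.0]), contraction=0.3))
```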
BLEND-SHAPE INTERPOLATION
Blend shapes have become the most widely used animation method. Blend-shape animation can also be considered shape-interpolation animation. Blend shapes are commonly used in commercial animation software packages, including Maya and 3D Studio Max. A blend shape is achieved by distorting one shape while fading it into another, using corresponding points and vectors marked on the "before" and "after" shapes of the morph. The core concept is that animators create several key poses of a subject and the animation system automatically interpolates the frames in between. Technically, blend-shape animation is a point-set interpolation, where an interpolation function (typically linear) specifies smooth motion between two sets of key points. The advantage of this type of interpolation is that it is easy to compute. The disadvantage is its limitation in producing a vast range of lifelike facial expressions. In addition, during production the animators need to look backward and forward in order to harmonize the final result of the animated facial expression. Eftychios D. Sifakis, in his thesis, created an example of blend-shape facial expression as a demo program with characters rendered in DirectX 10. Figure 5 shows how the blend-shape interpolation and facial anatomy are constructed.
Figure 5. Blend shape interpolation
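A minimal sketch of the interpolation itself: the animated face is the base mesh plus a weighted sum of per-target offsets, with weights interpolated linearly between key poses. The mesh data and target names below are made up.

```python
# Minimal sketch of blend-shape (shape interpolation) animation:
# result = base + sum_i( weight_i * (target_i - base) ).
import numpy as np

def blend(base, targets, weights):
    """base: (N, 3) neutral vertices; targets: dict name -> (N, 3) key
    pose; weights: dict name -> blend weight in [0, 1]."""
    result = base.copy()
    for name, target in targets.items():
        result += weights.get(name, 0.0) * (target - base)
    return result

# Tiny 2-vertex example with made-up "smile" and "jaw_open" key poses.
base = np.zeros((2, 3))
targets = {"smile": np.array([[0.1, 0.2, 0.0], [0.0, 0.0, 0.0]]),
           "jaw_open": np.array([[0.0, 0.0, 0.0], [0.0, -0.3, 0.0]])}

# Linear in-between frames: the animator sets the key weights, the
# system interpolates the rest automatically.
for t in (0.0, 0.5, 1.0):
    frame = blend(base, targets, {"smile": t, "jaw_open": 1.0 - t})
    print(t, frame.tolist())
```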
FACIAL RIGGING
Facial rigging using articulated joints is one such approach. The idea of this method is to implant a joint hierarchy into the face model. A joint is the link between two skeletal segments, bridging the interaction of the model's elements. In the case of the face, the hierarchy can consist of several joints connecting the jaw and skull, the skull and eyeballs, and the jaw and tongue. Another technique, facial rigging using blend shapes, focuses on shape interpolation to mimic muscle appearance by using two or more shapes, divided into a base shape and target shapes. Furthermore, the cluster principle in this method is to create a group of points that share a coordinate transformation. The cluster transformation has its greatest effect on points nearest the cluster origin and tapers off as points move away from the origin, as sketched below. The transformation can be scaled, translated and rotated at the same time. To add further effect to the cluster, each point can be assigned a different weighting value.
Figure 6. Facial Rigging
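A minimal sketch of the cluster principle, assuming a simple linear distance taper for the weights and showing only a translation for brevity:

```python
# Sketch of a cluster deformation: a transformation applied to a group
# of points at full strength near the cluster origin, tapering with
# distance. The linear taper is one simple choice of weight function.
import numpy as np

def cluster_deform(points, origin, translate, radius):
    """Translate points near `origin`, weighted by a distance falloff."""
    deformed = points.copy()
    for i, p in enumerate(points):
        dist = np.linalg.norm(p - origin)
        # Weight 1 at the origin, 0 at `radius` and beyond.
        weight = max(0.0, 1.0 - dist / radius)
        deformed[i] = p + weight * translate
    return deformed

points = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(cluster_deform(points, origin=np.zeros(3),
                     translate=np.array([0.0, 0.1, 0.0]), radius=1.0))
```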
GUI APPROACH
Functions in the facial rig are responsible for controlling the joints, blend shapes and clusters that manipulate the face surface of the 3D model. These functions can be written as equations mapping control parameters to the expected effect on the face surface, and they can be extended into a user interface to give the user easier control over each facial region. In GUI mode, each control value for joint angles, cluster transformations, blend shapes and functional expressions has a particular keyframe position at particular times. By attaching GUI controls to the desired areas, we can easily control the facial expression of 3D humanoid models, as sketched below.
Figure 7. GUI Model
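As an illustration of such a control function, the sketch below maps a single hypothetical GUI slider to several underlying rig parameters and keyframes it over time; all parameter names and scale factors are made up.

```python
# Illustrative sketch of a rig control function behind a GUI slider:
# one user-facing value drives several low-level rig parameters
# (a joint angle and two blend-shape weights; names are hypothetical).

def smile_control(slider):
    """Map a single GUI slider value in [0, 1] to rig parameters."""
    return {
        "jaw_joint_angle": 5.0 * slider,         # degrees, hypothetical
        "blendshape_smile": slider,              # direct mapping
        "blendshape_cheek_raise": 0.6 * slider,  # secondary motion
    }

# Keyframing the slider at particular times, as described above.
keyframes = {0.0: 0.0, 1.0: 1.0}  # time -> slider value
for t, value in sorted(keyframes.items()):
    print(t, smile_control(value))
```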
EMOTIONAL FACIAL EXPRESSION
The facial expression coding system proposed by Ekman (Ekman, 1982; Ekman, 2003; Ekman and Friesen, 1978) identifies six basic emotions: anger, joy, sadness, fear, disgust and surprise. These emotions are used as the basis for creating emotional expressions on a 3D humanoid model. As a continuation of this research, in 1990 Faigin presented the popular argument that expressions are mainly determined by three meaningful regions, the eyebrows, the eyes and the mouth, which form the universal expressions of a 3D humanoid model; see Figure 8 (Faigin, 1990). Anger drags the eyebrows close to each other and lower than their normal position, while for strong anger a human will usually open the mouth or even shout (see the illustration in Figure 8). Joy, or happiness, is an expression of relaxed facial muscles: the lips are widely opened and the eyebrows seem calm. Sadness makes the eyebrows look stretched upward, and the mouth is closed but not tightly; the lower eyelids pull downward, giving the eyes a crying look. Fear makes the eyebrows pull upward and close to each other; the eyes are widely opened but drawn toward the upper facial region, and the lower lip receives more pressure than the upper lip. In disgust, the eyebrows, eyelids and eyes are pulled together, and the area near the nose is pulled and raised, while the mouth is half open and the other parts seem closed. For surprise, the eyes are widely opened, the eyebrows and eyelids are raised, and the mouth is open but in a relaxed position.
Figure 8. Re-illustration of Universal Expression
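The three-region descriptions above can be summarized as structured data; the sketch below does exactly that, with the wording condensed from the text and the data layout itself purely illustrative.

```python
# The six basic emotions as Faigin-style three-region descriptors
# (eyebrows, eyes, mouth), condensed from the descriptions in the text.

UNIVERSAL_EXPRESSIONS = {
    "anger":    {"eyebrows": "pulled together and lowered",
                 "eyes": "fixed",
                 "mouth": "open or shouting when anger is strong"},
    "joy":      {"eyebrows": "calm",
                 "eyes": "relaxed",
                 "mouth": "lips widely opened"},
    "sadness":  {"eyebrows": "stretched upward",
                 "eyes": "lower eyelids pulled downward",
                 "mouth": "closed but not tight"},
    "fear":     {"eyebrows": "pulled upward and together",
                 "eyes": "widely opened, drawn upward",
                 "mouth": "lower lip pressed more than the upper"},
    "disgust":  {"eyebrows": "pulled together with the eyelids",
                 "eyes": "narrowed",
                 "mouth": "half open, nose area raised"},
    "surprise": {"eyebrows": "raised with the eyelids",
                 "eyes": "widely opened",
                 "mouth": "open but relaxed"},
}

for emotion, regions in UNIVERSAL_EXPRESSIONS.items():
    print(emotion, "->", regions)
```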
FUTURE FACIAL ANIMATION
Facial appearance models have become more popular in sports and the movie industry. Facial appearance (expression) changes regularly during speech, exercise and emotional moments. Reproducing facial appearance in real time is quite challenging, because people are aware of, and very sensitive to, the appearance of their skin. Most facial colour appearance models treat skin as a two-layered translucent structure in which colour appearance relates to the distribution of melanin and hemoglobin, and this description has been confirmed to cover the observed range of skin appearances. Jimenez et al. (2010) adopted a similar non-invasive method for in-vivo mapping of hemoglobin concentration and distribution across wide areas of the skin, relating the change of hemoglobin to dynamic facial expression. The skin colour reconstruction by Jimenez et al. (2010) uses a two-layer skin model.
Figure 9. Realistic Facial Animation, Jimenez et al. (2010)
Figure 10. Realistic Facial Animation, Jimenez et al. (2012)
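As a hedged sketch of the two-layer idea, the snippet below darkens a base skin colour by Beer-Lambert-style attenuation along fixed melanin and hemoglobin absorption directions, so that raising the hemoglobin concentration reproduces a blush. The absorption vectors and concentrations are placeholder values, not measured spectra or Jimenez et al.'s actual model.

```python
# Sketch: skin colour driven by melanin and hemoglobin concentrations.
# Higher pigment density attenuates the base colour exponentially
# (Beer-Lambert style). All numeric values below are placeholders.
import numpy as np

MELANIN_AXIS = np.array([0.74, 0.55, 0.40])     # placeholder RGB densities
HEMOGLOBIN_AXIS = np.array([0.45, 0.65, 0.60])  # placeholder RGB densities

def skin_rgb(melanin, hemoglobin, base_rgb=(0.95, 0.80, 0.72)):
    """Rendered skin colour for the given pigment concentrations."""
    density = melanin * MELANIN_AXIS + hemoglobin * HEMOGLOBIN_AXIS
    return np.array(base_rgb) * np.exp(-density)

# Blushing: hemoglobin rises with the expression, melanin stays fixed.
print("neutral: ", skin_rgb(melanin=0.2, hemoglobin=0.1))
print("blushing:", skin_rgb(melanin=0.2, hemoglobin=0.5))
```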
FACIAL ANIMATION TOOLS
There are many programming tools, editors and game engines that can be used to produce realistic facial animation, such as:

• Autodesk Maya and 3ds Max
• C# XNA programming
• Ogre
• Horde3D
• OpenGL
• Unity 3D
• FaceGen