Group Members:
R. Thirunavukkarasu (070497N)
03/12/2010
FAPS
Table of Contents
Table of Tables ............................................................................................. iii
1. Introduction .............................................................................................. 1
1.1 Purpose ................................................................................................... 1
1.2 Project Scope ........................................................................................... 1
2. Architectural Representation ................................................................... 2
3. Component Architecture ........................................................................... 4
3.1 System Overview ..................................................................................... 4
3.1.2 Functional Level Use Case Diagram .................................................... 5
3.1.1.1 Noise removal ................................................................................... 7
3.1.1.2 Extract illumination effect of an image ............................................ 7
3.2.3 Texture Generation .............................................................................. 9
3.2.4 Flow Diagram of the feature extraction .............................................. 9
3.4 Creating the 3D model .......................................................................... 13
3.4.1.2 Creating the conformation parameter set ...................................... 14
3.4.2 Develop the 3D model using derived facial parameter set ............... 14
CSE
3.4.3 Facial texture and shape generation, using 2D image data ............... 16
Table of Figures
Figure 1: Overall System Architecture ......................................................... 3
Figure 2: Flow diagram of System's functions .............................................. 4
Figure 3: System's Function Use Case Diagram ............................................ 5
Figure 4: Component Diagram of facial feature Extraction ......................... 6
Figure 5: Flow Diagram of the feature extraction ........................................ 9
Figure 6: Sequence Diagram of the feature extraction ............................... 10
Figure 7: Parameterization of face database .............................................. 12
Figure 8: Components of 3D model ............................................................. 13
Figure 9: Overview of model parameters ................................................... 16
Figure 10: Flow Diagram of the 3D model .................................................. 17
Figure 13: Flow Diagram of Facial Component Addition ............................ 20
Figure 14: Forehead wrinkle Addition ........................................................ 21
Table of Tables
1. Introduction
1.1 Purpose
This software architecture document describes the project FAPS (Face Age Progression for the Sri Lankan context), which progresses the age of a given person's image in order to produce a future image of that person. With the aim of finding an algorithm that can be applied in the Sri Lankan context, this document provides a detailed summary of the design and implementation of the FAPS project, breaking the project into domains and the domains into components, and describing how each component will be implemented.

This document will first focus on obtaining facial data from facial databases and 2D images. It will then address creating a parameterized 3D model using the facial database data and merging the 2D image data into the created 3D model. Finally, it will discuss applying the face aging algorithms to this created model.

This software architecture document is key reference material for all the contributors engaged in this project and in the projects associated with it.

This software architecture document also serves as a reference for all the internal mentors, and for all academic staff members of the University of Moratuwa who are involved in supervising and evaluating the projects carried out under the CS-4200 module.

1.2 Project Scope

The main intention of the FAPS project is to provide an algorithm that can be applied for face progression in the Sri Lankan context. FAPS is focused on building a general application or algorithm, rather than a product for a single application, so that it can be applied in the various fields and applications where face age progression is needed.
2. Architectural Representation
The following figure provides an overview of the system. As the figure shows, there are two inputs passed to the system, and those inputs are stored in a database after the parameterization process. Before this process takes place, noise removal and illumination-effect extraction are applied to each image in order to increase the accuracy of the parameterized values.

Once the images from the database are parameterized, all of the parameterized values are analyzed to obtain the mean values of specific parameters across different poses. This method helps to reduce the error of the parameters. When we construct the 3D model, these mean values are used to build the model for the Sri Lankan context. This model allows the system to create a person-specific 3D model by changing some of its parameters; the system also runs an algorithm in the back end to make the 3D model realistic.

For the age progression, the process is divided into two parts: shape variation, and texture and wrinkle variation. Each part has its own database that contains the variation of a face model with aging. One of the main advantages of using a 3D model is that the shape variation can be done easily by changing some specific parameters of the 3D model. The texture and wrinkle variation is applied directly on the model, and the system uses an algorithm within the process to make these changes accurately.
[Architecture diagram: the face database and the input 2D image feed the parameterization process, which drives the parameterized 3D face model.]
Figure 1: Overall System Architecture
3. Component Architecture
3.1 System Overview
This system takes a person's image and produces the age-progressed image of that person. During the age progression process, the system goes through many sub-processes in order to make the progressed image realistic. The following diagram shows the entire system's working path.

[Flow diagram: the image database is analyzed and parameterized; the input image is parameterized; the shape variation is applied on the 3D model; the texture and wrinkle changes are applied; the system outputs the age-progressed image.]
Figure 2: Flow diagram of System's functions

The main component of the system is the parameterized 3D face model, which provides a better way to progress the age of the face compared to 2D-based age progression. The 3D model also allows the system to increase the accuracy of the age-progressed face by considering the other poses of the face.

In addition to the input image, the system takes another input, which is the Sri Lankan face database that is used to construct the parameterized 3D face model. Both inputs are passed through a special process called parameterization (facial feature extraction). This is the most sensitive process in the system, because the remaining system is developed based on those parameters.
3.1.2 Functional Level Use Case Diagram

Figure 3: System's Function Use Case Diagram
Facial Feature Extraction

This step develops the 3D model for a particular 2D face image which is given as input. In the initial phase of our project we take only a frontal face image and develop the 3D model of that 2D face image. The main purpose of this step is facial feature extraction for developing the 3D model of that face. It has three parts: feature extraction, image normalization and texture generation.

[Component diagram: facial feature extraction is split into feature extraction (noise removal, region detection, eye position, eye contour, lip contour, nose contour, hair extraction), image normalization and texture generation.]
Figure 4: Component Diagram of facial feature Extraction
3.1.1.1 Noise removal

Noise is produced by the sensor and circuitry of a scanner or digital camera, and it is generally regarded as an undesirable by-product of image capture. There are a lot of image noise removal algorithms; we are using a new mixed-noise removal algorithm based on the measuring of medium truth scale. It uses a distance ratio function to detect the noisy pixels and to restore the image.

3.1.1.2 Extract illumination effect of an image

Cast shadows can generate prominent contours in facial images. The effects of the distribution of light sources around a face include changing the brightness distribution in the images, the locations of attached shadows, and specular reflections. The illumination effects are obtained using the standard Phong model, which approximately describes the diffuse and specular reflection on a surface. After noise removal and extraction of the illumination effect, we get a normalized image. The rest of the steps depend on this normalized image.
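The standard Phong model mentioned above combines ambient, diffuse, and specular terms. The sketch below illustrates the idea only; the vectors and coefficients are illustrative assumptions, not values from this project.

```python
import numpy as np

def phong_intensity(normal, light_dir, view_dir,
                    ka=0.1, kd=0.7, ks=0.2, shininess=16.0):
    """Approximate surface intensity under the Phong reflection model.

    ka, kd, ks are ambient, diffuse and specular coefficients
    (illustrative values, not taken from the FAPS document).
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)

    diffuse = max(float(np.dot(n, l)), 0.0)           # Lambertian term
    r = 2.0 * np.dot(n, l) * n - l                    # reflection of l about n
    specular = max(float(np.dot(r, v)), 0.0) ** shininess
    return ka + kd * diffuse + ks * specular

# A surface lit head-on is brighter than one lit at a grazing angle.
head_on = phong_intensity(np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 0.0, 1.0]))
grazing = phong_intensity(np.array([0.0, 0.0, 1.0]),
                          np.array([1.0, 0.0, 0.2]),
                          np.array([0.0, 0.0, 1.0]))
```

Inverting such a model over the face surface is what allows the illumination effect to be separated out during normalization.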
Region detection

Many face detection strategies have been proposed. Among those, color-based face region detection has gained increasing popularity. The success of color-based face detection depends heavily on the accuracy of the skin-color model. Skin color alone is usually not enough to detect the potential face regions reliably, due to possible inaccuracies of camera color reproduction and the presence of non-face skin-colored objects in the background. Popular methods for skin-colored face region localization are based on connected-components analysis and integral projection.
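As a rough illustration of color-based skin detection, a commonly used approach thresholds the chrominance (Cb/Cr) channels. The ranges below are typical textbook values, not the tuned skin-color model of this project.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) RGB image in [0, 255] to YCbCr (ITU-R BT.601)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of likely skin pixels (textbook Cb/Cr ranges)."""
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# A skin-like tone is accepted; a saturated blue pixel is rejected.
img = np.array([[[224, 172, 138], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(img)
```

Connected-components analysis and integral projection would then be run on such a mask to localize the face region.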
Eye contour detection

For detecting the facial features we use luminance edge analysis, because the detection of sharp changes in luminance helps to locate the feature boundaries. The eye contour model consists of an upper-lid curve as a cubic polynomial, a lower-lid curve as a quadratic polynomial, and an iris circle. The iris center and radius are estimated by the algorithm developed by Ahlberg. From this we can get the eyeball size; iris size and color; pupil size; reflection spots; eyebrow size; and eyelid and eyelash parameters.

Lip contour detection

The lip color differs significantly from that of the skin. Iteratively refined skin and lip color models are used to discriminate the lip pixels from the surrounding skin. From this we can get the lower-lip positions and size, the corner positions, and the color parameters.

Nose contour detection

The representative shape of the nose side has already been exploited in order to increase robustness, by matching it to the edge and dark pixels. As this has some difficulties, our approach utilizes the full information of the gradient vector from the edge detector. From this we can get the nose length and nostril width parameters, but we cannot get the bridge width.
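The eye-contour model described above (a cubic upper lid and a quadratic lower lid) amounts to least-squares polynomial fits through detected edge points. A small sketch with made-up sample points:

```python
import numpy as np

# Hypothetical edge points along an upper and a lower eyelid (pixels).
upper_x = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
upper_y = np.array([5.0, 7.5, 8.0, 7.4, 5.2])
lower_x = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
lower_y = np.array([5.0, 3.8, 3.5, 3.9, 5.1])

# Cubic curve for the upper lid, quadratic for the lower lid,
# as in the eye-contour model described in the text.
upper_coeffs = np.polyfit(upper_x, upper_y, 3)
lower_coeffs = np.polyfit(lower_x, lower_y, 2)

upper_lid = np.poly1d(upper_coeffs)
lower_lid = np.poly1d(lower_coeffs)

# The fitted curves should separate at the eye centre: upper above lower.
opening = upper_lid(4.0) - lower_lid(4.0)
```

The iris circle would then be fit inside the region bounded by the two curves.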
Chin and cheek contour detection

For the chin and cheek contour, we use a robust approach that relies on a deformable model. From this we can get the chin shape, teeth, lip corners, cheekbone space and size, and cheek-hollow parameters.

Hair extraction

Human hair is a very complex visual pattern, where hundreds of thousands of hairs are grouped into strands and wisps in diverse hair styles. Extracting the hair appearance is an important and challenging problem here. We are using the Generative Sketch Model for Human Hair Analysis and Synthesis [6] for hair extraction.
3.2.3 Texture Generation

In order to combine the texture from different view angles, a public UV plane containing the texture coordinates of the model vertices is created. This is a 2D plane where the points are matched with the vertex positions of the generic model. Such a public coordinate space also makes it easy to combine the texture from a real photo with a synthetic texture. In order to create a textured model, the photos are mapped onto this UV plane.
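One simple way to build such a shared UV plane is a cylindrical projection of the model vertices. The mapping below is a generic technique, assumed here as a stand-in for the project's actual public UV plane.

```python
import numpy as np

def cylindrical_uv(vertices):
    """Map (N, 3) head-model vertices to (N, 2) UV coordinates in [0, 1].

    u comes from the angle around the vertical (y) axis, v from the
    normalized height: a generic cylindrical unwrap.
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = (np.arctan2(x, z) + np.pi) / (2.0 * np.pi)
    y_min, y_max = y.min(), y.max()
    v = (y - y_min) / (y_max - y_min) if y_max > y_min else np.zeros_like(y)
    return np.stack([u, v], axis=-1)

verts = np.array([[0.0, -1.0,  1.0],   # front of head, bottom
                  [1.0,  0.0,  0.0],   # right side, middle
                  [0.0,  1.0, -1.0]])  # back of head, top
uv = cylindrical_uv(verts)
```

Photos taken from different view angles can then be blended in this common coordinate space, since every vertex has a fixed (u, v) address.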
3.2.4 Flow Diagram of the feature extraction

[Flow diagram: noise removal → eye contour → lip contour → chin and cheek contour → hair extraction → texture generation → extracted features.]
Figure 5: Flow Diagram of the feature extraction
3.2.6 Constraints

Using the frontal image, it is hard to get information about the ear boundary of the profile face and the side texture, so we make an assumption when generating the texture of the profile shape: we take the texture to be the same as in the frontal face. The size of the ear and the ear parameters are used in the morphable model.
Particularly, from a profile image we can extract the nose tip, nose bridge top, under-nose point, chin point, and neck top point. For the depth information we use both the profile image and a different-pose image. From those we can get the depth of the 3D face, the nose depth, and the ear shape with its respective sizes.

Another important feature of the face is the ear boundary. Its detection has the following steps: ear initialization, to match the template with the image ear and to translate it to an initial position; and ear refinement, to deform the template in order to match the accurate ear boundary. We use a five-degree polynomial as the template, and the skin-color boundary is used for the ear initialization. Based on the initialized ear template and the matched segment on the ear boundary image, a contour-following method is developed to deform the template to match the whole ear boundary. The template with line segments is approximated using an adaptive polyline fitting algorithm.
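The five-degree polynomial ear template described above can be initialized by a least-squares fit to boundary points and then translated toward the observed ear. A sketch of these two steps with hypothetical boundary samples (the contour-following deformation itself is simplified to a rigid offset here):

```python
import numpy as np

# Hypothetical ear-boundary samples: vertical position t, horizontal offset s.
t = np.linspace(0.0, 1.0, 12)
s = 0.3 * np.sin(np.pi * t) + 0.05 * t   # a roughly ear-like bulge

# Ear template: a degree-5 polynomial, as stated in the text.
template = np.poly1d(np.polyfit(t, s, 5))

# Refinement sketch: translate the template toward newly observed boundary
# points (a crude stand-in for the contour-following deformation step).
observed = s + 0.02                       # boundary found slightly further out
offset = float(np.mean(observed - template(t)))

def refined(tt):
    return template(tt) + offset

residual = float(np.max(np.abs(refined(t) - observed)))
```

In the real pipeline the template would be deformed locally segment by segment, not just shifted, but the polynomial representation stays the same.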
After obtaining all the features for 3D model creation, we create a 2D feature database. This database is modeled as follows: the feature values (eye, ear, nose, lip, chin, cheek and forehead) are in different columns, and the different pose angles (frontal (0), 45, and profile (90)) are in different rows. Using this database, we analyze the mean value of each feature across the different pose angles. These mean values are applied to create the 3D model.
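The pose-by-feature layout and the mean analysis described above can be sketched as a small table; the numbers below are placeholders, not measured values.

```python
import numpy as np

# Columns: feature measurements; rows: pose angles 0 (frontal), 45, 90 (profile).
features = ["eye", "ear", "nose", "lip", "chin", "cheek", "forehead"]
table = np.array([
    [3.1, 2.0, 4.8, 5.0, 6.2, 7.1, 8.0],   # frontal (0 deg)
    [3.0, 2.2, 4.9, 4.9, 6.1, 7.0, 7.9],   # 45 deg
    [2.9, 2.4, 5.1, 4.8, 6.0, 6.9, 7.8],   # profile (90 deg)
])

# Mean value of each feature across the different pose angles,
# as used to drive the parameterized 3D model.
means = dict(zip(features, table.mean(axis=0)))
```

Averaging over poses is what reduces the per-image parameterization error mentioned in section 2.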
[Diagram: parameterization of the 2D face database.]
Figure 7: Parameterization of face database
3.4 Creating the 3D model

The parameterized 3D model is a critical part of the system, because this is where we are going to apply the face progression algorithms. An ideal parameterized 3D model would allow fitting any possible face and applying the face progression algorithms to it. The basic parts of creating the 3D model are preparing the extracted face details from both the images and the facial database so that they are applicable to the 3D model, and applying those data to the 3D model [8]. The modifications have to be done on the facial shape and texture.
[Diagram: developing the 3D model from the expression parameters and the conformation parameters.]
Figure 8: Components of 3D model
Here a hybrid approach is used to develop the parameter set; it involves developing the parameters based on the underlying anatomy of the human face and on observation of the face properties that are unique to each individual. The facial parameters are basically divided into two parts: expression parameters, which control observable facial expressions, and conformation parameters, which control the conformation, or shape, of individual faces [9].

3.4.1.1 Creating the expression parameter set

These parameters include eyebrow arch, eyebrow separation, jaw rotation, mouth width, upper-lip position and mouth-corner position.

3.4.1.2 Creating the conformation parameter set

These parameters include jaw width, forehead shape, nose length, nose width, chin shape, cheek shape, neck shape, eye size and separation, face region proportions and overall proportions [9].
3.4.2 Develop the 3D model using derived facial parameter set

The derived parameters represent the basic observations of the face and the underlying structure of the face. The basic 3D face model is constructed using 3D polygonal surfaces, and this model is then manipulated by using the parameters that control procedural shaping, interpolation, rotation, and scaling of the various features described by the face data set. Here the face skin is constructed as a polygonal surface.

The static data extracted from the facial images need to be transformed into a dynamic, parametrically controlled 3D model. Techniques such as interpolation, rotation, translation, and scaling of the various features are used to transform the static data into a dynamic 3D face model that changes its shape [9]. As we are using a polygonal skin surface, it allows explicit, direct control over the skin vertex positions. This direct control over the skin vertices is very useful for controlling the shape changes over the skin as we apply face age progression to the 3D model.

Interpolation techniques can be used to map most regions of the data set onto the 3D model. The forehead, mouth, neck and cheekbone areas are independently interpolated: in these areas, extreme positions are defined in the 3D model, and the shape is interpolated between these extremes according to the parameter values.

Procedural construction techniques are used to construct the eyeball, iris, pupil size, eye position and eye orientation. Scaling techniques are used to control the relative size and placement of facial features, such as the sizes of the nose, mouth, jaw and chin [9]. Position offset techniques are used to control the length of the nose, the corners of the mouth, the raising of the upper lip, etc. [9]. These effects are blended into the surrounding regions affected by those features.

The following diagram describes the overview of the model structure with the parameters.
[Parameter overview, e.g. EYES: eyeball size, iris size & color, pupil size, reflection spots; LIPS: upper and lower lip positions & size, corner positions, color.]
Figure 9: Overview of model parameters
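The interpolation technique of section 3.4.2 (defining extreme positions for a region and blending between them by a parameter value) reduces to linear vertex interpolation. A minimal sketch with made-up forehead vertices:

```python
import numpy as np

def interpolate_region(extreme_low, extreme_high, param):
    """Blend a facial region between two extreme vertex sets.

    param in [0, 1]: 0 gives the first extreme shape, 1 the second,
    matching the scheme described for the forehead, mouth, neck and
    cheekbone regions.
    """
    param = float(np.clip(param, 0.0, 1.0))
    return (1.0 - param) * extreme_low + param * extreme_high

# Hypothetical extreme positions for three forehead vertices (x, y, z).
flat_forehead   = np.array([[0.0, 9.0, 1.0], [1.0, 9.2, 1.1], [2.0, 9.0, 1.0]])
curved_forehead = np.array([[0.0, 9.6, 1.4], [1.0, 9.9, 1.6], [2.0, 9.6, 1.4]])

halfway = interpolate_region(flat_forehead, curved_forehead, 0.5)
```

Because the skin is a polygonal surface, the same per-vertex blending applies to every independently interpolated region.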
3.4.3 Facial texture and shape generation, using 2D image data

As the 3D facial model is developed using the data from the facial database, the model needs to be modified into the desired 3D model that looks the same as the person appears in the 2D image. The 2D image will only contain frontal face data. These data are merged into the created 3D model using the same techniques as used for modeling the 3D model. For the purpose of capturing the unique features that belong to the 2D image, the data are applied locally to the 3D model; that is, the eyes, nose, mouth, forehead, chin, cheek, etc. are applied locally [7]. Procedural construction is used to model the eyes; interpolation is used to model the forehead, mouth and cheekbones; rotation is used to open the mouth by rotating the lower portion of the jaw; and scaling techniques are used to get the relative sizes and placement of the parameters.
[Flow diagram: 3D model adjustment producing the desired 3D model.]
Figure 10: Flow Diagram of the 3D model
3.4.6 Constraints

Only frontal features are available from the 2D images, so there is a lack of data from different poses when creating the 3D model of a particular image, and modeling the ear becomes difficult. Getting an ideal 3D model from 2D images depends on the quality of the data that we extract from the images.
3.5 Aging the 3D Model

Face aging is considered as shape variation and texture variation. Shape variation is handled under the 3D morphable model using face components, and texture variation is handled by using wrinkles. From the 3D specific model, the gender detail for the aging model is obtained, in order to retrieve the relevant information from the database.

[Flow diagram: 3D specific model → facial component addition → wrinkles addition → Poisson image editing → aging 3D model.]
Figure 11: Flow Diagram of Aging
Different variations occur to different facial components during face aging. At a particular age level, the further addition of facial components one by one, with their relevant changes, modifies the generated 3D face model. The wrinkle addition then performed on the 3D model formulates it into a synthesized aging model. To make the model more realistic in nature, the Poisson image editing technique is eventually applied on the face model. Since the difference between two ages is identified from the database, the constraint of texture variation from person to person is easily avoided.
3.5.1 Facial Component Addition

[Flow diagram: the facial components are added one by one to the 3D model.]
Figure 13: Flow Diagram of Facial Component Addition

This component is very crucial: all the shape variation of the aging image ultimately depends on it, and the effectiveness of the algorithms developed for this component determines how realistic the aging image is. The estimator and the component filter are the two most important algorithms that run on this component, and each facial component has its own estimator and component-filter algorithm. The algorithms may come from existing age progression methods, such as the statistical approaches proposed in [1], [2] and the computational models for age progression [3], with relevant alterations for the Sri Lankan context. Also, with the adaptation of the aging parameters from the available images of the Sri Lankan database, the approaches proposed above will be analyzed.
3.5.2 Wrinkles Addition

This area covers the texture variation of the aging model. As specified in the database, wrinkles under aging are categorized by region, such as forehead and cheek. The amount of wrinkle addition, together with the accuracy of the component addition and shape, would bring the aging model to a considerably realistic age level by integrating the wrinkles from the database into the 3D aging model. The forehead, cheek and chin detection methodology can be found in the 3D model generation.

Figure 14: Forehead wrinkle Addition

Figure 15: Cheek Wrinkle

Cheek aging changes will be obtained from the database as the differentiation of two ages. The extracted curve will be integrated with the 3D model by an effective coordinate-inference algorithm, and Poisson image editing [3] techniques are adopted to obtain seamless synthesis results. After the aging process at all three levels, the aging model integrates the results together to generate the final result. The inclusive face model is an additive model.
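The additive face model described above (per-component shape deltas plus wrinkle-driven texture deltas applied on the base model) can be sketched as follows; all arrays are illustrative stand-ins for the project's data.

```python
import numpy as np

def age_face(base_shape, component_deltas, wrinkle_delta):
    """Apply the additive aging model.

    base_shape: vertex array of the person-specific 3D model;
    component_deltas: per-component shape changes from the database;
    wrinkle_delta: displacement from the wrinkle-addition step.
    """
    aged = base_shape.copy()
    for delta in component_deltas:    # facial components added one by one
        aged = aged + delta
    return aged + wrinkle_delta       # wrinkle addition on top

base = np.zeros((4, 3))
cheek_delta = np.full((4, 3), 0.1)    # hypothetical cheek change
jaw_delta = np.full((4, 3), -0.05)    # hypothetical jaw change
wrinkles = np.full((4, 3), 0.01)      # hypothetical wrinkle displacement

aged = age_face(base, [cheek_delta, jaw_delta], wrinkles)
```

Because the model is additive, each component and wrinkle contribution can be learned and applied independently, which is what lets the parts be cropped from faces of different persons.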
3.5.3 Constraints
- Getting images from Sri Lanka that belong to the same person at different age levels is difficult.
- Applying changes to the 3D model requires effective handling of the shape variation.
- The adaptation of the algorithms that are exploited to add components and wrinkles should be effective, in order to generate a realistic aging model.
- The above additive model mostly requires appropriate filters to be applied to get realistic changes.
- The color variation in the texture of the image with age progression needs to be verified after the adaptation of the database.
- The statistical analysis with the database would provide further prominent aging factors, and might give evidence to eliminate some of the already-considered aging factors when they do not significantly influence Sri Lankan aging progression.
4. Data View
Progression Parameter Adaptation
As age progression is the main research component of our system, it is adapted with the available images of different persons at different ages, obtained from the Registration of Persons Department. To overcome the difficulty of collecting photos of the same person at different age groups, our model decomposes the face into parts and learns the aging pattern for each part from similar images, where the parts can be cropped from faces of different persons. Moreover, each entry of the database at a particular age holds the parameter which makes the person at the current age level different from age 16, so the entries stored in the database represent the amount of variability of a component. When more than one image is available at the same age, the database entry will be the average of all the variabilities.

Facial components and wrinkles are the two major kinds of information extracted from the image. To derive the face components and wrinkles, manual effort will be used: images of the same person at different age levels will be verified to acquire the parameters of the aging factors. Consultation with medical sources and the previous research of [1] has given the clue to how the aging factors are to be categorized.
Sample Database
The following tables will be stored separately for males and females.

Table 1: Wrinkle

Age | Forehead | Cheek
----|----------|------
20  |          |
22  |          |
24  |          |
... |          |
The above components are obtained from the sample database by cropping the appropriate portion of the image. Taking the age-18 components as a reference, the differentiation of the components at the other age levels will be calculated by Eq. 1. If an image is progressed from age 18, then the database parameters will be directly applied on that image.

Database-Parameter = Component-age-X - Component-age-18, where X ranges from 20 to 60.   (Eq. 1)
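Eq. 1 (the database parameter as the difference between a component at age X and the age-18 reference) and its direct application to an age-18 image can be sketched as follows; the component measurements are hypothetical placeholders.

```python
import numpy as np

# Hypothetical component measurements from the sample database
# (e.g. a forehead-wrinkle descriptor at each stored age).
component_by_age = {18: np.array([1.00, 0.50]),
                    20: np.array([1.05, 0.55]),
                    40: np.array([1.40, 0.90])}

def database_parameter(age_x):
    """Eq. 1: Database-Parameter = Component-age-X - Component-age-18."""
    return component_by_age[age_x] - component_by_age[18]

def progress_from_18(image_component, target_age):
    """Progress an age-18 image component by applying the database parameter."""
    return image_component + database_parameter(target_age)

aged_40 = progress_from_18(np.array([0.98, 0.52]), 40)
```

Storing only the differences against the age-18 reference is what allows the same table to be applied to any new age-18 input image.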
5. References
1. Jinli Suo, Song-Chun Zhu, Shiguang Shan and Xilin Chen, A Compositional and Dynamic Model for Face Aging, University of California, Los Angeles, and Lotus Hill Research Institute, China.
2. Narayanan Ramanathan, Rama Chellappa and Soma Biswas, Age Progression in Human Faces: A Survey.
3. Patrick Perez, Michel Gangnet and Andrew Blake, Poisson Image Editing, Microsoft Research, UK.
4. Gilson A. Giraldi and Carlos E. Thomaz, Statistical Learning Models for Automatic Age Progression, National Laboratory for Scientific Computing, Petropolis, Rio de Janeiro, Brazil, and Department of Electrical Engineering, FEI, Sao Bernardo do Campo, Sao Paulo, Brazil.
5. I. K. Park, H. Zhang and V. Vezhnevets, Image-Based 3D Face Modeling System, EURASIP Journal on Applied Signal Processing, 2005.
6. H. Chen and S. C. Zhu, A Generative Sketch Model for Human Hair Analysis and Synthesis, PAMI, Vol. 28, No. 7, July 2006.
7. Max-Planck-Institut fur biologische Kybernetik.
9. Frederic I. Parke and Keith Waters, Computer Facial Animation, First Edition, A. K. Peters, USA, 1996.