
Software Design Document

FAPS-Face Age Progression in Sri Lanka


Supervised By:
Dr. Chathura De Silva (Int), Dr. Rapti De Silva (Int)

Group Members:

A. Anpaz (070026X), P. Kasun (070361T), S. Ajith (070014J)

R. Thirunavukkarasu (070497N)

Computer Science and Engineering, University of Moratuwa.

03/12/2010


Table of Contents

1. Introduction
   1.1 Purpose
   1.2 Project Scope
2. Architectural Representation
3. Component Architecture
   3.1 System Overview
       3.1.2 Functional Level Use Case Diagram
   3.2 Extract Facial Features from 2D image
       3.2.1 Image normalization
           3.2.1.1 Noise removal
           3.2.1.2 Extract illumination effect of an image
       3.2.2 Feature Extraction
           3.2.2.1 Region detection
           3.2.2.2 Feature position detection
           3.2.2.3 Feature contours estimation
           3.2.2.4 Hair extraction
       3.2.3 Texture Generation
       3.2.4 Flow Diagram of the feature extraction
       3.2.5 Sequence Diagram of the feature extraction
       3.2.6 Constraints
   3.3 Additional features extraction from face database
       3.3.1 Ear initialization
       3.3.2 Ear refinement
       3.3.3 Facial features for 3D model
   3.4 Creating the 3D model
       3.4.1 Developing the facial parameter set
           3.4.1.1 Creating the expression parameter set
           3.4.1.2 Creating the conformation parameter set
       3.4.2 Develop the 3D model using derived facial parameter set
           3.4.2.1 Manipulation of the facial data set into the 3D model
       3.4.3 Facial texture and shape generation using 2D image data
       3.4.4 Flow Diagram of the 3D model
       3.4.5 Sequence Diagram of the 3D model
       3.4.6 Constraints
   3.5 Apply the face age progression factors on 3D model
       3.5.1 Facial Component addition
       3.5.2 Wrinkles Addition
       3.5.3 Constraints
4. Data View
   Progression Parameter Adaptation
5. References
6. Acronyms and Abbreviations
Table of Figures
Table of Tables

Table of Figures

Figure 1: Overall System Architecture
Figure 2: Flow diagram of the system's functions
Figure 3: System's Function Use Case Diagram
Figure 4: Component Diagram of facial feature Extraction
Figure 5: Flow Diagram of the feature extraction
Figure 6: Sequence Diagram of the feature extraction
Figure 7: Parameterization of face database
Figure 8: Components of 3D model
Figure 9: Overview of model parameters
Figure 10: Flow Diagram of the 3D model
Figure 11: Sequence Diagram of the 3D model
Figure 12: Flow Diagram of Aging
Figure 13: Flow Diagram of Facial Component Addition
Figure 14: Forehead Wrinkle Addition
Figure 15: Cheek Wrinkle
Figure 16: Aging Factors Derivation

Table of Tables

Table 1: Wrinkle
Table 2: Facial Component
Table 3: Components extraction from Sample Database


1. Introduction
1.1 Purpose
This software architecture document describes the FAPS project (Face Age Progression for the Sri Lankan context), which progresses the age of a given person's image in order to produce a future image of that person. With the aim of finding an algorithm that can be applied in the Sri Lankan context, this document provides a detailed summary of the design and implementation of the FAPS project, breaking the project into domains and the domains into components, and describing how each component will be implemented.

The document first focuses on obtaining facial data from face databases and 2D images. It then addresses creating a parameterized 3D model using the face database data and merging the 2D image data into that 3D model. Finally, it discusses applying the face age progression algorithms to the created model.

This document is a key reference for all the contributors engaged in this project and the projects associated with it. It also serves as a reference for the internal mentors and for the academic staff members of the University of Moratuwa who are involved in supervising and evaluating the projects carried out under the CS-4200 module.

1.2 Project Scope

The main intention of the FAPS project is to provide an algorithm that can be applied for face age progression in the Sri Lankan context. FAPS focuses on building a general algorithm and application rather than a product for a single use case, so that it can be applied in the various fields and applications where face age progression is needed.


2. Architectural Representation

The following figure provides an overview of the system. As the figure shows, two inputs are passed to the system, and both are stored in a database after the parameterization process. Before parameterization takes place, noise removal and illumination-effect extraction are applied to each image in order to increase the accuracy of the parameterized values.

Once the database images are parameterized, the parameterized values of all images are analyzed to obtain the mean values of specific parameters over different poses; this helps to reduce parameter error. When the 3D model is constructed, these mean parameter values are applied to a sample model to build the parameterized 3D model for the Sri Lankan context. This model allows the system to create a person-specific 3D model by changing some of the parameters, and the system also runs an algorithm in the back end to make the 3D model realistic.

For age progression, the process is divided into two parts: shape variation, and texture and wrinkle variation. Each part has its own database that captures the variation of a face model with aging. One of the main benefits of using a 3D model is that the shape variation can be achieved simply by changing specific parameters of the model. The texture and wrinkle variation is applied directly on the model, and the system uses a dedicated algorithm to make these changes convincing.


(Figure: Face database → Analysis of face database → Parameterize the analyzed database → Develop the model using parameter set; Parameterized input image database → Manipulating data into the facial model → Person specific 3D model construction → Shape variation with age / Texture and wrinkles variation with age → Face age progression)

Figure 1: Overall System Architecture



3. Component Architecture
3.1 System Overview
This system takes a person's image and produces an age-progressed image of that person. During the age progression process the system goes through many steps in order to make the progressed image realistic. The following diagram shows the entire system working path.

(Figure: Image database → Analysis and parameterization of the image database → Construct the parameterized 3D model; Input image → Parameterize the input image → Construct person-specific 3D model → Apply the shape variation on the 3D model → Apply the texture and wrinkle changes → Age-progressed image)

Figure 2: Flow diagram of the system's functions

The main component of the system is the parameterized 3D face model, which provides a better way to progress the age of a face compared to 2D-based age progression. The 3D model also allows the system to increase the accuracy of the age-progressed face by considering other poses of the face.

In addition to the input image, the system takes another input, a Sri Lankan face database, that is used to construct the parameterized 3D face model. Both inputs are passed through a special process called parameterization (facial feature extraction). This is the most sensitive process in the system, because the rest of the system is developed based on those parameters.


3.1.2 Functional Level Use Case Diagram

Figure 3: System's Function Use Case Diagram



3.2 Extract Facial Features from 2D image

The main purpose of the project is to find an age progression algorithm, for the Sri Lankan context, for a particular 2D face image given as input. In the initial phase of the project we take only a frontal face image and develop a 3D model from that 2D face image. The main purpose of this step is facial feature extraction for developing the 3D model of that face. It has three parts: image normalization, feature extraction and texture generation.

(Figure: Facial feature extraction → Image normalization (noise removal; extract illumination effect), Feature extraction (region detection; feature position detection: eye position; feature contours estimation: eye contour, lip contour, nose contour, chin and cheek contour; hair extraction), Texture generation)

Figure 4: Component Diagram of facial feature Extraction


3.2.1 Image normalization

In image normalization we remove the noise and the illumination effect of an image.

3.2.1.1 Noise removal

Image noise is the random variation of brightness or color information in an image, produced by the sensor and circuitry of a scanner or digital camera. It is generally regarded as an undesirable by-product of image capture. Many noise removal algorithms exist, but we use a newer mixed-noise removal algorithm based on measuring the medium truth scale; it uses a distance ratio function to detect noise pixels and restore the image.

3.2.1.2 Extract illumination effect of an image

Illumination changes influence the appearance of a face image. The positions and distribution of the light sources around a face change the brightness distribution in the image, the locations of attached shadows, and the specular reflections, and cast shadows can generate prominent contours in facial images. The effects of the illumination are obtained using the standard Phong model, which approximately describes the diffuse and specular reflection on a surface. After noise removal and extraction of the illumination effect we get a normalized image; the rest of the steps depend on this normalized image.
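As a point of reference, the sketch below evaluates the standard Phong reflection model mentioned above for a single surface point. The reflection coefficients and the example vectors are illustrative assumptions, not values estimated by the system.

# Minimal evaluation of the Phong reflection model (illustrative coefficients).
import numpy as np

def phong_intensity(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.2, shininess=16.0):
    """Return ambient + diffuse + specular intensity for one surface point."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = max(float(np.dot(n, l)), 0.0)
    r = 2.0 * np.dot(n, l) * n - l                 # light direction reflected about the normal
    specular = max(float(np.dot(r, v)), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

# Example: a skin patch facing the camera, lit from the upper left.
print(phong_intensity(np.array([0.0, 0.0, 1.0]),
                      np.array([-1.0, 1.0, 1.0]),
                      np.array([0.0, 0.0, 1.0])))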

3.2.2 Feature Extraction

From the normalized image we extract the frontal facial features.

3.2.2.1 Region detection

Many face detection algorithms exploiting different heuristic and appearance-based strategies have been proposed. Among those, color-based face region detection has gained increasing popularity. The success of color-based face detection depends heavily on the accuracy of the skin-color model. Skin color alone is usually not enough to detect the potential face regions reliably, due to possible inaccuracies of camera color reproduction and the presence of non-face skin-colored objects in the background. Popular methods for skin-colored face region localization are based on connected components analysis and integral projection.
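A hedged sketch of this kind of localization is given below: a fixed skin range in YCrCb followed by connected-components analysis. The thresholds are illustrative assumptions; the actual skin-color model would be tuned on the face database, and integral projection is not shown.

# Color-based face region localization sketch (illustrative skin thresholds).
import cv2
import numpy as np

def largest_skin_region(image_bgr):
    """Return the bounding box (x, y, w, h) of the largest skin-colored blob."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))          # rough skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    count, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if count < 2:
        return None                                                   # no skin blob found
    best = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))            # skip the background label
    x, y, w, h, _ = stats[best]
    return int(x), int(y), int(w), int(h)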

3.2.2.2 Feature position detection


Eye position detection

Accurate eye detection is quite important for the subsequent feature extraction, because the eyes provide the baseline information about the expected locations of the other facial features. We use luminance edge analysis, because the detection of sharp changes in the red channel image provides a more stable result.
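The sketch below illustrates one way such red-channel edge analysis could locate the eye row; the search-band fractions are assumptions made for illustration, not values taken from the design.

# Illustrative red-channel edge analysis for a coarse eye-row estimate.
import cv2
import numpy as np

def estimate_eye_row(face_bgr):
    """Return the row of the face crop with the strongest horizontal edge response."""
    red = face_bgr[:, :, 2].astype(np.float32)                  # OpenCV stores channels as BGR
    grad = np.abs(cv2.Sobel(red, cv2.CV_32F, 0, 1, ksize=3))    # vertical gradient -> horizontal edges
    h = grad.shape[0]
    band = grad[int(0.2 * h):int(0.6 * h), :]                   # eyes lie in the upper part of the face
    return int(0.2 * h) + int(np.argmax(band.sum(axis=1)))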

3.2.2.3 Feature contours estimation


Eye contour detection

The eye contour model consists of an upper lid curve modeled as a cubic polynomial, a lower lid curve modeled as a quadratic polynomial, and an iris circle. The iris center and radius are estimated by the algorithm developed by Ahlberg. From this we can obtain the eyeball size, iris size and color, pupil size, reflection spots, eyebrow size, eyelid and eyelash parameters.

Lip contour detection

The lip color differs significantly from that of the skin. Iteratively refined skin and lip color models are used to discriminate the lip pixels from the surrounding skin. From this we can obtain the lower lip position and size, the corner positions and the color parameters.

Nose contour detection

The representative shape of the nose side has already been exploited, to increase robustness, by matching it to the edge and dark pixels. Since that has some difficulties, our approach utilizes the full information of the gradient vector from the edge detector. From this we can obtain the nose length and nostril width parameters, but we cannot recover the bridge width.

Chin and cheek contour detection

Deformable models have proven to be efficient tools for detecting the chin and cheek contour, and we use a robust approach that relies on a deformable model. From this we can obtain the chin shape, teeth, lip corners, cheek bone space and size, and cheek hollow parameters.
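A minimal sketch of the lid-curve representation used for the eye contour follows: a cubic fit for the upper lid and a quadratic fit for the lower lid. The sample boundary points are made up for illustration, and Ahlberg's iris estimation algorithm is not reproduced.

# Fit the upper (cubic) and lower (quadratic) eyelid curves to boundary points.
import numpy as np

def fit_lid_curves(upper_pts, lower_pts):
    """Return polynomial models of the upper and lower eyelid curves."""
    ux, uy = np.asarray(upper_pts, dtype=float).T
    lx, ly = np.asarray(lower_pts, dtype=float).T
    upper_poly = np.poly1d(np.polyfit(ux, uy, 3))     # cubic upper lid
    lower_poly = np.poly1d(np.polyfit(lx, ly, 2))     # quadratic lower lid
    return upper_poly, lower_poly

# Hypothetical boundary points (x, y) extracted from an eye region.
upper = [(0, 5.0), (4, 2.5), (8, 1.8), (12, 2.6), (16, 5.2)]
lower = [(0, 5.0), (8, 7.5), (16, 5.2)]
up, lo = fit_lid_curves(upper, lower)
print(up(8.0), lo(8.0))                               # lid heights at the eye center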

3.2.2.4 Hair extraction

Human hair is a very complex visual pattern in which hundreds of thousands of hairs are grouped into strands and wisps in diverse hair styles, so extracting the hair appearance is an important and challenging problem here. We use the Generative Sketch Model for Human Hair Analysis and Synthesis [6] for hair extraction.

3.2.3 Texture Generation

In order to combine the texture from different view angles, a public UV plane containing the texture coordinates of the model vertices is created. This is a 2D plane in which the points are matched with the vertex positions of the generic model. Such a public coordinate space also makes it easy to combine the texture from a real photo with a synthetic texture. In order to create a textured model, the photos are mapped onto this UV plane.
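The sketch below shows one way such a shared UV plane could be populated, by assigning each vertex of the generic model a (u, v) coordinate through a simple cylindrical projection. The real system matches UV points to the generic model's vertex layout; the projection and the sample vertices here are only illustrative.

# Assign (u, v) texture coordinates to 3D vertices via a cylindrical projection.
import numpy as np

def cylindrical_uv(vertices):
    """Map an Nx3 array of (x, y, z) vertices to Nx2 UV coordinates in [0, 1]."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = np.arctan2(x, z) / (2.0 * np.pi) + 0.5               # angle around the head
    v = (y - y.min()) / max(float(y.max() - y.min()), 1e-9)  # normalized height
    return np.stack([u, v], axis=1)

# Three hypothetical head vertices: nose tip, right cheek, chin.
verts = np.array([[0.0, 0.0, 1.0], [0.7, 0.1, 0.7], [0.0, -0.8, 0.9]])
print(cylindrical_uv(verts))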

3.2.4 Flow Diagram of the feature extraction


(Figure: Input face image → Noise removal → Extract the illumination effect → Face region detection → Eye position detection → Eye contour → Lip contour → Nose contour → Chin and cheek contour → Hair extraction → Texture generation → Extracted features)

Figure 5: Flow Diagram of the feature extraction



3.2.5 Sequence Diagram of the feature extraction

Figure 6: Sequence Diagram of the feature extraction

3.2.6 Constraints

Using only the frontal image, it is hard to obtain information about the ear boundary, the profile of the face and the side texture, so we make an assumption when generating the texture of the profile shape: we take the texture to be the same as that of the frontal face. The ear size and ear parameters are used in the morphable model.



3.3 Additional features extraction from face database

The face database has frontal face images, profile face images and face images in other poses.

In particular, from a profile image we can extract the nose tip, nose bridge top, under-nose point, chin point, and neck top point. For the depth information we use both the profile image and the different-pose images; from those we can get the depth of the 3D face, the nose depth, and the ear shape with its respective sizes.

Another important feature of the face is the ear boundary. Its detection has the following steps: ear initialization, to match the template with the ear in the image and translate it to an initial position, and ear refinement, to deform the template in order to match the accurate ear boundary.

3.3.1 Ear initialization

We use a fifth-degree polynomial as the ear template. The skin-color boundary is used for ear initialization.

3.3.2 Ear refinement

Based on the initialized ear template and the matched segment on the ear boundary image, a contour-following method is developed to deform the template to match the whole ear boundary. The template with line segments is approximated using an adaptive polyline fitting algorithm.

3.3.3 Facial features for 3D model

After getting all the features needed for 3D model creation, we create a 2D feature database. This database is modeled as follows: the feature values (eye, ear, nose, lip, chin, cheek and forehead) are in different columns, and the different pose angles (frontal (0), 45 and profile (90)) are in different rows. Using this database we analyze the mean value of each feature across the different pose angles. These mean values are applied to create the 3D model.
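A small sketch of that layout and the mean-value computation is shown below using pandas; the column names and numbers are hypothetical placeholders, not measurements from the actual database.

# 2D feature database sketch: pose angles as rows, feature values as columns.
import pandas as pd

feature_db = pd.DataFrame(
    {
        "eye_width":   [31.0, 29.5, 27.8],
        "nose_length": [48.2, 48.9, 50.1],
        "lip_width":   [52.4, 50.0, 46.3],
        "ear_height":  [58.0, 59.2, 61.5],
    },
    index=pd.Index([0, 45, 90], name="pose_angle_deg"),   # frontal, 45, profile
)

# Mean of each feature across poses; these means feed the parameterized 3D model.
mean_parameters = feature_db.mean(axis=0)
print(mean_parameters)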


(Figure: 2D face database → Frontal facial features + Additional facial features → 2D feature database)

Figure 7: Parameterization of face database



3.4 Creating the 3D model

Creating the parameterized 3D model is one of the most important parts of this project, because this is where the face age progression algorithms will be applied. An ideal parameterized 3D model would allow fitting any possible face and applying the face progression algorithms to it. The basic parts of creating the 3D model are preparing the face details extracted from both the images and the face database so that they are applicable to the 3D model, and applying those data to the 3D model [8]. The modifications have to be made on the facial shape and texture.
(Figure: Developing the 3D model → Developing the facial parameter set from the face database (expression parameters; conformation parameters); Develop the model using the parameter set (manipulating data into the facial model); Facial texture and shape generation using 2D image data)

Figure 8: Components of 3D model

3.4.1 Developing the facial parameter set

Here a hybrid approach is used to develop the parameter set: the parameters are derived from the underlying anatomy of the human face and from observation of the face properties that make individuals unique. The facial parameters are basically divided into two parts: expression parameters, which control observable facial expressions, and conformation parameters, which control the conformation or shape of individual faces [9].

3.4.1.1 Creating the expression parameter set

These parameters include the eyebrow arch, eyebrow separation, jaw rotation, mouth width, upper lip position and mouth corner position.

3.4.1.2 Creating the conformation parameter set


These parameters include the jaw width, forehead shape, nose length, nose width, chin shape, cheek shape, neck shape, eye size and separation, face region proportions and overall face proportions.
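For illustration, the two parameter families could be grouped as below; the field names mirror the lists above, while the normalized default values are assumptions.

# Illustrative grouping of the expression and conformation parameter sets.
from dataclasses import dataclass, field

@dataclass
class ExpressionParameters:
    eyebrow_arch: float = 0.0          # normalized 0..1 values are assumed
    eyebrow_separation: float = 0.5
    jaw_rotation: float = 0.0
    mouth_width: float = 0.5
    upper_lip_position: float = 0.5
    mouth_corner_position: float = 0.5

@dataclass
class ConformationParameters:
    jaw_width: float = 0.5
    forehead_shape: float = 0.5
    nose_length: float = 0.5
    nose_width: float = 0.5
    chin_shape: float = 0.5
    cheek_shape: float = 0.5
    neck_shape: float = 0.5
    eye_size: float = 0.5
    eye_separation: float = 0.5
    face_region_proportions: float = 0.5
    overall_face_proportions: float = 0.5

@dataclass
class FaceParameterSet:
    expression: ExpressionParameters = field(default_factory=ExpressionParameters)
    conformation: ConformationParameters = field(default_factory=ConformationParameters)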

3.4.2 Develop the 3D model using derived facial parameter set

In developing the 3D face model, the basic idea is to build a model that represents the basic observations of the face and its underlying structure. The basic 3D face model is constructed using 3D polygonal surfaces. This model is then manipulated using the parameters that control procedural shaping, interpolation, rotation, and scaling of the various features described by the face data set. The face skin itself is constructed as a polygonal surface.

3.4.2.1 Manipulation of the facial data set into the 3D model

The static data extracted from the facial images need to be transformed into a dynamic, parametrically controlled 3D model. Techniques such as interpolation, rotation, translation, and scaling of the various features are used to transform the static data into the dynamic 3D face model. As we are using a polygonal skin surface, it allows explicit, direct control over the skin vertex positions. This direct control over the skin vertices is very useful for controlling the shape changes of the skin as face age progression is applied to the 3D model.

Interpolation techniques can be used to map most regions of the data set onto a 3D model that changes its shape [9]. The forehead, mouth, neck and cheekbone areas are interpolated independently. In these areas, extreme positions are defined in the 3D model, and the vertex positions are interpolated between these extreme shapes according to the parameter values.

Procedural construction techniques are used to construct the eyeball, iris, pupil size, eye position and eye orientation.

Scaling techniques are used to control the relative size and placement of facial features, such as the sizes of the nose, mouth, jaw and chin [9].

Position offset techniques are used to control the length of the nose, the corners of the mouth, the rising of the upper lip, etc. [9]. These effects are blended into the surrounding regions affecting those features.
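The sketch below illustrates the interpolation idea just described: the vertex positions of a face region are blended between two defined extreme shapes according to a single parameter. The vertex arrays are hypothetical; the real model uses the extreme shapes defined for each region.

# Blend a region's vertices between two extreme shapes with a parameter in [0, 1].
import numpy as np

def interpolate_region(extreme_a, extreme_b, t):
    """Linearly interpolate Nx3 vertex positions between two extreme shapes."""
    t = float(np.clip(t, 0.0, 1.0))
    return (1.0 - t) * extreme_a + t * extreme_b

# Two hypothetical extreme forehead shapes (three vertices each).
forehead_flat    = np.array([[0.0, 1.0, 0.2], [0.1, 1.1, 0.2], [0.2, 1.0, 0.2]])
forehead_rounded = np.array([[0.0, 1.0, 0.4], [0.1, 1.2, 0.5], [0.2, 1.0, 0.4]])
print(interpolate_region(forehead_flat, forehead_rounded, t=0.3))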

The following diagram gives an overview of the model structure and the parameters associated with each region.

HEAD: skin color, aspect ratio
NECK: space, width
FACE: scaling of the face
FOREHEAD: forehead scale, size, shape
EYES: eyeball size, iris size and color, pupil size, reflection
NOSE: nose length, bridge width, nostril width
LIPS: upper and lower lip positions and size, corner positions
JAW: chin shape, teeth, lip corners
CHEEK: cheek bone space and size, cheek hollow

Figure 9: Overview of model parameters

3.4.3 Facial texture and shape generation using 2D image data

As the 3D facial model is developed using the data from the face database, the model then needs to be modified into the desired 3D model that looks the same as the person appears in the 2D image. The 2D image contains only frontal face data. These data are merged into the created 3D model using the same techniques used for modeling the 3D model. For the purpose of capturing the unique features that belong to the 2D image, the data are applied locally to the 3D model; that is, the eyes, nose, mouth, forehead, chin, cheeks, etc. are adjusted locally [7]. Procedural construction is used to model the eyes; interpolation is used to model the forehead, mouth and cheekbones; rotation is used to open the mouth by rotating the lower portion of the jaw; and scaling techniques are used to obtain the relative sizes and placement of the features.

3.4.4 Flow Diagram of the 3D model


(Figure: Face features from the face database + Face features from the 2D image → Face parameter creation → Classify expression parameters / Classify conformation parameters → Create the 3D model using the parameters → 3D model adjustment → Desired 3D model)

Figure 10: Flow Diagram of the 3D model


3.4.5 Sequence Diagram of the 3D model

Figure 11: Sequence Diagram of the 3D model

3.4.6 Constraints

Only frontal features are available from the 2D image, so there is a lack of data from different poses when creating the particular 3D model of that image; modeling the ear therefore becomes difficult. Obtaining an ideal 3D model from 2D images depends on the quality of the data that we extract from the images.



3.5 Apply the face age progression factors on 3D model

Applying aging factors on the 3D morphable model can be divided into two categories: shape variation and texture variation. Shape variation is handled on the 3D morphable model using facial components, while texture variation is handled using wrinkles. From the person-specific 3D model, the gender detail for the aging model is obtained in order to retrieve the relevant information from the database.

(Figure: 3D specific model → Facial component addition → Wrinkles addition → Poisson image editing → Aging 3D model)

Figure 12: Flow Diagram of Aging

Different facial components vary in different ways during face aging. At a particular age level, the generated 3D face model is modified by adding the facial components one by one with their relevant changes. After wrinkle addition is performed on the 3D model, the result is a synthesized aging model. To make the model more realistic in nature, a Poisson image editing technique is finally applied to the face model. Since the differences between two ages are identified from the database, the constraint of texture variation from person to person is easily avoided.


3.5.1 Facial Component addition

Figure 13: Flow Diagram of Facial Component Addition

This component is very crucial: all the shape variation of the aging image ultimately depends on it, and the effectiveness of the algorithms developed for it determines how realistic the aging image looks. The estimator and the component filter are the two most important algorithms that run on this component, and each facial component has its own estimator and component filter algorithm. These algorithms may come from existing age progression methods, such as the statistical approaches proposed in [1], [2] and computational models for age progression [3], with relevant alterations for the Sri Lankan context. With the adaptation of aging parameters from the available images in the Sri Lankan database, the proposed approaches will then be analyzed.

3.5.2 Wrinkles Addition

This is the area that covers the texture variation of the aging model. As specified in the database, wrinkles under aging are categorized by region, such as forehead and cheek. The level and amount of wrinkle addition, together with the accuracy of the component addition, shape and color, bring the aging model to a reasonably realistic age transformation. There should be an effective algorithm that converts the wrinkle parameters from the database into the 3D aging model. The forehead, cheek and chin detection methodology can be found in the 3D model generation.

Figure 14: Forehead Wrinkle Addition

Figure 15: Cheek Wrinkle

Cheek aging changes will be obtained from the database as the differentiation between two ages. The resulting wrinkle curve will be integrated with the 3D model by an effective coordinate inference algorithm. After the integration, Poisson image editing [3] techniques are applied to the aging model to obtain seamless synthesis results. After the aging process at all three levels, the aging model brings the integrations together to generate the final result; the inclusive face model is an additive model.
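A hedged sketch of the final Poisson-blending step is given below using OpenCV's seamlessClone, which implements Poisson image editing [3]. The file names, patch placement and full-rectangle mask are illustrative assumptions; the actual system would derive the mask and position from the wrinkle curves and the coordinate inference described above.

# Blend a wrinkle patch into the face texture with Poisson image editing.
import cv2
import numpy as np

def blend_wrinkle_patch(face_texture_path, wrinkle_patch_path, center_xy):
    face = cv2.imread(face_texture_path)
    patch = cv2.imread(wrinkle_patch_path)
    if face is None or patch is None:
        raise FileNotFoundError("texture or patch image missing")
    # Full-rectangle mask for simplicity; a tighter mask could follow the wrinkle curve.
    mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)
    return cv2.seamlessClone(patch, face, mask, center_xy, cv2.NORMAL_CLONE)

if __name__ == "__main__":
    result = blend_wrinkle_patch("aged_texture.png", "forehead_wrinkle.png", (256, 120))
    cv2.imwrite("aged_texture_blended.png", result)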


3.5.3 Constraints

- Obtaining images from Sri Lanka that belong to the same person at different age levels, from 18 up to 60, is difficult.
- Different components have different rates of age progression; these rates should be obtained by proper analysis of image sequences.
- Applying changes to the 3D model requires handling the shape variation effectively.
- The adaptation of the algorithm used to add components and wrinkles should be effective enough to generate a realistic aging model.
- The above additive model mostly requires appropriate filters to be applied to obtain realistic changes.
- Color variation in the texture of the image with age progression needs to be verified after the adaptation of the database.
- The statistical analysis of the database may provide further prominent aging factors, and may give evidence for eliminating some of the aging factors already considered when they do not significantly influence aging progression in Sri Lanka.


4. Data View
Progression Parameter Adaptation
Since age progression is the main research component of our system, it is adapted using the available images of different persons at different ages, obtained from the Registration of Persons Department. To overcome the difficulty of collecting photos of the same person at different age groups, our model decomposes the face into parts and learns the aging pattern for each part from similar images, where the parts can be cropped from the faces of different persons. Furthermore, each database entry at a particular age holds the parameter that makes the person at that age level different from age 16, so the entries stored in the database represent the amount of variability of a component. When more than one image is available at the same age, the database entry is the average of all the variability. Facial components and wrinkles are the two major pieces of information extracted from the image.

Manual effort will be used to derive the facial components and wrinkles: images of the same person at different age levels will be examined to acquire the parameters of the aging factors. Consultation with medical sources and the previous research in [1] has given the clue as to how the aging factors should be categorized.

(Figure: Sample database → Wrinkles extraction + Facial component extraction)

Figure 16: Aging Factors Derivation



Representation of Wrinkle Table and Facial Component Table

The following tables will be stored separately for males and females.

Table 1: Wrinkle

Age     Forehead    Cheek
20
22
24
...

Table 2: Facial Component

Age     Eye     Nose    Mouth   Eyebrow
20
22
24
...

Table 3: Components extraction from Sample Database

Wrinkle                         Facial Component
Wrinkles addition under eye
Wrinkles in cheek

The above components are obtained from the sample database by cropping the appropriate portion of the image. Taking the age-18 components as the reference, the differentiation of the components at the other age levels is calculated by Eq. 1. If an image is progressed from age 18, then the database parameters can be applied directly to that image.

Database-Parameter = Component-age-X - Component-age-18, where X ranges from 20 to 60.   (Eq. 1)
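A small sketch of how Eq. 1 would be stored and applied is shown below; the numeric values are hypothetical component measurements (for example, a wrinkle-depth score), not data from the sample database.

# Eq. 1: store the age difference of a component and apply it during progression.
def database_parameter(component_age_x, component_age_18):
    """Database-Parameter = Component-age-X - Component-age-18 (Eq. 1)."""
    return component_age_x - component_age_18

def progress_component(input_component_age_18, db_parameter):
    """Apply the stored difference to progress an age-18 component to age X."""
    return input_component_age_18 + db_parameter

param_at_40 = database_parameter(component_age_x=7.2, component_age_18=1.5)   # from the database
print(progress_component(input_component_age_18=1.8, db_parameter=param_at_40))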


5. References

1. Jinli Suo, Song-Chun Zhu, Shiguang Shan and Xilin Chen, A Compositional and Dynamic Model for Face Aging, University of California, Los Angeles, and Lotus Hill Research Institute, China.

2. Narayanan Ramanathan, Rama Chellappa and Soma Biswas, Age Progression in Human Faces: A Survey.

3. Patrick Perez, Michel Gangnet and Andrew Blake, Poisson Image Editing, Microsoft Research, UK.

4. Gilson A. Giraldi and Carlos E. Thomaz, Statistical Learning Models for Automatic Age Progression, National Laboratory for Scientific Computing, Petropolis, Rio de Janeiro, Brazil, and Department of Electrical Engineering, FEI, Sao Bernardo do Campo, Sao Paulo, Brazil.

5. I. K. Park, H. Zhang and V. Vezhnevets, Image-Based 3D Face Modeling System, EURASIP Journal on Applied Signal Processing, 2005.

6. H. Chen and S. C. Zhu, A Generative Sketch Model for Human Hair Analysis and Synthesis, PAMI, Vol. 28, No. 7, July 2006.

7. I. K. Park, H. Zhang and V. Vezhnevets, Image-Based 3D Face Modeling System, EURASIP Journal on Applied Signal Processing, 2005.

8. V. Blanz and T. Vetter, A Morphable Model for the Synthesis of 3D Faces, Max-Planck-Institut fur biologische Kybernetik.

9. Frederic I. Parke and Keith Waters, Computer Facial Animation, First Edition, A. K. Peters, USA, 1996.


6. Acronyms and Abbreviations


FAPS - Face Age Progression System
FOSS - Free and Open Source Software
ICAO - International Civil Aviation Organization
OpenCV - Open Computer Vision Library
3D model - 3-dimensional morphable model

