
A Database Approach to Human Motion Captured Data

Ali Keyvani 1,2, Mikael Ericsson, Henrik Johansson
1 University West, Department of Engineering Science, Trollhättan, Sweden
2 Innovatum AB, Production Technology Centre, Trollhättan, Sweden
Ali.Keyvani@hv.se

ABSTRACT
A unified database platform capable of storing both motion-captured data and information about those motions (metadata) is described. The platform conforms to the conventional three-tier database concept and stores large amounts of motion-captured data to be used by different applications for searching, comparing, analysing and updating existing motions. This procedure is intended to support choosing a suitable, realistic human motion in production-line simulation. The platform also supports and handles different motion formats, various skeleton types and distinct body regions in a uniform data model. An annotation system is introduced to mark the captured data not only in the time domain (temporal) but also on different body regions (spatial). To demonstrate the platform, sample tests were performed to prove its functionality. Several motion-captured recordings were uploaded to the database, while MATLAB was used to access the data, ergonomically analyse the motions based on the Ovako Working Posture Analysis System (OWAS) standard, and add the results back to the database by automatic tagging of the postures.

Keywords: Motion Capture Database, Virtual Production Systems, Digital Human Modelling, Computerized Ergonomic Analysis
1. INTRODUCTION
Simulation of work environments has for many years been limited to the product and the production of the product. Integrating human models into the virtual tools makes it easier to simulate the whole workflow of a production process. The implementation of Digital Human Modelling tools (DHM tools) has greatly affected production simulation during the last decade. DHM tools utilise human models, in 2D or 3D, in a virtual environment. These human models are often referred to as manikins. Introducing manikins into the production simulation enables the production engineer to estimate how humans interact and function within a work environment, such as an assembler at an assembly line. Production engineers often use DHM tools to simulate different work postures for ergonomic analysis, or to investigate reach and line of sight. An advantage of DHM tools is the ability to make ergonomic analyses of work environments to prevent musculoskeletal disorders and increase quality and productivity [1]. Ergonomics of work environments is not the focus of this article; however, the database structure described in this paper can be used to support such analyses. The implementation of motion capture (Mocap) techniques has greatly affected the usage of DHM tools during the last decade. This paper describes how database techniques can be used to facilitate the usage of Mocap data within DHM tools, mainly for storage and retrieval. The database techniques provide a framework for handling motion data, not only for DHM tools but for all tools that need motion data. The outputs from Mocap systems are generally stored in datasets that include the motion data and information about the captured data. Section 4 presents a general workflow for how such datasets are used to populate the database. In this approach, the motions become independent of the subject. Annotation methods are also introduced to describe information about the motions.
The experiments in section 5 are used to verify whether the database can handle the information according to the desired requirements.

2. BACKGROUND
The need to use manikins and DHM tools in production processes often comes from big companies, such as car manufacturers. As a result, many of today's CAD vendors have integrated DHM tools into their software, such as Jack from Siemens PLM Software [2]. One of the first applications of DHM was to analyse whether it was possible to reach a specific location or control [3]. Today, DHM models are commonly used during the design of vehicles. Common industrial applications are modelling the interaction between the driver, the passengers and the interior of the car [4, 5, 6]. An example of an industrial task is to analyse whether the DHM can reach a specified hand position in an assembly operation in a digital simulation of the factory [7, 8]. A large sector of DHM usage outside industry is entertainment, such as games and movies [9]. The simulation of humans in production is often done using mouse and keyboard. A natural posture of the manikin is often judged by the production engineer via visualisation and assumptions. However, manual manipulation of the manikins is both time consuming and not always reliable. To help the end user generate postures, several techniques have been introduced to predict realistic postures. Some of them are based on pure mathematical methods, normally

referred to as inverse kinematic methods [10]. In addition, optimization tools have been used to minimize joint stresses and predict a 2D posture on that basis [11, 12]. Furthermore, other methods try to use real human motions as a reference to predict postures. A common need in all techniques that use real human motions is structured databases of recorded human motions. These databases shall include the changes in body joints over time and shall contain information about the characteristics of the motions (annotations). Mocap is a common way to collect real human motion data. Several commercial Mocap systems are available today and can be categorized into three main groups: optical, magnetic and mechanical systems. Other methods exist as well, such as ultrasonic and inertial systems [13]. As a result, considerable work has been done to introduce efficient techniques for collecting and manipulating human Mocap data. The work has been divided into two main approaches: database techniques [14] and annotation of motion sequences [15]. To generate a posture from real human data, different methods have been suggested. Examples are: using statistical models [16], treating recorded motions as a manikin memory [17], warping an existing motion by adjusting the motion graphs [18], synthesizing recorded data with the help of annotations [19], and mixing recorded motions with inverse kinematic techniques [20, 21].

3. PROBLEM DESCRIPTION
It is time consuming to acquire and record new motion data for each specific task. Another issue when recording new Mocap data is the need for advanced equipment such as cameras, sensors and computers with advanced software. Mocap data are recorded using a human subject performing a specified task. The subject usually wears a special suit with markers, and the position of each marker is tracked by the Mocap system. Recorded Mocap data from different sources can normally not be reused, for a number of reasons.
One issue is that different file formats are used to save the Mocap data, because different types of systems are used. Another issue when reusing Mocap data is the unique nature of each motion, which is sensitive to changes in the skeleton configuration. Depending on the application, different skeleton configurations are used. Variations in the skeleton configuration include, for example, the number of segments, the joint configuration and the lengths of individual segments. Figure 1 shows an example of two skeletons with different numbers of segments. A third issue is the variation in naming conventions due to the lack of any standard. Each research group can have its own naming conventions, both for segment parts and for tagging the performed action. For example,

trunk/chest may both be used when naming segments, and walk/pace/stride when naming the action.
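One pragmatic way to absorb such naming differences at import time is a synonym table that maps each lab-specific term onto one canonical tag. The sketch below is a minimal illustration with made-up vocabulary; it is not the paper's actual naming convention.

```python
# Illustrative normalization tables; the synonym sets are hypothetical
# examples, not the vocabulary used by the described platform.
SEGMENT_SYNONYMS = {"trunk": "torso", "chest": "torso", "torso": "torso"}
ACTION_SYNONYMS = {"walk": "walk", "pace": "walk", "stride": "walk"}

def normalize(name: str, table: dict) -> str:
    """Map a vendor- or lab-specific term onto one canonical tag."""
    key = name.lower().strip()
    return table.get(key, key)

print(normalize("Chest", SEGMENT_SYNONYMS))  # torso
print(normalize("pace", ACTION_SYNONYMS))    # walk
```

Unknown terms fall through unchanged, so new vocabularies can be added incrementally without blocking imports.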

Figure 1. Two different types of human skeleton, where lines and dots represent the segments and the joints

Generally, Mocap data are collected in datasets, which are usually folders named with a term describing the action and populated with Mocap files. One problem with large datasets of motions is the limited possibility to search within Mocap files. One way to facilitate search is to integrate the Mocap data into a database. A database is a system that provides means to manage, store, search, retrieve and process large amounts of structured data [22]. Such a system should also allow the user to combine several existing motions and store the result as a new entity. In this article, a unified database platform is described that is capable of storing both Mocap data and information about those motions (metadata). The platform conforms to the conventional three-tier (data, query and application) database concept. It stores large amounts of Mocap data that can be used by different applications for searching, comparing, analysing and updating existing motions. A new annotation system is proposed to mark the captured data. Compared to existing methods [1], the proposed technique covers annotations not only in the time domain (temporal) but also on different body regions (spatial). The advantage of this annotation method is that it makes the search functionality more specific.

4. DATABASE ARCHITECTURE
The main concept of the proposed platform is to provide a generic platform that can manipulate Mocap data in an organized way. It also provides efficient ways for other applications, especially industrial virtual production/ergonomic tools, to access the data, analyse them and feed the results back to the platform. Later on, these data will be a source for DHM tools to find and generate a realistic and valid motion. To this end, a standard relational database approach with a three-tier architecture was implemented, see Figure 2.
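The two-domain (temporal plus spatial) tagging idea introduced above can be sketched as a small data object. The field names and the example values below are illustrative assumptions, not the schema actually used by the platform.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A tag bounded both temporally and spatially (hypothetical shape)."""
    tag: str            # e.g. "walk_backward" or an OWAS code
    motion_id: int      # key of the annotated motion record
    frame_range: tuple  # (first_frame, last_frame), inclusive
    body_zone: str      # e.g. "Right_Hand", "Full_Body"

def covers(a: Annotation, frame: int, zone: str) -> bool:
    """True when the annotation applies to this frame AND this body zone."""
    lo, hi = a.frame_range
    return lo <= frame <= hi and a.body_zone == zone

a = Annotation("walk_backward", 7, (1009, 1083), "Lower_Body")
print(covers(a, 1030, "Lower_Body"))  # True
print(covers(a, 1030, "Right_Hand"))  # False
```

A query that filters on both dimensions at once is what makes a search like "hands below shoulder between frames 450 and 550" expressible.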
The main features of relational database systems, which make them a suitable solution in this application, are:
- easy and fast to implement
- expandable
- accessible from external applications
- able to accommodate different data formats (text, numbers, etc.)

The inner layer, called the data layer, keeps the data within tables. The middle layer, called the query layer, is the link between the inner and outer layers. It handles the queries generated from the outer layer and communicates with the data layer. The query layer also ensures that the data are consistent and well defined by means of relationships, attributes and rules. In fact, an accurate presentation of this layer, called the data schema, is one of the contributions of this paper. The outer layer, called the application layer, is the interface communicating with the user. It can be any application that needs to communicate with the data layer, in both directions (publish and update, or search and retrieve), as long as the application talks to the query layer in the standard language.
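As a rough illustration of how an application-layer tool talks plain SQL to the query layer, the sketch below uses Python's built-in sqlite3 module with an in-memory table. The table and column names are hypothetical stand-ins, not the platform's real schema (which was implemented in Microsoft Access and accessed from MATLAB).

```python
import sqlite3

# In-memory stand-in for the data layer; table and column names are
# illustrative, not the paper's actual schema.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE basic_motion (
    motion_id    INTEGER PRIMARY KEY,
    total_frames INTEGER NOT NULL,
    frame_rate   REAL,
    description  TEXT)""")
con.execute("INSERT INTO basic_motion VALUES (1, 2500, 120.0, 'walk forward')")
con.execute("INSERT INTO basic_motion VALUES (2, 150, 120.0, 'seated reach')")

# The application layer only speaks SQL to the query layer:
rows = con.execute(
    "SELECT motion_id, description FROM basic_motion WHERE total_frames > ?",
    (1000,)).fetchall()
print(rows)  # [(1, 'walk forward')]
```

Because only SQL crosses the layer boundary, the same query works whether the client is a visualizer, an analysis tool or a motion synthesizer.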

Figure 2. Data, query and application layers in a three-tier architecture.

One benefit of such an architecture is that the application layer can access the data regardless of the application type. This means that many applications can communicate with the data core using a standard query builder (SQL in this case) without worrying about how the data are handled within the system. Furthermore, different functions such as visualization, analysis, updates and data combination can be performed by different applications simultaneously. Preparations are needed to populate the database with motion files. This means that, once, each Mocap file shall be interpreted and parsed into the database according to a specific workflow. To accomplish this, two separate procedures are discussed in the rest of the paper: data storage and data retrieval. However, before going into these details, the different data types to be handled by the database are presented in the next subsection.

4.1 Data Classes/Objects Definition
In this section, the implementation of the data schema is presented. The target of the schema is to satisfy a number of key requirements for a Mocap database to be efficiently utilized. These key requirements are:
- support of skeleton variations
- support of customized body segmentation (body zones) and partial motion description (upper body, hands, etc.)
- support of a two-domain (spatial and temporal) annotation strategy

The data schema is based on several data objects (D.O.s), described as follows:

Marker D.O.: Markers (in optical Mocap) are reflective objects attached to the subject's (actor's) body and are the main targets for the Mocap device to track accurately. A Marker D.O. is therefore a piece of data containing the marker name and its position in space at a specific time (frame number). Since these data can be considered raw data and cannot be used directly, the data schema does not store and index them in the database tables. However, they can optionally be included in the motion data as a file attachment for further usage. To define a motion from the Mocap output, one needs to map the marker data to the joint centroids of the subject's body. This means that, first, there should be a skeleton (hierarchy) defining how the body segments are connected to each other; second, there should be a table defining the lengths of these segments for each person performing the motion (subject). Finally, the relative movements of these segments shall be defined by means of changes in the joint values, called rotations. Accordingly, three different D.O.s are needed in the schema to precisely define a pure motion independent of the motion subject:

Skeleton D.O.: As mentioned, the hierarchy of the body segments shall be defined for a specific motion in the Skeleton D.O. Notice that, depending on the application, different skeletons may be defined. This type of D.O. defines a hierarchy tree of parent-child segments connected to each other.

Actors D.O.: Assuming an identical skeleton hierarchy, two different subjects can be distinguished by assigning their own specific segment lengths to the same skeleton. An Actors D.O. contains these data in addition to other general data about the motion subject, such as gender, weight and height.

Joint Value D.O.: Having a skeleton and a subject's segment-length data, a motion can be sufficiently defined by the relative rotational values in each joint over time. Generally, three degrees of freedom are needed to define a generic rotation, and therefore Joint Value D.O.s are pieces of data containing the joint name and three rotation values at a specific time.

BodyZone D.O.: In addition to the above, motions are not necessarily performed, and/or needed to be retrieved, on the full body. Therefore, there should be a way to define a motion partially, based on different body zones. The BodyZone D.O. defines how a number of joints can be grouped to form a zone. Moreover, it is possible to form a new body zone by combining existing body zones (Both_Hands = Right_Hand + Left_Hand).

Annotation D.O.: Despite the many existing annotation software packages, the proposed model is able to annotate motions both temporally (frame range) and spatially (body zones). An Annotation D.O. therefore contains information about the type of annotation, the tag name, the target motion object, the frame range and the applicable body zone.

4.2 Data Storage Workflow
To make the Mocap data generic and reusable, several actions need to be performed on the data while they are being populated into the database. Figure 3 shows the flowchart of these actions. When receiving a Mocap file, first a new object called the Basic_Motion D.O. is created. The Basic_Motion object is the core object in the database, placed in a table called the Basic_Motion table. This table consists of mandatory and optional attribute fields, which primarily describe the motion, and of relational fields, which connect the Basic_Motion object to other tables. Examples of mandatory attribute fields are the unique motion ID (primary key), the number of total frames and the data format. Examples of optional fields are the frame rate, the primary motion description, the date and the number of cameras.

Examples of mandatory relational fields are the relation key to the Skeleton_Type table and the relation key to the Body_Zone table. An example of an optional relation is the relation key to the Actors table, which shows who was the primary subject of the motion. Notice that the schema assumes that motions can be considered independent of the subject and can be applied to a new subject by changing the reference key in the segment-data table. As a result, this is an optional, not a mandatory, relation when defining a Basic_Motion object. In the next step, the skeleton data, subject data (segment lengths) and joint data are separated, analysed and stored in the database. Most current commercial Mocap file formats (.htr by Motion Analysis, .asf/.amc by Acclaim) include these data. The .htr format keeps the hierarchy, segment lengths and joint values in one file, while .asf/.amc keeps the skeleton information (.asf) and joint data (.amc) separately. As mentioned before, there can be different skeleton definitions, and a motion can be performed by many subjects. When a new motion is received, the skeleton hierarchy is checked to see whether it already exists in the database. In the case of a new skeleton, the database is populated by adding the skeleton and mapping the previously defined body zones to it. For example, if a body zone called Right_Hand was defined for the previous skeletons, the definition shall be updated by assigning which joints from the new skeleton are also

Figure 3. Data Storage Workflow

members of this body zone. Next, the Basic_Motion D.O. is updated with the correct skeleton type and the assigned body zone (notice that not all motions are performed with the full body). A similar scenario is executed for a new subject definition. Finally, the joint values are stored in the tables. For each joint, three values corresponding to the Euler-angle rotations are stored. In case the file format is not based on Euler angles, a conversion is performed on the data to find the equivalent Euler angles and store them in the joint-data tables.

4.3 Annotating
The main goal of storing the motions in database form is to provide access for different applications to analyse the motion data efficiently. The proposed model is able to assign a tag not only temporally (frame range) but also spatially (body zone). This means that the tagging system provides a 2D space for describing motion pieces. This approach helps user applications to search for a part of the body and a part of the motion for a specific request.

4.4 Search and Retrieve
The proposed data schema provides enough flexibility to generate a range of both simple and complex queries. These queries are sent from the user applications to the data schema for different purposes. In a simple scenario, a query retrieves a complete motion, or part of one, for visualization purposes. Analysis tools can issue queries to receive part of a motion, analyse it, and add the results back to the data schema in the form of annotations. Motion synthesizers can generate queries to search for a specific (previously annotated) action that most closely matches a certain scenario.

5. EXPERIMENTS
A number of experiments were set up to validate that the requirements in the previous sections could be fulfilled with the proposed platform.

5.1 Data Entry
Sample motions were generated using an in-house Mocap laboratory. The laboratory consists of 10 Hawk cameras and Cortex 2.0 software from Motion Analysis.
The generated motions were all full-body motions of different lengths (150 to 2500 frames), and the generated output was in .htr format. Two types of skeleton definitions (27 and 15 segments) were used, and four different persons performed the motions. Moreover, eight different body zones were assigned to the skeleton definitions. The data schema was implemented in Microsoft Access, and MATLAB was used as a user application, connecting to the database via an SQL toolbox.

5.2 Motion Analysis and Data Update
Since the focus of this work is to reuse real human motions in virtual production systems, ergonomic

analysis is one of the core interests. Among other possible standards, the Ovako Working Posture Analysis System (OWAS) [23] is used here as an example of an ergonomic tool to analyse the motion. MATLAB is used to evaluate and annotate postures frame by frame from an ergonomics point of view based on the OWAS standard. OWAS concerns four categories: back, upper limbs, lower limbs and carrying load. Based on the joint values in each frame, the positions of the body segments are determined. The result is a 4-digit code, each digit corresponding to one category (Figure 4). The final combination decides the ergonomic condition of a posture by assigning Green, Yellow, Orange or Red colour codes. These results are written back into the data schema, so that other applications can use them when searching for an ergonomically accepted posture.
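The frame-by-frame OWAS pass can be sketched as follows. Note that the mapping from a 4-digit code to a colour class is a deliberately simplified stub: the real OWAS method uses a full lookup table over all back/arms/legs/load combinations, which is not reproduced here.

```python
# Sketch of composing a 4-digit OWAS code and mapping it to a colour class.
# The classifier below is a stub, NOT the real OWAS action-category table.

def owas_code(back: int, arms: int, legs: int, load: int) -> str:
    """Pack the four OWAS category values into the 4-digit posture code."""
    assert 1 <= back <= 4 and 1 <= arms <= 3 and 1 <= legs <= 7 and 1 <= load <= 3
    return f"{back}{arms}{legs}{load}"

COLOURS = {1: "Green", 2: "Yellow", 3: "Orange", 4: "Red"}

def action_category(code: str) -> str:
    """Stub classifier: pretend the back digit alone drives the category."""
    return COLOURS[int(code[0])]

code = owas_code(3, 1, 2, 1)  # the posture code tagged in Figure 4
print(code, action_category(code))  # 3121 Orange
```

In the platform the resulting code and colour would be written back as annotations on the corresponding frame range and body zone.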

Figure 4. Frame samples presenting postures tagged with OWAS code 3121 (twisted standing on both feet with hands below shoulder)

5.3 Annotation Report
As mentioned, the OWAS analysis results were fed into the data schema by means of annotations. In addition, the performed motions were annotated manually based on the type of action the subjects were performing. Tags such as "walk forward", "walk backward", "seated" and "stand on one foot" were applied to the corresponding frame ranges and body zones. The annotation tables enable end-user applications to generate several types of reports and diagrams. Figure 5 shows the overall ergonomic condition of a selected motion. Based on its OWAS code, each frame is situated in a coloured zone; the green zone represents healthy postures and the red zone harmful postures.
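A report of this kind essentially boils down to counting frames per colour zone. The sketch below uses fabricated per-frame colour tags purely for illustration; in the platform these would come from the annotation tables.

```python
from collections import Counter

# Hypothetical per-frame colour tags, as an OWAS analysis might produce
# (the data itself is made up for illustration).
frame_colours = ["Green"] * 70 + ["Yellow"] * 20 + ["Red"] * 10

def zone_report(colours):
    """Fraction of frames spent in each OWAS colour zone."""
    counts = Counter(colours)
    total = len(colours)
    return {zone: counts[zone] / total for zone in counts}

print(zone_report(frame_colours))  # {'Green': 0.7, 'Yellow': 0.2, 'Red': 0.1}
```

Because the annotations are also bounded spatially, the same summary can be restricted to a single body zone, e.g. reporting only on the back.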

Figure 5. OWAS code results of a sample motion over time (frame #)

5.4 Visualization
Again, MATLAB and Cortex 2.0 were used to search for part of a motion in the database, retrieve it and visualize it. The search functions can be any type of simple or complex query, and the result can be a single frame or a range of frames, covering the full body or part of it. Figure 6 shows a sample search result for a range of motion tagged with "Walk Backward".

Figure 6. Frame samples visualizing the action tagged as "Walk Backward"

6. CONCLUSION & DISCUSSION
The data schema presented in this article successfully stored and retrieved different motions regardless of variations in the skeleton types. It is possible to annotate the motions based on customized body zones, independent of the skeleton types. The presented data schema was validated using different types of queries: motion-level queries, which search inside annotations, and joint-level queries, which search for a specific posture. The implemented annotation system enables user applications to search for a specific motion in part of the body. The data schema proved to be a suitable platform for DHM and ergonomic tools. It was shown that, via interpreters, ergonomic tools can communicate with the platform and examine motions frame by frame for hazardous work postures. It was also shown that DHM tools can search for a required posture or motion through real human data.

7. REFERENCES
[1] Eklund, J. (1995). Relationships between ergonomics and quality in assembly work. Applied Ergonomics, Vol. 26, pp. 15-20.
[2] Siemens, Teamcenter (Jack), http://www.plm.automation.siemens.com/en_us/products/tecnomatix/assembly_planning/jack/index.shtml
[3] Ryan, P., Springer, W., and Hiastala, M. (1970). Cockpit geometry evaluation. Joint Army-Navy Aircraft Instrumentation Research Report 700201.
[4] Reed, M.P. and Huang, S. (2008). Modeling vehicle ingress and egress using the Human Motion Simulation Framework. Technical Paper 2008-01-1896, Proceedings of the 2008 SAE Digital Human Modeling for Design and Engineering Conference. SAE International, Warrendale, PA.
[5] Chaffin, D.B. (2001). Digital Human Modeling for Vehicle and Workplace Design. SAE International, ISBN 978-0-7680-0687-2.
[6] Yang, J., Kim, J.H., Abdel-Malek, K., Marler, T., Beck, S., and Kopp, G.R. (2007). A new digital human environment and assessment of vehicle interior design. Computer-Aided Design, 39(7), pp. 548-558.
[7] Wenzel, S., Jessen, U., and Bernhard, J. (2005). Classifications and conventions structure the handling of models within the Digital Factory. Computers in Industry, 56(4), pp. 334-346.
[8] Mavrikios, D., Karabatsou, V., Pappas, M., and Chryssolouris, G. (2007). An efficient approach to human motion modeling for the verification of human-centric product design and manufacturing in virtual environments. Robotics and Computer-Integrated Manufacturing, 23(5), pp. 533-543.
[9] Menache, A. (2011). Understanding Motion Capture for Computer Animation. Elsevier, ISBN-13 978-0-12-381496-8.
[10] Jung, E.S., Kee, D., and Chung, M.K. (1995). Upper body reach posture prediction for ergonomics evaluation models. Int. J. Ind. Ergon., Vol. 16, pp. 95-107.
[11] Zhang, X., Kuo, A.D., and Chaffin, D.B. (1998). Optimization-based differential kinematic modeling exhibits a velocity-control strategy for dynamic posture determination in seated reaching movements. Journal of Biomechanics, Vol. 31, pp. 1035-1042.
[12] Lin, C.J., Ayoub, M.M., and Bernard, T.M. (1999). Computer motion simulation for sagittal plane lifting activities. Int. J. Indust. Ergon., Vol. 24, pp. 141-155.
[13] Kitagawa, M. and Windsor, B. (2008). MoCap for Artists: Workflow and Techniques for Motion Capture. Elsevier Science & Technology, ISBN 9780240810003.
[14] Awad, C., Courty, N., and Gibet, S. (2009). A database architecture for real-time motion retrieval. In Proc. of the 7th International Workshop on Content-Based Multimedia Indexing (CBMI'09), IEEE CS (ed.), pp. 225-230.
[15] Kahol, K., Tripathi, P., and Panchanathan, S. (2006). Documenting motion sequences with a personalized annotation system. IEEE Multimedia, Vol. 13, pp. 37-45.
[16] Faraway, J.J. (1997). Regression analysis for a functional response. Technometrics, 39(3), pp. 254-261.
[17] Park, W., Chaffin, D.B., Martin, B.J., and Yoon, J. (2008). Memory-based human motion simulation for computer-aided ergonomic design. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 38(3), pp. 513-527.
[18] Witkin, A. and Popovic, Z. (1995). Motion warping. SIGGRAPH '95.
[19] Arikan, O., Forsyth, D.A., and O'Brien, J.F. (2003). Motion synthesis from annotations. ACM Transactions on Graphics (ACM SIGGRAPH 2003), 22(3), pp. 402-408.
[20] Faraway, J.J. (2003). Data-based motion prediction. SAE Transactions, Vol. 112, pp. 722-732.
[21] Aydin, Y. and Nakajima, M. (1999). Database guided computer animation of human grasping using forward and inverse kinematics. Computers and Graphics, Vol. 23, pp. 145-154.
[22] Garcia-Molina, H., Ullman, J.D., and Widom, J. (2009). Database Systems: The Complete Book, Second Edition. Prentice Hall, ISBN-10 0-13-187325-3.
[23] Louhevaara, V. and Suurnäkki, T. (1992). OWAS: A method for the evaluation of postural load during work. Training Publication 11, Institute of Occupational Health and Centre for Occupational Safety, Helsinki.
