Abstract: The robot manipulation in human environment is a challenging issue because the human environment is complex, dynamic, unstructured, and difficult to perceive reliably. In order to implement promising robot applications in our daily lives, robots need to perform manipulation tasks within the human environment. Particularly for a humanoid robot, the manipulability of objects is essential to assist humans in the human environment. This paper presents a method for manipulating an object with both arms of a humanoid robot. We focus on the generation of human-like movements by using human motion capture data. Then, a control method based on the virtual dynamics model is proposed to control both the motion and the force under a uniform control system. This method empowers the robot to perform the object manipulation task, including reaching, grasping, and moving an object in sequence. The proposed algorithm is implemented on a humanoid robot with an independent joint controller at each motor; its performance is demonstrated by manipulating an object with both arms.

Index Terms: Dexterous manipulation, grasping, human-like motion generation, humanoid robot, virtual dynamics model (VDM).

I. INTRODUCTION

VERSATILE and flexible manipulation skills of robot systems are necessary to perform complicated tasks such as the way a human handles tools and objects [1]. Particularly for a humanoid robot, manipulability in human environments and circumstances, such as handling tools, objects, and equipment, is still a challenging issue since the human environment is complex, dynamic, and difficult to perceive and control reliably [1]-[6]. Human-like motion is one of the significant issues for a humanoid robot performing dexterous manipulation tasks in human environments [7], [8]. For a personal service robot, people may also instinctively and empirically perceive human-like, motion-based manipulation as comfortable, cooperative, and friendly [8]. People usually manipulate a number of objects by using both arms for a certain task, keeping proper forces between the objects and the hands. For a humanoid robot to perform similar tasks, this paper deals with the generation and control of human-like arm motions and the interaction between the robot hand and environments.

A number of studies on the kinematic and dynamic aspects of human arm movements have been conducted [9], [10]. Flash and Hogan introduced a mathematical model based on minimal jerk for an unconstrained point-to-point arm movement and experimentally showed the bell-shaped velocity profiles of the curved motion computed from the model [9]. Arimoto and Sekimoto employed the bell-shaped velocity profile of the point-to-point arm movement to verify the human-likeness of their controller for a robot arm [10]. These models may not be good enough to deal with the more complicated arm movements required for such dexterous tasks as writing letters on a whiteboard, rotating a screwdriver, moving an object with both arms, and so on.

To our knowledge, human-like movements cannot be characterized just by bell-shaped velocity profiles, but depend on the characteristics and the purposes of given tasks. A possible approach to produce human-like motions or behaviors is, first, to analyze a given task and solve it with robot programming by a human [11], [12]. To reduce those human efforts and perform more complex tasks, another approach based on learning theories has been studied by many researchers [12]-[16]. Robot programming by demonstration (PbD), which is also referred to as imitation learning, appeared to automate tedious manual programming for manipulating robots [14]. Ijspeert et al. [15] designed a motor representation based on dynamical systems for encoding movements and replaying them in various conditions. Lim et al. [16] employed principal component analysis (PCA) and a dynamics-based optimization algorithm to generate torque-minimized human-like motions. However, the dynamics-based optimization algorithm was applied only offline, and the inverse kinematics problem has to be solved to obtain the final arm posture, which may not look like a human arm posture.

We herein introduce a motion generation method that preserves the human-like characteristics of a given task for a robotic arm. Our assumption is that the characteristics of a specific task are implicitly absorbed within the human demonstrations; therefore, we use human motion capture data for generating the motion of the robot. The human-like arm motion can be characterized by the movements of the hand and elbow. In this paper, we define the criteria of human-likeness as the trajectories of the hand and the trajectories of the elbow generated by the elbow elevation angle (EEA) [7]. The human-like characteristics of the arm motion are evaluated by comparing the criteria of human-likeness of the generated motion with those of the captured human motion.

Manuscript received February 19, 2014; revised June 9, 2014; accepted July 23, 2014. Date of publication August 28, 2014; date of current version March 6, 2015. This work was supported in part by the Korea Institute of Science and Technology under Project 2E24721. (Corresponding author: ChangHwan Kim.)
S. Y. Shin is with the Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712 USA (e-mail: syshin0228@utexas.edu; syshin0228@gmail.com).
C. Kim is with the Center for Bionics, Korea Institute of Science and Technology, Seoul 130-650, Korea (e-mail: ckim@kist.re.kr).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIE.2014.2353017
0278-0046 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
2266 IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 62, NO. 4, APRIL 2015
Fig. 1. Acquisition of reaching motion data. (a) Human motion capturing. (b) (red crosses) Center positions of the target object. (c) Sample
reaching motion data. (d) Sample reaching motion data in the y-axis of left hand (hand position is defined as the center point between the thumb
and pinky positions).
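The hand-position convention in the caption (the center point between the thumb and pinky markers) is simple to reproduce from marker data. A minimal sketch; the function name and the marker arrays below are illustrative, not from the paper:

```python
import numpy as np

def hand_center(thumb_xyz: np.ndarray, pinky_xyz: np.ndarray) -> np.ndarray:
    """Hand position as the per-frame midpoint of the thumb and pinky
    markers, per the convention in the Fig. 1 caption (arrays of shape (T, 3))."""
    return 0.5 * (thumb_xyz + pinky_xyz)

# Two frames of synthetic marker data (meters)
thumb = np.array([[0.10, 0.20, 0.30], [0.12, 0.22, 0.32]])
pinky = np.array([[0.20, 0.10, 0.30], [0.22, 0.12, 0.32]])
center = hand_center(thumb, pinky)  # one midpoint per frame
```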
The approaches in [12]-[16] are exclusively based on motion trajectory generation at the kinematics level. These approaches still require an additional force controller to deal with the interaction between a robot and an environment. The impedance behavior for dual-arm manipulation was studied by Wimbock et al. [17], [18] with a humanoid robot. They demonstrated object manipulation with both robot arms by implementing the impedance behaviors in the controller. Ozawa et al. [19] and Yoshida et al. [20] proposed a method to grasp an object with dual-finger robots. However, these algorithms are based on a torque control method, where sensors measuring the torque (or current) are necessary to implement the approaches in [17]-[20].

We proposed a control method based on the virtual dynamics model (VDM) [21]. The VDM is an ideal dynamic model that exists in the simulation space, neglecting nonlinear effects such as friction and uncertainty terms and compensating gravitational effects [21]. The target trajectory of a motion (or a force) to be controlled is filtered through the VDM-based controller before being executed by the robot motor controller. Using the VDM-based controller, the target motion for a robot arm generated in Cartesian space can be directly controlled without solving inverse kinematics and while avoiding geometrical singularities. Another advantage of the method is that it does not need to sense the torque (or current) at each joint motor; it only requires encoders to acquire the current joint positions and force and torque (FT) sensors (attached to the wrists) to measure the external forces applied to the robot arms. Such measured forces and torques are kept constant at a certain level for the robot arms to physically interact with an environment or an object, e.g., holding an object.

In this paper, we present a control method for manipulating an object with both arms of a humanoid robot while reducing human programming efforts. The human-like motion generation for the dual-arm object manipulation and the control method based on the VDM are presented in Sections II and III, respectively. The experimental results and comparisons with other approaches are described in Section IV, followed by discussion and conclusion.

II. HUMAN-LIKE ARM MOTION GENERATION

A. Acquisition of Human Motion Data

The human-like movement is generated based on the captured human motion data. The human motion data were captured by the Xsens Moven motion capturing suit (http://www.xsens.com), which is a cameraless inertial motion capture system that provides 3-D kinematics of the full human body. The initial step for constructing motion data is to acquire multiple sets of human motions by using the motion capture system. The captured motion data are composed of the discrete-time trajectories of virtual points of interest on the human body. After performing geometrical scaling to resolve the length differences between the human and robot arms [22], the scaled motion is stored. Each motion primitive is composed of the trajectories of ten virtual points of interest, which are defined on the upper body of the human: three points on the hand (thumb, middle, and pinky), one point on the elbow, and one point on the shoulder for both arms.

1) Object Reaching Motion Data: First, the object reaching motions with both arms are captured from a human subject. It is assumed that an object is placed in front of the human subject's chest, as shown in Fig. 1(a) and (b). In Fig. 1(b), the red crosses in front of the human subject's chest are assumed to be
SHIN AND KIM: HUMAN-LIKE MOTION GENERATION AND CONTROL FOR HUMANOID'S OBJECT MANIPULATION 2267
Fig. 2. Acquisition of object moving motion data. (a) Sample object moving data (object position is defined as the center point between the left
and right hands). (b) Sample object moving data in the z-axis.
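The eigenmotion used throughout the experiments is built from sets of captured motions like those in Figs. 1 and 2, but its construction is not derived in this excerpt. Its apparent basis, extracting principal components from a collection of motion trajectories in the spirit of PCA and eigenfaces [16], [23], can be sketched as follows; the data sizes, the random stand-in data, and the choice of five components are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Suppose N captured motions, each flattened into a D-dimensional vector
# of concatenated marker trajectories (frames x coordinates x markers).
rng = np.random.default_rng(0)
N, D = 20, 300                       # illustrative sizes
motions = rng.normal(size=(N, D))    # stand-in for real capture data

mean = motions.mean(axis=0)
centered = motions - mean
# Principal directions via SVD of the centered data matrix
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:5]                  # keep the first 5 "eigenmotions"

# A motion is approximated as mean + weighted sum of eigenmotions
weights = centered[0] @ components.T
reconstruction = mean + weights @ components
```

The reconstruction error shrinks as more components are kept; the low-dimensional weights are what a generator would interpolate over.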
Fig. 8. Simulation results of hand trajectories during the reaching motion: (red solid line) EM, (magenta dotted line) EM-VDM, (blue dashed line) VSDH, and (black dash-dot line) HM. EM is the eigenmotion trajectory, and EM-VDM is the eigenmotion trajectory filtered by the VDM controller. (a) Reaching motion in 3-D space. (b)-(d) Time-normalized hand position and velocity trajectories in the x-, y-, and z-axes, respectively.
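The time-normalized trajectories in Fig. 8(b)-(d) compare motions of different durations on a common time base. A minimal sketch of such normalization; the sample count and the use of linear interpolation are assumptions, not the paper's stated procedure:

```python
import numpy as np

def time_normalize(t: np.ndarray, x: np.ndarray, n: int = 101) -> np.ndarray:
    """Resample a 1-D trajectory x(t) onto n samples of normalized time
    in [0, 1], so trajectories of different durations can be compared
    sample-by-sample."""
    s = (t - t[0]) / (t[-1] - t[0])   # map time to [0, 1]
    s_ref = np.linspace(0.0, 1.0, n)
    return np.interp(s_ref, s, x)

# Two reaches of different durations mapped onto a common time base
t1, t2 = np.linspace(0.0, 1.2, 40), np.linspace(0.0, 2.0, 70)
x1n = time_normalize(t1, np.sin(t1))
x2n = time_normalize(t2, np.sin(t2))
```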
Fig. 9. Simulation results of elbow trajectories during the reaching motion: (red solid line) EEA, (magenta dotted line) EEA-VDM, (blue dashed line) VSDH, and (black dash-dot line) HM. EEA is the elbow trajectory obtained by the EEA, and EEA-VDM is the elbow trajectory obtained by the EEA and filtered by the VDM controller. (a) Reaching motion in 3-D space. (b)-(d) Time-normalized elbow position and velocity trajectories in the x-, y-, and z-axes, respectively.
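The "filtered by the VDM controller" step in the captions can be read as passing a reference trajectory through ideal second-order virtual dynamics before the joint controllers track it. As a rough illustration only (the actual VDM in [21] is defined over the full arm and compensates gravity; the 1-D form, integration scheme, and gains below are arbitrary assumptions):

```python
import numpy as np

def vdm_filter(ref, dt, m=1.0, b=20.0, k=100.0):
    """Filter a 1-D reference trajectory through a virtual mass-damper:
    m*x'' = k*(ref - x) - b*x', integrated with explicit Euler."""
    x, v = float(ref[0]), 0.0
    out = np.empty(len(ref))
    for i, r in enumerate(ref):
        a = (k * (r - x) - b * v) / m   # virtual dynamics acceleration
        v += a * dt
        x += v * dt
        out[i] = x
    return out

# A discontinuous step reference is smoothed into a gradual motion
ref = np.concatenate([np.zeros(50), np.ones(150)])
filtered = vdm_filter(ref, dt=0.01)
```

With these gains the virtual system is critically damped, so the filtered trajectory approaches the target without overshoot, which is the qualitative behavior the VDM-filtered curves (EM-VDM, EEA-VDM) exhibit.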
TABLE III
RMS ERROR OF LEFT ELBOW TRAJECTORY IN REACHING MOTION

divided into two parts: the reaching motion and the object moving motion from Sections II-B and II-C, respectively. The elbow trajectories are calculated with the EEA by implementing the trajectories of the shoulder and hand [7].

First, we compare the elbow trajectories of four groups during the reaching motion, as described in Section III-A: 1) the elbow trajectory obtained by the EEA (EEA: red solid line); 2) the elbow trajectory obtained by the EEA and filtered by the VDM controller (EEA-VDM: magenta dotted line); 3) the elbow trajectory connected with the VSDH (VSDH: blue dashed line); and 4) the elbow trajectory captured from the human (HM: black dash-dot line). The VSDH is obtained by connecting the initial and final elbow points of the given reaching motion with the VSDH controller. The elbow trajectories of the four groups in 3-D space are shown in Fig. 9(a), and the elbow position and velocity trajectories in the x-, y-, and z-axis directions are shown in Fig. 9(b)-(d), respectively. The RMS errors between the HM and the other three trajectories are given in Table III.

Second, we compare the elbow trajectories of the four groups during the object moving motion. The profile of movement obtained from Section II-C is used as the reference hand motion [see thick black line in Fig. 10(a)]. The initial and final points for the object moving motion are given based on the final positions of both hands after the reaching motion for the sequential performance. The elbow trajectory of the human (HM) is calculated by averaging 20 sets of moving motion data acquired in Section II-A2, and the VSDH is obtained by allowing the elbow movement to depend on the hand and shoulder movements without applying the virtual spring-damper force at the elbow position [10]. The elbow trajectories of the four groups in 3-D space are shown in Fig. 10(a), and the elbow position trajectories in the x-, y-, and z-axis directions are shown in Fig. 10(b)-(d), respectively. The RMS errors between the HM and the other three trajectories are given in Table IV.

C. Object Manipulation With Humanoid Robot

The object manipulation task including reaching, grasping, and moving an object (ball; diameter: 30 cm; weight: approximately 280 g) is performed in sequence with the humanoid robot Mahru. Initially, the human-like reaching motion generated by the eigenmotion from Section II-B is controlled through the VDM controller in order to reach the object with both hands. After the reaching, a force is imposed on the object to grasp it (more details on grasping different types of objects can be found in [21]). Then, the object moving motion generated by the eigenmotion from Section II-C is once again controlled through the VDM controller to move the object along the profile of movement.
Fig. 10. Simulation results of elbow trajectories during the object moving motion: (red solid line) EEA, (magenta dotted line) EEA-VDM, (blue dashed line) VSDH, and (black dash-dot line) HM. EEA is the elbow trajectory obtained by the EEA, and EEA-VDM is the elbow trajectory obtained by the EEA and filtered by the VDM controller. (a) Trajectories of the object and both elbows in 3-D space. (b)-(d) Elbow position trajectories in the x-, y-, and z-axes, respectively.
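The RMS comparisons against the human trajectory (Tables III and IV) can be sketched as follows. Whether the paper computes the RMS per axis or over the pointwise Euclidean distance is not stated in this excerpt; the Euclidean form below, and the stand-in trajectories, are assumptions:

```python
import numpy as np

def rms_error(traj_a: np.ndarray, traj_b: np.ndarray) -> float:
    """RMS of the pointwise Euclidean distance between two (T, 3)
    trajectories sampled on the same (time-normalized) base."""
    d = np.linalg.norm(traj_a - traj_b, axis=-1)
    return float(np.sqrt(np.mean(d ** 2)))

hm = np.zeros((100, 3))                  # stand-in for the human elbow path
eea = hm + np.array([0.03, 0.0, 0.0])    # constant 3-cm offset in x
err = rms_error(eea, hm)                 # approximately 0.03
```

Such a scalar per trajectory pair is what allows the EEA, EEA-VDM, and VSDH curves to be ranked against HM in the tables.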
TABLE IV
RMS ERROR OF LEFT ELBOW TRAJECTORY IN OBJECT MOVING MOTION

The trajectories of the object and both hands in 3-D space during the whole manipulation session are depicted in Fig. 11(a), and the target and current trajectories of the left hand in the x- and z-axis directions (both hands in the y-axis direction) are shown in Fig. 11(b). For simplicity of depiction, only the left hand is shown in the x- and z-axis directions, whereas both hands are shown in the y-axis direction [check the coordinates in Fig. 11(a)]. To observe the force applied to the object, the force amplitude measured by the FT sensor attached to the left wrist is shown in Fig. 11(c).

V. DISCUSSION

The motion generation methods based on robot PbD, or imitation learning, appeared to automate tedious manual programming for manipulating robots [12]-[16]. In this paper, the eigenmotion, one of these motion generation methods, is adopted for generating human-like arm movements. The human-like characteristics of the method are evaluated through the experiments by comparing the hand and elbow movements obtained by the eigenmotion and the EEA with those obtained from the captured human movements. Many studies have employed the bell-shaped velocity profiles as the key factor for characterizing human-likeness [9], [10]. One of these methods is the VSDH controller, which uses the bell-shaped velocity profile of point-to-point motions to verify the human-likeness of the controller [10]. However, particularly for specific tasks such as object reaching or moving, we claim that the human-like characteristics cannot be explained just by bell-shaped velocity profiles, but depend on the characteristics and the purpose of the given tasks; the evidence can be found in our experimental results. By observing the HM (see black dash-dot lines in Figs. 8-10), it can be seen that the position and velocity profiles of the hand and elbow do not simply follow bell-shaped velocity profiles, but trace curves that are implicitly absorbed within the characteristics of the reaching motion (or the object moving motion).

In order to evaluate the human-like characteristics of our method, we compared it with the VSDH controller [10]. In Figs. 8(b)-(d) and 9(b)-(d), it is shown that both the hand and elbow trajectories given by EM and EM-VDM are qualitatively closer to HM than those given by VSDH in both position and velocity; in particular, large errors appear in VSDH in the x- and y-axis directions (see Tables II and III for the quantitative evaluation). Another example of human-like movement was to move along the profile of movement after reaching and
beneficial since this error is caused by the force generated for holding the object. Indeed, it is shown that the force has been applied to the object during the object grasping and moving period [see approximately 11-43 s in Fig. 11(c)]. Thus, this error is meaningful from the perspective of compliance control, and this is the reason why we adopted the VDM control method in this work.

Finally, the method for object manipulation in this paper still leaves much room for improvement. For example, situations such as the object dropping or slipping are not considered in this work. To overcome this issue, a vision system could be integrated to provide position feedback of the object and assure grasping stability. In addition, we only tested with a light object (ball; weight: 280 g) in this work, but grasping heavy objects should also be investigated (such as by integrating finger control) to improve the object manipulation task with both arms of the robot.

VI. CONCLUSION

We have presented a method for manipulating an object, including reaching, grasping, and moving it with both arms of a humanoid robot. The main purpose of this work is human-like motion generation and control for object manipulation. To achieve this goal, the eigenmotion and the EEA are implemented to generate human-like arm movements by using the human motion capture data. Then, the VDM controller is used to control both the motion and the force under a uniform control system, and the trajectories generated by the eigenmotion are used as the reference input of the VDM controller. Through the simulations and experiments, the human-like characteristics are evaluated by comparing the hand and elbow trajectories obtained by the proposed method with the captured human trajectories. Finally, the object manipulation task, including reaching, grasping, and moving an object, is demonstrated in sequence with the actual humanoid robot. While this approach based on human motion capture data may provide broader freedom for performing human-like manipulation tasks with robots, the generalization of the manipulation tasks is missing in the presented approach; the extendability of the manipulation tasks highly depends on the database, which is true for all imitation learning methods. The reduction of the database is part of our ongoing work.

REFERENCES

[1] C. Smith et al., "Dual arm manipulation - A survey," Robot. Auton. Syst., vol. 60, no. 10, pp. 1340-1353, Oct. 2012.
[2] K. C. Tan, Y. J. Chen, K. K. Tan, and T. H. Lee, "Task-oriented developmental learning for humanoid robots," IEEE Trans. Ind. Electron., vol. 52, no. 3, pp. 906-914, Jun. 2005.
[3] W. Chung, C. Rhee, Y. Shim, H. Lee, and S. Park, "Door-opening control of a service robot using the multifingered robot hand," IEEE Trans. Ind. Electron., vol. 56, no. 10, pp. 3975-3984, Oct. 2009.
[4] C. Kemp, A. Edsinger, and E. Torres-Jara, "Challenges for robot manipulation in human environments," IEEE Robot. Autom. Mag., vol. 14, no. 1, pp. 20-29, Mar. 2007.
[5] L. M. Capisani and A. Ferrara, "Trajectory planning and second-order sliding mode motion/interaction control for robot manipulators in unknown environments," IEEE Trans. Ind. Electron., vol. 59, no. 8, pp. 3189-3198, Aug. 2012.
[6] R. C. Luo and C. C. Lai, "Multisensor fusion-based concurrent environment mapping and moving object detection for intelligent service robotics," IEEE Trans. Ind. Electron., vol. 61, no. 8, pp. 4043-4051, Aug. 2014.
[7] S. Kim, C. H. Kim, and J. H. Park, "Human-like arm motion generation for humanoid robots using motion capture database," in Proc. IEEE/RSJ IROS, Beijing, China, Oct. 2006, pp. 3486-3491.
[8] V. Potkonjak, S. Tzafestas, D. Kostic, and G. Djordjevic, "Human-like behavior of robot arms: General considerations and the handwriting task - Part I: Mathematical description of human-like motion: Distributed positioning and virtual fatigue," Robot. Comput.-Integr. Manuf., vol. 17, no. 4, pp. 305-315, Aug. 2001.
[9] T. Flash and N. Hogan, "The coordination of arm movements: An experimentally confirmed mathematical model," J. Neurosci., vol. 5, no. 7, pp. 1688-1703, Jul. 1985.
[10] S. Arimoto and M. Sekimoto, "Human-like movements of robotic arms with redundant DOFs: Virtual spring-damper hypothesis to tackle the Bernstein problem," in Proc. IEEE ICRA, Orlando, FL, USA, May 2006, pp. 1860-1866.
[11] E. Sisbot, L. Marin-Urias, X. Broquere, D. Sidobre, and R. Alami, "Synthesizing robot motions adapted to human presence," Int. J. Soc. Robot., vol. 2, no. 3, pp. 329-343, 2010.
[12] S. Schaal, "Is imitation learning the route to humanoid robots?," Trends Cognitive Sci., vol. 3, no. 6, pp. 233-242, 1999.
[13] R. Zollner, T. Asfour, and R. Dillmann, "Programming by demonstration: Dual-arm manipulation tasks for humanoid robots," in Proc. IEEE/RSJ IROS, Sendai, Japan, 2004, pp. 479-488.
[14] A. Billard, S. Calinon, R. Dillmann, and S. Schaal, "Robot programming by demonstration," in Handbook of Robotics, B. Siciliano and O. Khatib, Eds. New York, NY, USA: Springer-Verlag, 2008, ch. 59.
[15] A. Ijspeert, J. Nakanishi, and S. Schaal, "Movement imitation with nonlinear dynamical systems in humanoid robots," in Proc. IEEE ICRA, Washington, DC, USA, 2002, pp. 1398-1403.
[16] B. Lim, S. Ra, and F. C. Park, "Movement primitives, principal component analysis, and the efficient generation of natural motions," in Proc. IEEE ICRA, 2005, pp. 4630-4635.
[17] T. Wimbock, C. Ott, A. Albu-Schaffer, and G. Hirzinger, "Comparison of object-level grasp controllers for dynamic dexterous manipulation," Int. J. Robot. Res., vol. 31, no. 1, pp. 3-23, 2012.
[18] T. Wimbock, C. Ott, and G. Hirzinger, "Impedance behaviors for two-handed manipulation: Design and experiments," in Proc. IEEE ICRA, Roma, Italy, May 2007, pp. 4182-4189.
[19] R. Ozawa, S. Arimoto, S. Nakamura, and J. Bae, "Control of an object with parallel surfaces by a pair of finger robots without object sensings," IEEE Trans. Robot., vol. 21, no. 5, pp. 965-976, Oct. 2005.
[20] M. Yoshida, S. Arimoto, and J. Bae, "Blind grasp and manipulation of a rigid object by a pair of robot fingers with soft tips," in Proc. IEEE ICRA, Roma, Italy, May 2007, pp. 4707-4714.
[21] S. Y. Shin and C. H. Kim, "Humanoid's dual arm object manipulation based on virtual dynamics model," in Proc. IEEE ICRA, Saint Paul, MN, USA, 2012.
[22] S. Y. Shin and C. H. Kim, "On-line human motion transition and control for humanoid upper body manipulation," in Proc. IEEE/RSJ IROS, Taipei, Taiwan, 2010, pp. 477-482.
[23] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neurosci., vol. 3, no. 1, pp. 71-86, 1991.
[24] C. Canudas De Wit and S. B. Brogliato, "Direct adaptive impedance control including transition phases," Automatica, vol. 33, no. 4, pp. 643-649, Apr. 2004.
[25] J. Lee, P. H. Chang, and R. S. Jamisola, Jr., "Relative impedance control for dual-arm robots performing asymmetric bimanual tasks," IEEE Trans. Ind. Electron., vol. 61, no. 7, pp. 3786-3796, Jul. 2014.
[26] A. Dumlu and K. Erenturk, "Trajectory tracking control for a 3-DOF parallel manipulator using fractional-order PI^λD^μ control," IEEE Trans. Ind. Electron., vol. 61, no. 7, pp. 3417-3426, Jul. 2014.
[27] N. Hogan and S. P. Buerger, "Impedance and interaction control," in Robotics and Automation Handbook. Boca Raton, FL, USA: CRC Press, Oct. 2004, ch. 19.
[28] H. Kim and B. Kim, "Online minimum-energy trajectory planning and control on a straight-line path for three-wheeled omnidirectional mobile robots," IEEE Trans. Ind. Electron., vol. 61, no. 9, pp. 4771-4779, Sep. 2014.
[29] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modeling and Control. Hoboken, NJ, USA: Wiley, 2005.
[30] N. Motoi, T. Shimono, R. Kubo, and A. Kawamura, "Task realization by a force-based variable compliance controller for flexible motion control system," IEEE Trans. Ind. Electron., vol. 61, no. 2, pp. 1009-1021, Feb. 2014.
Sung Yul Shin was born in Korea. He received the B.S. degree in mechanical design engineering from Korea Polytechnic University, Siheung, Korea, in 2009 and the M.S. degree in robotics engineering from the University of Science and Technology, Daejeon, Korea, in 2011. He is currently working toward the Ph.D. degree in the Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX, USA.

From 2011 to 2014, he was a Researcher with the Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea. His research interests include motion generation and control for humanoid robots and rehabilitation robots.

ChangHwan Kim received the B.S. degree in mechanical engineering and the M.S. degree in machine design engineering from Hanyang University, Seoul, Korea, in 1993 and 1995, respectively, and the Ph.D. degree in mechanical engineering from the University of Iowa, Iowa City, IA, USA, in 2002.

From 2002 to 2004, he was a Research Associate with the Robotics and Automation Laboratory, University of Notre Dame, Notre Dame, IN, USA. From 2004 to 2007 and from 2007 to 2011, he was with the Center for Intelligent Robotics and the Center for Cognitive Robotics, respectively, with the Korea Institute of Science and Technology (KIST), Seoul, where he is currently with the Center for Bionics. His research interests include human motion imitation and motion generation of a humanoid, human modeling, motion planning of mobile robots, cooperation of multiple robots, and rehabilitation robots.