
Lecture Notes in Computer Science

Commenced Publication in 1973

Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
University of Dortmund, Germany
Madhu Sudan
Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Moshe Y. Vardi
Rice University, Houston, TX, USA
Gerhard Weikum
Max-Planck Institute of Computer Science, Saarbruecken, Germany


Yong Shi, Geert Dick van Albada,
Jack Dongarra, Peter M.A. Sloot (Eds.)

Computational Science - ICCS 2007
7th International Conference
Beijing, China, May 27-30, 2007
Proceedings, Part II


Volume Editors
Yong Shi
Graduate University of the Chinese Academy of Sciences
Beijing 100080, China
Geert Dick van Albada
Peter M.A. Sloot
University of Amsterdam, Section Computational Science
1098 SJ Amsterdam, The Netherlands
E-mail: {dick, sloot}
Jack Dongarra
University of Tennessee, Computer Science Department
Knoxville, TN 37996-3450, USA

Library of Congress Control Number: 2007927049

CR Subject Classification (1998): F, D, G, H, I.1, I.3, I.6, J, K.3, C.2-3
LNCS Sublibrary: SL 1 Theoretical Computer Science and General Issues

ISBN-10: 3-540-72585-7 Springer Berlin Heidelberg New York
ISBN-13: 978-3-540-72585-5 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
Springer is a part of Springer Science+Business Media
© Springer-Verlag Berlin Heidelberg 2007
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12065738


Preface
The Seventh International Conference on Computational Science (ICCS 2007)

was held in Beijing, China, May 27-30, 2007. This was the continuation of previous
conferences in the series: ICCS 2006 in Reading, UK; ICCS 2005 in Atlanta,
Georgia, USA; ICCS 2004 in Krakow, Poland; ICCS 2003 held simultaneously at
two locations, in Melbourne, Australia and St. Petersburg, Russia; ICCS 2002
in Amsterdam, The Netherlands; and ICCS 2001 in San Francisco, California,
USA. Since the first conference in San Francisco, the ICCS series has become
a major platform to promote the development of Computational Science. The
theme of ICCS 2007 was "Advancing Science and Society through Computation".
It aimed to bring together researchers and scientists from mathematics
and computer science as basic computing disciplines, researchers from various
application areas who are pioneering the advanced application of computational
methods to sciences such as physics, chemistry, life sciences, and engineering,
arts and humanitarian fields, along with software developers and vendors, to
discuss problems and solutions in the area, to identify new issues, and to shape
future directions for research, as well as to help industrial users apply various
advanced computational techniques.
During the opening of ICCS 2007, Siwei Cheng (Vice-Chairman of the Standing Committee of the National People's Congress of the People's Republic of
China and the Dean of the School of Management of the Graduate University
of the Chinese Academy of Sciences) presented the welcome speech on behalf of
the Local Organizing Committee, after which Hector Ruiz (President and CEO,
AMD) made remarks on behalf of international computing industries in China.
Seven keynote lectures were delivered by Vassil Alexandrov (Advanced Computing and Emerging Technologies, University of Reading, UK) - Efficient Scalable Algorithms for Large-Scale Computations; Hans Petter Langtangen (Simula Research Laboratory, Lysaker, Norway) - Computational Modelling of Huge
Tsunamis from Asteroid Impacts; Jiawei Han (Department of Computer Science, University of Illinois at Urbana-Champaign, USA) - Research Frontiers
in Advanced Data Mining Technologies and Applications; Ru-qian Lu (Institute of Mathematics, Chinese Academy of Sciences) - Knowledge Engineering
and Knowledge Ware; Alessandro Vespignani (School of Informatics, Indiana
University, USA) - Computational Epidemiology and Emergent Disease Forecast; David Keyes (Department of Applied Physics and Applied Mathematics,
Columbia University) - Scalable Solver Infrastructure for Computational Science
and Engineering; and Yves Robert (École Normale Supérieure de Lyon, France)
- Think Before Coding: Static Strategies (and Dynamic Execution) for Clusters
and Grids. We would like to express our thanks to all of the invited and keynote
speakers for their inspiring talks. In addition to the plenary sessions, the conference included 14 parallel oral sessions and 4 poster sessions. This year, we



received more than 2,400 submissions for all tracks combined, out of which 716
were accepted.
This includes 529 oral papers, 97 short papers, and 89 poster papers, spread
over 35 workshops and a main track. For the main track we had 91 papers (80
oral papers and 11 short papers) in the proceedings, out of 360 submissions. We
had some 930 people doing reviews for the conference, with 118 for the main
track. Almost all papers received three reviews. The accepted papers are from
more than 43 different countries and 48 different Internet top-level domains.
The papers cover a large volume of topics in computational science and related areas, from multiscale physics to wireless networks, and from graph theory
to tools for program development.
We would like to thank all workshop organizers and the Program Committee
for the excellent work in maintaining the conference's standing for high-quality
papers. We would like to express our gratitude to staff and graduates of the Chinese Academy of Sciences Research Center on Data Technology and Knowledge
Economy and the Institute of Policy and Management for their hard work in
support of ICCS 2007. We would like to thank the Local Organizing Committee
and Local Arrangements Committee for their persistent and enthusiastic work
towards the success of ICCS 2007. We owe special thanks to our sponsors, AMD,
Springer, the University of Nebraska at Omaha, USA, and the Graduate University of
Chinese Academy of Sciences, for their generous support.
ICCS 2007 was organized by the Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy, with support from the Section Computational Science at the Universiteit van Amsterdam and Innovative
Computing Laboratory at the University of Tennessee, in cooperation with the
Society for Industrial and Applied Mathematics (SIAM), the International Association for Mathematics and Computers in Simulation (IMACS), the Chinese
Society for Management Modernization (CSMM), and the Chinese Society of
Optimization, Overall Planning and Economical Mathematics (CSOOPEM).
May 2007

Yong Shi


Organization
ICCS 2007 was organized by the Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy, with support from the Section Computational Science at the Universiteit van Amsterdam and Innovative
Computing Laboratory at the University of Tennessee, in cooperation with the
Society for Industrial and Applied Mathematics (SIAM), the International Association for Mathematics and Computers in Simulation (IMACS), and the Chinese
Society for Management Modernization (CSMM).

Conference Chairs
Conference Chair - Yong Shi (Chinese Academy of Sciences, China/University
of Nebraska at Omaha, USA)
Program Chair - Dick van Albada (Universiteit van Amsterdam,
The Netherlands)
ICCS Series Overall Scientific Co-chair - Jack Dongarra (University of Tennessee, USA)
ICCS Series Overall Scientific Chair - Peter M.A. Sloot (Universiteit van
Amsterdam, The Netherlands)

Local Organizing Committee

Weimin Zheng (Tsinghua University, Beijing, China), Chair
Hesham Ali (University of Nebraska at Omaha, USA)
Chongfu Huang (Beijing Normal University,
Beijing, China)
Masato Koda (University of Tsukuba, Japan)
Heeseok Lee (Korea Advanced Institute of Science and Technology, Korea)
Zengliang Liu (Beijing University of Science and Technology, Beijing, China)
Jen Tang (Purdue University, USA)
Shouyang Wang (Academy of Mathematics and System Science, Chinese
Academy of Sciences, Beijing, China)
Weixuan Xu (Institute of Policy and Management, Chinese Academy of Sciences,
Beijing, China)
Yong Xue (Institute of Remote Sensing Applications, Chinese Academy of
Sciences, Beijing, China)
Ning Zhong (Maebashi Institute of Technology, Japan)
Hai Zhuge (Institute of Computing Technology, Chinese Academy of Sciences,
Beijing, China)



Local Arrangements Committee

Weixuan Xu, Chair
Yong Shi, Co-chair of events
Benfu Lu, Co-chair of publicity
Hongjin Yang, Secretary
Jianping Li, Member
Ying Liu, Member
Jing He, Member
Siliang Chen, Member
Guanxiong Jiang, Member
Nan Xiao, Member
Zujin Deng, Member

Sponsoring Institutions
World Scientific Publishing
University of Nebraska at Omaha, USA
Graduate University of Chinese Academy of Sciences
Institute of Policy and Management, Chinese Academy of Sciences
Universiteit van Amsterdam

Program Committee
J.H. Abawajy, Deakin University, Australia
D. Abramson, Monash University, Australia
V. Alexandrov, University of Reading, UK
I. Altintas, San Diego Supercomputer Center, UCSD
M. Antolovich, Charles Sturt University, Australia
E. Araujo, Universidade Federal de Campina Grande, Brazil
M.A. Baker, University of Reading, UK
B. Balis, Krakow University of Science and Technology, Poland
A. Benoit, LIP, ENS Lyon, France
I. Bethke, University of Amsterdam, The Netherlands
J.A.R. Blais, University of Calgary, Canada
I. Brandic, University of Vienna, Austria
J. Broeckhove, Universiteit Antwerpen, Belgium
M. Bubak, AGH University of Science and Technology, Poland
K. Bubendorfer, Victoria University of Wellington, Australia
B. Cantalupo, DATAMAT S.P.A, Italy
J. Chen, Swinburne University of Technology, Australia
O. Corcho, University of Manchester, UK
J.C. Cunha, Univ. Nova de Lisboa, Portugal


S. Date, Osaka University, Japan

F. Desprez, INRIA, France
T. Dhaene, University of Antwerp, Belgium
I.T. Dimov, ACET, The University of Reading, UK
J. Dongarra, University of Tennessee, USA
F. Donno, CERN, Switzerland
C. Douglas, University of Kentucky, USA
G. Fox, Indiana University, USA
W. Funika, Krakow University of Science and Technology, Poland
H.J. Gardner, Australian National University, Australia
G. Geethakumari, University of Hyderabad, India
Y. Gorbachev, St. Petersburg State Polytechnical University, Russia
A.M. Goscinski, Deakin University, Australia
M. Govindaraju, Binghamton University, USA
G.A. Gravvanis, Democritus University of Thrace, Greece
D.J. Groen, University of Amsterdam, The Netherlands
T. Gubala, ACC CYFRONET AGH, Krakow, Poland
M. Hardt, FZK, Germany
T. Heinis, ETH Zurich, Switzerland
L. Hluchy, Institute of Informatics, Slovak Academy of Sciences, Slovakia
A.G. Hoekstra, University of Amsterdam, The Netherlands
W. Hoffmann, University of Amsterdam, The Netherlands
C. Huang, Beijing Normal University, Beijing, China
M. Humphrey, University of Virginia, USA
A. Iglesias, University of Cantabria, Spain
H. Jin, Huazhong University of Science and Technology, China
D. Johnson, ACET Centre, University of Reading, UK
B.D. Kandhai, University of Amsterdam, The Netherlands
S. Kawata, Utsunomiya University, Japan
W.A. Kelly, Queensland University of Technology, Australia
J. Kitowski, Inst.Comp.Sci. AGH-UST, Cracow, Poland
M. Koda, University of Tsukuba, Japan
D. Kranzlmüller, GUP, Joh. Kepler University Linz, Austria
B. Kryza, Academic Computer Centre CYFRONET-AGH, Cracow, Poland
M. Kunze, Forschungszentrum Karlsruhe (FZK), Germany
D. Kurzyniec, Emory University, Atlanta, USA
A. Lagana, University of Perugia, Italy
J. Lee, KISTI Supercomputing Center, Korea
C. Lee, Aerospace Corp., USA
L. Lefevre, INRIA, France
A. Lewis, Griffith University, Australia
H.W. Lim, Royal Holloway, University of London, UK
P. Lu, University of Alberta, Canada
M. Malawski, Institute of Computer Science AGH, Poland



M. Mascagni, Florida State University, USA

V. Maxville, Curtin Business School, Australia
A.S. McGough, London e-Science Centre, UK
E.D. Moreno, UEA-BENq, Manaus, Brazil
J.T. Moscicki, CERN, Switzerland
S. Naqvi, CoreGRID Network of Excellence, France
P.O.A. Navaux, Universidade Federal do Rio Grande do Sul, Brazil
Z. Nemeth, Computer and Automation Research Institute, Hungarian Academy
of Science, Hungary
J. Ni, University of Iowa, USA
G. Norman, Joint Institute for High Temperatures of RAS, Russia
B. Ó Nualláin, University of Amsterdam, The Netherlands
C.W. Oosterlee, Centrum voor Wiskunde en Informatica, CWI, The Netherlands
S. Orlando, Università Ca' Foscari, Venice, Italy
M. Paprzycki, IBS PAN and SWPS, Poland
M. Parashar, Rutgers University, USA
L.M. Patnaik, Indian Institute of Science, India
C.P. Pautasso, ETH Zürich, Switzerland
R. Perrott, Queen's University, Belfast, UK
V. Prasanna, University of Southern California, USA
T. Priol, IRISA, France
M.R. Radecki, Krakow University of Science and Technology, Poland
M. Ram, C-DAC Bangalore Centre, India
A. Rendell, Australian National University, Australia
P. Rhodes, University of Mississippi, USA
M. Riedel, Research Centre Juelich, Germany
D. Rodríguez García, University of Alcalá, Spain
K. Rycerz, Krakow University of Science and Technology, Poland
R. Santinelli, CERN, Switzerland
J. Schneider, Technische Universität Berlin, Germany
B. Schulze, LNCC, Brazil
J. Seo, The University of Manchester, UK
Y. Shi, Chinese Academy of Sciences, Beijing, China
D. Shires, U.S. Army Research Laboratory, USA
A.E. Solomonides, University of the West of England, Bristol, UK
V. Stankovski, University of Ljubljana, Slovenia
H. Stockinger, Swiss Institute of Bioinformatics, Switzerland
A. Streit, Forschungszentrum Jülich, Germany
H. Sun, Beihang University, China
R. Tadeusiewicz, AGH University of Science and Technology, Poland
J. Tang, Purdue University, USA
M. Taufer, University of Texas El Paso, USA
C. Tedeschi, LIP-ENS Lyon, France
A. Thandavan, ACET Center, University of Reading, UK
A. Tirado-Ramos, University of Amsterdam, The Netherlands


P. Tvrdik, Czech Technical University Prague, Czech Republic

G.D. van Albada, Universiteit van Amsterdam, The Netherlands
F. van Lingen, California Institute of Technology, USA
J. Vigo-Aguiar, University of Salamanca, Spain
D.W. Walker, Cardiff University, UK
C.L. Wang, University of Hong Kong, China
A.L. Wendelborn, University of Adelaide, Australia
Y. Xue, Chinese Academy of Sciences, China
L.T. Yang, St. Francis Xavier University, Canada
C.T. Yang, Tunghai University, Taichung, Taiwan
J. Yu, The University of Melbourne, Australia
Y. Zheng, Zhejiang University, China
W. Zheng, Tsinghua University, Beijing, China
L. Zhu, University of Florida, USA
A. Zomaya, The University of Sydney, Australia
E.V. Zudilova-Seinstra, University of Amsterdam, The Netherlands

Reviewers
J.H. Abawajy
D. Abramson
A. Abran
P. Adriaans
W. Ahn
R. Akbani
K. Akkaya
R. Albert
M. Aldinucci
V.N. Alexandrov
B. Alidaee
I. Altintas
K. Altmanninger
S. Aluru
S. Ambroszkiewicz
L. Anido
K. Anjyo
C. Anthes
M. Antolovich
S. Antoniotti
G. Antoniu
H. Arabnia
E. Araujo
E. Ardeleanu
J. Aroba
J. Astalos

B. Autin
M. Babik
G. Bai
E. Baker
M.A. Baker
S. Balfe
B. Balis
W. Banzhaf
D. Bastola
S. Battiato
M. Baumgarten
M. Baumgartner
P. Beckaert
A. Belloum
O. Belmonte
A. Belyaev
A. Benoit
G. Bergantz
J. Bernsdorf
J. Berthold
I. Bethke
I. Bhana
R. Bhowmik
M. Bickelhaupt
J. Bin Shyan
J. Birkett

J.A.R. Blais
A. Bode
B. Boghosian
S. Bolboaca
C. Bothorel
A. Bouteiller
I. Brandic
S. Branford
S.J. Branford
R. Braungarten
R. Briggs
J. Broeckhove
W. Bronsvoort
A. Bruce
C. Brugha
Y. Bu
K. Bubendorfer
I. Budinska
G. Buemi
B. Bui
H.J. Bungartz
A. Byrski
M. Cai
Y. Cai
Y.Q. Cai
Z.Y. Cai




B. Cantalupo
K. Cao
M. Cao
F. Capkovic
A. Cepulkauskas
K. Cetnarowicz
Y. Chai
P. Chan
G.-L. Chang
S.C. Chang
W.A. Chaovalitwongse
P.K. Chattaraj
C.-K. Chen
E. Chen
G.Q. Chen
G.X. Chen
J. Chen
J. Chen
J.J. Chen
K. Chen
Q.S. Chen
W. Chen
Y. Chen
Y.Y. Chen
Z. Chen
G. Cheng
X.Z. Cheng
S. Chiu
K.E. Cho
Y.-Y. Cho
B. Choi
J.K. Choi
D. Choinski
D.P. Chong
B. Chopard
M. Chover
I. Chung
M. Ciglan
B. Cogan
G. Cong
J. Corander
J.C. Corchado
O. Corcho
J. Cornil
H. Cota de Freitas

E. Coutinho
J.J. Cuadrado-Gallego
Y.F. Cui
J.C. Cunha
V. Curcin
A. Curioni
R. da Rosa Righi
S. Dalai
M. Daneva
S. Date
P. Dazzi
S. de Marchi
V. Debelov
E. Deelman
J. Della Dora
Y. Demazeau
Y. Demchenko
H. Deng
X.T. Deng
Y. Deng
M. Mat Deris
F. Desprez
M. Dewar
T. Dhaene
Z.R. Di
G. di Biasi
A. Diaz Guilera
P. Didier
I.T. Dimov
L. Ding
G.D. Dobrowolski
T. Dokken
J.J. Dolado
W. Dong
Y.-L. Dong
J. Dongarra
F. Donno
C. Douglas
G.J. Garcke
R.P. Mundani
R. Drezewski
D. Du
B. Duan
J.F. Dufourd
H. Dun

C. Earley
P. Edmond
T. Eitrich
A. El Rhalibi
T. Ernst
V. Ervin
D. Estrin
L. Eyraud-Dubois
J. Falcou
H. Fang
Y. Fang
X. Fei
Y. Fei
R. Feng
M. Fernandez
K. Fisher
C. Fittschen
G. Fox
F. Freitas
T. Friesz
K. Fuerlinger
M. Fujimoto
T. Fujinami
W. Funika
T. Furumura
A. Galvez
L.J. Gao
X.S. Gao
J.E. Garcia
H.J. Gardner
M. Garre
G. Garsva
F. Gava
G. Geethakumari
M. Geimer
J. Geiser
J.-P. Gelas
A. Gerbessiotis
M. Gerndt
S. Gimelshein
S.G. Girdzijauskas
S. Girtelschmid
Z. Gj
C. Glasner
A. Goderis


D. Godoy
J. Golebiowski
S. Gopalakrishnan
Y. Gorbachev
A.M. Goscinski
M. Govindaraju
E. Grabska
G.A. Gravvanis
C.H. Grelck
D.J. Groen
L. Gross
P. Gruer
A. Grzech
J.F. Gu
Y. Guang Xue
T. Gubala
V. Guevara-Masis
C.H. Guo
X. Guo
Z.Q. Guo
L. Guohui
C. Gupta
I. Gutman
A. Haegee
K. Han
M. Hardt
A. Hasson
J. He
J. He
K. He
T. He
J. He
M.R. Head
P. Heinzlreiter
H. Chojnacki
J. Heo
S. Hirokawa
G. Hliniak
L. Hluchy
T.B. Ho
A. Hoekstra
W. Homann
A. Hoheisel
J. Hong
Z. Hong

D. Horvath
F. Hu
L. Hu
X. Hu
X.H. Hu
Z. Hu
K. Hua
H.W. Huang
K.-Y. Huang
L. Huang
L. Huang
M.S. Huang
S. Huang
T. Huang
W. Huang
Y. Huang
Z. Huang
Z. Huang
B. Huber
E. Hubo
J. Hulliger
M. Hultell
M. Humphrey
P. Hurtado
J. Huysmans
T. Ida
A. Iglesias
K. Iqbal
D. Ireland
N. Ishizawa
I. Lukovits
R. Jamieson
J.K. Jan
P. Janderka
M. Jankowski
L. Jantschi
S.J.K. Jensen
N.J. Jeon
T.H. Jeon
T. Jeong
H. Ji
X. Ji
D.Y. Jia
C. Jiang
H. Jiang


M.J. Jiang
P. Jiang
W. Jiang
Y. Jiang
H. Jin
J. Jin
L. Jingling
G.-S. Jo
D. Johnson
J. Johnstone
J.J. Jung
K. Juszczyszyn
J.A. Kaandorp
M. Kabelac
B. Kadlec
R. Kakkar
C. Kameyama
B.D. Kandhai
S. Kandl
K. Kang
S. Kato
S. Kawata
T. Kegl
W.A. Kelly
J. Kennedy
G. Khan
J.B. Kido
C.H. Kim
D.S. Kim
D.W. Kim
H. Kim
J.G. Kim
J.H. Kim
M. Kim
T.H. Kim
T.W. Kim
P. Kiprof
R. Kirner
M. Kisiel-Dorohinicki
J. Kitowski
C.R. Kleijn
M. Kluge
A. Knüpfer
I.S. Ko
Y. Ko



R. Kobler
B. Koblitz
G.A. Kochenberger
M. Koda
T. Koeckerbauer
M. Koehler
I. Kolingerova
V. Korkhov
T. Korkmaz
L. Kotulski
G. Kou
J. Kozlak
M. Krafczyk
D. Kranzlmüller
B. Kryza
V.V. Krzhizhanovskaya
M. Kunze
D. Kurzyniec
E. Kusmierek
S. Kwang
Y. Kwok
F. Kyriakopoulos
H. Labiod
A. Lagana
H. Lai
S. Lai
Z. Lan
G. Le Mahec
B.G. Lee
C. Lee
H.K. Lee
J. Lee
J. Lee
J.H. Lee
S. Lee
S.Y. Lee
V. Lee
Y.H. Lee
L. Lefevre
L. Lei
F. Lelj
A. Lesar
D. Lesthaeghe
Z. Levnajic
A. Lewis

A. Li
D. Li
D. Li
E. Li
J. Li
J. Li
J.P. Li
M. Li
P. Li
X. Li
X.M. Li
X.S. Li
Y. Li
Y. Li
J. Liang
L. Liang
W.K. Liao
X.F. Liao
G.G. Lim
H.W. Lim
S. Lim
A. Lin
I.C. Lin
I-C. Lin
Y. Lin
Z. Lin
P. Lingras
C.Y. Liu
D. Liu
D.S. Liu
E.L. Liu
F. Liu
G. Liu
H.L. Liu
J. Liu
J.C. Liu
R. Liu
S.Y. Liu
W.B. Liu
X. Liu
Y. Liu
Y. Liu
Y. Liu
Y. Liu
Y.J. Liu

Y.Z. Liu
Z.J. Liu
S.-C. Lo
R. Loogen
B. Lopez
A. Lopez Garca de
F. Loulergue
G. Lu
J. Lu
J.H. Lu
M. Lu
P. Lu
S. Lu
X. Lu
Y.C. Lu
C. Lursinsap
L. Ma
M. Ma
T. Ma
A. Macedo
N. Maillard
M. Malawski
S. Maniccam
S.S. Manna
Z.M. Mao
M. Mascagni
E. Mathias
R.C. Maurya
V. Maxville
A.S. McGough
R. Mckay
T.-G. McKenzie
K. Meenal
R. Mehrotra
M. Meneghin
F. Meng
M.F.J. Meng
E. Merkevicius
M. Metzger
Z. Michalewicz
J. Michopoulos
J.-C. Mignot
R. Mikusauskas
H.Y. Ming


G. Miranda Valladares
M. Miura
G.P. Miscione
C. Miyaji
A. Miyoshi
J. Monterde
E.D. Moreno
G. Morra
J.T. Moscicki
H. Moshkovich
V.M. Moskaliova
G. Mounie
C. Mu
A. Muraru
H. Na
K. Nakajima
Y. Nakamori
S. Naqvi
S. Naqvi
R. Narayanan
A. Narjess
A. Nasri
P. Navaux
P.O.A. Navaux
M. Negoita
Z. Nemeth
L. Neumann
N.T. Nguyen
J. Ni
Q. Ni
K. Nie
G. Nikishkov
V. Nitica
W. Nocon
A. Noel
G. Norman
B. O
N. O'Boyle
J.T. Oden
Y. Ohsawa
H. Okuda
D.L. Olson
C.W. Oosterlee
V. Oravec
S. Orlando

F.R. Ornellas
A. Ortiz
S. Ouyang
T. Owens
S. Oyama
B. Ozisikyilmaz
A. Padmanabhan
Z. Pan
Y. Papegay
M. Paprzycki
M. Parashar
K. Park
M. Park
S. Park
S.K. Pati
M. Pauley
C.P. Pautasso
B. Payne
T.C. Peachey
S. Pelagatti
F.L. Peng
Q. Peng
Y. Peng
N. Petford
A.D. Pimentel
W.A.P. Pinheiro
J. Pisharath
G. Pitel
D. Plemenos
S. Pllana
S. Ploux
A. Podoleanu
M. Polak
D. Prabu
B.B. Prahalada Rao
V. Prasanna
P. Praxmarer
V.B. Priezzhev
T. Priol
T. Prokosch
G. Pucciani
D. Puja
P. Puschner
L. Qi
D. Qin

H. Qin
K. Qin
R.X. Qin
X. Qin
G. Qiu
X. Qiu
J.Q. Quinqueton
M.R. Radecki
S. Radhakrishnan
S. Radharkrishnan
M. Ram
S. Ramakrishnan
P.R. Ramasami
P. Ramsamy
K.R. Rao
N. Ratnakar
T. Recio
K. Regenauer-Lieb
R. Rejas
F.Y. Ren
A. Rendell
P. Rhodes
J. Ribelles
M. Riedel
R. Rioboo
Y. Robert
G.J. Rodgers
A.S. Rodionov
D. Rodrguez Garca
C. Rodriguez Leon
F. Rogier
G. Rojek
L.L. Rong
H. Ronghuai
H. Rosmanith
F.-X. Roux
R.K. Roy
U. R
M. Ruiz
T. Ruofeng
K. Rycerz
M. Ryoke
F. Safaei
T. Saito
V. Sakalauskas




L. Santillo
R. Santinelli
K. Sarac
H. Sarafian
M. Sarfraz
V.S. Savchenko
M. Sbert
R. Schaefer
D. Schmid
J. Schneider
M. Schoeberl
S.-B. Scholz
B. Schulze
S.R. Seelam
B. Seetharamanjaneyalu
J. Seo
K.D. Seo
Y. Seo
O.A. Serra
A. Sfarti
H. Shao
X.J. Shao
F.T. Sheldon
H.Z. Shen
S.L. Shen
Z.H. Sheng
H. Shi
Y. Shi
S. Shin
S.Y. Shin
B. Shirazi
D. Shires
E. Shook
Z.S. Shuai
M.A. Sicilia
M. Simeonidis
K. Singh
M. Siqueira
W. Sit
M. Skomorowski
A. Skowron
P.M.A. Sloot
M. Smolka
B.S. Sniezynski
H.Z. Sojka

A.E. Solomonides
C. Song
L.J. Song
S. Song
W. Song
J. Soto
A. Sourin
R. Srinivasan
V. Srovnal
V. Stankovski
P. Sterian
H. Stockinger
D. Stokic
A. Streit
B. Strug
P. Stuedi
A. St
S. Su
V. Subramanian
P. Suganthan
D.A. Sun
H. Sun
S. Sun
Y.H. Sun
Z.G. Sun
M. Suvakov
H. Suzuki
D. Szczerba
L. Szecsi
L. Szirmay-Kalos
R. Tadeusiewicz
B. Tadic
T. Takahashi
S. Takeda
J. Tan
H.J. Tang
J. Tang
S. Tang
T. Tang
X.J. Tang
J. Tao
M. Taufer
S.F. Tayyari
C. Tedeschi
J.C. Teixeira

F. Terpstra
C. Te-Yi
A.Y. Teymorian
D. Thalmann
A. Thandavan
L. Thompson
S. Thurner
F.Z. Tian
Y. Tian
Z. Tianshu
A. Tirado-Ramos
A. Tirumala
P. Tjeerd
W. Tong
A.S. Tosun
A. Tropsha
C. Troyer
K.C.K. Tsang
A.C. Tsipis
I. Tsutomu
A. Turan
P. Tvrdik
U. Ufuktepe
V. Uskov
B. Vaidya
E. Valakevicius
I.A. Valuev
S. Valverde
G.D. van Albada
R. van der Sman
F. van Lingen
A.J.C. Varandas
C. Varotsos
D. Vasyunin
R. Veloso
J. Vigo-Aguiar
J. Villà i Freixa
V. Vivacqua
E. Vumar
R. Walentkynski
D.W. Walker
B. Wang
C.L. Wang
D.F. Wang
D.H. Wang


F. Wang
F.L. Wang
H. Wang
H.G. Wang
H.W. Wang
J. Wang
J. Wang
J. Wang
J. Wang
J.H. Wang
K. Wang
L. Wang
M. Wang
M.Z. Wang
Q. Wang
Q.Q. Wang
S.P. Wang
T.K. Wang
W. Wang
W.D. Wang
X. Wang
X.J. Wang
Y. Wang
Y.Q. Wang
Z. Wang
Z.T. Wang
A. Wei
G.X. Wei
Y.-M. Wei
X. Weimin
D. Weiskopf
B. Wen
A.L. Wendelborn
I. Wenzel
A. Wibisono
A.P. Wierzbicki
R. Wismüller
F. Wolf
C. Wu
C. Wu
F. Wu
G. Wu
J.N. Wu
X. Wu
X.D. Wu

Y. Wu
Z. Wu
B. Wylie
M. Xavier Py
Y.M. Xi
H. Xia
H.X. Xia
Z.R. Xiao
C.F. Xie
J. Xie
Q.W. Xie
H. Xing
H.L. Xing
J. Xing
K. Xing
L. Xiong
M. Xiong
S. Xiong
Y.Q. Xiong
C. Xu
C.-H. Xu
J. Xu
M.W. Xu
Y. Xu
G. Xue
Y. Xue
Z. Xue
A. Yacizi
B. Yan
N. Yan
N. Yan
W. Yan
H. Yanami
C.T. Yang
F.P. Yang
J.M. Yang
K. Yang
L.T. Yang
L.T. Yang
P. Yang
X. Yang
Z. Yang
W. Yanwen
S. Yarasi
D.K.Y. Yau


P.-W. Yau
M.J. Ye
G. Yen
R. Yi
Z. Yi
J.G. Yim
L. Yin
W. Yin
Y. Ying
S. Yoo
T. Yoshino
W. Youmei
Y.K. Young-Kyu Han
J. Yu
J. Yu
L. Yu
Z. Yu
Z. Yu
W. Yu Lung
X.Y. Yuan
W. Yue
Z.Q. Yue
D. Yuen
T. Yuizono
J. Zambreno
P. Zarzycki
M.A. Zatevakhin
S. Zeng
A. Zhang
C. Zhang
D. Zhang
D.L. Zhang
D.Z. Zhang
G. Zhang
H. Zhang
H.R. Zhang
H.W. Zhang
J. Zhang
J.J. Zhang
L.L. Zhang
M. Zhang
N. Zhang
P. Zhang
P.Z. Zhang
Q. Zhang



S. Zhang
W. Zhang
W. Zhang
Y.G. Zhang
Y.X. Zhang
Z. Zhang
Z.W. Zhang
C. Zhao
H. Zhao
H.K. Zhao
H.P. Zhao
J. Zhao
M.H. Zhao
W. Zhao

Z. Zhao
L. Zhen
B. Zheng
G. Zheng
W. Zheng
Y. Zheng
W. Zhenghong
P. Zhigeng
W. Zhihai
Y. Zhixia
A. Zhmakin
C. Zhong
X. Zhong
K.J. Zhou

L.G. Zhou
X.J. Zhou
X.L. Zhou
Y.T. Zhou
H.H. Zhu
H.L. Zhu
L. Zhu
X.Z. Zhu
Z. Zhu
M. Zhu
J. Zivkovic
A. Zomaya
E.V. Zudilova-Seinstra

Workshop Organizers
Sixth International Workshop on Computer Graphics and Geometric Modeling
A. Iglesias, University of Cantabria, Spain
Fifth International Workshop on Computer Algebra Systems and Their Applications
A. Iglesias, University of Cantabria, Spain,
A. Galvez, University of Cantabria, Spain
PAPP 2007 - Practical Aspects of High-Level Parallel Programming
(4th International Workshop)
A. Benoit, ENS Lyon, France
F. Loulergue, LIFO, Orléans, France
International Workshop on Collective Intelligence for Semantic and
Knowledge Grid (CISKGrid 2007)
N.T. Nguyen, Wroclaw University of Technology, Poland
J.J. Jung, INRIA Rhône-Alpes, France
K. Juszczyszyn, Wroclaw University of Technology, Poland
Simulation of Multiphysics Multiscale Systems, 4th International Workshop
V.V. Krzhizhanovskaya, Section Computational Science, University of
Amsterdam, The Netherlands
A.G. Hoekstra, Section Computational Science, University of Amsterdam,
The Netherlands



S. Sun, Clemson University, USA

J. Geiser, Humboldt University of Berlin, Germany
2nd Workshop on Computational Chemistry and Its Applications
(2nd CCA)
P.R. Ramasami, University of Mauritius
Efficient Data Management for HPC Simulation Applications
R.-P. Mundani, Technische Universität München, Germany
J. Abawajy, Deakin University, Australia
M. Mat Deris, Tun Hussein Onn College University of Technology, Malaysia
Real Time Systems and Adaptive Applications (RTSAA-2007)
J. Hong, Soongsil University, South Korea
T. Kuo, National Taiwan University, Taiwan
The International Workshop on Teaching Computational Science
(WTCS 2007)
L. Qi, Department of Information and Technology, Central China Normal
University, China
W. Yanwen, Department of Information and Technology, Central China Normal
University, China
W. Zhenghong, East China Normal University, School of Information Science
and Technology, China
Y. Xue, IRSA, China
Risk Analysis
C.F. Huang, Beijing Normal University, China
Advanced Computational Approaches and IT Techniques in Bioinformatics
M.A. Pauley, University of Nebraska at Omaha, USA
H.A. Ali, University of Nebraska at Omaha, USA
Workshop on Computational Finance and Business Intelligence
Y. Shi, Chinese Academy of Sciences, China
S.Y. Wang, Academy of Mathematical and System Sciences, Chinese Academy
of Sciences, China
X.T. Deng, Department of Computer Science, City University of Hong Kong, China



Collaborative and Cooperative Environments

C. Anthes, Institute of Graphics and Parallel Processing, JKU, Austria
V.N. Alexandrov, ACET Centre, The University of Reading, UK
D. Kranzlmüller, Institute of Graphics and Parallel Processing, JKU, Austria
J. Volkert, Institute of Graphics and Parallel Processing, JKU, Austria
Tools for Program Development and Analysis in Computational Science
A. Knüpfer, ZIH, TU Dresden, Germany
A. Bode, TU Munich, Germany
D. Kranzlmüller, Institute of Graphics and Parallel Processing, JKU, Austria
J. Tao, CAPP, University of Karlsruhe, Germany
R. Wismüller, FB12, BSVS, University of Siegen, Germany
J. Volkert, Institute of Graphics and Parallel Processing, JKU, Austria
Workshop on Mining Text, Semi-structured, Web or Multimedia
Data (WMTSWMD 2007)
G. Kou, Thomson Corporation, R&D, USA
Y. Peng, Omnium Worldwide, Inc., USA
J.P. Li, Institute of Policy and Management, Chinese Academy of Sciences, China
2007 International Workshop on Graph Theory, Algorithms and Its
Applications in Computer Science (IWGA 2007)
M. Li, Dalian University of Technology, China
2nd International Workshop on Workflow Systems in e-Science
(WSES 2007)
Z. Zhao, University of Amsterdam, The Netherlands
A. Belloum, University of Amsterdam, The Netherlands
2nd International Workshop on Internet Computing in Science and
Engineering (ICSE 2007)
J. Ni, The University of Iowa, USA
Workshop on Evolutionary Algorithms and Evolvable Systems
(EAES 2007)
B. Zheng, College of Computer Science, South-Central University for
Nationalities, Wuhan, China
Y. Li, State Key Lab. of Software Engineering, Wuhan University, Wuhan, China
J. Wang, College of Computer Science, South-Central University for
Nationalities, Wuhan, China
L. Ding, State Key Lab. of Software Engineering, Wuhan University, Wuhan, China



Wireless and Mobile Systems 2007 (WMS 2007)

H. Choo, Sungkyunkwan University, South Korea
WAFTS: WAvelets, FracTals, Short-Range Phenomena - Computational
Aspects and Applications
C. Cattani, University of Salerno, Italy
C. Toma, Politehnica, Bucharest, Romania
Dynamic Data-Driven Application Systems - DDDAS 2007
F. Darema, National Science Foundation, USA
The Seventh International Workshop on Meta-synthesis and
Complex Systems (MCS 2007)
X.J. Tang, Academy of Mathematics and Systems Science, Chinese Academy of
Sciences, China
J.F. Gu, Institute of Systems Science, Chinese Academy of Sciences, China
Y. Nakamori, Japan Advanced Institute of Science and Technology, Japan
H.C. Wang, Shanghai Jiaotong University, China
The 1st International Workshop on Computational Methods in
Energy Economics
L. Yu, City University of Hong Kong, China
J. Li, Chinese Academy of Sciences, China
D. Qin, Guangdong Provincial Development and Reform Commission, China
High-Performance Data Mining
Y. Liu, Data Technology and Knowledge Economy Research Center, Chinese
Academy of Sciences, China
A. Choudhary, Electrical and Computer Engineering Department, Northwestern
University, USA
S. Chiu, Department of Computer Science, College of Engineering, Idaho State
University, USA
Computational Linguistics in Human-Computer Interaction
H. Ji, Sungkyunkwan University, South Korea
Y. Seo, Chungbuk National University, South Korea
H. Choo, Sungkyunkwan University, South Korea
Intelligent Agents in Computing Systems
K. Cetnarowicz, Department of Computer Science, AGH University of Science
and Technology, Poland
R. Schaefer, Department of Computer Science, AGH University of Science and
Technology, Poland



Networks: Theory and Applications

B. Tadic, Jozef Stefan Institute, Ljubljana, Slovenia
S. Thurner, COSY, Medical University Vienna, Austria
Workshop on Computational Science in Software Engineering
D. Rodríguez, University of Alcalá, Spain
J.J. Cuadrado-Gallego, University of Alcalá, Spain
International Workshop on Advances in Computational
Geomechanics and Geophysics (IACGG 2007)
H.L. Xing, The University of Queensland and ACcESS Major National Research
Facility, Australia
J.H. Wang, Shanghai Jiao Tong University, China
2nd International Workshop on Evolution Toward Next-Generation
Internet (ENGI)
Y. Cui, Tsinghua University, China
Parallel Monte Carlo Algorithms for Diverse Applications in a
Distributed Setting
V.N. Alexandrov, ACET Centre, The University of Reading, UK
The 2007 Workshop on Scientific Computing in Electronics
Engineering (WSCEE 2007)
Y. Li, National Chiao Tung University, Taiwan
High-Performance Networked Media and Services 2007 (HiNMS 2007)
I.S. Ko, Dongguk University, South Korea
Y.J. Na, Honam University, South Korea

Table of Contents – Part II

Resolving Occlusion Method of Virtual Object in Simulation Using

Snake and Picking Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
JeongHee Cha, GyeYoung Kim, and HyungIl Choi

Graphics Hardware-Based Level-Set Method for Interactive

Segmentation and Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Helen Hong and Seongjin Park

Parameterization of Quadrilateral Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . .

Li Liu, CaiMing Zhang, and Frank Cheng


Pose Insensitive 3D Retrieval by Poisson Shape Histogram . . . . . . . . . . . .

Pan Xiang, Chen Qi Hua, Fang Xin Gang, and Zheng Bo Chuan


Point-Sampled Surface Simulation Based on Mass-Spring System . . . . . . .

Zhixun Su, Xiaojie Zhou, Xiuping Liu, Fengshan Liu, and Xiquan Shi


Sweeping Surface Generated by a Class of Generalized Quasi-cubic

Interpolation Spline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Benyue Su and Jieqing Tan


An Artificial Immune System Approach for B-Spline Surface

Approximation Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Erkan Ülker and Veysi İşler


Implicit Surface Reconstruction from Scattered Point Data with

Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Jun Yang, Zhengning Wang, Changqian Zhu, and Qiang Peng


The Shannon Entropy-Based Node Placement for Enrichment and

Simplification of Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Vladimir Savchenko, Maria Savchenko, Olga Egorova, and
Ichiro Hagiwara


Parameterization of 3D Surface Patches by Straightest Distances . . . . . . .

Sungyeol Lee and Haeyoung Lee


Facial Expression Recognition Based on Emotion Dimensions on

Manifold Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Young-suk Shin


AI Framework for Decision Modeling in Behavioral Animation of

Virtual Avatars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A. Iglesias and F. Luengo




Studies on Shape Feature Combination and Efficient Categorization of

3D Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Tianyang Lv, Guobao Liu, Jiming Pang, and Zhengxuan Wang


A Generalised-Mutual-Information-Based Oracle for Hierarchical

Radiosity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Jaume Rigau, Miquel Feixas, and Mateu Sbert


Rendering Technique for Colored Paper Mosaic . . . . . . . . . . . . . . . . . . . . . .

Youngsup Park, Sanghyun Seo, YongJae Gi, Hanna Song, and
Kyunghyun Yoon


Real-Time Simulation of Surface Gravity Ocean Waves Based on the

TMA Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Namkyung Lee, Nakhoon Baek, and Kwan Woo Ryu


Determining Knots with Quadratic Polynomial Precision . . . . . . . . . . . . . .

Zhang Caiming, Ji Xiuhua, and Liu Hui


Interactive Cartoon Rendering and Sketching of Clouds and Smoke . . . . .

Eduardo J. Alvarez, Celso Campos, Silvana G. Meire,
Ricardo Quirós, Joaquin Huerta, and Michael Gould


Spherical Binary Images Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Liu Wei and He Yuanjun


Dynamic Data Path Prediction in Network Virtual Environment . . . . . . .

Sun-Hee Song, Seung-Moon Jeong, Gi-Taek Hur, and Sang-Dong Ra


Modeling Inlay/Onlay Prostheses with Mesh Deformation Techniques . . .

Kwan-Hee Yoo, Jong-Sung Ha, and Jae-Soo Yoo


Automatic Generation of Virtual Computer Rooms on the Internet

Using X3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Aybars Ugur and Tahir Emre Kalaycı


Stained Glass Rendering with Smooth Tile Boundary . . . . . . . . . . . . . . . . .

SangHyun Seo, HoChang Lee, HyunChul Nah, and KyungHyun Yoon


Guaranteed Adaptive Antialiasing Using Interval Arithmetic . . . . . . . . . .

Jorge Flórez, Mateu Sbert, Miguel A. Sainz, and Josep Vehí


Restricted Non-cooperative Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Seth J. Chandler


A New Application of CAS to LaTeX Plottings . . . . . . . . . . . . . . . . . . . . . . .

Masayoshi Sekiguchi, Masataka Kaneko, Yuuki Tadokoro,
Satoshi Yamashita, and Setsuo Takato




JMathNorm: A Database Normalization Tool Using Mathematica . . . . . .

Ali Yazici and Ziya Karakaya


Symbolic Manipulation of Bspline Basis Functions with Mathematica . . .

A. Iglesias, R. Ipanaque, and R.T. Urbina


Rotating Capacitor and a Transient Electric Network . . . . . . . . . . . . . . . . .

Haiduke Sarafian and Nenette Sarafian


Numerical-Symbolic Matlab Program for the Analysis of

Three-Dimensional Chaotic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Akemi Gálvez


Safety of Recreational Water Slides: Numerical Estimation of the

Trajectory, Velocities and Accelerations of Motion of the Users . . . . . . . .
Piotr Szczepaniak and Ryszard Walentyński


Computing Locus Equations for Standard Dynamic Geometry

Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Francisco Botana, Miguel A. Abánades, and Jesús Escribano


Symbolic Computation of Petri Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Andres Iglesias and Sinan Kapcak


Dynaput: Dynamic Input Manipulations for 2D Structures of

Mathematical Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Deguchi Hiroaki


On the Virtues of Generic Programming for Symbolic Computation . . . .

Xin Li, Marc Moreno Maza, and Éric Schost



Semi-analytical Approach for Analyzing Vibro-Impact Systems . . . . . . . .

Algimantas Čepulkauskas, Regina Kulvietienė,
Genadijus Kulvietis, and Jūratė Mikučionienė


Formal Verification of Analog and Mixed Signal Designs in

Mathematica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Mohamed H. Zaki, Ghiath Al-Sammane, and Sofiène Tahar
Efficient Computations of Irredundant Triangular Decompositions with
the RegularChains Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Changbo Chen, Francois Lemaire, Marc Moreno Maza,
Wei Pan, and Yuzhen Xie
Characterisation of the Surfactant Shell Stabilising Calcium Carbonate
Dispersions in Overbased Detergent Additives: Molecular Modelling
and Spin-Probe-ESR Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Francesco Frigerio and Luciano Montanari






Hydrogen Adsorption and Penetration of Cx (x=58-62) Fullerenes with

Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Xin Yue, Jijun Zhao, and Jieshan Qiu


Ab Initio and DFT Investigations of the Mechanistic Pathway of

Singlet Bromocarbenes Insertion into C-H Bonds of Methane and
Ethane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
M. Ramalingam, K. Ramasami, P. Venuvanalingam, and
J. Swaminathan


Theoretical Gas Phase Study of the Gauche and Trans Conformers of

1-Bromo-2-Chloroethane and Solvent Effects . . . . . . . . . . . . . . . . . . . . . . . .
Ponnadurai Ramasami


Dynamics Simulation of Conducting Polymer Interchain Interaction

Effects on Polaron Transition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Jose Rildo de Oliveira Queiroz and Geraldo Magela e Silva


Cerium (III) Complexes Modeling with Sparkle/PM3 . . . . . . . . . . . . . . . . .

Alfredo Mayall Simas, Ricardo Oliveira Freire, and
Gerd Bruno Rocha


The Design of Blue Emitting Materials Based on Spirosilabifluorene

Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Miao Sun, Ben Niu, and Jingping Zhang


Regulative Effect of Water Molecules on the Switches of

Guanine-Cytosine (GC) Watson-Crick Pair . . . . . . . . . . . . . . . . . . . . . . . . . .
Hongqi Ai, Xian Peng, Yun Li, and Chong Zhang


Energy Partitioning Analysis of the Chemical Bonds in mer-Mq3

(M = AlIII, GaIII, InIII, TlIII) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Ruihai Cui and Jingping Zhang


Ab Initio Quantum Chemical Studies of Six-Center Bond Exchange

Reactions Among Halogen and Halogen Halide Molecules . . . . . . . . . . . . .
I. Noorbatcha, B. Arifin, and S.M. Zain


Comparative Analysis of the Interaction Networks of HIV-1 and Human

Proteins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Kyungsook Han and Byungkyu Park


Protein Classification from Protein-Domain and Gene-Ontology

Annotation Information Using Formal Concept Analysis . . . . . . . . . . . . . .
Mi-Ryung Han, Hee-Joon Chung, Jihun Kim,
Dong-Young Noh, and Ju Han Kim
A Supervised Classifier Based on Artificial Immune System . . . . . . . . . . . .
Lingxi Peng, Yinqiao Peng, Xiaojie Liu, Caiming Liu, Jinquan Zeng,
Feixian Sun, and Zhengtian Lu




Ab-origin: An Improved Tool of Heavy Chain Rearrangement Analysis

for Human Immunoglobulin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Xiaojing Wang, Wu Wei, SiYuan Zheng, Z.W. Cao, and Yixue Li
Analytically Tuned Simulated Annealing Applied to the Protein
Folding Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Juan Frausto-Solis, E.F. Román, David Romero,
Xavier Soberón, and Ernesto Liñán
Training the Hidden Vector State Model from Un-annotated Corpus . . . .
Deyu Zhou, Yulan He, and Chee Keong Kwoh
Using Computer Simulation to Understand Mutation Accumulation
Dynamics and Genetic Load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
John Sanford, John Baumgardner, Wes Brewer, Paul Gibson, and
Walter ReMine






An Object Model Based Repository for Biological Pathways Using

XML Database Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Keyuan Jiang


Protein Folding Simulation with New Move Set in 3D Lattice Model . . . .

X.-M. Li


A Dynamic Committee Scheme on Multiple-Criteria Linear

Programming Classification Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Meihong Zhu, Yong Shi, Aihua Li, and Jing He


Kimberlites Identification by Classification Methods . . . . . . . . . . . . . . . . . .

Yaohui Chai, Aihua Li, Yong Shi, Jing He, and Keliang Zhang


A Fast Method for Pricing Early-Exercise Options with the FFT . . . . . . .

R. Lord, F. Fang, F. Bervoets, and C.W. Oosterlee


Neural-Network-Based Fuzzy Group Forecasting with Application to

Foreign Exchange Rates Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Lean Yu, Kin Keung Lai, and Shouyang Wang


Credit Risk Evaluation Using Support Vector Machine with Mixture of

Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Liwei Wei, Jianping Li, and Zhenyu Chen


Neuro-discriminate Model for the Forecasting of Changes of Companies

Financial Standings on the Basis of Self-organizing Maps . . . . . . . . . . . . . .
Egidijus Merkevicius, Gintautas Garsva, and Rimvydas Simutis


A New Computing Method for Greeks Using Stochastic Sensitivity

Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Masato Koda




Application of Neural Networks for Foreign Exchange Rates Forecasting

with Noise Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Wei Huang, Kin Keung Lai, and Shouyang Wang


An Experiment with Fuzzy Sets in Data Mining . . . . . . . . . . . . . . . . . . . . .

David L. Olson, Helen Moshkovich, and Alexander Mechitov


An Application of Component-Wise Iterative Optimization to

Feed-Forward Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Yachen Lin


ERM-POT Method for Quantifying Operational Risk for Chinese

Commercial Banks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Fanjun Meng, Jianping Li, and Lijun Gao


Building Behavior Scoring Model Using Genetic Algorithm and Support

Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Defu Zhang, Qingshan Chen, and Lijun Wei


An Intelligent CRM System for Identifying High-Risk Customers:

An Ensemble Data Mining Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Kin Keung Lai, Lean Yu, Shouyang Wang, and Wei Huang


The Characteristic Analysis of Web User Clusters Based on Frequent

Browsing Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Zhiwang Zhang and Yong Shi


A Two-Phase Model Based on SVM and Conjoint Analysis for Credit

Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Kin Keung Lai, Ligang Zhou, and Lean Yu


A New Multi-Criteria Quadratic-Programming Linear Classification

Model for VIP E-Mail Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Peng Zhang, Juliang Zhang, and Yong Shi


Efficient Implementation of an Optimal Interpolator for Large Spatial

Data Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Nargess Memarsadeghi and David M. Mount


Development of an Efficient Conversion System for GML Documents . . .

Dong-Suk Hong, Hong-Koo Kang, Dong-Oh Kim, and Ki-Joon Han


Effective Spatial Characterization System Using Density-Based

Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Chan-Min Ahn, Jae-Hyun You, Ju-Hong Lee, and Deok-Hwan Kim


MTF Measurement Based on Interactive Live-Wire Edge Extraction . . . .

Peng Liu, Dingsheng Liu, and Fang Huang



Research on Technologies of Spatial Configuration Information

Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Haibin Sun
Modelbase System in Remote Sensing Information Analysis and Service
Grid Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Yong Xue, Lei Zheng, Ying Luo, Jianping Guo, Wei Wan,
Wei Wei, and Ying Wang




Density Based Fuzzy Membership Functions in the Context of

Geocomputation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Victor Lobo, Fernando Bação, and Miguel Loureiro


A New Method to Model Neighborhood Interaction in Cellular

Automata-Based Urban Geosimulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Yaolong Zhao and Yuji Murayama


Artificial Neural Networks Application to Calculate Parameter Values

in the Magnetotelluric Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Andrzej Bielecki, Tomasz Danek, Janusz Jagodziński, and
Marek Wojdyła
Integrating Ajax into GIS Web Services for Performance
Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Seung-Jun Cha, Yun-Young Hwang, Yoon-Seop Chang,
Kyung-Ok Kim, and Kyu-Chul Lee
Aerosol Optical Thickness Retrieval over Land from MODIS Data on
Remote Sensing Information Service Grid Node . . . . . . . . . . . . . . . . . . . . . .
Jianping Guo, Yong Xue, Ying Wang, Yincui Hu, Jianqin Wang,
Ying Luo, Shaobo Zhong, Wei Wan, Lei Zheng, and Guoyin Cai




Universal Execution of Parallel Processes: Penetrating NATs over the

Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Insoon Jo, Hyuck Han, Heon Y. Yeom, and Ohkyoung Kwon


Parallelization of C# Programs Through Annotations . . . . . . . . . . . . . . . .

Cristian Dittamo, Antonio Cisternino, and Marco Danelutto


Fine Grain Distributed Implementation of a Dataflow Language with

Provable Performances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Thierry Gautier, Jean-Louis Roch, and Frederic Wagner


Efficient Parallel Tree Reductions on Distributed Memory

Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Kazuhiko Kakehi, Kiminori Matsuzaki, and Kento Emoto


Efficient Implementation of Tree Accumulations on Distributed-Memory

Parallel Computers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Kiminori Matsuzaki




SymGrid-Par: Designing a Framework for Executing Computational

Algebra Systems on Computational Grids . . . . . . . . . . . . . . . . . . . . . . . . . . .
Abdallah Al Zain, Kevin Hammond, Phil Trinder, Steve Linton,
Hans-Wolfgang Loidl, and Marco Costanti


Directed Network Representation of Discrete Dynamical Maps . . . . . . . . .

Fragiskos Kyriakopoulos and Stefan Thurner


Dynamical Patterns in Scalefree Trees of Coupled 2D Chaotic Maps . . . .

Zoran Levnajic and Bosiljka Tadic


Simulation of the Electron Tunneling Paths in Networks of

Nano-particle Films . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Milovan Šuvakov and Bosiljka Tadić


Classification of Networks Using Network Functions . . . . . . . . . . . . . . . . . .

Makoto Uchida and Susumu Shirayama


Effective Algorithm for Detecting Community Structure in Complex

Networks Based on GA and Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Xin Liu, Deyi Li, Shuliang Wang, and Zhiwei Tao


Mixed Key Management Using Hamming Distance for Mobile Ad-Hoc

Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Seok-Lae Lee, In-Kyung Jeun, and Joo-Seok Song


An Integrated Approach for QoS-Aware Multicast Tree Maintenance . . .

Wu-Hong Tsai and Yuan-Sun Chu


A Categorial Context with Default Reasoning Approach to

Heterogeneous Ontology Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Ruliang Xiao and Shengqun Tang


An Interval Lattice Model for Grid Resource Searching . . . . . . . . . . . . . . .

Wen Zhou, Zongtian Liu, and Yan Zhao


Topic Maps Matching Computation Based on Composite Matchers . . . . .

Jungmin Kim and Hyunsook Chung


Social Mediation for Collective Intelligence in a Large Multi-agent

Communities: A Case Study of AnnotGrid . . . . . . . . . . . . . . . . . . . . . . . . . .
Jason J. Jung and Geun-Sik Jo


Metadata Management in S-OGSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Oscar Corcho, Pinar Alper, Paolo Missier, Sean Bechhofer,
Carole Goble, and Wei Xing
Access Control Model Based on RDB Security Policy for OWL
Ontology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Dongwon Jeong, Yixin Jing, and Doo-Kwon Baik




Semantic Fusion for Query Processing in Grid Environment . . . . . . . . . . .

Jinguang Gu
SOF: A Slight Ontology Framework Based on Meta-modeling for
Change Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Li Na Fang, Sheng Qun Tang, Ru Liang Xiao, Ling Li, You Wei Xu,
Yang Xu, Xin Guo Deng, and Wei Qing Chen
Data Forest: A Collaborative Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Ronan Jamieson, Adrian Haffegee, Priscilla Ramsamy, and
Vassil Alexandrov
NetODrom – An Example for the Development of Networked
Immersive VR Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Christoph Anthes, Alexander Wilhelm, Roland Landertshamer,
Helmut Bressler, and Jens Volkert






Intelligent Assembly/Disassembly System with a Haptic Device for

Aircraft Parts Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Christiand and Jungwon Yoon


Generic Control Interface for Networked Haptic Virtual

Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Priscilla Ramsamy, Adrian Haffegee, and Vassil Alexandrov


Physically-Based Interaction for Networked Virtual Environments . . . . . .

Christoph Anthes, Roland Landertshamer, and Jens Volkert


Middleware in Modern High Performance Computing System

Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Christian Engelmann, Hong Ong, and Stephen L. Scott


Usability Evaluation in Task Orientated Collaborative Environments . . .

Florian Urmetzer and Vassil Alexandrov


Developing Motivating Collaborative Learning Through Participatory

Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Gustavo Zurita, Nelson Baloian, Felipe Baytelman, and
Antonio Farias


A Novel Secure Interoperation System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Li Jin and Zhengding Lu


Scalability Analysis of the SPEC OpenMP Benchmarks on Large-Scale

Shared Memory Multiprocessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Karl Fürlinger, Michael Gerndt, and Jack Dongarra


Analysis of Linux Scheduling with VAMPIR . . . . . . . . . . . . . . . . . . . . . . . . .

Michael Kluge and Wolfgang E. Nagel




An Interactive Graphical Environment for Code Optimization . . . . . . . . .

Jie Tao, Thomas Dressler, and Wolfgang Karl


Memory Allocation Tracing with VampirTrace . . . . . . . . . . . . . . . . . . . . . . .

Matthias Jurenz, Ronny Brendel, Andreas Knüpfer,
Matthias Müller, and Wolfgang E. Nagel


Automatic Memory Access Analysis with Periscope . . . . . . . . . . . . . . . . . .

Michael Gerndt and Edmond Kereku


A Regressive Problem Solver That Uses Knowledgelet . . . . . . . . . . . . . . . .

Kuodi Jian


Resource Management in a Multi-agent System by Means of

Reinforcement Learning and Supervised Rule Learning . . . . . . . . . . . . . . .
Bartłomiej Śnieżyński


Learning in Cooperating Agents Environment as a Method of Solving

Transport Problems and Limiting the Effects of Crisis Situations . . . . . . .
Jaroslaw Kozlak


Distributed Adaptive Design with Hierarchical Autonomous Graph

Transformation Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Leszek Kotulski and Barbara Strug


Integration of Biological, Psychological, and Social Aspects in

Agent-Based Simulation of a Violent Psychopath . . . . . . . . . . . . . . . . . . . . .
Tibor Bosse, Charlotte Gerritsen, and Jan Treur


A Rich Servants Service Model for Pervasive Computing . . . . . . . . . . . . . .

Huai-dong Shi, Ming Cai, Jin-xiang Dong, and Peng Liu


Techniques for Maintaining Population Diversity in Classical and

Agent-Based Multi-objective Evolutionary Algorithms . . . . . . . . . . . . . . . .
Rafal Drezewski and Leszek Siwik


Agents Based Hierarchical Parallelization of Complex Algorithms on

the Example of hp Finite Element Method . . . . . . . . . . . . . . . . . . . . . . . . . .
M. Paszyński


Sexual Selection Mechanism for Agent-Based Evolutionary

Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Rafal Drezewski and Krzysztof Cetnarowicz


Agent-Based Evolutionary and Immunological Optimization . . . . . . . . . . .

Aleksander Byrski and Marek Kisiel-Dorohinicki


Strategy Description for Mobile Embedded Control Systems Exploiting

the Multi-agent Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Vilem Srovnal, Bohumil Horák, Václav Snášel,
Jan Martinovič, Pavel Krömer, and Jan Platoš




Agent-Based Modeling of Supply Chains in Critical Situations . . . . . . . . .

Jaroslaw Kozlak, Grzegorz Dobrowolski, and Edward Nawarecki


Web-Based Integrated Service Discovery Using Agent Platform for

Pervasive Computing Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Kyu Min Lee, Dong-Uk Kim, Kee-Hyun Choi, and Dong-Ryeol Shin


A Novel Modeling Method for Cooperative Multi-robot Systems Using

Fuzzy Timed Agent Based Petri Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Hua Xu and Peifa Jia


Performance Evaluation of Fuzzy Ant Based Routing Method for

Connectionless Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Seyed Javad Mirabedini and Mohammad Teshnehlab


Service Agent-Based Resource Management Using Virtualization for

Computational Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Sung Ho Jang and Jong Sik Lee


Fuzzy-Aided Syntactic Scene Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Marzena Bielecka and Marek Skomorowski


Agent Based Load Balancing Middleware for Service-Oriented

Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Jun Wang, Yi Ren, Di Zheng, and Quan-Yuan Wu


A Transformer Condition Assessment System Based on Data Warehouse

and Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Xueyu Li, Lizeng Wu, Jinsha Yuan, and Yinghui Kong


Shannon Wavelet Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Carlo Cattani


Wavelet Analysis of Bifurcation in a Competition Model . . . . . . . . . . . . . .

Carlo Cattani and Ivana Bochicchio


Evolution of a Spherical Universe in a Short Range Collapse/Generation

Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Ivana Bochicchio and Ettore Laserra


On the Differentiable Structure of Meyer Wavelets . . . . . . . . . . . . . . . . . . . 1004

Carlo Cattani and Luis M. Sánchez Ruiz
Towards Describing Multi-fractality of Traffic Using Local Hurst
Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
Ming Li, S.C. Lim, Bai-Jiong Hu, and Huamin Feng
A Further Characterization on the Sampling Theorem for Wavelet
Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1021
Xiuzhen Li and Deyun Yang



Characterization on Irregular Tight Wavelet Frames with Matrix

Dilations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029
Deyun Yang, Zhengliang Huan, Zhanjie Song, and Hongxiang Yang
Feature Extraction of Seal Imprint Based on the Double-Density
Dual-Tree DWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
Li Runwu, Fang Zhijun, Wang Shengqian, and Yang Shouyuan
Vanishing Waves on Semi-closed Space Intervals and Applications in
Mathematical Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
Ghiocel Toma
Modelling Short Range Alternating Transitions by Alternating
Practical Test Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
Stefan Pusca
Different Structural Patterns Created by Short Range Variations of
Internal Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1060
Flavia Doboga
Dynamic Error of Heat Measurement in Transient . . . . . . . . . . . . . . . . . . . . 1067
Fang Lide, Li Jinhai, Cao Suosheng, Zhu Yan, and Kong Xiangjie
Truncation Error Estimate on Random Signals by Local Average . . . . . . . 1075
Gaiyun He, Zhanjie Song, Deyun Yang, and Jianhua Zhu
A Numerical Solutions Based on the Quasi-wavelet Analysis . . . . . . . . . . . 1083
Z.H. Huang, L. Xia, and X.P. He
Plant Simulation Based on Fusion of L-System and IFS . . . . . . . . . . . . . . . 1091
Jinshu Han
A System Behavior Analysis Technique with Visualization of a
Customer's Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1099
Shoichi Morimoto
Research on Dynamic Updating of Grid Service . . . . . . . . . . . . . . . . . . . . . . 1107
Jiankun Wu, Linpeng Huang, and Dejun Wang
Software Product Line Oriented Feature Map . . . . . . . . . . . . . . . . . . . . . . . . 1115
Yiyuan Li, Jianwei Yin, Dongcai Shi, Ying Li, and Jinxiang Dong
Design and Development of Software Configuration Management Tool
to Support Process Performance Monitoring and Analysis . . . . . . . . . . . . . 1123
Alan Cline, Eun-Pyo Lee, and Byong-Gul Lee
Data Dependency Based Recovery Approaches in Survival Database
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1131
Jiping Zheng, Xiaolin Qin, and Jin Sun



Usage-Centered Interface Design for Quality Improvement . . . . . . . . . . . . . 1139

Chang-Mog Lee, Ok-Bae Chang, and Samuel Sangkon Lee
Description Logic Representation for Requirement Specification . . . . . . . . 1147
Yingzhou Zhang and Weifeng Zhang
Ontologies and Software Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155
Waralak V. Siricharoen
Epistemological and Ontological Representation in Software
Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1162
J. Cuadrado-Gallego, D. Rodríguez, M. Garre, and R. Rejas
Exploiting Morpho-syntactic Features for Verb Sense Distinction in
KorLex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1170
Eunryoung Lee, Ae-sun Yoon, and Hyuk-Chul Kwon
Chinese Ancient-Modern Sentence Alignment . . . . . . . . . . . . . . . . . . . . . . . . 1178
Zhun Lin and Xiaojie Wang
A Language Modeling Approach to Sentiment Analysis . . . . . . . . . . . . . . . 1186
Yi Hu, Ruzhan Lu, Xuening Li, Yuquan Chen, and Jianyong Duan
Processing the Mixed Properties of Light Verb Constructions . . . . . . . . . . 1194
Jong-Bok Kim and Kyung-Sup Lim
Concept-Based Question Analysis for an Efficient Document Ranking . . . 1202
Seung-Eun Shin, Young-Min Ahn, and Young-Hoon Seo
Learning Classifier System Approach to Natural Language Grammar
Induction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
Olgierd Unold
Text Retrieval Oriented Auto-construction of Conceptual
Relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
Yi Hu, Ruzhan Lu, Yuquan Chen, and Bingzhen Pei
Filtering Methods for Feature Selection in Web-Document Clustering . . . 1218
Heum Park and Hyuk-Chul Kwon
A Korean Part-of-Speech Tagging System Using Resolution Rules for
Individual Ambiguous Word . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
Young-Min Ahn, Seung-Eun Shin, Hee-Geun Park,
Hyungsuk Ji, and Young-Hoon Seo
An Interactive User Interface for Text Display . . . . . . . . . . . . . . . . . . . . . . . 1226
Hyungsuk Ji and Hyunseung Choo
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231

Resolving Occlusion Method of Virtual Object in

Simulation Using Snake and Picking Algorithm
JeongHee Cha, GyeYoung Kim, and HyungIl Choi
Information and Media Institute, School of Computing, School of Media,
Soongsil University, Sangdo 5 Dong, DongJak Gu, Seoul, Korea, {gykim1,hic}

Abstract. For realistic simulation, it is essential to register the two worlds,

calculate the occlusion realm between the real world and the virtual object, and
determine the location of the virtual object based on the calculation. However,
if the constructed map is not accurate or the density is not sufficient to estimate
the occlusion boundary, it is very difficult to determine the occlusion realm. In
order to solve this problem, this paper proposes a new method for calculating
the occlusion realm using the snake and picking algorithm. First, the wireframe
generated from the DEM was registered with the CCD image using visual clues to
acquire 3D information in the experimental realm, and the 3D information was
then calculated at the reference point where the occlusion problem occurs for a
moving target. The validity of the proposed approach in an environment where
partial occlusion occurs has been demonstrated by experiment.
Keywords: Occlusion, Snake, Picking, DEM, Augmented Reality, Simulation.

1 Introduction
Augmented reality is an area of technology that has originated in virtual reality. While
virtual reality offers a virtual world in which users are completely immersed,
augmented reality offers virtual objects on the basis of real world images. At present,
augmented reality technology is being researched and applied to various areas
including the military, medicine, education, construction, game, and broadcasting.
This paper studies the development of a realistic simulated-training model through
the display of virtual targets in the input images of a CCD camera mounted on a
tank, and the determination of the occlusion areas generated as virtual objects are
created and moved along a path according to a scenario. Augmented
reality has three general characteristics: image registration, interaction, and real
time [1]. Image registration refers to matching the locations of the real-world
objects that users watch with the related virtual objects; real time refers to
performing image registration and interaction in real time. Interaction implies that the combination of virtual
objects and the objects in real images must be harmonized with surrounding
environment in a realistic manner, and refers to the determination of occlusion areas
according to the changed location or line of sight of the observer or the re-rendering
of virtual objects after detection of collisions. However, to solve the problems of
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 1–8, 2007.
Springer-Verlag Berlin Heidelberg 2007


occlusion such as the hiding of farther virtual objects by closer objects and the
covering of objects in real images by other objects, the two worlds must be accurately
coordinated and then the depth of the actual scene must be compared with the depth
of virtual objects[2][3]. But if the accuracy or density of the created map is
insufficient to estimate the boundary of occlusion area, it is difficult to determine the
occlusion area. To solve this problem, first, we created a 3D wireframe using the
DEM of the experiment area and then registered it with CCD camera images and
visual clues. Second, to solve the occlusion problem by accurately estimating the
boundary regardless of the density of the map, this paper proposes a method that
obtains the reference 3D information of the occlusion points using the Snake
algorithm and the Picking algorithm and then infers the 3D information of the other
boundary points using the proportional relations between the 2D image and the 3D
DEM. Third, to improve processing speed, we suggest comparing the MER (Minimum
Enclosing Rectangle) of each object within the camera's angle of vision with the
MER of the virtual target. Fig. 1 shows the proposed system framework.

Fig. 1. Proposed System Framework

2 Methodology
2.1 Formation of Wireframe Using DEM and Registration with Real Images
Using Visual Clues
The topographical information DEM (Digital Elevation Model) is used to map
real-world coordinates to each point of the 2D CCD image. The DEM holds the
latitude and longitude coordinates, expressed as X and Y, and heights at fixed
intervals. The DEM used for this experiment is a grid-type DEM produced with
height information at 1 m intervals over the limited experiment area of 300 m x
300 m. The DEM data are read to create a mesh with the vertices of each rectangle,
forming a wireframe with 3D depth information as shown in Fig. 2 [4][5]. This is
overlaid on the sensor image to check the registration, and visual clues are used
to move the image up, down, left, or right as shown in Fig. 3, thus reducing error.
Based on this initial registered location, the location changes caused by vehicle
movement were frequently updated using GPS (Global Positioning System) and INS
(Inertial Navigation System).

Fig. 2. Wireframe Creation using DEM

Fig. 3. Registration of Two Worlds using Visual Clues

2.2 Extracting the Outline of Objects and Acquiring 3D Information

The Snake algorithm[6][7] finds the outline of an object by repeatedly moving the
snake vertices, input by the user, in the direction that minimizes an energy
function. The energy function is shown in Expression (1). As the energy function
is calculated over a discrete space, the parameters of each energy term are the
coordinates of the vertices in the image. In Expression (1), v(s) is the snake
point, and v(s) = (x(s), y(s)), where x(s) and y(s) refer to the positions of x
and y in the image of the snake point. Also, α, β and γ are weights; this paper
used α = 1.0, β = 0.4, and γ = 2.0, respectively.

E_snake = ∫ ( α·E_cont(v(s)) + β·E_curve(v(s)) + γ·E_image(v(s)) ) ds      (1)


The first term is the energy function that represents the continuity of the snake
vertices surrounding the occlusion area; the second is the energy function that
controls the smoothness of the curve forming the snake, whose value increases with
the curvature, enabling the detection of corner points. Lastly, E_image is an
image feature function. All energy terms are normalized to values between 0 and 1.
As shown in Table 1, this algorithm extracts the outline by repeatedly performing
an energy-minimization pass that sets a 3x3-pixel window at each vertex v(i),
finds the position within the window where the energy is minimized in
consideration of the continuity between the previous and next vertices, the
curvature, and the edge strength, and then moves the vertex to that position.
Table 1. Snake Algorithm
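The greedy minimization pass of Table 1 can be sketched as follows. This is an illustrative stand-in, not the authors' code: the continuity and curvature terms use common discrete forms, the image term is supplied by the caller, and the per-window normalization of each term to [0, 1] is omitted for brevity; only the 3x3 search window and the weights α = 1.0, β = 0.4, γ = 2.0 follow the paper.

```python
def snake_step(pts, image_energy, alpha=1.0, beta=0.4, gamma=2.0):
    """One greedy pass over a closed snake: move each vertex to the 3x3
    neighbour minimizing alpha*E_cont + beta*E_curve + gamma*E_image."""
    n = len(pts)
    # mean spacing between consecutive vertices, for the continuity term
    d_mean = sum(((pts[i][0] - pts[i-1][0])**2 +
                  (pts[i][1] - pts[i-1][1])**2) ** 0.5
                 for i in range(n)) / n
    new_pts = list(pts)
    for i in range(n):
        prev, nxt = new_pts[i - 1], pts[(i + 1) % n]
        best, best_e = pts[i], float('inf')
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                p = (pts[i][0] + dx, pts[i][1] + dy)
                d = ((p[0] - prev[0])**2 + (p[1] - prev[1])**2) ** 0.5
                e_cont = abs(d_mean - d)             # continuity
                e_curve = ((prev[0] - 2*p[0] + nxt[0])**2 +
                           (prev[1] - 2*p[1] + nxt[1])**2)  # curvature
                e = alpha*e_cont + beta*e_curve + gamma*image_energy(p)
                if e < best_e:
                    best, best_e = p, e
        new_pts[i] = best
    return new_pts
```

The pass is repeated until no vertex (or only a small fraction of vertices) moves, at which point the snake vertices trace the object outline.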

2.3 Acquisition of 3D Information Using the Picking Algorithm

In order to acquire the 3D information of the extracted vertices, this paper uses
the Picking algorithm, a well-known 3D graphics technique[8]. It finds the
collision point with the 3D wireframe created from the DEM that corresponds to a
point in the 2D image, and provides the 3D information of that point. The picking
search point is the lowest vertex of the object extracted from the 2D image. The
screen coordinate system, the rectangular area in which a figure appears after
projection transformation in the 3D rendering process, must be converted to the
viewport coordinate system in which the actual 3D topography exists, so that the
position actually pointed to can be picked. First, the viewport-to-screen
conversion matrix is used to obtain the conversion from the 2D screen to the 3D
projection window, and then the ray is lengthened gradually from the projection
window toward the ground surface to obtain the collision point between the search
point's ray and the ground. Fig. 4 is an example of picking the collision point
between the ray and the DEM. The lowest point of the occlusion area, indicated by
an arrow, is the reference point to search, and this becomes the actual 3D
position value of the 2D image point.
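The "lengthen the ray until it meets the ground" step can be illustrated with a simple ray march against the DEM height field. This is a hypothetical stand-in for the Picking algorithm; the sampling function, step size, and axis convention (z up) are our assumptions:

```python
def pick_dem(origin, direction, height_at, t_max=1000.0, step=0.5):
    """March a ray from `origin` along `direction` until it drops below
    the DEM surface; return the approximate 3D collision point, or None.

    height_at(x, y) returns the terrain height at planar position (x, y).
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    t = 0.0
    while t < t_max:
        x, y, z = ox + t*dx, oy + t*dy, oz + t*dz
        if z <= height_at(x, y):          # ray has pierced the surface
            return (x, y, height_at(x, y))
        t += step
    return None                            # no collision within range
```

In practice the picking ray is built by unprojecting the 2D reference point through the camera, and the fixed step would be replaced by a per-cell traversal of the DEM grid for accuracy.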


(a) occlusion candidate, (b) matching reference point and DEM, (c) 3D information extraction

Fig. 4. 3D information Extraction using Collision point of Ray and DEM

2.4 Creation of 3D Information Using Proportional Relational Expression

The collision point, or reference point, has 3D coordinates in the DEM, but the
other snake vertices on the object outline have no collision point and therefore
cannot obtain 3D coordinates directly. Therefore, this paper suggests obtaining a
proportional relation between the 2D image and the 3D DEM using the collision
reference point and then deriving the 3D coordinates of the other vertices. Fig. 5
shows the proportional relation between the 2D and 3D vertices. In Fig. 5, S_m is
the center of the screen; S_B = (S_xB, S_yB) is the reference point among the
snake vertices (the lowest point); S_k = (S_xk, S_yk) is a snake point other than
the reference point. P_B = (P_xB, P_yB, P_zB) is the 3D correspondence point of
S_B, and P_k = (P_xk, P_yk, P_zk) is the 3D correspondence point of S_k. P_m is
the projection point, on the straight line through the center of the screen, of
P_B in 3D. Let t = |P_o P_B| and t_m = |P_o P_m|, where P_o is the viewpoint;
t' is the projection of t onto the xz plane; θ_B is the angle between t and t',
and φ_B is the angle between t' and t_m.


Fig. 5. Proportional Relation of the Vertex in 2D and 3D

To get P_m, which lies on the ray through the center of the screen, t must be
obtained first using the coordinates of the reference point obtained above. As the
value of t is given by the picking ray, the given t and P_yB are used to get θ_B,
and t' is obtained using θ_B as in Expression (2):

θ_B = sin⁻¹( P_yB / t ),   t' = t · cos(θ_B)                              (2)

To get t_m, the angle φ_B between t' and t_m is obtained, and t_m can then be
obtained using φ_B from Expression (3):

φ_B = tan⁻¹( P_xB / P_zB ),   t' = t_m · cos(φ_B),   t_m = t' / cos(φ_B)   (3)

Because t_m = P_zm, we have P_m = (0, 0, t_m).

Now, we can present the relation between the 2D screen view in Fig. 5 and the 3D
space coordinates, and this can be used to get P_k, which corresponds to the 2D
snake vertex S_k:

S_B : P_B = S_k : P_k, that is, S_xB : P_xB = S_xk : P_xk and
S_yB : P_yB = S_yk : P_yk, so

P_xk = P_xB · S_xk / S_xB,   P_yk = P_yB · S_yk / S_yB

Consequently, we can get P_k = (P_xk, P_yk), the 3D space point
corresponding to each snake vertex to search.
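Under this proportional relation, once the reference point's 3D coordinates are known, each remaining snake vertex costs only two multiplications and divisions. A minimal sketch (the symbol names follow the paper; the function itself is ours):

```python
def snake_vertex_to_3d(s_k, s_b, p_b):
    """Map a 2D snake vertex s_k = (S_xk, S_yk) to 3D using the picked
    reference point: s_b = (S_xB, S_yB) on screen, p_b = (P_xB, P_yB)
    in 3D.  Applies S_xB : P_xB = S_xk : P_xk (and likewise for y)."""
    p_xk = p_b[0] * s_k[0] / s_b[0]
    p_yk = p_b[1] * s_k[1] / s_b[1]
    return (p_xk, p_yk)
```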

2.5 Creation of Virtual Target Path and Selection of Candidate Occlusion
Objects Using MER (Minimum Enclosing Rectangle)
To test the proposed occlusion-resolving algorithm, we created the movement path
of a virtual target, and determined the changes of the direction and shape of the
target as well as its 3D position. First, the beginning and end points of the
target set by the instructor were saved, the angle between these two points was
calculated, and the direction and shape of the target were updated in accordance
with the change of the angle. Further, the remaining distance was calculated using
the speed and travel time of the target, and the 3D coordinates of the position
after movement were determined. We also suggest a method of improving processing
speed by comparing the MER (Minimum Enclosing Rectangle) of each object within the
camera's angle of vision against the MER of the virtual target, because the
relational operations between the virtual target and all objects extracted from
the image for occlusion processing take much time. The MER of an object is the
minimum rectangle that encloses the object; an object is selected as an occlusion
candidate when its MER overlaps the MER of the virtual target in the camera image.
In addition, the distance between each candidate object and the virtual target is
obtained using the fact that the object and the virtual target lie more or less on
a straight line from the camera, and this value is used to determine whether an
object lies between the virtual target and the camera.
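The MER pruning and the camera-distance test described above can be sketched as follows; the rectangle representation and function names are our assumptions, not the paper's code:

```python
def mer_overlaps(a, b):
    """Axis-aligned overlap test between two MERs given as
    (xmin, ymin, xmax, ymax) rectangles in image space."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def occluders(objects, target_mer, target_dist):
    """Return the real-image objects whose MER overlaps the virtual
    target's MER and which lie between the camera and the target.
    Each object is a (mer, distance_from_camera) pair."""
    return [(mer, d) for mer, d in objects
            if mer_overlaps(mer, target_mer) and d < target_dist]
```

Only the objects returned by `occluders` need per-pixel occlusion handling, which is the source of the speed-up reported in Table 2.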

3 Experimental Results
Fig. 6 (left) shows the movement path of the virtual target as set by the trainee,
and Fig. 6 (right) shows the various virtual target images created to display the
target changing as it moves across the image.

Fig. 6. Moving Route Creation (left) and Appearance of the Virtual Object as it Moves (right)

Fig. 7 shows the virtual target moving along the path, frame by frame. We can see
that, as the frames advance, occlusion occurs between the tank and the object.

Fig. 7. Experimental Results of Moving and Occlusion

Table 2 compares the case of using snake vertices to select the image objects to
compare with the virtual target against the case of using the proposed MER. With
the proposed method, processing time decreased by a factor of 1.671, which
contributed to performance improvement.


Table 2. Speed Comparison (snake-vertex comparison vs. the proposed MER method, measured over total frames, objects used, and frames per second)


4 Conclusions
To efficiently solve the occlusion problem that occurs when virtual targets move
along a specified path over an actual image, we created a 3D virtual world using
the DEM and registered it with camera images using visual clues. Moreover, the
Snake algorithm and the Picking algorithm were used to extract an object close to
its original shape and to determine the 3D information of the points to be
occluded. To increase the occlusion processing speed, this paper also used the 3D
information of the MER area of each object, and proved the validity of the
proposed method through experiment. In the future, more research is required on
occlusion-area extraction methods that are robust against illumination, as well as
on further improvement of operation speed.

This work was supported by the Korea Research Foundation Grant funded by the
Korean Government (MOEHRD) (KRF-2006-005-J03801).

[1] Bimber, O., Raskar, R.: Spatial Augmented Reality: A Modern Approach to Augmented
Reality, SIGGRAPH 2005, Los Angeles, USA
[2] Noh, J. Y., Neumann, U.: Expression cloning. In SIGGRAPH '01, pages 277-288
[3] Chen, E.: QuickTime VR - an image-based approach to virtual environment navigation.
Proc. of SIGGRAPH, 1995
[4] Ji, L., Yan, H.: Attractable snakes based on the greedy algorithm for contour
extraction, Pattern Recognition 35, pp. 791-806 (2002)
[5] Lean, C. C. H., See, A. K. B., Shanmugam, S. A.: An Enhanced Method for the Snake
Algorithm, First International Conference on Innovative Computing, Information and
Control (ICICIC'06), Volume I, pp. 240-243, 2006
[6] Wu, S.-T., Abrantes, M., Tost, D., Batagelo, H. C.: Picking and snapping for 3D
input devices. In Proceedings of SIBGRAPI 2003, 140-147 (2003)

Graphics Hardware-Based Level-Set Method

for Interactive Segmentation and Visualization
Helen Hong1 and Seongjin Park2

1 Division of Multimedia Engineering, College of Information and Media,
Seoul Women's University, 126 Gongreung-dong, Nowon-gu, Seoul 139-774, Korea
2 School of Computer Science and Engineering, Seoul National University,
San 56-1 Shinlim-dong, Kwanak-gu, Seoul 151-741, Korea

Abstract. This paper presents an efficient graphics hardware-based method to
segment and visualize level-set surfaces at interactive rates. Our method is
composed of a memory manager, a level-set solver, and a volume renderer. The
memory manager, which runs on the CPU, generates the page table, inverse page
table, and available page stack, and processes the activation and inactivation
of pages. The level-set solver computes only voxels near the iso-surface. To run
efficiently on GPUs, the volume is decomposed into a set of small pages; only
those pages with non-zero derivatives are stored on the GPU. These active pages
are packed into a large 2D texture, and the level-set partial differential
equation (PDE) is computed directly on this packed format. The memory manager
helps manage the packing of the active data. The volume renderer performs volume
rendering of the original data simultaneously with the evolving level set on the
GPU. Experimental results using two chest CT datasets show that our graphics
hardware-based level-set method is much faster than a software-based one.
Keywords: Segmentation, Level-Set, Volume rendering, Graphics hardware,
CT, Lung.

1 Introduction
The level-set method is a numerical technique for tracking interfaces and
shapes[1]. Its advantage is that one can perform numerical computations involving
curves and surfaces on a fixed Cartesian grid without having to parameterize these
objects. In addition, the level-set method makes it easy to follow shapes that
change topology. All this makes the level-set method a great tool for modeling
time-varying objects. Thus, deformable iso-surfaces modeled by the level-set
method have demonstrated great potential in visualization for applications such as
segmentation, surface processing, and surface reconstruction. However, the use of
level sets in visualization is limited by their high computational cost and
reliance on significant parameter tuning.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 9–16, 2007.
Springer-Verlag Berlin Heidelberg 2007



Several methods have been suggested to accelerate the computation. Adalsteinson
and Sethian [2] proposed the narrow band method, which only computes the points
near the front at each time step and is thus more efficient than the standard
level-set approach; however, the computational time is still large, especially
when the image size is large. Paragios and Deriche introduced the Hermes
algorithm, which propagates a small window at each step to achieve much faster
computation. Sethian [3] presented a monotonically advancing scheme; it is
restricted to a one-directional speed term, and the front's geometric properties
are omitted. Unfortunately, the stopping criteria have to be decided carefully so
that the front does not exceed the boundary. Whitaker [4] proposed the
sparse-field method, a scheme in which updates are calculated only on the
wavefront, and several layers around the wavefront are updated via a distance
transform at each iteration.
To overcome these limitations of software-based level-set methods, we propose an
efficient graphics hardware-based method to segment and visualize level-set
surfaces at interactive rates.

2 Level-Set Method on Graphics Hardware

Our method is composed of memory manager, level-set solver and volume renderer as
shown in Figure 1. First, in order to help managing the packing of the active data, the
memory manager generates page table, inverse page table and available page stack as
well as process the activation and inactivation of pages. Second, level-set solver computes only voxels near the iso-surface like the sparse field level-set method. To run
efficiently on GPUs, volume is decomposed into a set of small pages. Third, volume
renderer performs volume rendering of the original data simultaneously with the
evolving level set.
2.1 Memory Manager
Generally, the size of texture memory in graphics hardware is rather small, so a
large medical volume dataset, which may have over 1000 slices of 512 x 512 images,
cannot be loaded into texture memory at once. To overcome this limitation, only
the level-set pages near the iso-surface, called active pages, are loaded. In this
section, we propose an efficient method to manage these active pages.
Firstly, main memory in the CPU and texture memory in the GPU are divided into
pages. Then the data structure shown in Fig. 2 is generated. To exchange the
corresponding page numbers between main memory and texture memory, a page table,
which converts a main-memory page number to the corresponding texture-memory page
number, and an inverse page table, which performs the reverse conversion, are
generated. In addition, an available page stack is generated to manage empty pages
in texture memory.


Fig. 1. The flow chart of our method on graphics hardware

Fig. 2. Data structure for memory management




In the level-set method, the pages containing the front change as the front grows
or shrinks. To manage these pages, activation and inactivation are performed as
shown in Fig. 3. The activation process occurs when the evolving front needs an
inactive page of texture memory: main memory asks the available page stack for a
new texture-memory page, and the top page of the stack is popped, as shown in
Fig. 3(a). The inactivation process occurs when the evolving front departs from an
active page of texture memory: as shown in Fig. 3(b), main memory asks texture
memory to remove the active page, and the removed page is pushed back onto the
available page stack.


Fig. 3. The process of page activation and inactivation (a) page activation process (b) page
inactivation process
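The bookkeeping described above (page table, inverse page table, available-page stack, activation and inactivation) can be modeled in a few lines. This is our own CPU-side sketch of the described data structures, not the authors' GPU implementation:

```python
class PageManager:
    """Tracks which main-memory pages are resident in texture memory."""

    def __init__(self, n_texture_pages):
        self.page_table = {}      # main-memory page -> texture page
        self.inverse_table = {}   # texture page -> main-memory page
        self.free_stack = list(range(n_texture_pages))  # available pages

    def activate(self, main_page):
        """Front entered `main_page`: pop a free texture page for it."""
        if main_page in self.page_table:
            return self.page_table[main_page]      # already resident
        tex_page = self.free_stack.pop()
        self.page_table[main_page] = tex_page
        self.inverse_table[tex_page] = main_page
        return tex_page

    def inactivate(self, main_page):
        """Front left `main_page`: return its texture page to the stack."""
        tex_page = self.page_table.pop(main_page)
        del self.inverse_table[tex_page]
        self.free_stack.append(tex_page)
```

The two dictionaries play the roles of the page table and inverse page table of Fig. 2, and `free_stack` plays the role of the available page stack.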

During level-set computation in the GPU, the partial differential equation is
computed using information from the current pixel and its neighbors. When
referring to an interior pixel of a page in texture memory, the PDE can be
calculated without preprocessing. When referring to a boundary pixel of a page, a
neighboring page must be consulted for the neighbor pixel's information; however,
it is difficult to obtain such information during PDE calculation in the GPU. In
this case, a vertex buffer is created in the CPU to save the locations of the
current and neighbor pixels. For this, we define nine different cases as shown in
Fig. 4. For the 1st, 3rd, 5th, and 7th vertices, the two pages neighboring them
are referred to, and the locations of the neighbor pixels are saved to the vertex
buffer together with the location of the current pixel. For the 2nd, 4th, 6th, and
8th vertices, one neighboring page is referred to. For the 9th vertex, the
location of the current pixel is saved to the vertex buffer without referring to a
neighboring page. The location of a neighbor pixel is calculated using the page
table and inverse page table as in Eq. (1).

Fig. 4. Nine different cases for referring neighbor page

T_num = T_address / PageSize
M_num = InversePageTable(T_num)
neighbor(M_num) = M_num + neighborOffset
neighbor(T_num) = PageTable(neighbor(M_num))                               (1)

where T_num is the page number in texture memory, T_address is the page address in
texture memory, M_num is the page number in main memory, and PageSize is defined
as 16 x 16.
2.2 Level-Set Solver
The efficient solution of the level-set PDEs relies on updating only those voxels
that are on or near the iso-surface. The narrow-band and sparse-field methods
achieve this by operating on sequences of heterogeneous operations; for instance,
the sparse-field method keeps a linked list of active voxels on which the
computation is performed. However, maximum efficiency on the GPU is achieved when
homogeneous operations are applied to each pixel, and applying different
operations to each pixel in a page burdens CPU-to-GPU message passing. To run
efficiently on GPUs, our level-set solver applies its heterogeneous operations
according to the nine cases defined during vertex-buffer creation.
Fig. 5 shows that the vertex buffer transferred to the GPU during vertex shading
is divided into apex (1st, 3rd, 5th, 7th), edge (2nd, 4th, 6th, 8th), and inner
(9th) parts. Sixteen vertex buffers are transferred, which include the locations
of four apex points for the apex case, eight end points for the edge case, and
four apex points for the inner case. Then the level-set computations are performed
using Eqs. (2) and (3).
D(I) = ε − |I − T|                                                         (2)
∂φ/∂t = |∇φ| · D(I)                                                        (3)

where I is the intensity value of the image, D(I) is the speed function, φ is the
level-set value, and T and ε are the average intensity value and standard
deviation of the segmenting region, respectively.

Fig. 5. The process of efficient level-set operation in GPU
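The speed term of Eqs. (2) and (3) can be illustrated with a scalar update applied to the active (near-surface) pixels only, in the spirit of the sparse-field scheme. This is a simplified 2D CPU sketch; eps and T follow the equations, while the data layout and time step are our assumptions:

```python
def levelset_step(phi, intensity, active, T, eps, dt=0.1):
    """One explicit update of a 2D level set on active pixels only.

    phi, intensity: dicts mapping (x, y) -> value.
    active: iterable of (x, y) pixels near the iso-surface.
    Speed: D(I) = eps - |I - T|, update: phi_t = |grad phi| * D(I).
    """
    new_phi = dict(phi)
    for (x, y) in active:
        # central differences for the gradient magnitude of phi
        gx = (phi.get((x + 1, y), 0.0) - phi.get((x - 1, y), 0.0)) / 2.0
        gy = (phi.get((x, y + 1), 0.0) - phi.get((x, y - 1), 0.0)) / 2.0
        grad = (gx * gx + gy * gy) ** 0.5
        D = eps - abs(intensity[(x, y)] - T)   # speed: positive inside
        new_phi[(x, y)] = phi[(x, y)] + dt * grad * D
    return new_phi
```

The speed D(I) is positive where the intensity lies within eps of the region mean T, expanding the front there, and negative elsewhere, so the front settles on the region boundary.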

2.3 Volume Renderer

Conventional software-based volume rendering techniques such as ray-casting and
shear-warp factorization cannot visualize level-set surfaces at interactive rates.
Our volume renderer performs texture-based volume rendering on graphics hardware
of the original data simultaneously with the evolving level set.
Firstly, the updated level-set values in texture memory are saved to main memory
through the inverse page table. Then texture-based volume rendering is applied to
visualize the original volume together with the level-set surfaces. For efficient
memory use, we use only two channels, for the intensity value and the level-set
value, instead of using four RGBA channels. Then proxy geometry is generated using
parallel projection. Finally, we map the three-dimensional texture memory in the
GPU onto the proxy geometry, and the mapped slices are rendered using compositing
modes that include maximum intensity projection.

3 Experimental Result
All our implementation and tests were performed on a general computer equipped
with an Intel Pentium 4 2.4 GHz CPU and 1 GB memory. The graphics hardware was an
ATI Radeon 9600 GPU with 256 MB of memory. The programs are written in the
DirectX shader program language. Our method was applied to each unilateral lung of
two chest CT datasets to evaluate its accuracy and processing time. The volume
resolution of each unilateral lung is 512 x 256 x 128. For packing active pages,
the size of the 2D texture memory is 2048 x 2048. Figs. 6 and 7 show how
accurately our method segments in two and three dimensions. The segmented lung
boundary is presented in red. In Fig. 7, the original volume with level-set
surfaces is visualized using maximum intensity projection.

Fig. 6. The results of segmentation using our

graphics hardware-based level-set method

Fig. 7. The results of visualizing original

volume with level-set surfaces

We have compared our technique with a software-based level-set method under the
same conditions. Table 1 shows a comparison of the total processing time using the
two different techniques. The total processing time includes the times for page
management and level-set computation. As shown in Table 1, our method is over 3.4
times faster than the software-based level-set method. In particular, our
computation of the level-set PDE is over 14 times faster than that of the
software-based method.
Table 1. The comparison results of total processing time using the two different
techniques (level-set processing time for the software-based method vs. the
proposed graphics hardware-based method; L: left lung, R: right lung)

4 Conclusion
We have developed a new tool for interactive segmentation and visualization of
level-set surfaces on graphics hardware. Our memory manager helps manage the
packing of the active data. A dynamic, packed texture format allows the efficient
processing of time-dependent, sparse GPU computations. While the GPU updates the
level set, it renders the surface model directly from this packed texture format.
Our method was over 3.4 times faster than the software-based level-set method; in
particular, our computation of the level-set PDE was over 14 times faster than
that of the software-based method. The average total processing time of our method
was 0.6 seconds, with memory management accounting for most of it. Experimental
results show that our solution is much faster than previous optimized solutions
based on software techniques.

This study was supported in part by Special Basic Research Program grant
R01-2006-000-11244-0 under the Korea Science and Engineering Foundation and in
part by the Seoul R&D Program.

1. Osher, S., Sethian, J.A.: Fronts propagating with curvature-dependent speed:
algorithms based on Hamilton-Jacobi formulations, Journal of Computational
Physics, Vol. 79 (1988) 12-49.
2. Adalsteinson, D., Sethian, J.A.: A fast level set method for propagating
interfaces, Journal of Computational Physics (1995) 269-277.
3. Sethian, J.A.: A fast marching level set method for monotonically advancing
fronts, Proc. Natl. Acad. Sci. USA, Vol. 93 (1996) 1591-1595.
4. Whitaker, R.: A level-set approach to 3D reconstruction from range data,
International Journal of Computer Vision (1998) 203-231.

Parameterization of Quadrilateral Meshes

Li Liu 1, CaiMing Zhang 1,2, and Frank Cheng 3

1 School of Computer Science and Technology, Shandong University, Jinan, China
2 Department of Computer Science and Technology, Shandong Economic University, Jinan,
3 Department of Computer Science, College of Engineering, University of Kentucky,

Abstract. Low-distortion parameterization of 3D meshes is a fundamental

problem in computer graphics. Several widely used approaches have been
presented for triangular meshes. But no direct parameterization techniques are
available for quadrilateral meshes yet. In this paper, we present a
parameterization technique for non-closed quadrilateral meshes based on mesh
simplification. The parameterization is done through a simplify-project-embed
process, and minimizes both the local and global distortion of the quadrilateral
meshes. The new algorithm is very suitable for computer graphics applications
that require parameterization with low geometric distortion.
Keywords: Parameterization, mesh simplification, Gaussian curvature,

1 Introduction
Parameterization is an important problem in Computer Graphics and has applications
in many areas, including texture mapping [1], scattered data and surface fitting [2],
multi-resolution modeling [3], remeshing [4], morphing [5], etc. Due to its importance
in mesh applications, the subject of mesh parameterization has been well studied.
Parameterization of a polygonal mesh in 3D space is the process of constructing a
one-to-one mapping between the given mesh and a suitable 2D domain. Two major
paradigms used in mesh parameterization are energy functional minimization and the
convex combination approach. Maillot proposed a method to minimize the norm of
the Green-Lagrange deformation tensor based on elasticity theory [6]. The harmonic
embedding used by Eck minimizes the metric dispersion instead of elasticity [3].
Lévy proposed an energy functional minimization method based on orthogonality and
homogeneous spacing [7]. Non-deformation criterion is introduced in [8] with
extrapolation capabilities. Floater [9] proposed shape-preserving parameterization,
where the coefficients are determined by using conformal mapping and barycentric
coordinates. The harmonic embedding [3,10] is also a special case of this approach,
except that the coefficients may be negative.
However, these techniques are developed mainly for triangular mesh
parameterization. Parameterization of quadrilateral meshes, on the other hand, is
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 17–24, 2007.
Springer-Verlag Berlin Heidelberg 2007



actually a more critical problem, because quadrilateral meshes, with their good
properties, are preferred over triangular meshes in finite element analysis.
Parameterization techniques developed for triangular meshes are not suitable for
quadrilateral meshes because of the different connectivity structures.
In this paper, we present a parameterization technique for non-closed quadrilateral
meshes through a simplify-project-embed process. The algorithm has the following
advantages: (1) it provably produces good parameterization results for any
non-closed quadrilateral mesh that can be mapped to the 2D plane; (2) it minimizes
the distortion of both angle and area caused by parameterization; (3) it places no
restrictions on the boundary shape; (4) since the quadrilateral meshes are
simplified, the method is fast and efficient.
The remaining part of this paper is organized as follows. The new model and the
algorithm are presented in detail in Section 2. Test results of the new algorithm are
shown in Section 3. Concluding remarks are given in Section 4.

2 Parameterization
Given a non-closed quadrilateral mesh, the parameterization process consists of four
steps. The first step is to get a simplified version of the mesh by keeping the boundary
and interior vertices with high Gaussian curvature, but deleting interior vertices with
low Gaussian curvature. The second step is to map the simplified mesh onto a 2D
domain through a global parameterization process. The third step is to embed the
deleted interior vertices onto the 2D domain through a weighted discrete mapping.
This mapping preserves angles and areas and, consequently, minimizes angle and area
distortion. The last step is to perform an optimization process of the parameterization
process to eliminate overlapping. Details of these steps are described in the
subsequent sections.
For a given vertex v in a quadrilateral mesh, the one-ring neighboring vertices of
v are the vertices that share a common face with v. A one-ring neighboring vertex
of v is called an immediate neighboring vertex if it shares a common edge with v;
otherwise, it is called a diagonally neighboring vertex.
2.1 Simplification Algorithm
The computational cost, as well as the distortion, may be too large if the entire
quadrilateral mesh is projected onto the plane. To speed up the parameterization and
minimize the distortion, we simplify the mesh structure by reducing the number of
interior vertices but try to retain a good approximation of the original shape and
appearance. Discrete curvature is a good simplification criterion for preserving the
shape of the original model.
In spite of the extensive use of quadrilateral meshes in geometric modeling and
computer graphics, there is no agreement on the most appropriate way to estimate
geometric attributes such as curvature on discrete surfaces. By thinking of a

Parameterization of Quadrilateral Meshes


quadrilateral mesh as a piecewise linear approximation of an unknown smooth

surface, we can try to estimate the curvature of a vertex using only the information
that is given by the quadrilateral mesh itself, such as the edge and angles. The
estimation does not have to be precise. To speed up the computation, we ignore the
effect of diagonally neighboring vertices, and use only immediate neighboring
vertices to estimate the Gaussian curvature of a vertex, as shown in Fig. 1-(a).
We define the integral Gaussian curvature K = K(v) with respect to the area
S = S(v) attributed to v by

K = 2π − Σ_{i=1}^{n} θ_i ,                                   (1)

where θ_i is the angle between two successive edges incident to v. To derive the
curvature from the integral value, we assume the curvature to be uniformly
distributed around the vertex and simply normalize by the area:

κ = K / S ,                                                  (2)
where S is the sum of the areas of the faces adjacent to the vertex v. Different
ways of defining the area S result in different curvature values. We use the
Voronoi area, which sums the areas of vertex v's local Voronoi cells. To determine
the areas of the local Voronoi cells restricted to a triangle, we distinguish between
obtuse and non-obtuse triangles, as shown in Fig. 1. In the non-obtuse case the
area is given by

S_A = (1/8) ( ‖v_i − v_k‖² cot(β_i) + ‖v_i − v_j‖² cot(γ_i) ) ,           (3)

where β_i and γ_i are the angles opposite the edges v_i v_k and v_i v_j,
respectively. For obtuse triangles,

S_B = (1/8) ‖v_i − v_k‖² tan(β_i) ,  S_C = (1/8) ‖v_i − v_j‖² tan(γ_i) ,
S_A = S − S_B − S_C ,                                                     (4)

where S here is the total area of the triangle.


A vertex deletion removes a vertex with low Gaussian curvature together with its
incident edges. During the simplification process, the tolerance value can be
adjusted to control the number of vertices removed.




Fig. 1. Voronoi area. (a) Voronoi cells around a vertex; (b) Non-obtuse angle; (c) Obtuse angle.
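The curvature estimate of this section can be sketched in NumPy as follows. This is an illustrative implementation under our own names (angle, voronoi_area, gaussian_curvature are not from the paper), and for the obtuse case it uses the common mixed-area fallback (half or a quarter of the triangle area) rather than the exact tangent-based split above.

```python
import numpy as np

def angle(a, b, c):
    """Interior angle at vertex a of triangle (a, b, c)."""
    u, v = b - a, c - a
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def voronoi_area(vi, vj, vk):
    """Area of triangle (vi, vj, vk) attributed to vi (mixed Voronoi area)."""
    ai = angle(vi, vj, vk)                       # angle at vi
    aj = angle(vj, vk, vi)                       # angle at vj (opposite edge vi-vk)
    ak = angle(vk, vi, vj)                       # angle at vk (opposite edge vi-vj)
    tri = 0.5 * np.linalg.norm(np.cross(vj - vi, vk - vi))
    if max(ai, aj, ak) <= np.pi / 2 + 1e-12:     # non-obtuse: true Voronoi cell
        return 0.125 * (np.dot(vi - vk, vi - vk) / np.tan(aj)
                        + np.dot(vi - vj, vi - vj) / np.tan(ak))
    return tri / 2 if ai > np.pi / 2 else tri / 4  # obtuse fallback

def gaussian_curvature(v, ring):
    """Angle deficit K = 2*pi - sum(theta_i), normalized by the Voronoi area S.
    ring lists the immediate neighboring vertices of v in cyclic order."""
    K, S = 2.0 * np.pi, 0.0
    for t in range(len(ring)):
        vj, vk = ring[t], ring[(t + 1) % len(ring)]
        K -= angle(v, vj, vk)
        S += voronoi_area(v, vj, vk)
    return K / S
```

A flat one-ring yields curvature 0 (the vertex would be a candidate for deletion), while a pyramid apex yields a positive value.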



2.2 Global Parameterization

Parameterizing a polygonal mesh amounts to computing a correspondence between

the 3D mesh and an isomorphic planar mesh through a piecewise linear mapping. For
the simplified mesh M obtained in the first step, the goal here is to construct a

mapping between M and an isomorphic planar mesh U in R² that best preserves
the intrinsic characteristics of the mesh M. We denote by v_i the 3D position of
the i-th vertex in the mesh M, and by u_i the 2D position (parameterized value)
of the corresponding vertex in the 2D mesh U.
The simplified polygonal mesh M approximates the original quadrilateral mesh,
but the angles and areas of M are different from the original mesh. We take the edges
of the mesh M as springs and project vertices of the mesh onto the parameterization
domain by minimizing the following edge-based energy function:

E = (1/2) Σ_{{i,j}∈Edge} ‖v_i − v_j‖^r ‖u_i − u_j‖² ,  r ≤ 0,            (5)


where Edge is the edge set of the simplified mesh. The coefficients can be chosen in
different ways by adjusting r . This global parameterization process is performed on a
simplified mesh (with fewer vertices), so it is different from the global parameterization
and the fixed-boundary parameterization of triangular meshes.
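Since the spring energy is quadratic in the unknown positions u_i, it can be minimized by solving one sparse linear system per coordinate. The sketch below uses edge weights ‖v_i − v_j‖^r; pinning the boundary vertices to the unit circle and the function name are our own illustrative choices, not the paper's.

```python
import numpy as np

def spring_parameterize(verts, edges, boundary, r=-1.0):
    """Minimize E = 1/2 * sum w_ij ||u_i - u_j||^2 with w_ij = ||v_i - v_j||^r.
    Boundary vertices are pinned to the unit circle for illustration."""
    n = len(verts)
    L = np.zeros((n, n))                         # weighted graph Laplacian
    for i, j in edges:
        w = np.linalg.norm(verts[i] - verts[j]) ** r
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    u = np.zeros((n, 2))
    for t, b in enumerate(boundary):             # fixed boundary positions
        ang = 2.0 * np.pi * t / len(boundary)
        u[b] = (np.cos(ang), np.sin(ang))
    inner = [i for i in range(n) if i not in set(boundary)]
    # Setting the gradient of E to zero gives a linear system in the
    # interior positions, with the boundary terms moved to the right side.
    A = L[np.ix_(inner, inner)]
    B = -L[np.ix_(inner, boundary)] @ u[boundary]
    u[inner] = np.linalg.solve(A, B)
    return u
```

For real meshes the dense solve would be replaced by a sparse solver; the dense form keeps the sketch short.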
2.3 Local Parameterization

After the boundary and the interior vertices with high Gaussian curvature are mapped
onto the 2D plane, the vertices with low curvature are embedded back onto the
parameterization plane. This process has a great impact on the result of the
parameterization; hence, it should preserve as many of the intrinsic qualities of a mesh
as possible. We first need to define what is meant by intrinsic qualities for a discrete
mesh. In the following, minimal distortion means the best preservation of these qualities.
2.3.1 Discrete Conformal Mapping
Conformal parameterization preserves angular structure, and is intrinsic to the
geometry and stable with respect to small deformations. To flatten a mesh onto a
two-dimensional plane so that it minimizes the relative distortion of the planar
angles with respect to their counterparts in 3D space, we introduce an angle-based
energy function as follows:

E_A = Σ_{j∈N(i)} (cot α_ij + cot β_ij) ‖u_i − u_j‖² ,                     (6)


where N(i) is the set of immediate one-ring neighbouring vertices, and α_ij, β_ij are
the left and opposite angles of v_i, as shown in Fig. 2-(a). The coefficients in the



formula (6) are always positive, which reduces overlapping in the 2D mesh. Minimizing
the discrete conformal energy yields a discrete quadratic energy in the parameterization
that depends only on the angles of the original surface.
2.3.2 Discrete Authalic Mapping
Authalic mapping preserves the area as much as possible. A quadrilateral mesh in 3D
space usually is not flat, so we cannot get an exact area of each quadrilateral patch. To
minimize the area distortion, we divide each quadrilateral patch into four triangular
parts and preserve the areas of these triangles respectively. For instance, in Fig. 2-(b)
the quadrilateral patch v_i v_j v_k v_{j+1} is divided into the triangles v_i v_j v_{j+1},
v_i v_j v_k, v_i v_k v_{j+1} and v_j v_k v_{j+1}. This turns the problem of quadrilateral
area preservation into one of triangular area preservation.
The mapping resulting from this energy minimization preserves the area of each
vertex's one-ring neighbourhood in the mesh, and can be written as follows:

E_X = Σ_{j∈N(i)} [ (cot γ_ij + cot δ_ij) / ‖v_i − v_j‖² ] ‖u_i − u_j‖² ,  (7)


where γ_ij, δ_ij are the corresponding angles of the edge (v_i, v_j), as shown in Fig. 2-(c).
The parameterization derived from E_X is easily obtained; the way to solve this
system is similar to that of the discrete conformal mapping, but the linear coefficients
are now functions of local areas of the 3D mesh.




Fig. 2. Edge and angles. (a) Edge and opposite left angles in the conformal mapping; (b)
Quadrilateral mesh divided into four triangles; (c) Edge and angles in the authalic mapping.

2.3.3 Weighted Discrete Parameterization

Discrete conformal mapping can be seen as an angle-preserving mapping which
minimizes the angle distortion for the interior vertices. The resulting mapping will
preserve the shape but not the area of the original mesh. Discrete authalic mapping is
area-preserving and minimizes the area distortion. Although the area of the original



mesh would locally be preserved, the shape tends to be distorted since the mapping
from 3D to 2D will in general generate twisted distortion.
To minimize the distortion and get better parameterization results, we define linear
combinations of the area and the angle distortions as the distortion measures. It turns
out that the family of admissible, simple distortion measures is reduced to linear
combinations of the two discrete distortion measures defined above. A general
distortion measure can thus always be written as

E = q E_A + (1 − q) E_X ,                                                 (8)

where q is a real number between 0 and 1. By adjusting the scaling factor q,

parameterizations appropriate for special applications can be obtained.
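The per-edge coefficients behind the conformal, authalic and weighted energies can be blended as sketched below; the function names are ours, and the angles are assumed to be supplied in radians by the caller.

```python
import math

def cot(x):
    """Cotangent helper."""
    return math.cos(x) / math.sin(x)

def conformal_weight(alpha, beta):
    """Discrete conformal (angle-preserving) coefficient: cot(alpha) + cot(beta)."""
    return cot(alpha) + cot(beta)

def authalic_weight(gamma, delta, edge_len):
    """Discrete authalic (area-preserving) coefficient:
    (cot(gamma) + cot(delta)) / ||v_i - v_j||^2."""
    return (cot(gamma) + cot(delta)) / edge_len ** 2

def weighted_coefficient(q, alpha, beta, gamma, delta, edge_len):
    """Blend per Eq. (8): q * conformal + (1 - q) * authalic, 0 <= q <= 1."""
    return (q * conformal_weight(alpha, beta)
            + (1.0 - q) * authalic_weight(gamma, delta, edge_len))
```

With q = 1 only angle distortion is penalized, with q = 0 only area distortion; intermediate q trades the two off linearly.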
2.4 Mesh Optimization

The above parameterization process does not impose restrictions, such as convexity,
on the given quadrilateral mesh. Consequently, overlapping might occur in the
projection process. To eliminate overlapping, we optimize the parameterized mesh by
adjusting vertex locations without changing the topology. Mesh optimization is a local
iterative process: each vertex is moved to a new location over a number of iterations.
Let u_i^q be the location of the parameterization value u_i after the q-th iteration.
The optimization process finds the new location by the following formula:

u_i^q = u_i^{q−1} + λ_1 Σ_j (u_j^{q−1} − u_i^{q−1}) + λ_2 Σ_k (u_k^{q−1} − u_i^{q−1}) ,  0 < λ_1 + λ_2 < 1 ,   (9)

where u_j, u_k are the parameterization values of the immediate and diagonally
neighbouring vertices, respectively. It is found that vertex optimization in the order
of "worst first" is very helpful. We define the priority of a vertex as follows:

δ_i = ‖ λ_1 Σ_j (u_j^{q−1} − u_i^{q−1}) + λ_2 Σ_k (u_k^{q−1} − u_i^{q−1}) ‖ .   (10)

The priority is simply computed from shape metrics of each parameterized vertex;
the vertex with the worst quality is assigned the highest priority. Through
experiments, we find that more iterations are needed if vertices are instead
processed in "first come, first served" order. Besides, we point out that the
optimization process is local: we only optimize overlapping vertices and their
one-ring vertices, which minimizes the distortion and better preserves the
parameterization results.
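One relocation step and a "worst first" priority can be sketched as follows; the λ values are illustrative (the method only requires 0 < λ1 + λ2 < 1), and measuring priority by the magnitude of the pending displacement is our reading of the priority definition.

```python
import numpy as np

def optimize_vertex(u_i, imm, diag, lam1=0.4, lam2=0.2):
    """One relocation step: u_i <- u_i + lam1*sum(u_j - u_i) + lam2*sum(u_k - u_i),
    where imm/diag are arrays of the immediate and diagonally neighbouring
    parameter positions and 0 < lam1 + lam2 < 1."""
    pull = lam1 * np.sum(imm - u_i, axis=0) + lam2 * np.sum(diag - u_i, axis=0)
    return u_i + pull

def priority(u_i, imm, diag, lam1=0.4, lam2=0.2):
    """'Worst first' ordering key: vertices whose update would move them the
    farthest are optimized first; a well-placed vertex scores near zero."""
    pull = lam1 * np.sum(imm - u_i, axis=0) + lam2 * np.sum(diag - u_i, axis=0)
    return np.linalg.norm(pull)
```

A vertex already centered among its neighbours has priority 0 and is left alone, which keeps the optimization local to the overlapping regions.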

3 Examples
To evaluate the visual quality of a parameterization we use the checkerboard texture
shown in Fig. 3, where the effect of the scaling factor q in Eq. (8) can be seen. In



fact, when q equals 0 or 1, the weighted discrete mapping reduces to the discrete
conformal mapping or the discrete authalic mapping, respectively. Since few
parameterization methods for quadrilateral meshes exist, the weighted discrete
mapping is compared with the discrete conformal mapping and the discrete authalic
mapping of quadrilateral meshes, obtained with q = 0 and q = 1 in Eq. (8),
respectively.
Fig. 3-(a) and (e) show the sampled quadrilateral meshes. Fig. 3-(b) and (f) show
the models with a checkerboard texture map using discrete conformal mapping with
q = 0 . Fig.3-(c) and (g) show the models with a checkerboard texture map using
weighted discrete mapping with q = 0.5 . Fig. 3-(d) and (h) show the models with a
checkerboard texture map using discrete authalic mapping with q = 1. It can be
seen that the results using the weighted discrete mapping are much better than
those using the discrete conformal mapping and the discrete authalic mapping.









Fig. 3. Texture mapping. (a) and (e) Models; (b) and (f) Discrete conformal mapping, q = 0; (c)
and (g) Weighted discrete mapping, q = 0.5; (d) and (h) Discrete authalic mapping, q = 1.

The results demonstrate that an intermediate value of q (about 0.5) yields a
smoother parameterization and minimal distortion energy of the parameterization.
The closer q is to 0 or 1, the larger the angle and area distortions become.

4 Conclusions
A parameterization technique for quadrilateral meshes based on mesh
simplification and weighted discrete mapping has been presented. Mesh simplification



reduces computation, and the weighted discrete mapping minimizes angle and area
distortion. The scaling factor q of the weighted discrete mapping provides users with
the flexibility of obtaining parameterizations appropriate for particular applications,
with different trade-offs between smoothness and distortion.
The major drawback of our current implementation is that the planar embedding
may contain concave quadrangles. It is difficult to make all of the planar
quadrilaterals convex, even when we convert triangular meshes into quadrilateral
meshes by deleting edges. In future work, we will focus on using a better objective
function to obtain better solutions and on developing a solver that can maintain the
convexity of the planar meshes.

References

1. Levy, B.: Constrained texture mapping for polygonal meshes. In: Fiume, E. (ed.):
Proceedings of Computer Graphics. ACM SIGGRAPH, New York (2001) 417-424
2. Alexa, M.: Merging polyhedron shapes with scattered features. The Visual Computer. 16
(2000): 26-37
3. Eck, M., DeRose, T., Duchamp, T., Hoppe, H., Lounsbery, M., Stuetzle, W.:
Multiresolution analysis of arbitrary meshes. In: Mair, S.G., Cook, R.(eds.): Proceedings
of Computer Graphics. ACM SIGGRAPH, Los Angeles (1995) 173-182
4. Alliez, P., Meyer, M., Desbrun, M.: Interactive geometry remeshing. In: Proceedings of
Computer Graphics.ACM SIGGRAPH, San Antonio (2002) 347-354
5. Alexa, M.: Recent advances in mesh morphing. Computer Graphics Forum. 21(2002)
6. Maillot, J., Yahia, H., Verroust, A.: Interactive texture mapping. In: Proceedings of
Computer Graphics, ACM SIGGRAPH, Anaheim (1993) 27-34
7. Levy, B., Mallet, J.: Non-distorted texture mapping for sheared triangulated meshes. In:
Proceedings of Computer Graphics, ACM SIGGRAPH, Orlando (1998) 343-352
8. Jin, M., Wang, Y., Yau, S.T., Gu. X.: Optimal global conformal surface parameterization.
In: Proceedings of Visualization, Austin (2004) 267-274
9. Floater, M.S.: Parameterization and smooth approximation of surface triangulations.
Computer Aided Geometric Design. 14 (1997) 231-250
10. Lee, Y., Kim, H.S., Lee, S.: Mesh parameterization with a virtual boundary. Computers
& Graphics. 26 (2006) 677-686

Pose Insensitive 3D Retrieval by Poisson Shape Histogram
Pan Xiang1, Chen Qi Hua2, Fang Xin Gang1, and Zheng Bo Chuan3

1 Institute of Software, Zhejiang University of Technology
2 Institute of Mechanical, Zhejiang University of Technology
3 College of Mathematics & Information, China West Normal University
310014 Zhejiang, 637002 Nanchong, P.R. China

Abstract. With the rapid increase of available 3D models, content-based 3D
retrieval is attracting more and more research interest. Histograms are the most
widely used constructions for 3D shape descriptors. Most existing histogram-based
descriptors, however, do not remain invariant under rigid transforms. In this paper,
we propose a new kind of descriptor called the Poisson shape histogram. The main
advantage of the proposed descriptor is that it is not sensitive to rigid transforms;
it remains invariant under rotation as well. To extract the Poisson shape histogram,
we first convert the given 3D model into a voxel representation. Then, a Poisson
solver with Dirichlet boundary conditions is used to obtain a shape signature for
each voxel. Finally, the Poisson shape histogram is constructed from the shape
signatures. Retrieval experiments on a shape benchmark database have shown that
the Poisson shape histogram achieves better performance than other similar
histogram-based shape representations.
Keywords: 3D shape matching, Pose-Insensitive, Poisson equation, Histogram.

1 Introduction
Recent developments in modeling and digitizing techniques have led to a rapid
increase in the number of 3D models. More and more 3D digital models can be
accessed freely from the Internet or from other resources. Users can save design
time by reusing existing 3D models. As a consequence, the driving question has
changed from "How do we generate 3D models?" to "How do we find them?" [1].
An urgent problem right now is how to help people find the 3D models they need
accurately and efficiently in model databases or on the web. Content-based 3D
retrieval, which aims to retrieve 3D models by shape matching, has become a hot
research topic.
In content-based 3D retrieval, histogram-based representations have been widely
used for constructing shape features [2]. A histogram-based representation needs a
shape signature, and the defined shape signature is the most important ingredient
of the histogram descriptor. It should be invariant to transformations such as
translation, scaling, rotation and rigid transforms. Some rotation-invariant shape
signatures, such as curvature and distance, have been used for content-based 3D
retrieval. Those
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 25–32, 2007.
© Springer-Verlag Berlin Heidelberg 2007


X. Pan et al.

shape signatures are independent of 3D shape rotation. However, little research has
focused on extracting shape signatures that are invariant under rigid transforms; the
existing rotation-invariant shape signatures are often sensitive to rigid transforms.
In this paper, we propose a new kind of shape signature called the Poisson shape
measure. It remains almost invariant not only under rotation, but also under rigid
transforms. The proposed shape signature is based on Poisson theory. As one of the
most important PDE theories, it has been widely used in computer vision, computer
graphics, the analysis of anatomical structures and image processing [3-5]. However,
it has not been used for defining a 3D shape signature and hence for content-based
3D retrieval. The process of constructing the Poisson shape histogram can be
summarized as follows: the given 3D model is first converted into a voxel
representation; then a Poisson solver with Dirichlet boundary conditions is used to
obtain a shape signature for each voxel; finally, the Poisson shape histogram is
constructed from the shape signatures. A comparative study shows that the Poisson
shape histogram achieves better retrieval performance than other similar histogram
descriptors.
The remainder of the paper is organized as follows: Section 2 provides a brief
review of related work. Section 3 discusses the Poisson equation and its relevant
properties. Section 4 discusses how to construct the Poisson shape histogram.
Section 5 provides experimental results for content-based 3D retrieval. Finally,
Section 6 concludes the paper and recommends some future work.

2 Related Work
Previous shape descriptors can be classified into two groups by their characteristics:
namely structural representation and statistical representation. The method proposed
in this paper belongs to statistical representation. This section mainly gives a brief
review on statistical shape description for content-based 3D retrieval. For more details
about structure descriptors and content-based 3D retrieval, please refer to some
survey papers[6-8].
As for statistical representations, the most common approach is first to compute
geometric signatures of the given model, such as normals, curvature or distances.
Then the extracted shape signatures are used to construct a histogram. Existing
shape signatures for 3D shape retrieval can be grouped into two types: rotation-
invariant shape signatures, and those that are not. For the latter, rotation
normalization is performed prior to the extraction of shape signatures.
Rotation variant shape signatures
The Extended Gaussian Image (EGI) defines a shape feature by the normal
distribution over the sphere [9]. An extension of EGI is the Complex Extended
Gaussian Image (CEGI) [10], which combines distance and normal information in
the shape descriptor. Shape histograms defined on shells and sectors around a model
centroid capture the point distribution [11]. Transform-based shape features can be
seen as a post-process of the original shape signatures; they often achieve better
retrieval accuracy than the



original shape signatures. Vranic et al. perform a spherical harmonics transform of
the point distribution of the given model [12]. Chen et al. considered the idea that
two models are similar if they look similar from different view angles; hence they
extracted transform coefficients in 2D spaces instead of the 3D space [13].
Transform-based 3D retrieval often achieves better retrieval performance than
histogram-based methods, but is more computationally costly.
Rotation invariant shape signatures
This kind of shape signature is robust against rotation transforms. Shape
distributions use measures over the surface, such as distance, angle and area, to
generate histograms [14]. The angle and distance distribution (AD) integrates normal
information into the distance distribution [15]. The generalized shape distribution
combines local and global shape features for 3D retrieval. The shape index, defined
from curvature, is adopted as the MPEG-7 3D shape descriptor [16]. The radius-angle
histogram extracts the angle between radius and normal for the histogram [17]. The
local diameter shape function computes the distance from the surface to the medial
axis [18]. It has a similar characteristic to the Poisson measure proposed in this
paper; the extraction of the local diameter shape function, however, is very
time-consuming (it requires nearly 2 minutes on average to construct the histogram).

3 Poisson Equation
Poisson's equation arises in gravitation and electrostatics, and is fundamental to
mathematical physics. Mathematically, Poisson's equation is a second-order elliptic
partial differential equation defined as:

ΔU = −1 ,                                                    (1)

where Δ is the Laplacian operator. The Poisson equation assigns every internal
point a value. By its definition, the Poisson equation is somewhat similar to the
distance transform, which assigns to every internal point a value that depends on
the relative position of that point within the given shape and reflects its minimal
distance to the boundary. The Poisson equation, however, differs greatly from the
distance transform. It can be interpreted as placing a set of particles at a point and
letting them move in a random walk until they hit the contour; it measures the
mean time required for a particle to hit the boundary. That is to say, the Poisson
equation considers each internal point to be affected by more than one boundary
point, and is therefore more robust than the distance transform.
The Poisson equation has useful properties for shape analysis. Here we show some
of these properties.
1. Rotation invariance. The Poisson equation is independent of the coordinate system over
the entire domain (a volume in 3D, a region in 2D). This makes a signature defined by the
Poisson equation robust against rotation.
2. Relation to geometric structure. The Poisson equation is correlated with the geometry of
the structure. This correlation gives a mathematical meaning to the shape structure.



3. Rigid-transform invariance. Similar to the geodesic distance, the Poisson equation is
strongly robust under rigid transforms.
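A toy 2D version of this construction (the paper works on a 3D voxel grid) makes the rotation-invariance property easy to check: solving ΔU = −1 with U = 0 outside the shape by Jacobi iteration, a 90° rotation of the input mask yields the correspondingly rotated solution. The function name and iteration count are our own.

```python
import numpy as np

def poisson_signature(mask, iters=2000):
    """Solve Laplacian(U) = -1 inside a binary 2D mask (1 = inside) with
    Dirichlet condition U = 0 outside, by Jacobi iteration.
    The 5-point stencil gives U = (sum of 4 neighbours + 1) / 4.
    The mask must not touch the array border (np.roll wraps around)."""
    U = np.zeros(mask.shape)
    for _ in range(iters):
        nb = (np.roll(U, 1, 0) + np.roll(U, -1, 0)
              + np.roll(U, 1, 1) + np.roll(U, -1, 1))
        U = np.where(mask == 1, (nb + 1.0) / 4.0, 0.0)
    return U
```

Values grow toward the interior of the shape (mean hitting time of the random walk), which is exactly the per-voxel signature the next section bins into a histogram.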

4 Poisson Shape Histogram and Matching

Following the definition of the Poisson equation, this section discusses how to
construct the Poisson shape histogram and how to compute similarity.
The Poisson equation assigns each internal point a value, but most 3D models use
a boundary representation such as the mesh model. The given mesh model is
therefore first converted into a 3D discrete grid (48×48×48). The voxelization
algorithm used in this paper is based on the Z-buffer [19]; its efficiency is
independent of the object complexity, and it can be implemented efficiently. The
voxelization also performs scale normalization for the given model.
Suppose the voxelized model is represented by a finite voxel set V_i, i = 1, 2, ..., N,
where N is the total voxel count. The TAUCS package [20] is then used as the
Poisson solver. After that, for each voxel V_i we obtain a Poisson shape signature,
denoted by P_i. The construction of the Poisson shape histogram consists of the
following steps:
1) For the signature set P_i, i = 1, 2, ..., N, compute its mean value μ and standard
deviation σ.
2) For each P_i, perform Gaussian normalization:

P_i' = (P_i − μ) / (3σ) .                                    (2)

3) For the normalized set P_i', construct a histogram containing 20 bins, denoted by

H = {H_1, H_2, ..., H_20} .
For two histograms, we use the L1 metric to measure their dissimilarity:

Dis_{1,2} = Σ_{i=1}^{20} | H_{1,i} − H_{2,i} | ,             (3)

where H_1 and H_2 denote the Poisson shape histograms of the two models. A bigger
value means the two models are more dissimilar.
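The three construction steps and the L1 comparison can be sketched as follows. The 3σ divisor in the Gaussian normalization and the clipping to [−1, 1] are common conventions we assume here (the paper does not spell them out), and the histogram is normalized so that models with different voxel counts remain comparable.

```python
import numpy as np

def poisson_histogram(signatures, bins=20):
    """Steps 1)-3): Gaussian-normalize the per-voxel Poisson signatures,
    then build a 20-bin histogram (normalized to sum to 1)."""
    p = np.asarray(signatures, dtype=float)
    p = (p - p.mean()) / (3.0 * p.std())        # Gaussian normalization
    p = np.clip(p, -1.0, 1.0)                   # assumed clamp to [-1, 1]
    hist, _ = np.histogram(p, bins=bins, range=(-1.0, 1.0))
    return hist / hist.sum()

def l1_distance(h1, h2):
    """Dissimilarity Dis = sum_i |H1_i - H2_i|; larger = more dissimilar."""
    return np.abs(h1 - h2).sum()
```

Two identical signature sets give distance 0, and increasingly different signature distributions give increasingly large distances, which is what a nearest-neighbour retrieval over the benchmark needs.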
Section 3 discussed the properties of the Poisson equation and showed that it is
independent of rigid transforms. Figure 1 gives the Poisson shape histograms for
horses under different rigid transforms. The Poisson shape histogram remains
almost invariant under the different rigid transforms (the small differences are due
to voxelization error). As a comparison, the D2 shape distributions of the two
models show a huge difference.



Fig. 1. Histogram descriptors for the above models (Upper: horses under different rigid
transforms. Lower: left, the Poisson shape histograms of the two models; right, the D2
shape distributions. The difference between the Poisson shape histograms is very minor,
while the difference between the D2 shape distributions is very obvious).

5 Experiment
Experiments are carried out to test the retrieval performance of the Poisson shape
histogram. All experiments are performed on an Intel Pentium 1.86 GHz machine
with 512 MB of memory. The test models are from the Princeton Shape Benchmark
database (PSB) [21]. It contains 1814 mesh models, classified into two groups of 907
models each. One is the training set, used to tune the retrieval parameters; the
other is the test set, used for comparing the retrieval performance of different shape
descriptors. The benchmark also provides different evaluation criteria for retrieval
precision. Here we use the precision-recall curve to measure retrieval accuracy; the
precision-recall measure has been widely used in information retrieval. We first
report the time for constructing the Poisson shape histogram, and then compare its
retrieval accuracy with similar histograms.
For content-based 3D retrieval, the feature extraction process should be fast; this is
very important, especially for practical applications. The time for building the
Poisson shape histogram consists of the following steps: voxelization, Poisson
solving and histogram construction. The voxelization time is about 0.07 s per
model, and the histogram construction time is negligible. Note that the time for the
Poisson solver depends on the number of voxels. Table 1 shows the times for
different voxel models.
On average, the time for computing the Poisson shape histogram is about 0.6 s,
while for the D2 shape distribution the generation time is about 0.8 s.
Next, we compare the retrieval performance of the Poisson shape histogram (PSH)
with some other histogram-based shape descriptors: the 3D shape spectrum (3DS)
and the D2 distance (D2). Figure 2 gives the precision-recall curves for


Table 1. The time cost of the Poisson solver

Voxel models          Poisson solver (s)
Fig. 2. The Precision-Recall curves for different histogram-based descriptors

Fig. 3. Some retrieval results (for each row, the left model is the query model, and the other
three models are the most similar to the queried model; note that models under different
rigid transforms are retrieved correctly).



the three kinds of shape descriptors. It shows that the Poisson shape histogram
achieves the best retrieval precision. Some retrieval results are also shown in
Figure 3; note that models under different rigid transforms are retrieved correctly.

6 Conclusion and Future Work

This paper proposed a new kind of 3D shape descriptor called the Poisson shape
histogram. It uses the Poisson equation as its main mathematical tool. The
encouraging characteristic of the Poisson shape histogram is its insensitivity to rigid
transforms; it remains rotation invariant as well. The retrieval experiments have
shown that the Poisson shape histogram achieves better retrieval precision than
other similar histogram-based 3D shape descriptors.
As a kind of histogram, the Poisson shape histogram has the main drawback that it
can only capture global shape features and cannot support partial matching. By the
definition of the Poisson equation, however, the Poisson shape signature is only
affected by local neighbors, which suggests that the Poisson shape measure can
represent local shape features as well. As future work, we will work on partial
matching based on the Poisson shape measure.

Acknowledgments. This work was supported by the Natural Science Foundation of
Zhejiang Province (Grant Nos. Y106203, Y106329). It was also partially funded by
the Education Office of Zhejiang Province (Grant No. 20051419) and the Education
Office of Sichuan Province (Grant No. 2006B040).

References

1. T. Funkhouser, P. Min, and M. Kazhdan, A Search Engine for 3D Models. ACM
Transactions on Graphics, (2003)(1): 83-105.
2. Ceyhun Burak Akgül, Bülent Sankur, Yücel Yemez, et al., A Framework for Histogram-
Induced 3D Descriptors. European Signal Processing Conference (2006).
3. L. Gorelick, M. Galun, and E. Sharon, Shape representation and classification using the
Poisson equation. CVPR, (2004): 61-67.
4. Y. Yu, K. Zhou, and D. Xu, Mesh Editing with Poisson-Based Gradient Field Manipulation. ACM SIGGRAPH, (2005).
5. H. Haider, S. Bouix, and J. J. Levitt, Characterizing the Shape of Anatomical Structures
with Poisson's Equation. IEEE Transactions on Medical Imaging, (2006). 25(10):
6. J. Tangelder and R. Veltkamp. A Survey of Content Based 3D Shape Retrieval Methods. in
International Conference on Shape Modeling. (2004).
7. N. Iyer, Y. Kalyanaraman, and K. Lou. A reconfigurable 3D engineering shape search
system Part I: shape representation. in CDROM Proc. of ASME 2003. (2003). Chicago.
8. Benjamin Bustos, Daniel Keim, Dietmar Saupe, et al., An experimental effectiveness comparison of methods for 3D similarity search. International Journal on Digital Libraries,
9. B. Horn, Extended Gaussian Images. Proceeding of the IEEE, (1984). 72(12): 1671-1686.



10. S. Kang and K. Ikeuchi. Determining 3-D Object Pose Using The Complex Extended
Gaussian Image. in International Conference on Computer Vision and Pattern Recognition. (1991).
11. M. Ankerst, G. Kastenmuller, H. P. Kriegel, et al. 3D Shape Histograms for Similarity
Search and Classification in Spatial Databases. in International Symposium on Spatial
Databases. (1999).
12. D. Vranic, 3D Model Retrieval. Ph.D. Thesis, University of Leipzig, 2004.
13. D. Y. Chen, X. P. Tian, and Y. T. Shen, On Visual Similarity Based 3D Model Retrieval.
Computer Graphics Forum (EUROGRAPHICS'03), (2003). 22(3): 223-232.
14. R. Osada, T. Funkhouser, B. Chazelle, et al. Matching 3D Models with Shape Distributions. in International Conference on Shape Modeling and Applications. (2001).
15. R. Ohbuchi, T. Minamitani, and T. Takei. Shape-Similarity Search of 3D Models by Using
Enhanced Shape Functions. in Theory and Practice of Computer Graphics. (2003).
16. T. Zaharia and F. Preteux. 3D Shape-based Retrieval within the MPEG-7 Framework. in
SPIE Conference on Nonlinear Image Processing and Pattern Analysis. (2001).
17. Xiang Pan, Yin Zhang, Sanyuan Zhang, et al., Radius-Normal Histogram and Hybrid
Strategy for 3D Shape Retrieval. International Conference on Shape Modeling and Applications, (2005): 374-379.
18. Ran Gal, Ariel Shamir, and Daniel Cohen-Or, Pose Oblivious Shape Signature. IEEE
Transactions of Visualization and Computer Graphics, (2005).
19. E. A. Karabassi, G. Papaioannou , and T. Theoharis, A Fast Depth-buffer-based Voxelization Algorithm. Journal of Graphics Tools, (1999). 4(4): 5-10.
20. S. Toledo, TAUCS: A Library of Sparse Linear Solvers. Tel-Aviv University, 2003.
21. P. Shilane, K. Michael, M. Patrick, et al. The Princeton Shape Benchmark. in International
Conference on Shape Modeling. (2004).

Point-Sampled Surface Simulation Based on Mass-Spring System
Zhixun Su1,2, Xiaojie Zhou1, Xiuping Liu1, Fengshan Liu2, and Xiquan Shi2

1 Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, P.R. China
2 Applied Mathematics Research Center, Delaware State University, Dover, DE 19901, USA

Abstract. In this paper, a physically based simulation model for point-sampled surfaces is proposed based on a mass-spring system. First, a Delaunay based simplification algorithm is applied to the original point-sampled surface to produce the simplified point-sampled surface. Then the mass-spring system for the simplified point-sampled surface is constructed by using tangent planes to address the lack of connectivity information. Finally, the deformed point-sampled surface is obtained by transferring the deformation of the simplified point-sampled surface. Experiments on both open and closed point-sampled surfaces illustrate the validity of the proposed method.


Introduction

Point based techniques have gained increasing attention in computer graphics. The main reason for this is that the rapid development of 3D scanning devices has facilitated the acquisition of point-sampled geometry. Since point-sampled objects neither have to store nor maintain globally consistent topological information, they are more flexible than triangle meshes for handling highly complex or dynamically changing shapes. In point based graphics, point based modeling is a popular field [1,4,9,13,15,21], in which physically based modeling of point-sampled objects is still a challenging area.

Physically based modeling has been investigated extensively in the past two decades. Due to their simplicity and efficiency, mass-spring systems have been widely used to model soft objects in computer graphics, such as in cloth simulation. We introduce the mass-spring system to point-sampled surface simulation. A Delaunay based simplification algorithm is applied to the original point-sampled surface to produce the simplified point-sampled surface. By using the tangent plane and projection, the mass-spring system is constructed locally for the simplified point-sampled surface. Then the deformed point-sampled surface is obtained by transferring the deformation of the simplified point-sampled surface.

The remainder of the paper is organized as follows. Related work is introduced in Section 2. Section 3 explains the Delaunay based simplification algorithm.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 33–40, 2007.
© Springer-Verlag Berlin Heidelberg 2007



Section 4 describes the simulation of the simplified point-sampled surface based on the mass-spring system. Section 5 introduces the displacement transference to the original point-sampled surface. Some experimental results are shown in Section 6. A brief discussion and conclusion are presented in Section 7.

Related Work

Point-sampled surfaces often consist of thousands or even millions of points sampled from an underlying surface. Reducing the complexity of such data is one of the key processing techniques. Alexa et al [1] described the contribution value of a point by estimating its distance to the MLS surface defined by the other sample points; the point with the smallest contribution is removed repeatedly. Pauly et al [14] extended mesh simplification algorithms to point clouds, and presented clustering, iterative, and particle simulation simplification algorithms. Moenning et al [12] devised a coarse-to-fine uniform or feature-sensitive simplification algorithm with a user-controlled density guarantee. We present a projection based simplification algorithm, which is more suitable for the construction of a mass-spring system.
Point based surface representation and editing are popular fields in point based graphics. Alexa et al [1] presented the now standard MLS surface, in which the surface is defined as the stationary set of a projection operator. Later, Shachar et al [4] proposed a robust moving least-squares fitting with sharp features for reconstructing a piecewise smooth surface from a potentially noisy point cloud. The displacement transference in our method is similar to the moving least squares projection. Zwicker [21] presented the Pointshop3D system for interactive editing of point-based surfaces. Pauly et al [15] introduced Boolean operations and free-form deformation of point-sampled geometry. Miao et al [10] proposed a detail-preserving local editing method for point-sampled geometry based on the combination of normal geometric details and position geometric details. Xiao et al [19,20] presented efficient filtering and morphing methods for point-sampled geometry. Since the pioneering work of Terzopoulos and his co-workers [18], significant research effort has been devoted to physically based modeling for meshes [5,16]. Recently, Guo and Qin et al [2,6,7,8] proposed a framework of physically based morphing, animation and simulation systems. Müller et al [13] presented point based animation of elastic, plastic and melting objects based on continuum mechanics. Clarenz et al [3] proposed a framework for processing point-based surfaces via PDEs. In this paper, the mass-spring system is constructed directly for the simplified point-sampled surface. The idea of the present method is similar to [17]; they studied curve deformation, while we focus on point-sampled surface simulation.

Simplification of Point-Sampled Surface

The point-sampled surface consists of n points P = {p_i ∈ R³, i = 1, …, n} sampled from an underlying surface, either open or closed. Since the normal at any point can be estimated by the eigenvector of the covariance matrix that corresponds to the smallest eigenvalue [14], without loss of generality we can assume that the normal n_i at point p_i is known as input. Traditional simplification algorithms reserve more sample points in regions of high frequency, whereas fewer sample points are used to express regions of low frequency, which is called adaptivity. However, adaptivity does not necessarily bring good results for simulation. An example is shown in Fig. 1: 1a) shows the sine curve and the simplified curve, with a force F applied on the middle of the simplified curve; 1b) shows that the simulation based on the simplified curve produces the wrong deformation. We present a Delaunay based simplification algorithm, which is suitable for simulation and convenient for the construction of the mass-spring system.

a) Sine curve and the simplified polyline   b) Deformation of the simplified polyline under an applied force F

Fig. 1. The effect of simplification on the simulation

For p_i ∈ P, the index set of its k-nearest points is denoted by N_i^k = {i_1, …, i_k}. These points are projected onto the tangent plane at point p_i (the plane passing through p_i with normal n_i), and the corresponding projection points are denoted by q_i^j, j = 1, …, k. A 2D Delaunay triangulation is implemented on the k + 1 projection points. There are two possible cases: 1) p_i is not on the boundary of the surface; 2) p_i is on the boundary of the surface, as shown in Fig. 2. Suppose that there are m points {q_i^{j_r}, r = 1, …, m} which are connected with p_i in the triangle mesh; the union of the triangles that contain p_i is denoted by R_i, whose diameter is d_i. In either case, if d_i is less than the user-defined threshold, p_i will be removed. This process is repeated until the desired number of points is reached or the diameter d_i of every point exceeds the threshold. The resulting simplified point set is denoted by S = {s_j, j = 1, …, n_s}, and s_j is called a simulation point. It is important to select a proper value of k: too small a k may influence the quality of the simplification, while too big a k will increase the computational cost. In our experiments, a preferable k lies in the interval [10, 20].
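The tangent-plane projection step described above can be sketched as follows (a minimal sketch assuming numpy; the subsequent 2D Delaunay triangulation, e.g. via scipy.spatial.Delaunay, and the diameter test are omitted, and all names are illustrative):

```python
import numpy as np

def project_to_tangent_plane(p, n, neighbors):
    """Project the k nearest neighbors of p onto the tangent plane at p.

    Returns 2D coordinates of the projections in a local orthonormal
    frame with origin p, ready for a 2D Delaunay triangulation.
    """
    n = n / np.linalg.norm(n)
    # Build an orthonormal basis (e1, e2) of the tangent plane.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(n, a)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    d = neighbors - p            # vectors from p to its k neighbors
    d -= np.outer(d @ n, n)      # remove the normal component
    return np.column_stack((d @ e1, d @ e2))
```

The returned coordinates, together with the origin (the projection of p_i itself), would then be triangulated, and the diameter of the union of triangles containing p_i decides whether p_i is removed.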


Simulation Based on Mass-Spring System

Structure of the Springs

Since no explicit connectivity information is known for the simplified point-sampled surface, the traditional mass-spring system [16] cannot be applied directly. Here the stretching and bending springs are constructed based on the region R_i corresponding to s_i. For s_i ∈ S, the vertices of the region R_i are {q_i^{j_r}, r =



a) Delaunay triangulation for case 1)

b) Delaunay triangulation for case 2)

Fig. 2. Delaunay triangulation of the projection points on the tangent plane

1, …, m}, which are the projection points of {s_{j_r}, r = 1, …, m}. Assume that the q_i^{j_r} are sorted counterclockwise. The stretching springs link s_i and s_{j_r}, and the bending springs connect s_{j_r} and s_{j_{r+2}} (Fig. 3). This process is implemented for each point in S, and the structure of the springs is obtained consequently.

a) Stretching springs for case 1) and 2) b) Bending springs for case 1) and 2)
Fig. 3. The spring structures (dashed lines)


Estimation of the Mass

The mass of s_i is needed for simulation. Note that in a region of low sampling density, a simulation point s_i represents a large mass, whereas it represents a smaller mass in a region of higher sampling density. Since the area of the region R_i reflects the sampling density, the mass of s_i can be estimated by

m_i = ρ S_{R_i} ,  (1)

where S_{R_i} is the area of the region R_i, and ρ is the mass density of the surface.


According to Hooke's law, the internal force F_s(S_{i,j}) of the spring S_{i,j} linking two mass points s_i and s_j can be written as

F_s(S_{i,j}) = k_{i,j}^s (‖I_{i,j}‖ − l_{i,j}) I_{i,j} / ‖I_{i,j}‖ ,  (2)

where k_{i,j}^s is the stiffness of the spring S_{i,j}, I_{i,j} = s_j − s_i, and l_{i,j} is the natural length of the spring S_{i,j}.



In dynamic simulation, a damping force is often introduced to increase the stability. In our context, the damping force is represented as

F_d(S_{i,j}) = k_{i,j}^d (v_j − v_i) ,  (3)

where k_{i,j}^d is the coefficient of damping, and v_j and v_i are the velocities of s_j and s_i.
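Equations (2) and (3) translate directly into code (a sketch assuming numpy; the stiffness ks, damping coefficient kd and rest length l0 are illustrative parameters):

```python
import numpy as np

def spring_force(si, sj, l0, ks):
    """Hooke force (Eq. 2) acting on s_i from the spring linking s_i and s_j."""
    I = sj - si                    # I_{i,j} = s_j - s_i
    L = np.linalg.norm(I)
    return ks * (L - l0) * I / L   # stretched (L > l0) pulls s_i toward s_j

def damping_force(vi, vj, kd):
    """Damping force (Eq. 3) based on the relative velocity."""
    return kd * (vj - vi)
```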
Applying external forces to the mass-spring system yields realistic dynamics. The gravitational force acting on a mass point s_i is given by

F_g = m_i g ,  (4)

where m_i is the mass of the mass point s_i, and g is the acceleration of gravity. A force that connects a mass point to a point r_0 in world coordinates is given by

F_r = k_r (r_0 − s_i) ,  (5)

where k_r is the spring constant. Similar to [18], other types of external forces, such as the effect of a viscous fluid, can be introduced into our system.


The mass-spring system is governed by Newton's law. For a mass point s_i, there exists the equation

F_i = m_i a_i = m_i d²x_i/dt² ,  (6)

where m_i, x_i, and a_i are the mass, displacement, and acceleration of s_i, respectively. A large number of integration schemes can be applied to Eq. (6). Explicit schemes are easy to implement and computationally cheap, but stable only for small time steps. In contrast, implicit schemes are unconditionally stable at the cost of computational and memory consumption. We use the explicit Euler scheme for simplicity in our system.
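A single explicit Euler step for one mass point, as used for Eq. (6), can be sketched as follows (illustrative names; the force f is assumed to be accumulated from Eqs. (2)-(5) beforehand):

```python
import numpy as np

def euler_step(x, v, f, m, dt):
    """One explicit Euler step of m * d^2 x / dt^2 = F (Eq. 6)."""
    a = f / m
    v_new = v + dt * a
    x_new = x + dt * v   # explicit: position update uses the old velocity
    return x_new, v_new
```

The small-time-step stability limit mentioned above is visible here: too large a dt makes the spring forces overshoot and the system diverges.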

Deformation of the Original Point-Sampled Surface

The deformation of the original point-sampled surface can be obtained from the deformation of the simplified point-sampled surface. Let us consider the x-component u of the displacement field u = (u, v, w). Similar to [13], we compute the displacement of p_i through the simulation points in its neighborhood. Since the simulation points are sampled from an underlying surface, the fitting may be singular due to coplanarity if we use a moving least squares fitting to compute the displacement directly. We therefore treat the tangent plane at p_i as the reference domain. The simulation points s_i^j, j = 1, …, k, in the neighborhood of p_i are projected onto the reference plane, with corresponding projection points q_i^j, j = 1, …, k, and (x̄_j, ȳ_j), j = 1, …, k, are the coordinates of q_i^j in the local coordinate system with origin p_i. Let the x-component u be given by

u(x̄, ȳ) = a_0 + a_1 x̄ + a_2 ȳ .  (7)

The parameters a_l, l = 0, 1, 2, can be obtained by minimizing

E_i = Σ_{j=1}^{k} w(r_j) (u_j − a_0 − a_1 x̄_j − a_2 ȳ_j)² ,  (8)

where r_j is the distance between p_i and q_i^j, and w(·) is a Gaussian weighting function w(r_j) = exp(−r_j²/h²). Then u_i = u(0, 0) = a_0. Similarly, v and w can be computed. Since the shape of the point-sampled surface is changed by the displacements of the sample points, the normal of the underlying surface changes as well. The normal can be computed by the covariance analysis mentioned above. The point sampling density also changes due to the deformation; we use the resampling scheme of [1] to maintain the surface quality.
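The minimization of Eq. (8) is a small weighted least-squares problem per point; a sketch of the 3×3 normal-equation solve follows (assuming numpy; names are illustrative):

```python
import numpy as np

def fit_displacement(xy, u, h):
    """Minimize Eq. (8): sum_j w(r_j) (u_j - a0 - a1*x_j - a2*y_j)^2.

    xy : (k, 2) local tangent-plane coordinates of the projected
         simulation points, with p_i at the origin.
    u  : (k,)  one displacement component at those points.
    Returns a0 = u(0, 0), the interpolated displacement at p_i.
    """
    r2 = np.sum(xy**2, axis=1)
    w = np.exp(-r2 / h**2)                      # Gaussian weights w(r_j)
    A = np.column_stack((np.ones(len(u)), xy))  # basis (1, x, y)
    AtW = A.T * w
    a = np.linalg.solve(AtW @ A, AtW @ u)       # weighted normal equations
    return a[0]
```

For displacement data that is exactly linear over the neighborhood, the fit reproduces it regardless of the weights.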

Experimental Results

We implement the proposed method on a PC with a Pentium IV 2.0GHz CPU and 512MB RAM. Experiments are performed on both closed and open surfaces, as shown in Fig. 4. The sphere is downloaded from the website of PointShop3D and is composed of 3203 surfels. For the modeling of the hat, the original point-sampled surface is sampled from the lower part of a sphere, and a stretching force acting on the middle of the point-sampled surface produces the hat. We also produce another interesting example, the logo CGGM'07, both letters of which are produced by applying forces on the point-sampled surfaces (Fig. 5). The simplification and the construction of the mass-spring system can be performed as preprocessing, and the number of simulation points in the simplified surface is much smaller than in the original point-sampled surface, so the simulation is very efficient. The performance of the simulation is illustrated in Table 1. The main computational cost is the transference of the displacement from the simplified surface to the original point-sampled surface and the normal computation of the deformed point-sampled surface. Compared to the global parameterization in [7], the local construction of the mass-spring system makes the simulation more efficient. The continuum-based method [13] presented the modeling of volumetric objects, while our method can deal with both volumetric objects, using their boundary surfaces, and sheet-like objects.
Table 1. The simulation time

Number of simulation points    85     273    327
Simulation time per step (s)   0.13   0.25   0.32


a) The deformation of a sphere


b) The modeling of a hat

Fig. 4. Examples of our method

Fig. 5. The CGGM'07 logo


As an easily implemented physically based method, mass-spring systems have been investigated deeply and used widely in computer graphics. However, they cannot be applied directly to point-sampled surfaces due to the lack of connectivity information and the difficulty of constructing the mass-spring system. We solve the problem of constructing a mass-spring system for point-sampled surfaces based on projection and present a novel mass-spring based simulation method for point-sampled surfaces. A Delaunay based simplification algorithm facilitates the construction of the mass-spring system and ensures the efficiency of the simulation method. Further study will focus on simulation with adaptive topology, and the automatic determination of the simplification threshold should be investigated to ensure a suitable tradeoff between accuracy and efficiency.

Acknowledgments. This work is supported by the Program for New Century Excellent Talents in University grant (No. NCET-05-0275), NSFC (No. 60673006) and an INBRE grant (5P20RR01647206) from NIH, USA.

1. Alexa M., Behr J., Cohen-Or D., Fleishman S., Levin D., Silva C. T.: Computing and rendering point set surfaces. IEEE Transactions on Visualization and Computer Graphics 9 (2003) 3-15
2. Bao Y., Guo X., Qin H.: Physically-based morphing of point-sampled surfaces.
Computer Animation and Virtual Worlds 16 (2005) 509 - 518



3. Clarenz U., Rumpf M., Telea A.: Finite elements on point based surfaces. Proceedings of Symposium on Point-Based Graphics (2004)
4. Fleishman S., Cohen-Or D., Silva C. T.: Robust moving least-squares fitting with sharp features. ACM Transactions on Graphics 24 (2005) 544-552
5. Gibson S.F., Mirtich B.: A survey of deformable models in computer graphics.
Technical Report TR-97-19, MERL, Cambridge, MA, (1997)
6. Guo X., Hua J., Qin H.: Scalar-function-driven editing on point set surfaces. IEEE
Computer Graphics and Applications 24 (2004) 43 - 52
7. Guo X., Li X., Bao Y., Gu X., Qin H.: Meshless thin-shell simulation based on
global conformal parameterization. IEEE Transactions on Visualization and Computer Graphics 12 (2006) 375-385
8. Guo X., Qin H.: Real-time meshless deformation. Computer Animation and Virtual
Worlds 16 (2005) 189 - 200
9. Kobbelt L., Botsch M.: A survey of point-based techniques in computer graphics.
Computer & Graphics, 28 (2004) 801-814
10. Miao Y., Feng J., Xiao C., Li H., Peng Q.: Detail-preserving local editing for point-sampled geometry. H.-P. Seidel, T. Nishita, Q. Peng (Eds), CGI 2006, LNCS 4035 (2006) 673-681
11. Miao L., Huang J., Zheng W., Bao H. Peng Q.: Local geometry reconstruction
and ray tracing for point models. Journal of Computer-Aided Design & Computer
Graphics 18 (2006) 805-811
12. Moenning C., Dodgson N. A.: A new point cloud simplification algorithm. Proceedings 3rd IASTED Conference on Visualization, Imaging and Image Processing, Benalmádena, Spain (2003) 1027-1033
13. Müller M., Keiser R., Nealen A., Pauly M., Gross M., Alexa M.: Point based animation of elastic, plastic and melting objects. Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2004) 141-151
14. Pauly M., Gross M., Kobbelt L.: Efficient simplification of point-sampled surfaces. Proceedings of IEEE Visualization (2002) 163-170
15. Pauly M., Keiser R., Kobbelt L., Gross M.: Shape modeling with point-sampled
geometry. ACM Transactions on Graphics 22(2003) 641-650
16. Provot X.: Deformation constraints in a mass-spring model to describe rigid cloth
behavior. Proc of Graphics Interface (1995) 147-154.
17. Su Z., Li L., Zhou X.: Arc-length preserving curve deformation based on subdivision. Journal of Computational and Applied Mathematics 195 (2006) 172-181
18. Terzopoulos D., Platt J., Barr A., Fleischer K.: Elastically deformable models.
Proc. SIGGRAPH (1987) 205-214
19. Xiao C., Miao Y., Liu S., Peng Q.: A dynamic balanced flow for filtering point-sampled geometry. The Visual Computer 22 (2006) 210-219
20. Xiao C., Zheng W., Peng Q., Forrest A. R., Robust morphing of point-sampled
geometry. Computer Animation and Virtual Worlds 15 (2004) 201-210
21. Zwicker M., Pauly M., Knoll O., Gross M.: Pointshop3D: An interactive system for point-based surface editing. ACM Transactions on Graphics 21 (2002) 322-329

Sweeping Surface Generated by a Class of Generalized Quasi-cubic Interpolation Spline

Benyue Su1,2 and Jieqing Tan1

1 Institute of Applied Mathematics, Hefei University of Technology, Hefei 230009, China
2 Department of Mathematics, Anqing Teachers College, Anqing 246011, China

Abstract. In this paper we present a new method for modeling interpolation sweep surfaces by the C²-continuous generalized quasi-cubic interpolation spline. Once some key positions and orientations, and some points which are passed through by the spine and initial cross-section curves, are given, the corresponding sweep surface can be constructed by the introduced spline function without calculating control points inversely, as in the cases of the B-spline and Bezier methods, or solving a system of equations, as in the case of the cubic polynomial interpolation spline. A local control technique is also proposed for sweep surfaces using a scaling function, which allows the user to change the shape of an object intuitively and effectively. On the basis of these results, some examples are given to show how the method is used to model some interesting objects.

Sweeping is a powerful technique to generate surfaces in CAD/CAM, robotics motion design and NC machining, etc. There has been abundant research on the modeling of sweeping surfaces and their applications. Hu and Ling ([2], 1996) considered the swept volume of a moving object, which can be constructed from the envelope surfaces of its boundary. In this study, these envelope surfaces are the collections of the characteristic curves of the natural quadric surfaces.
Wang and Joe ([13], 1997) presented sweep surface modeling by approximating a rotation minimizing frame. The advantages of this method lie in the robust computation and the smoothness along the spine curves. Jüttler and Maurer ([5], 1999) constructed rational representations of sweeping surfaces with the help of the associated rational frames of PH cubic curves and presented sufficient conditions ensuring G1 continuity of the sweeping surfaces.

This work was completed with the support of the National Natural Science Foundation of China under Grant No. 10171026 and No. 60473114, and in part by the Research Funds for Young Innovation Group, Education Department of Anhui Province under Grant No. 2005TD03, the Anhui Provincial Natural Science Foundation under Grant No. 070416273X, the Natural Science Foundation of Anhui Provincial Education Department under Grant No. 2006KJ252B, and the Funds for Science & Technology Innovation of the Science & Technology Department of Anqing City under Grant No. 2003-48.

Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 41–48, 2007.
© Springer-Verlag Berlin Heidelberg 2007

Schmidt and Wyvill
([9], 2005) presented a technique for generating implicit sweep objects that support direct specification and manipulation of the surface with no topological limitations on the 2D sweep template. Seong, Kim et al. ([10], 2006) presented an efficient and robust algorithm for computing the perspective silhouette of the boundary of a general swept volume. In computer graphics, many advanced techniques using sweeping surfaces ([1], [3], [4]) have been applied to deformation, NC simulation, motion tracing and animation, including human body modeling and cartoon animation. Yoon and Kim ([14], 2006) proposed an approach to freeform deformation (FFD) using sweeping surfaces, where a 3D object is approximated with sweep surfaces and it is easy to control shape deformations using a small number of sweep parameters. Li, Ge and Wang ([6], 2006) introduced a sweeping function and applied it to surface deformation and modeling, where the surface can be pulled or pushed along a trajectory.

In the process of constructing a sweep surface, the hard work in modeling is to present simple objects and refine them towards the desired shapes, where the construction of the spine and cross-section curves and the design of the moving frame ([8]) are very important. The Frenet frame, the generalized translation frame and the rotation-minimizing frame ([4], [5], [7], [13], [14]) can all be applied to solve these problems.
In general, the spine curve can be represented by Bezier and B-spline methods, but these have many difficulties in calculating the data points inversely in order to interpolate given points. The main contribution of this paper is the development of a new method based on a class of generalized quasi-cubic interpolation splines. This approach has the following features:
The spine and cross-section curves are C² continuous and pass through points given by the user, without calculating the control points inversely as in the cases of the B-spline and Bezier methods or solving a system of equations as in the case of the cubic polynomial interpolation spline.
A local control technique is proposed via the defined spline. It can be implemented flexibly and effectively in human-computer interaction.
The moving frame is smooth and can be established in association with the spine curve uniformly using our method.
The rest of this paper is organized as follows: A C²-continuous generalized quasi-cubic interpolation spline is introduced in Sect. 2. We present a new method for sweep surface modeling by the generalized quasi-cubic interpolation spline in Sect. 3. Some examples of shape modeling by the introduced method are given in Sect. 4. Finally, we conclude the paper in Sect. 5.



C²-Continuous Generalized Quasi-cubic Interpolation Spline

Definition 1. [11] Let b_0, b_1, b_2, …, b_{n+2}, (n ≥ 1), be given control points. Then a generalized quasi-cubic piecewise interpolation spline curve is defined to be

p_i(t) = Σ_{j=0}^{3} B_{j,3}(t) b_{i+j},  t ∈ [0, 1],  i = 0, 1, …, n−1 ,  (1)

where the four blending functions B_{j,3}(t), j = 0, 1, 2, 3, given in (2), are trigonometric combinations of sin(πt/2), cos(πt/2), sin(πt) and cos(πt); their exact expressions can be found in [11].

From (2), we know that the B_{i,3}(t), (i = 0, 1, 2, 3), possess properties similar to those of the B-spline basis functions except for positivity. Moreover, p_i(t) interpolates the points b_{i+1} and b_{i+2}. That is,

p_i(0) = b_{i+1},  p_i(1) = b_{i+2} .  (3)


From (1) and (2), we can also get

p′_i(0) = (√2 − 1)(b_{i+2} − b_i),  p′_i(1) = (√2 − 1)(b_{i+3} − b_{i+1}) ,  (4)

p″_i(0) = (π²/4)(b_i − 2b_{i+1} + b_{i+2}),  p″_i(1) = (π²/4)(b_{i+1} − 2b_{i+2} + b_{i+3}) ,  (5)

so that

p_i^{(l)}(1) = p_{i+1}^{(l)}(0),  l = 1, 2,  i = 0, 1, …, n−2 .  (6)



Therefore, the continuity of the quasi-cubic piecewise interpolation spline curves is established up to second derivatives. Besides this property, the quasi-cubic piecewise interpolation spline curves also possess symmetry, geometric invariability and other properties; the details of these properties can be found in our other paper ([11]).

Sweep Surface Modeling

Given a spine curve P(t) in space and a cross-section curve C(θ), a sweep surface W(t, θ) can be generated by

W(t, θ) = P(t) + R(t)(s(t) C(θ)) ,  (7)

where P(t) is the spine curve, R(t) is an orthogonal matrix representing a moving frame along P(t), and s(t) is a scaling function. Geometrically, the sweep surface W(t, θ) is generated by sweeping C(θ) along P(t) with the moving frame R(t). The cross-section curve C(θ) lies in 2D or 3D space and passes through the spine curve P(t) during sweeping.

So the key problems in sweep surface generation are to construct the spine and cross-section curves P(t), C(θ) and to determine the moving frame R(t).
Given initial cross-sections C_j(θ) moving along a spine curve P_i(t), each given position is associated with a local transformation R_i(t) on C_j(θ). The sweep surface is generated by interpolating these key cross-sections at the special positions given by the user:

W_{i,j}(t, θ) = P_i(t) + R_i(t)(s_i(t) C_j(θ))
             = (x_i(t), y_i(t), z_i(t))^T + (r_{kl,i}(t))_{3×3} (s_i^x(t) C_j^x(θ), s_i^y(t) C_j^y(θ), 0)^T ,  (8)

where s(t) is the scaling function, which can be used to change the shapes of the cross-sections to achieve local deformations.
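Equation (7) can be evaluated directly once P(t), R(t), s(t) and C(θ) are available as functions; the sketch below uses a trivial straight spine, identity frame, uniform scaling and a unit-circle cross-section purely for illustration (all names are ours, not the paper's):

```python
import numpy as np

def sweep_point(P, R, s, C, t, theta):
    """Evaluate W(t, theta) = P(t) + R(t) (s(t) * C(theta))  (cf. Eq. 7)."""
    return P(t) + R(t) @ (s(t) * C(theta))

# Illustrative ingredients: straight spine along z, identity frame,
# uniform scaling and a unit-circle cross-section.
P = lambda t: np.array([0.0, 0.0, t])
R = lambda t: np.eye(3)
s = lambda t: np.array([1.0, 1.0, 0.0])
C = lambda th: np.array([np.cos(th), np.sin(th), 0.0])
```

Sampling t and theta over a grid of such evaluations yields the sweep surface mesh.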

The Construction of Spine and Cross-Section Curves

From the above discussion, we know that once some places that the cross-sections will pass through are given, a spine curve can be constructed to interpolate these places (points) as follows:

P_i(t) = (x_i(t), y_i(t), z_i(t))^T = Σ_{j=0}^{3} B_{j,3}(t) b_{i+j},  t ∈ [0, 1],  i = 0, 1, …, n−1 ,  (9)

where b_i, i = 0, 1, …, n+2, (n ≥ 1), are points (positions) given by the user, and B_{j,3}(t), (j = 0, 1, 2, 3), are the generalized quasi-cubic piecewise interpolation spline basis functions.
Similarly, if the explicit expressions of the cross-section curves are unknown in advance but we know that the cross-section curves pass through some given points, then we can define the cross-section curves by

C_j(θ) = (C_j^x(θ), C_j^y(θ), 0)^T = Σ_{k=0}^{3} B_{k,3}(θ) q_{j+k},  θ ∈ [0, 1],  j = 0, 1, …, m−1 ,  (10)

where q_j, j = 0, 1, …, m+2, (m ≥ 1), are points (positions) given by the user.

In order to improve the flexibility and local deformation of the interpolation sweeping surfaces, we introduce scaling functions defined by

s_i(t) = (s_i^x(t), s_i^y(t), 0)^T = Σ_{j=0}^{3} B_{j,3}(t) s_{i+j},  t ∈ [0, 1],  i = 0, 1, …, n−1 ,

where s_i = (ŝ_i, s̃_i, 0)^T, i = 0, 1, …, n+2, (n ≥ 1), and the ŝ_i and s̃_i are n+3 nonnegative real numbers respectively, which are called scaling factors. B_{j,3}(t), (j = 0, 1, 2, 3), are the generalized quasi-cubic piecewise interpolation spline basis functions.




The Moving Frame

In order to interpolate the special orientations of the key cross-sections, we can find a proper orthogonal matrix sequence R(t) as a series of moving frames, such that R(t) interpolates the given orthogonal matrices at the times t = t_i. Therefore, the interpolation problem lies in R(t_i) = R_i, where the R_i are the given orthogonal matrices at t = t_i.
For the given positions of the moving frames (P_i, Rx_i, Ry_i, Rz_i), i = 0, 1, …, n−1, we interpolate the translation parts P_i by the generalized quasi-cubic interpolation spline introduced in the above section, and we can also interpolate the three orthogonal coordinates (Rx_i, Ry_i, Rz_i) homogeneously by the generalized quasi-cubic interpolation spline (Fig. 1(a)). Namely,

R_i(t) = (Rx_i(t), Ry_i(t), Rz_i(t))^T = Σ_{j=0}^{3} B_{j,3}(t)(Rx_{i+j}, Ry_{i+j}, Rz_{i+j})^T,  t ∈ [0, 1],  i = 0, 1, …, n−1 ,  (11)




Fig. 1. (a) The moving frame at different positions; the dashed line is the spine curve. (b) The sweep surface associated with an open cross-section curve.

Notes and Comments. Since (Rx_i(t), Ry_i(t), Rz_i(t)) defined by Eq. (11) usually does not form an accurate orthogonal coordinate system at t ≠ t_i, we renew it by Schmidt orthogonalization or by an approximation of the orthogonal one with a controllable error. We can also convert the corresponding orthogonal matrices into quaternion form, then interpolate these quaternions by (11) similarly; at last, the accurate orthogonal coordinate system can be obtained by the inverse conversion.
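The Schmidt (Gram-Schmidt) renewal of an interpolated, approximately orthogonal frame can be sketched as follows (assuming numpy; the frame vectors are taken as rows, and all names are illustrative):

```python
import numpy as np

def renew_frame(Rx, Ry, Rz):
    """Schmidt orthonormalization of an approximately orthogonal frame."""
    e1 = Rx / np.linalg.norm(Rx)
    e2 = Ry - (Ry @ e1) * e1           # remove the e1 component
    e2 /= np.linalg.norm(e2)
    e3 = Rz - (Rz @ e1) * e1 - (Rz @ e2) * e2
    e3 /= np.linalg.norm(e3)
    return np.vstack((e1, e2, e3))     # rows form an orthonormal frame
```

The drift introduced by interpolating the frame axes independently is small between key positions, so this renewal perturbs the frame only slightly.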
From (7), (8) and (11), we know that for fixed θ = θ*,

W_{i,j}(t, θ*) = Σ_{k=0}^{3} B_{k,3}(t)(b_{i+k} + R_{i+k}(s_{i+k} q_j*)) ,  (12)

where q_j* = q_j(θ*), and for fixed t = t*,

W_{i,j}(t*, θ) = P_i* + R_i* Σ_{k=0}^{3} B_{k,3}(θ)(s_i* q_{j+k}) ,  (13)

where P_i* = P_i(t*), R_i* = R_i(t*) and s_i* = s_i(t*).
Since the q_j are constant vectors, we see that W_{i,j}(t, θ*) is C²-continuous, and the points on the curves W_{i,j}(t, θ*) can be obtained by applying the transformations of stretching, rotation and translation to the point q_j*.
The cross-section curves W_{i,j}(t*, θ) at t = t* can likewise be attained by stretching, rotation and translation transformations of the initial cross-section curves C_j(θ).
Moreover, by computing the first and second partial derivatives of W_{i,j}(t, θ), we get

∂W_{i,j}(t, θ)/∂t = P′_i(t) + ∂(R_i(t)(s_i(t) C_j(θ)))/∂t ,  (14)

∂^l W_{i,j}(t, θ)/∂θ^l = R_i(t)(s_i(t) ∂^l C_j(θ)/∂θ^l),  l = 1, 2 .  (15)

Then W_{i,j}(t, θ) is C²-continuous with respect to t and θ by (5) and (14).

The Modeling Examples

Example 1. Given the interpolating points of the spine curve b_0 = (0, 0, 1), b_1 = (0, 0, 1), b_2 = (1, 0, 2.5), b_3 = (2, 0, 3), b_4 = (3, 0, 3), b_5 = (4, 0, 2) and b_6 = (4, 0, 2), suppose the initial cross-section curve passes through the points (cos((i−1)π/6), sin((i−1)π/6)), i = 1, 2, …, 13. The rotation angles at the four positions are 0, π/3, π/2 and 2π/3 respectively. The scaling factors are selected by ŝ_i = s̃_i ≡ 1. Then we get the sweeping interpolation surface as in Fig. 2(a) and Fig. 3.



Fig. 2. The four key positions of the cross-section curve during sweeping, showing the trajectory curves and the object curves. (a) is the figure in Example 1 and (b) is the figure in Example 2.

Example 2. Given the interpolation points of the spine curve b_0 = (0, 0, 0), b_1 = (0, 0, 1), b_2 = (2, 0, 2.5), b_3 = (4, 0, 3), b_4 = (6, 0, 3), b_5 = (8, 0, 2) and b_6 = (8, 0, 2). The initial cross-section curve interpolates the points (cos((i−1)π/6), sin((i−1)π/6)), i = 1, 2, …, 13. The rotation angles at the four positions are 0, π/6, π/4 and π/2 respectively. The scaling factors are chosen to be ŝ_i = s̃_i = {1.4, 1.2, 1, 0.8, 0.6, 0.4, 0.2}. Then we get the sweeping interpolation surface as in Fig. 2(b) and Fig. 4.


Fig. 3. The sweep surface modeling in Example 1; (b) is the section plane of figure (a)



Fig. 4. The sweep surface modeling in Example 2; (b) is the section plane of figure (a)

Example 3. The interpolation points of the spine curve and the rotation angles are the same as in Example 2. The initial cross-section curve interpolates the points q0 = (3, 1), q1 = (2, 2), q2 = (1, 1), q3 = (1, 2), q4 = (2, 1), q5 = (3, 2).
The scaling factors are chosen to be s_i = s̄_i ≡ 1. Then we get the sweeping interpolation surface generated by the open cross-section curve, as shown in Fig. 1(b).

Conclusions and Discussions

As mentioned above, we have described a new method for constructing interpolation sweep surfaces by the C²-continuous generalized quasi-cubic interpolation spline. Once some key positions and orientations are given, together with the points to be passed through by the spine and initial cross-section curves, we can construct the corresponding sweep surface with the introduced spline function. We have also proposed a local control technique for sweep surfaces using a scaling function, which allows the user to change the shape of an object intuitively and effectively.
Note that, in many other applications of sweep surfaces, the cross-section curves are sometimes defined on circular arcs or spherical surfaces, etc. In that case we can construct the cross-section curves by the circular trigonometric Hermite interpolation spline introduced in our companion paper [12].
On the other hand, in order to avoid a sharp acceleration of the moving frame, we can use the chord length parametrization in the generalized quasi-cubic interpolation spline.
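Chord length parametrization assigns parameter values whose spacing is proportional to the distances between consecutive interpolation points, which keeps the parametric speed roughly uniform. A minimal sketch (our own helper, not code from the paper):

```python
import math

def chord_length_params(points):
    """Chord-length parametrization: parameter spacing proportional to
    the distance between consecutive interpolation points, normalized
    to the interval [0, 1]."""
    dists = [math.dist(p, q) for p, q in zip(points, points[1:])]
    total = sum(dists)
    t, params = 0.0, [0.0]
    for d in dists:
        t += d / total
        params.append(t)
    return params
```

For the spine points of Example 2, this would assign a larger parameter step where consecutive b_i are far apart, so the frame moves without sudden acceleration.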



In future work, we will investigate real-time applications of surface modeling based on the sweep method, as well as the interactive feasibility of controlling the shape of freeform 3D objects.

References

1. Du, S.J., Surmann, T., Webber, O., Weinert, K.: Formulating swept profiles for five-axis tool motions. International Journal of Machine Tools & Manufacture 45 (2005) 849–861
2. Hu, Z.J., Ling, Z.K.: Swept volumes generated by the natural quadric surfaces. Comput. & Graphics 20 (1996) 263–274
3. Hua, J., Qin, H.: Free form deformations via sketching and manipulating the scalar fields. In: Proc. of the ACM Symposium on Solid Modeling and Application, 2003, pp 328–333
4. Hyun, D.E., Yoon, S.H., Kim, M.S., Jüttler, B.: Modeling and deformation of arms and legs based on ellipsoidal sweeping. In: Proc. of the 11th Pacific Conference on Computer Graphics and Applications (PG 2003), 2003, pp 204–212
5. Jüttler, B., Mäurer, C.: Cubic Pythagorean hodograph spline curves and applications to sweep surface modeling. Computer-Aided Design 31 (1999) 73–83
6. Li, C.J., Ge, W.B., Wang, G.P.: Dynamic surface deformation and modeling using rubber sweepers. Lecture Notes in Computer Science 3942 (2006) 951–961
7. Ma, L.Z., Jiang, Z.D., Chan, Tony K.Y.: Interpolating and approximating moving frames using B-splines. In: Proc. of the 8th Pacific Conference on Computer Graphics and Applications (PG 2000), 2000, pp 154–164
8. Olver, P.J.: Moving frames. Journal of Symbolic Computation 36 (2003) 501–512
9. Schmidt, R., Wyvill, B.: Generalized sweep templates for implicit modeling. In: Proc. of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, 2005, pp 187–196
10. Seong, J.K., Kim, K.J., Kim, M.S., Elber, G.: Perspective silhouette of a general swept volume. The Visual Computer 22 (2006) 109–116
11. Su, B.Y., Tan, J.Q.: A family of quasi-cubic blended splines and applications. J. Zhejiang Univ. SCIENCE A 7 (2006) 1550–1560
12. Su, B.Y., Tan, J.Q.: Geometric modeling for interpolation surfaces based on blended coordinate system. Lecture Notes in Computer Science 4270 (2006)
13. Wang, W.P., Joe, B.: Robust computation of the rotation minimizing frame for sweep surface modeling. Computer-Aided Design 23 (1997) 379–391
14. Yoon, S.H., Kim, M.S.: Sweep-based Freeform Deformations. Computer Graphics Forum (Eurographics 2006) 25 (2006) 487–496

An Artificial Immune System Approach for B-Spline
Surface Approximation Problem

Erkan Ülker¹ and Veysi İşler²

¹ Selçuk University, Department of Computer Engineering, 42075 Konya, Turkey
² Middle East Technical University, Department of Computer Engineering, 06531 Ankara, Turkey

Abstract. In surface fitting problems, the selection of knots in order to get an optimized surface for a shape design is a well-known problem. For large data, this problem needs to be dealt with by optimization algorithms that avoid possible local optima and at the same time reach the desired solution in an iterative fashion. Many computational intelligence optimization techniques, such as evolutionary optimization algorithms, artificial neural networks and fuzzy logic, have already been successfully applied to the problem. This paper presents an application of another computational intelligence technique known as Artificial Immune Systems (AIS) to the surface fitting problem based on B-Splines. Our method can determine the appropriate number and locations of knots automatically and simultaneously. Numerical examples are given to show the effectiveness of our method. Additionally, a comparison between the proposed method and a genetic algorithm is presented.

1 Introduction
Since B-spline curve fitting for noisy or scattered data can be considered as a nonlinear optimization problem with a high level of computational complexity [3, 4, 6], non-deterministic optimization strategies should be employed. Here, methods taken from computational intelligence offer promising results for the solution of this problem. By computational intelligence techniques, as utilized in this paper, we mean strategies inspired by numerically based Artificial Intelligence systems such as evolutionary algorithms and neural networks. One of the most conspicuous and promising approaches to this problem is based on neural networks. Previous studies mostly focused on traditional surface approximation [1]; the first application of neural networks to this field appeared in [15]. Later on, studies involving Kohonen networks [8, 9, 12], Self-Organizing Maps [13, 14] and Functional networks [5, 7, 10] extended the study of surface design. Evolutionary algorithms are based on natural selection for optimization with multiple objectives. Many evolutionary optimization techniques, such as Genetic Algorithms (GA) [3, 6, 17], Simulated Annealing [16] and Simulated Evolution [17, 18, 19], have been applied to this problem successfully.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 49–56, 2007.
Springer-Verlag Berlin Heidelberg 2007


E. Ülker and V. İşler

This paper presents the application of one of the computational intelligence techniques, called Artificial Immune Systems (AIS), to the surface fitting problem using B-Splines. Individuals are formed by treating knot placement candidates as antibodies, and the continuous problem is solved as a discrete problem as in [3] and [6]. Using the Akaike Information Criterion (AIC), an affinity criterion is defined, and in each generation the search proceeds from the good candidate models towards the best model. The proposed method can determine the placement and number of the knots automatically and concurrently. In this paper, numerical examples are given to show the effectiveness of the proposed method. Moreover, a comparison between the proposed method and the genetic algorithm is presented.

2 B-Spline Surface Approximation

In mathematical terms, geometry fitting can be formulated as the minimization of the fitting error under some accuracy constraints. A typical error measure for parametric surface fitting is as follows:

\[ Q_2 = \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} w_{i,j}\,\{S(x_i, y_j) - F_{i,j}\}^2 . \qquad (1) \]
Surface fitting from sample points is also known as surface reconstruction. This
paper applies a local fitting and blending approach to this problem. The readers can
refer to [1] and [2] for details. A B-Spline curve, C(u), is a vector-valued function which can be expressed as:

\[ C(u) = \sum_{i=0}^{n} N_{i,k}(u)\, P_i, \qquad u \in [u_{k-1}, u_{m+1}], \qquad (2) \]

where P_i represents the control points (vectors) and N_{i,k} are the normalized B-spline basis functions of order k, defined by the recursion

\[ N_{i,1}(u) = \begin{cases} 1 & \text{if } u \in [u_i, u_{i+1}), \\ 0 & \text{otherwise,} \end{cases} \]

\[ N_{i,k}(u) = \frac{u - u_i}{u_{i+k-1} - u_i}\, N_{i,k-1}(u) + \frac{u_{i+k} - u}{u_{i+k} - u_{i+1}}\, N_{i+1,k-1}(u), \qquad (3) \]

where the u_i are knots taken from a knot vector U = {u_0, u_1, ..., u_m}. A B-Spline surface is defined as follows:

\[ S(u, v) = \sum_{i=0}^{n} \sum_{j=0}^{m} N_{i,k}(u)\, N_{j,l}(v)\, P_{i,j}. \qquad (4) \]

As can be seen from the above equations, a B-spline surface is uniquely determined by its degree, knot values and control points; the surface is formed by these parameters. In surface reconstruction problems the input is a set of unorganized points, so the degree of the surface, the knots and the control points are all unknown. In equation (3), the knots appear not only in the numerators but also in the denominators of the fractions. Thus a spline surface given as in equation (4) is a nonlinear function of the knots. Assume that



the data to be fitted are given as if they lie on the mesh points of the rectangular domain D = [a, b] × [c, d] in the x-y plane. Then the following expression can be written [3]:

\[ F_{i,j} = f(x_i, y_j) + \varepsilon_{i,j}, \qquad (i = 1, 2, \ldots, N_x;\; j = 1, 2, \ldots, N_y). \qquad (5) \]

In this equation, f(x, y) is the underlying function of the data, N_x and N_y represent the numbers of data points in the x and y directions, respectively, and ε_{i,j} is a measurement error. Equation (4) is fitted to the data given by equation (5) using the least squares method. For parameterization, the B-spline curve in Equation (2) and the surface in Equation (4) must be produced by one of the parameterization methods: uniform, chordal or centripetal. Then, the sum of squared residuals is calculated by Equation (1). The lower index of Q_2 denotes the dimension of the data. The objective to be minimized in the B-spline surface fitting problem is the function in Equation (1), and the variables of the objective function are the B-spline coefficients and the interior knots. The B-spline coefficients are linear parameters, whereas the interior knots are nonlinear parameters, since S(u, v) is a nonlinear function of the knots. This minimization problem is known as a multi-modal optimization problem [4].
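The basis recursion of equation (3) and the tensor-product surface of equation (4) can be sketched directly in Python. This is a minimal illustration with our own function names (a production fitter would use de Boor's algorithm rather than naive recursion); `k` is the order, as in the paper, so `k = 1` gives the piecewise-constant case:

```python
def bspline_basis(i, k, u, knots):
    """Cox-de Boor recursion for the basis N_{i,k}(u) of equation (3).
    Zero-width knot spans are skipped to avoid division by zero."""
    if k == 1:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] > knots[i]:
        left = ((u - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, u, knots))
    if knots[i + k] > knots[i + 1]:
        right = ((knots[i + k] - u) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, u, knots))
    return left + right

def surface_point(u, v, k, l, U, V, P):
    """Tensor-product B-spline surface S(u, v) of equation (4);
    P is an (n+1) x (m+1) grid of 3D control points."""
    return [
        sum(bspline_basis(i, k, u, U) * bspline_basis(j, l, v, V) * P[i][j][d]
            for i in range(len(P)) for j in range(len(P[0])))
        for d in range(3)
    ]
```

Inside the valid parameter interval the basis functions sum to one, so a surface whose control points all coincide evaluates to that single point — a quick sanity check for any implementation.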

3 B-Spline Surface Approximation by Artificial Immune System

AIS emerged in the 1990s as a new family of systems combining a variety of biologically based computational methods, such as Artificial Neural Networks and Artificial Life. AIS have been used in very diverse areas such as classification, learning, optimization, robotics and computer security [11]. We need the following components to construct an AIS: (i) a representation of the system parts, (ii) a mechanism to compute the interaction of the system parts with each other and with the environment, and (iii) adaptation procedures. Different methods have been employed for each of these components in the algorithms developed so far. We decided that the Clonal Selection algorithm was best suited for the purpose of our study.
A distance criterion is used as a measure of the degree of mutual interaction between antigen and antibody. If the antibody and antigen are represented as Ab = <Ab1, Ab2, ..., AbL> and Ag = <Ag1, Ag2, ..., AgL>, respectively, the Euclidean distance between Ab and Ag is calculated as follows:
\[ D = \sqrt{\sum_{i=1}^{L} (Ab_i - Ag_i)^2}. \qquad (6) \]

The B-spline surface fitting problem is to find a B-spline surface that approximates a target surface within a certain tolerance interval. Assume that the object surface is defined as an N_x × N_y grid of ordered, dense points in 3D space, and that the knots of the B-spline surface to be fitted form an n_x × n_y grid that is a subset of the N_x × N_y grid. The degrees of the curves, m_x and m_y, are entered by the user. The given number of points N_x × N_y is assigned to L, the dimension of the antigen and antibody. Each bit of an antibody or antigen is also called a molecule and corresponds to a data point. If the value of a molecule is 1 in this formulation, then a knot is placed at the corresponding data point; otherwise no knot is placed. If the given points lie in the [a, b] and [c, d] intervals, n_x × n_y knots are defined in this interval and called interior knots. The initial population consists of K antibodies with L molecules each. The molecules are set to 0 or 1 randomly.
For the recognition (response against antigen) process, the affinity of an antibody against the antigen is calculated as in Equation (7), which uses the distance between antibody and antigen and the AIC, preferred as the fitness measure in references [3] and [6]:

\[ \text{Affinity} = 1 - (\mathrm{AIC} / \mathrm{Fitness}_{avrg}). \qquad (7) \]

In Equation (7), Fitness_avrg represents the arithmetic average of the AIC values of all antibodies in the population and is calculated as follows. If the AIC value of an individual is greater than Fitness_avrg, then its affinity is set to zero (Affinity = 0) in Equation (7).

\[ \mathrm{Fitness}_{avrg} = \frac{1}{K} \sum_{i=1}^{K} \mathrm{AIC}_i , \qquad (8) \]


Where, K is the size of the population and AICi is the fitness measure of the ith
antibody in the population. AIC is given as below.

AIC2 = N x N y log e Q2 + 2{(n x + m x )(n y + m y ) + n x + n y }


The antibody that is the ideal solution and the exact complement of the antigen is the one whose affinity value is nearest to 1 in the population (in fact, in the memory). The Euclidean distance between the ideal antibody and the antigen is zero; in that case the problem becomes not surface approximation but surface interpolation.
In order to adapt the clonal selection algorithm to this problem, some modifications must be carried out on the original algorithm. The following is a step-by-step description of the modified algorithm:

1. Enter the data points to be fitted (in the form of an N_x × N_y grid).
2. Enter the control parameters.
3. Build the initial antibody population with random molecules.
4. If the population is built for the first time, create the memory array (save all antibodies); otherwise, update the antibody population and memory cells and develop the population.
5. For each antibody, calculate the B-spline surface and fit it to the given data; then calculate the sum of squared residuals (Q_2).
6. For each antibody in the population, calculate its AIC value, and calculate the average AIC value of the population.
7. For each antibody, calculate the affinity.
8. Choose the best antibodies according to the affinities and the interactions of every antibody with the antigen. The number of clones will be K varieties.
9. Produce the matured antibody population by changing molecules in proportion to the affinity values of the clones.
10. Implement mutation according to the mutation rate.
11. Produce new antibodies according to the variety ratio.
12. If the iteration limit is not reached or the antigen is not recognized fully, go to step 5.
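The loop above can be sketched schematically in Python. This is a toy version under our own naming: the `cost` callback stands in for fitting the B-spline surface and evaluating the AIC of equation (9), which we do not reproduce here, and the clone-selection and diversity steps are simplified:

```python
import random

def affinity(aic, avg):
    """Equation (7): affinity measured against the population mean AIC;
    antibodies worse than the mean get zero affinity."""
    return 0.0 if aic > avg else 1.0 - aic / avg

def clonal_selection(cost, L, pop_size=20, generations=100,
                     mutation_rate=0.08, fresh=2, seed=0):
    """Schematic modified clonal selection. An antibody is a length-L
    0/1 vector (1 = place a knot at that data point); `cost` plays the
    role of the AIC of the fitted surface (lower is better)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(L)] for _ in range(pop_size)]
    best = min(pop, key=cost)                      # memory cell
    for _ in range(generations):
        aics = [cost(ab) for ab in pop]
        avg = sum(aics) / len(aics)
        nxt = []
        for ab, aic in zip(pop, aics):
            # hypermutation: low-affinity antibodies are mutated harder
            clone = ab[:]
            rate = mutation_rate * (1.5 - affinity(aic, avg))
            for i in range(L):
                if rng.random() < rate:
                    clone[i] ^= 1
            # keep the better of parent and matured clone
            nxt.append(clone if cost(clone) < cost(ab) else ab)
        # diversity: replace the worst few with fresh random antibodies
        nxt.sort(key=cost)
        for i in range(1, fresh + 1):
            nxt[-i] = [rng.randint(0, 1) for _ in range(L)]
        pop = nxt
        if cost(pop[0]) < cost(best):
            best = pop[0][:]
    return best
```

Because the memory cell only ever improves, the returned antibody is at least as good as the best random initial one, mirroring the role of the memory array in step 4.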



4 Experimental Results
In order to evaluate the proposed AIS-based automatic knot placement algorithm, five bivariate test functions were used (see Table 1). These functions were constructed to have a unit standard deviation and a non-negative range. Since the antibodies with the highest affinity values are kept in the memory in the AIS architecture, the antibody of the memory with the highest affinity in each generation is presented in the results. To evaluate performance and approximation speed, the genetic algorithm suggested by Sarfraz et al. [6, 17] and the algorithm proposed in this study were compared. The knot ratio and the operation of immobilizing important points in the knot chromosome are discarded in their algorithm. The developed program offers the flexibility of entering the B-spline surface orders from the user. To test the quality of the proposed model, the Root Mean Square (RMS) error was calculated for M and N values from 5 to 10 for 400 (20×20) training data points from the five surfaces defined above. The initial population is evolved for 500 generations. An increase in the number of generations improves the fit (the error decreases). The slope of the approximation curves indicates the likelihood of further improvement in subsequent generations. Table 2 shows the statistics of the GA and AIS optimization runs. RMS errors between the point clouds and the modeled surface, based on the best chromosome in the GA and the antibodies in the memory population of the AIS, are given in Table 3 for four test functions (Surface I – Surface IV).
The analyses are carried out for all surfaces in Table 1. The M × N mesh is determined randomly for Surface II – Surface IV. For Surface II, shown in Table 3, the best choices for M and N correspond to M×N = 8×8. Similarly, the choices of M and N for Surfaces III and IV correspond to M×N = 9×9 and M×N = 10×10, respectively.
Table 1. Five test functions for the bivariate setting

\[ f_1(x_1, x_2) = 10.391\,\{(x_1 - 0.4)(x_2 - 0.6) + 0.36\} \]

\[ f_2(x_1, x_2) = 24.234\,\{r^2 (0.75 - r^2)\}, \qquad r^2 = (x_1 - 0.5)^2 + (x_2 - 0.5)^2 \]

\[ f_3(x_1, x_2) = 42.659\,\{0.1 + \bar x_1 (0.05 + \bar x_1^4 - 10 \bar x_1^2 \bar x_2^2 + 5 \bar x_2^4)\}, \qquad \bar x_1 = x_1 - 0.5,\; \bar x_2 = x_2 - 0.5 \]

\[ f_4(x_1, x_2) = 1.3356\,\{1.5(1 - x_1) + e^{2x_1 - 1} \sin(3\pi (x_1 - 0.6)^2) + e^{3(x_2 - 0.5)} \sin(4\pi (x_2 - 0.9)^2)\} \]

\[ f_5(x_1, x_2) = 1.9\,\{1.35 + e^{x_1} \sin(13 (x_1 - 0.6)^2) + e^{-x_2} \sin(7 x_2)\} \]

Table 2. Parameter set

Parameter        | A.I.S.                       | G.A.
Mesh size        | –                            | –
Population size  | –                            | –
String length    | 200 (antibody cell length)   | 200 (chromosome gene length)
Mutation rate    | –                            | –
Memory size      | 6 (30%)                      | 6 (30%)
B-spline order   | Random and user defined      | Random and user defined

E. lker and V. ler


Table 3. RMS (×10⁻²) values of the AIS and GA methods for 400 data points from Surface I to Surface IV for different M×N. Columns: Surface I (7×7), Surface II (8×8), Surface III (9×9) and Surface IV (10×10), each with G.A. and A.I.S. sub-columns.

Table 4. RMS (×10⁻²) values of the AIS and GA methods for 400 data points from Surface V for different M×N. (a) AIS, and (b) GA.
Table 5. Fitness and RMS statistics of GA and AIS for Surface V

A.I.S. (×10⁻²)
Best RMS | Best Fitn. | Max. RMS | Avrg. Fitn. | Avrg. RMS
8.84     | 806        | 27.98    | 1226        | 16.3
7.70     | 695        | 8.811    | 767         | 8.43
7.37     | 660        | 7.793    | 682         | 7.57
6.74     | 589        | 7.357    | 641         | 7.20
6.03     | 500        | 6.711    | 511         | 6.12
5.92     | 485        | 6.085    | 492         | 5.97
5.91     | 484        | 5.965    | 488         | 5.94
5.86     | 477        | 5.918    | 481         | 5.89
5.79     | 467        | 8.490    | 488         | 5.95

G.A. (×10⁻²)
Best RMS | Best Fitn. | Max. RMS | Avrg. Fitn. | Avrg. RMS
8.82     | 804        | 26.70    | 1319        | 17.6
7.96     | 722        | 12.43    | 961         | 10.8
9.69     | 879        | 30.38    | 1085        | 12.9
7.93     | 719        | 22.33    | 940         | 10.6
8.01     | 727        | 10.86    | 891         | 9.87
9.26     | 843        | 12.58    | 925         | 10.2
7.69     | 694        | 29.60    | 1043        | 12.3
8.47     | 772        | 11.95    | 922         | 10.2
7.93     | 719        | 13.31    | 897         | 9.95

Table 4 reports results for Surface V. The RMS error was calculated for M and N values from 5 to 10 for 400 (20×20) training data points. As the reader can verify, the errors obtained for the various M and N values are reasonable, and the best choice corresponds to M×N = 9×10, shown in italics in Table 4. Using the M×N knots that offer the best fit, the proposed algorithm was also compared with the GA-based algorithm of Sarfraz et al. with respect to approximation speed. The outputs of the programs were recorded at several generations during the training period. The best and average fitness values of the individuals and antibodies at the corresponding generations are given in Table 5. Plots of the approximations of the proposed AIS approach and the GA approach over all generations are given in Fig. 1. The bold line and the dotted line represent the maximum fitness and average fitness values, respectively.













































Fig. 1. Parameter optimization based on GA and AIS with respect to the generations

5 Conclusion and Future Work

This paper presented an application of another computational intelligence technique, known as Artificial Immune Systems (AIS), to the surface fitting problem using B-splines. In this study, the original problem, as in [3] and [6], was converted to a discrete combinatorial optimization problem and solved accordingly. It has been clearly shown that the proposed AIS algorithm is very useful for finding good knots automatically. The suggested method can determine the numbers and placements of the knots concurrently. No subjective parameters, such as an error tolerance, a regularity (order) factor, or well-chosen initial knot locations, are required.
There are two basic requirements on each B-spline surface in the iterations to guarantee an appropriate approximation in B-spline surface fitting: (1) its shape must be similar to the target or object surface; (2) its control points must be scattered, or its knot points must be determined appropriately. The technique presented in this paper is shown to reduce the necessity of the second requirement.
In this study, the Clonal Selection Algorithm of AIS was applied to the surface reconstruction problem, and various new ways of surface modeling were developed. The big potential of this approach has been shown. For a given set of 3D data points, AIS helps to choose the most appropriate B-spline surface degree and knot points. The authors will use other AIS techniques to improve the proposed method in future studies. The positive or negative effects of other techniques will be investigated and compared in future studies. Additionally, NURBS surfaces will be used to improve the suggested algorithm. This extension is especially important with regard to the complex optimization of the weights of NURBS.



Acknowledgement. This study has been supported by the Scientific Research Projects of Selçuk University (in Turkey).

References

1. Weiss, V., Andor, L., Renner, G., Várady, T.: Advanced surface fitting techniques. Computer Aided Geometric Design 19 (2002) 19–42
2. Piegl, L., Tiller, W.: The NURBS Book. Springer Verlag, Berlin, Heidelberg (1997)
3. Yoshimoto, F., Moriyama, M., Harada, T.: Automatic knot placement by a genetic algorithm for data fitting with a spline. In: Proc. of the International Conference on Shape Modeling and Applications, IEEE Computer Society Press, pp. 162–169 (1999)
4. Goldenthal, R., Bercovier, M.: Design of Curves and Surfaces by Multi-Objective Optimization. Leibniz Report 2005-12, April 2005
5. Iglesias, A., Echevarría, G., Gálvez, A.: Functional networks for B-spline surface reconstruction. Future Generation Computer Systems 20 (2004) 1337–1353
6. Sarfraz, M., Raza, S.A.: Capturing Outline of Fonts using Genetic Algorithm and Splines. In: Fifth International Conference on Information Visualisation (IV'01), pp. 738–743 (2001)
7. Iglesias, A., Gálvez, A.: A New Artificial Intelligence Paradigm for Computer-Aided Geometric Design. Lecture Notes in Artificial Intelligence 1930 (2001) 200–213
8. Hoffmann, M., Kovács, E.: Developable surface modeling by neural network. Mathematical and Computer Modelling 38 (2003) 849–853
9. Hoffmann, M.: Numerical control of Kohonen neural network for scattered data approximation. Numerical Algorithms 39 (2005) 175–186
10. Echevarría, G., Iglesias, A., Gálvez, A.: Extending Neural Networks for B-spline Surface Reconstruction. Lecture Notes in Computer Science 2330 (2002) 305–314
11. Engin, O., Döyen, A.: Artificial Immune Systems and Applications in Industrial Problems. Gazi University Journal of Science 17(1) (2004) 71–84
12. Boudjemaï, F., Enberg, P.B., Postaire, J.G.: Surface Modeling by using Self Organizing Maps of Kohonen. In: IEEE Int. Conf. on Systems, Man and Cybernetics, vol. 3, pp. 2418–2423 (2003)
13. Barhak, J., Fischer, A.: Adaptive Reconstruction of Freeform Objects with 3D SOM Neural Network Grids. Journal of Computers & Graphics 26(5) (2002) 745–751
14. Kumar, S.G., Kalra, P.K., Dhande, S.G.: Curve and surface reconstruction from points: an approach based on SOM. Applied Soft Computing Journal 5(5) (2004) 55–66
15. Hoffmann, M., Várady, L., Molnár, T.: Approximation of Scattered Data by Dynamic Neural Networks. Journal of Silesian Inst. of Technology (1996) 15–25
16. Sarfraz, M., Riyazuddin, M.: Curve Fitting with NURBS using Simulated Annealing. In: Applied Soft Computing Technologies: The Challenge of Complexity, Series: Advances in Soft Computing, Springer Verlag (2006)
17. Sarfraz, M., Raza, S.A., Baig, M.H.: Computing Optimized Curves with NURBS Using Evolutionary Intelligence. Lecture Notes in Computer Science 3480, pp. 806–815
18. Sarfraz, M., Sait, S.M., Balah, M., Baig, M.H.: Computing Optimized NURBS Curves using Simulated Evolution on Control Parameters. In: Applications of Soft Computing: Recent Trends, Series: Advances in Soft Computing, Springer Verlag, pp. 35–44 (2006)
19. Sarfraz, M.: Computer-Aided Reverse Engineering using Simulated Evolution on NURBS. Int. J. of Virtual & Physical Prototyping 1(4), Taylor & Francis (2006) 243–257

Implicit Surface Reconstruction from Scattered
Point Data with Noise

Jun Yang1,2, Zhengning Wang1, Changqian Zhu1, and Qiang Peng1

1 School of Information Science & Technology, Southwest Jiaotong University, Chengdu, Sichuan 610031, China
2 School of Mechanical & Electrical Engineering, Lanzhou Jiaotong University, Lanzhou, Gansu 730070, China, {znwang, cqzhu, pqiang}

Abstract. This paper addresses the problem of reconstructing an implicit function from point clouds with noise and outliers acquired with 3D scanners. We introduce a filtering operator based on the mean shift scheme, which shifts each point to a local maximum of a kernel density function, resulting in the suppression of noise of different amplitudes and the removal of outliers. The clean data points are then divided into subdomains using an adaptive octree subdivision method, and a local radial basis function is constructed at each octree leaf cell. Finally, we blend these local shape functions together with a partition of unity to approximate the entire global domain. Numerical experiments demonstrate the robust and high quality performance of the proposed method in processing a great variety of 3D reconstructions from point clouds containing noise and outliers.

Keywords: filtering, space subdivision, radial basis function, partition of unity.

1 Introduction
Interest in point-based surfaces has grown significantly in recent years in the computer graphics community, due to the development of 3D scanning technologies and the riddance of connectivity management, which greatly simplifies many algorithms and data structures. Implicit surfaces are an elegant representation for reconstructing 3D surfaces from point clouds without explicitly having to account for topology issues. However, when the point set data generated by range scanners (or laser scanners) contain large noise, especially outliers, some established methods often fail to reconstruct surfaces of real objects.
There are two major classes of surface representations in computer graphics: parametric surfaces and implicit surfaces. A parametric surface [1, 2] is usually given by a function f(s, t) that maps some 2-dimensional (maybe non-planar) parameter domain into 3-space, while an implicit surface typically comes as the zero-level isosurface of a 3-dimensional scalar field f(x, y, z). Implicit surface models are popular since they can describe complex shapes with capabilities for surface and volume modeling, and complex editing operations are easy to perform on such models.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 57–64, 2007.
Springer-Verlag Berlin Heidelberg 2007


J. Yang et al.

Moving least squares (MLS) [3-6] and radial basis functions (RBF) [7-15] are two popular 3D implicit surface reconstruction methods.
Recently, RBF has attracted more attention in surface reconstruction. It is identified as one of the most accurate and stable methods for solving scattered data interpolation problems. Using this technique, an implicit surface is constructed by calculating the weights of a set of radial basis functions such that they interpolate the given data points. From the pioneering work [7, 8] to recent research, such as compactly-supported RBF [9, 10], fast RBF [11-13] and multi-scale RBF [14, 15], the established algorithms have generated more and more faithful models of real objects over the last twenty years; unfortunately, most of them are not feasible for the approximation of unorganized point clouds containing noise and outliers.
In this paper, we describe an implicit surface reconstruction algorithm for noisy scattered point clouds with outliers. First, we define a smooth probability density kernel function reflecting the probability that a point p is a point on the surface S sampled by a noisy point cloud. A filtering procedure based on mean shift is used to move the points along the gradients of the kernel functions to the maximum probability positions. Second, we reconstruct a surface representation of the clean point sets implicitly, based on a combination of two well-known methods, RBF and partition of unity (PoU). The filtered domain of discrete points is divided into many subdomains by an adaptive, error-controlled octree subdivision, on which local shape functions are constructed by RBFs. We blend the local solutions together using a weighted sum of local subdomains. As we will show, our algorithm is robust and of high quality.

2 Filtering
2.1 Covariance Analysis
Before introducing our surface reconstruction algorithm, we describe how to perform the eigenvalue decomposition of the covariance matrix based on the theory of principal component analysis (PCA) [24], through which the least-squares fitting plane is defined to estimate the kernel-based density function.
Given the set of input points {p_i}, i ∈ [1, L], p_i ∈ R³, the weighted covariance matrix C for a sample point p_i is determined by

\[ C = \sum_{j=1}^{L} (p_j - \bar p_i)(p_j - \bar p_i)^{\mathrm T}\, \theta\!\left(\lVert p_j - \bar p_i \rVert / h\right), \qquad (1) \]

where \(\bar p_i\) is the centroid of the neighborhood of p_i, θ is a monotonically decreasing weight function, and h is the adaptive kernel size for the spatial sampling density. Consider the eigenvector problem

\[ C\, e_l = \lambda_l\, e_l . \qquad (2) \]

Since C is symmetric and positive semi-definite, all eigenvalues λ_l are real-valued and the eigenvectors e_l form an orthogonal frame, corresponding to the principal components of the local neighborhood.



Assuming λ_0 ≤ λ_1 ≤ λ_2, it follows that the least-squares fitting plane H(p): (p − \bar p_i) · e_0 = 0 through \bar p_i minimizes the sum of squared distances to the neighbors of p_i. Thus e_0 approximates the surface normal n_i at p_i, i.e., n_i = e_0. In other words, e_1 and e_2 span the tangent plane at p_i.
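The covariance analysis above can be sketched as follows. This is an illustrative Python fragment with our own function names; a Gaussian stands in for the unspecified weight θ, and the eigenvector of the smallest eigenvalue is taken as the normal:

```python
import numpy as np

def estimate_normal(neighbors, h):
    """Weighted covariance of equation (1) over a point's neighborhood,
    followed by the eigen-decomposition of equation (2). Returns the
    eigenvector e0 of the smallest eigenvalue (the estimated surface
    normal, up to sign) and the neighborhood centroid."""
    nbrs = np.asarray(neighbors, dtype=float)
    centroid = nbrs.mean(axis=0)
    diffs = nbrs - centroid
    # Gaussian weight as a stand-in for the decreasing function theta
    w = np.exp(-np.sum(diffs**2, axis=1) / (2.0 * h * h))
    C = (w[:, None] * diffs).T @ diffs          # 3x3 weighted covariance
    evals, evecs = np.linalg.eigh(C)            # eigenvalues ascending
    return evecs[:, 0], centroid
```

For a locally flat neighborhood the smallest eigenvalue is near zero and e_0 is well defined; the sign of the normal must still be oriented consistently across the cloud, which this sketch does not address.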

2.2 Mean Shift Filtering

Mean shift [16, 17] is one of the robust iterative algorithms in statistics. Using this algorithm, the samples are shifted to the most likely positions, which are local maxima of the kernel density function. It has been applied in many fields of image processing and visualization, such as tracking, image smoothing and filtering.
In this paper, we use a nonparametric kernel density estimation scheme to estimate an unknown density function g(p) of the input data. A smooth kernel density function g(p) is defined to reflect the probability that a point p ∈ R³ is a point on the surface S sampled by a noisy point cloud. Inspired by the previous work of Schall et al. [21], we measure the probability density function g(p) by considering the squared distance of p to the plane H(p) fitted to a spatial k-neighborhood of p_i as

\[ g(p) = \sum_{i=1}^{L} g_i(p) = \sum_{i=1}^{L} \theta_i\!\left(\lVert p - p_{pro} \rVert\right) G_i\!\left(\lVert p_{pro} - p_i \rVert\right) \left(1 - \left(\frac{(p - p_i)\cdot n_i}{h}\right)^{\!2}\right), \qquad (3) \]

where θ_i and G_i are two monotonically decreasing weighting functions measuring the spatial distribution of the point samples in the spatial domain and the range domain, which are more adaptive to the local geometry of the point model. The weight function could be either a Gaussian kernel or an Epanechnikov kernel. Here we choose the Gaussian function e^{−x²/2}. The point p_pro is the orthogonal projection of a certain sample point p onto the least-squares fitting plane. Positions p close to H(p) will be assigned a higher probability than positions more distant.
The simplest method to find the local maxima of (3) is to use a gradient-ascent
process written as follows:

    \nabla g(p) = \sum_{i=1}^{L} \nabla g_i(p) \approx -\frac{2}{h^2} \sum_{i=1}^{L} \Phi_i(\|p - p_{pro}\|) \, G_i(\|p_{pro} - p_i\|) \, \big((p - p_i) \cdot n_i\big) \, n_i .    (4)

Thus the mean shift vectors are determined as

    m(p) = p - \frac{\sum_{i=1}^{L} \Phi_i(\|p - p_{pro}\|) \, G_i(\|p_{pro} - p_i\|) \, \big((p - p_i) \cdot n_i\big) \, n_i}{\sum_{i=1}^{L} \Phi_i(\|p - p_{pro}\|) \, G_i(\|p_{pro} - p_i\|)} .    (5)


Combining equations (4) and (5), we get the resulting iterative equation of mean
shift filtering

    p_i^{j+1} = m(p_i^j), \qquad p_i^0 = p_i ,    (6)

where j is the iteration number. In our algorithm, g(p) satisfies

    g(p_2) - g(p_1) > \nabla g(p_1) \cdot (p_2 - p_1), \qquad p_1 \ne p_2 ,

J. Yang et al.

thus g(p) is a convex function with finitely many stable points in the set U = {p_i^j | g(p_i^j) \ge g(p_i^1)},
resulting in the convergence of the series {p_i^j, i = 1, ..., L, j = 1, 2, ...}. We stop
the iterative process when \|p_i^{j+1} - p_i^j\| \le 5 \times 10^{-3} h is satisfied; experiments show that each
sample usually converges in fewer than 8 iterations. Due to the clustering property of our
method, groups of outliers usually converge to a set of isolated points sparsely
distributed around the surface samples. These points can be characterized by a very
low spatial sampling density compared to the surface samples. We use this criterion for
the detection of outliers and remove them using a simple threshold.
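The filtering loop of this section can be sketched as follows. This is a simplified illustration rather than the paper's implementation: it assumes one global kernel size h, Gaussian kernels for both \Phi_i and G_i, precomputed normals n_i, and it treats every point as a neighbor of every sample.

```python
import numpy as np

def mean_shift_filter(points, normals, h, max_iter=8):
    """Mean shift filtering of a noisy point cloud (sketch).

    p_pro is the projection of the current estimate p onto each
    neighbor's fitted plane H_i with normal n_i; the stopping
    threshold 5e-3 * h follows the text.
    """
    gauss = lambda r2: np.exp(-r2 / 2.0)
    out = np.empty_like(points)
    for k in range(len(points)):
        p = points[k].copy()
        for _ in range(max_iter):
            d = np.einsum('ij,ij->i', p - points, normals)  # (p - p_i) . n_i
            p_pro = p - d[:, None] * normals                # projections onto planes H_i
            # |p - p_pro| = |d|, so the Phi term uses d^2 directly
            w = gauss(d**2 / h**2) * gauss(((p_pro - points)**2).sum(1) / h**2)
            p_new = p - (w * d) @ normals / max(w.sum(), 1e-12)
            if np.linalg.norm(p_new - p) <= 5e-3 * h:       # stopping criterion
                p = p_new
                break
            p = p_new
        out[k] = p
    return out
```

Applied to samples of a plane with vertical noise, the filtered z-coordinates contract toward the plane, which is the intended denoising behavior.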

3 Implicit Surface Reconstruction

3.1 Adaptive Space Subdivision
In order to avoid solving a dense linear system, we subdivide the input points
filtered by mean shift into slightly overlapping subdomains. An adaptive octree-based
subdivision method introduced by Ohtake et al. [18] is used in our space partition.
We define the local support radius R = \alpha d_i for the cubic cells generated
during the subdivision, where d_i is the length of the main diagonal of the cell. Each
cell should contain between T_min and T_max points. In our implementation, \alpha = 0.6, T_min
= 20 and T_max = 40 have provided satisfying results.
A local max-norm approximation error is estimated according to the Taubin
distance [19],

    \epsilon = \max_{\|p_i - c_i\| < R} |f(p_i)| / \|\nabla f(p_i)\| .
If \epsilon is greater than a user-specified threshold \epsilon_0, the cell is subdivided, and a local
function f_i is built for each leaf cell.
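The subdivision test can be illustrated as below; `f` and `grad_f` are hypothetical stand-ins for a fitted local implicit function and its gradient, and `cell_points` are the cell's samples within the support radius.

```python
import numpy as np

def needs_subdivision(f, grad_f, cell_points, eps0):
    """Check a cell against the Taubin-distance error criterion (sketch).

    Returns True when the max-norm error epsilon = max |f| / |grad f|
    over the cell's points exceeds the user threshold eps0.
    """
    err = max(abs(f(p)) / np.linalg.norm(grad_f(p)) for p in cell_points)
    return bool(err > eps0)
```

For example, with f(p) = z (a plane through the origin), the Taubin distance of a point is simply its |z|, so the test compares the worst offset against eps0.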
3.2 Estimating Local Shape Functions
Given the set of N pairwise distinct points P = {p_i}_{i∈[1,N]}, p_i ∈ R^3, filtered by the
mean shift algorithm, and the set of corresponding values {v_i}_{i∈[1,N]}, v_i ∈ R, we want to
find an interpolant f : R^3 → R such that

    f(p_i) = v_i .    (9)

We choose f(p) to be a radial basis function of the form

    f(p) = \Lambda(p) + \sum_{i=1}^{N} \lambda_i \, \phi(\|p - p_i\|) ,    (10)

where \Lambda(p) = \sum_k c_k \Lambda_k(p), with {\Lambda_k(p)}_{k∈[1,Q]} a basis of the null space consisting of all
real-valued polynomials in 3 variables of degree at most m, with Q = \binom{m+3}{3} depending
on the choice of \phi; \phi is a basis function, the \lambda_i are real-valued weights, and \|\cdot\|
denotes the Euclidean norm.



There are many popular choices of basis function: the biharmonic \phi(r) = r, triharmonic
\phi(r) = r^3, multiquadric \phi(r) = (r^2 + c^2)^{1/2}, Gaussian \phi(r) = exp(-c r^2), and thin-plate
spline \phi(r) = r^2 log(r), where r = \|p - p_i\|.
As we have an under-determined system with N + Q unknowns and N equations, so-called natural additional constraints on the coefficients \lambda_i are added in order to ensure
orthogonality:

    \sum_{i=1}^{N} \lambda_i \, \Lambda_k(p_i) = 0, \qquad k = 1, \ldots, Q .    (11)

Equations (9), (10) and (11) may be written in matrix form as

    \begin{pmatrix} A & \Phi \\ \Phi^T & 0 \end{pmatrix} \begin{pmatrix} \lambda \\ c \end{pmatrix} = \begin{pmatrix} v \\ 0 \end{pmatrix} ,

where A = (\phi(\|p_i - p_j\|)), i, j = 1, \ldots, N; \Phi = (\Lambda_k(p_i)), i = 1, \ldots, N, k = 1, \ldots, Q; \lambda = (\lambda_i), i = 1, \ldots, N;
and c = (c_k), k = 1, \ldots, Q. Solving this linear system determines the \lambda_i and c_k, and hence f(p).
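For small point sets, the augmented system can be assembled and solved directly. The sketch below assumes the triharmonic kernel \phi(r) = r^3 with a linear polynomial part (m = 1, so Q = 4); a practical implementation would instead run many small local solves on the octree cells described above.

```python
import numpy as np

def fit_rbf(points, values, kernel=lambda r: r**3):
    """Fit an RBF interpolant by a dense direct solve (sketch).

    Builds the block system [[A, P], [P^T, 0]] [lam; c] = [v; 0] with a
    constant-plus-linear polynomial part, then returns the interpolant.
    """
    N = len(points)
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    A = kernel(r)                               # N x N kernel matrix
    P = np.hstack([np.ones((N, 1)), points])    # N x 4 polynomial basis (1, x, y, z)
    M = np.zeros((N + 4, N + 4))
    M[:N, :N], M[:N, N:], M[N:, :N] = A, P, P.T
    sol = np.linalg.solve(M, np.concatenate([values, np.zeros(4)]))
    lam, c = sol[:N], sol[N:]

    def f(p):
        return kernel(np.linalg.norm(p - points, axis=1)) @ lam \
               + c @ np.concatenate([[1.0], p])
    return f
```

By construction the interpolation conditions f(p_i) = v_i hold up to numerical round-off.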

Fig. 1. A set of locally defined functions are blended by the PoU method. The resulting function
(solid curve) is constructed from four local functions (thick dashed curves) with their associated
weight functions (dash-dotted curves).

3.3 Partition of Unity

After suppressing high-frequency noise and removing outliers, we divide the global
domain P = {p_i}_{i∈[1,N]} into M slightly overlapping subdomains {\Omega_i}_{i∈[1,M]} with P \subset \cup_i \Omega_i,
using an octree-based space partition method. On this set of subdomains {\Omega_i}_{i∈[1,M]}, we
construct a partition of unity, i.e., a collection of non-negative functions {\varphi_i}_{i∈[1,M]}
with limited support and with \sum_i \varphi_i = 1 in the entire domain. For each subdomain \Omega_i
we construct a local reconstruction function f_i based on RBFs to interpolate the
sampled points. As illustrated in Fig. 1, four local functions f_1(p), f_2(p), f_3(p) and f_4(p)
are blended together by the weight functions \varphi_1, \varphi_2, \varphi_3 and \varphi_4. The solid curve is the
final reconstructed function.
Now an approximation of a function f(p) defined on the domain is given by a combination
of the local functions

    f(p) = \sum_{i=1}^{M} f_i(p) \, \varphi_i(p) .

The blending functions \varphi_i are obtained from any other set of smooth functions by a
normalization procedure



    \varphi_i(p) = \frac{w_i(p)}{\sum_{j=1}^{M} w_j(p)} .

The weight functions w_i must be continuous at the boundary of the subdomains \Omega_i.
Tobor et al. [15] suggested that the weight functions w_i be defined as the composition
of a distance function D_i : R^n → [0,1], where D_i(p) = 1 at the boundary of \Omega_i, and a decay
function \delta : [0,1] → [0,1], i.e., w_i(p) = \delta \circ D_i(p). More details about D_i and \delta can be
found in Tobor's paper.
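A minimal sketch of the blending step: local functions attached to spherical subdomains (an assumption for simplicity; the paper uses octree cells) are combined with a compactly supported Wendland-style weight, normalized so that the \varphi_i sum to 1.

```python
import numpy as np

def pou_blend(p, centers, radii, local_fns):
    """Evaluate the partition-of-unity blend of local functions at p (sketch).

    Each subdomain is a ball (center, radius); w_i is Wendland's C^2
    weight (1 - t)^4 (4t + 1), zero outside the ball, then normalized.
    """
    p = np.asarray(p, float)
    w, vals = [], []
    for c, R, f in zip(centers, radii, local_fns):
        t = np.linalg.norm(p - c) / R
        if t < 1.0:                              # p lies inside subdomain i
            w.append((1 - t) ** 4 * (4 * t + 1)) # Wendland C^2 weight
            vals.append(f(p))
    w = np.array(w)
    return float(np.dot(w / w.sum(), vals))      # sum_i phi_i(p) f_i(p)
```

Where only one subdomain covers p, the blend reproduces that local function exactly; in overlap regions it returns a smooth convex combination.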
Table 1. Computational time measurements for mean shift filtering and RBF+PoU surface
reconstruction with error bounded at 10^{-5}. Timings are listed as minutes:seconds.

[Table rows for the Bunny, Dragon head, and Dragon models are not recoverable from the extracted text.]

Fig. 2. Comparison of implicit surface reconstruction based on RBF methods. (a) Input noisy
point set of the Stanford bunny (362K points). (b) Reconstruction with Carr's method [11]. (c)
Reconstruction with the method of this paper.

4 Applications and Results

All results presented in this paper were obtained on a 2.8 GHz Intel Pentium 4 PC with
512 MB of RAM running Windows XP.
To visualize the resulting implicit surfaces, we used a pure point-based surface
rendering algorithm such as [22] instead of the traditional approach of rendering the implicit
surfaces with a Marching Cubes algorithm [23], which inherently introduces heavy
topological constraints.
Table 1 presents computational time measurements for filtering and reconstructing
three scanned models — bunny, dragon head, and dragon — with a user-specified error
threshold of 10^{-5}. In order to achieve good denoising we choose a
large k-neighborhood for the adaptive kernel computation, at the cost of longer
filtering times; in this paper we set k = 200. Note that there are fewer filtered points
than input noisy points, due to the clustering property of our method.
In Fig. 2, two visual examples of reconstruction by Carr's method [11] and by our
algorithm are shown. Carr et al. use polyharmonic RBFs to reconstruct smooth,



manifold surfaces from point-cloud data, and their work is considered an excellent
and successful contribution to this field. However, because of its sensitivity to noise, the
reconstructed model in the middle of Fig. 2 shows spurious surface sheets. The
quality of our reconstruction is highly satisfactory, as illustrated on the right of
Fig. 2, since a mean shift operator is introduced to deal with noise in our algorithm.
To illustrate the influence of error thresholds on reconstruction
accuracy and smoothness, we set two different error thresholds for the reconstruction
of the scanned dragon model, as demonstrated in Fig. 3.





Fig. 3. The error threshold controls reconstruction accuracy and smoothness of the scanned dragon
model consisting of 2.11M noisy points. (a) Reconstruction with error threshold 8.4 \times 10^{-4}. (c)
Reconstruction with error threshold 2.1 \times 10^{-5}. (b) and (d) are close-ups of the rectangular areas
of (a) and (c), respectively.

5 Conclusion and Future Work

In this study, we have presented a robust method for implicit surface reconstruction
from scattered point clouds with noise and outliers. The mean shift method filters the raw
scanned data, and then the PoU scheme blends the local shape functions defined by
RBFs to approximate the whole surface of real objects.
We are also investigating various directions of future work. First, we are trying
to improve the space partition method. We think that the Volume-Surface Tree [20], an
alternative hierarchical space subdivision scheme providing efficient and accurate
surface-based hierarchical clustering via a combination of a global 3D decomposition at
coarse subdivision levels and a local 2D decomposition at fine levels near the surface,
may be useful. Second, we are planning to combine our method with feature
extraction procedures in order to adapt it for processing very incomplete data.

References

1. Weiss, V., Andor, L., Renner, G., Varady, T.: Advanced Surface Fitting Techniques.
Computer Aided Geometric Design, 1 (2002) 19-42
2. Iglesias, A., Echevarría, G., Gálvez, A.: Functional Networks for B-spline Surface
Reconstruction. Future Generation Computer Systems, 8 (2004) 1337-1353
3. Alexa, M., Behr, J., Cohen-Or, D., Fleishman, S., Levin, D., Silva, C. T.: Point Set
Surfaces. In: Proceedings of IEEE Visualization. San Diego, CA, USA, (2001) 21-28
4. Amenta, N., Kil, Y. J.: Defining Point-Set Surfaces. ACM Transactions on Graphics, 3
(2004) 264-270


J. Yang et al.

5. Levin, D.: Mesh-Independent Surface Interpolation. In: Geometric Modeling for Scientific
Visualization, Springer-Verlag, (2003) 37-49
6. Fleishman, S., Cohen-Or, D., Silva, C. T.: Robust Moving Least-Squares Fitting with
Sharp Features. ACM Transactions on Graphics, 3 (2005) 544-552
7. Savchenko, V. V., Pasko, A., Okunev, O. G., Kunii, T. L.: Function Representation of
Solids Reconstructed from Scattered Surface Points and Contours. Computer Graphics
Forum, 4 (1995) 181-188
8. Turk, G., O'Brien, J.: Variational Implicit Surfaces. Technical Report GIT-GVU-99-15,
Georgia Institute of Technology, (1998)
9. Wendland, H.: Piecewise Polynomial, Positive Definite and Compactly Supported Radial
Functions of Minimal Degree. Advances in Computational Mathematics, (1995) 389-396
10. Morse, B. S., Yoo, T. S., Rheingans, P., Chen, D. T., Subramanian, K. R.: Interpolating
Implicit Surfaces from Scattered Surface Data Using Compactly Supported Radial Basis
Functions. In: Proceedings of Shape Modeling International, Genoa, Italy, (2001) 89-98
11. Carr, J. C., Beatson, R. K., Cherrie, J. B., Mitchell, T. J., Fright, W. R., McCallum, B. C.,
Evans, T. R.: Reconstruction and Representation of 3D Objects with Radial Basis
Functions. In: Proceedings of ACM Siggraph 2001, Los Angeles, CA , USA, (2001) 67-76
12. Beatson, R. K.: Fast Evaluation of Radial Basis Functions: Methods for Two-Dimensional
Polyharmonic Splines. IMA Journal of Numerical Analysis, 3 (1997) 343-372
13. Wu, X., Wang, M. Y., Xia, Q.: Implicit Fitting and Smoothing Using Radial Basis
Functions with Partition of Unity. In: Proceedings of the 9th International Computer-Aided Design and Computer Graphics Conference, Hong Kong, China, (2005) 351-360
14. Ohtake, Y., Belyaev, A., Seidel, H. P.: Multi-scale Approach to 3D Scattered Data
Interpolation with Compactly Supported Basis Functions. In: Proceedings of Shape
Modeling International, Seoul, Korea, (2003) 153-161
15. Tobor, I., Reuter, P., Schlick, C.: Multi-scale Reconstruction of Implicit Surfaces with
Attributes from Large Unorganized Point Sets. In: Proceedings of Shape Modeling
International, Genova, Italy, (2004) 19-30
16. Comaniciu, D., Meer, P.: Mean Shift: A Robust Approach toward Feature Space Analysis.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 5 (2002) 603-619
17. Cheng, Y. Z.: Mean Shift, Mode Seeking, and Clustering. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 8 (1995) 790-799
18. Ohtake, Y., Belyaev, A., Alexa, M., Turk, G., Seidel, H. P.: Multi-level Partition of Unity
Implicits. ACM Transactions on Graphics, 3 (2003) 463-470
19. Taubin, G.: Estimation of Planar Curves, Surfaces and Nonplanar Space Curves Defined
by Implicit Equations, with Applications to Edge and Range Image Segmentation. IEEE
Transaction on Pattern Analysis and Machine Intelligence, 11 (1991) 1115-1138
20. Boubekeur, T., Heidrich, W., Granier, X., Schlick, C.: Volume-Surface Trees. Computer
Graphics Forum, 3 (2006) 399-406
21. Schall, O., Belyaev, A., Seidel, H.-P.: Robust Filtering of Noisy Scattered Point Data. In:
IEEE Symposium on Point-Based Graphics, Stony Brook, New York, USA, (2005) 71-77
22. Rusinkiewicz, S., Levoy, M.: Qsplat: A Multiresolution Point Rendering System for Large
Meshes. In: Proceedings of ACM Siggraph 2000, New Orleans, Louisiana, USA, (2000)
23. Lorensen, W. E., Cline, H. F.: Marching Cubes: A High Resolution 3D Surface
Construction Algorithm. Computer Graphics, 4 (1987) 163-169
24. Hoppe, H., DeRose, T., Duchamp, T., McDonald, J., Stuetzle, W.: Surface Reconstruction
from Unorganized Points. In: Proceedings of ACM Siggraph '92, Chicago, Illinois, USA,
(1992) 71-78

The Shannon Entropy-Based Node Placement

for Enrichment and Simplification of Meshes
Vladimir Savchenko1, Maria Savchenko2, Olga Egorova3, and Ichiro Hagiwara3

1 Hosei University, Tokyo, Japan
2 InterLocus Inc., Tokyo, Japan
3 Tokyo Institute of Technology, Tokyo, Japan

Abstract. In this paper, we present a novel, simple method based on
the idea of exploiting the Shannon entropy as a measure of the inter-influence relationships between neighboring nodes of a mesh to optimize
node locations. The method can be used in a pre-processing stage for
subsequent studies such as finite element analysis by providing better
input parameters for these processes. Experimental results are included
to demonstrate the functionality of our method.
Keywords: Mesh enrichment, Shannon entropy, node placement.


Construction of a geometric mesh from a given surface triangulation has been

discussed in many papers (see [1] and references therein). Known approaches
are guaranteed to pass through the original sample points that are important in
computer aided design (CAD). However, results of triangulations drastically depend on uniformity and density of the sampled points as it can be seen in Fig.1.
Surface remeshing has become very important today for CAD and computer
graphics (CG) applications. Complex and detailed models can be generated by
3D scanners, and such models have found a wide range of applications in CG
and CAD, particularly in reverse engineering. Surface remeshing is also very
important for technologies related to engineering applications such as nite element analysis (FEA). Various automatic mesh generation tools are widely used
for FEA. However, all of these tools may create distorted or ill-shaped elements,
which can lead to inaccurate and unstable approximation. Thus, improvement
of the mesh quality is an almost obligatory step for preprocessing of mesh data
in FEA. Recently, sampled point clouds have received much attention in the CG
community for visualization purposes (see [2], [3]) and CAD applications (see [4],
[5], [6]). A noise-resistant algorithm [6] for reconstructing a watertight surface
from point cloud data presented by Kolluri et al. ignores undersampled regions;
nevertheless, it seems to us that some examples show that undersampled areas
need an improvement by surface retouching or enrichment algorithms. In some
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 65-72, 2007.
© Springer-Verlag Berlin Heidelberg 2007


V. Savchenko et al.

applications, it is useful to have various, for instance simpler, versions of original
complex models according to the requirements of the applications, especially in
FEA. In addition to the deterioration in the accuracy of calculations, speed may
be sacrificed in some applications. Simplification of a geometric mesh considered
here involves constructing mesh elements optimized to improve the
element's shape quality, using the aspect ratio (AR) as a measure of quality.



Fig. 1. Surface reconstruction of a technical data set. (a) Cloud of points (4100
scattered points are used). (b) Triangulation produced by a Delaunay-based method (number of
triangular elements: 7991, number of points: 4100).

In this paper, we present an attempt at enrichment of mesh vertices according
to an AR-based entropy which is the analog of the Shannon entropy [7]; further, it
is called A-entropy. We progressively adapt the newly coming points by performing elementary interpolation operations proposed by Shepard [8] (see also [9] for
more references), generating new point instances until an importance function I_f (in our case, a scalar which specifies the ideal sampling density) matches
some user-specified values. To optimize node coordinates during the simplification
process (edge collapsing), A-entropy is also employed.
Recently, a wide scope of papers has addressed the problem of remeshing of triangulated surface meshes; see, for instance, [10], [11] and references therein, where
surface remeshing based on surface parameterization and subsequent lifting of
height data was applied. However, the main assumption used is that geometric
details are captured accurately in the given model. Nevertheless, as can be
seen from the Darmstadt benchmark model (the technical data set shown in Fig. 1),
a laser scanner often produces non-uniform samples, which leads to under-sampling,
or the mesh may have holes corresponding to deficiencies in the point data. In theory, the problem of surface completion does not have a solution when the plane
of the desirable triangulation is not planar, and it especially begins to go wrong
when the plane of triangulation is orthogonal to features in the hole boundary
(so-called crenellation features). See a good discussion of the problem in [12].
Let us emphasize that our approach is different from methods related to reconstruction of surfaces from scattered data by interpolation methods based, for
instance, on minimum-energy properties (see, for example, [13]). In our case, an


approximation of the original surface (a triangular mesh) is given. In some applications, it is important to preserve the initial mesh topology. Thus, our goal
is to insert new points in domains where the I_f function does not satisfy the
user-specified value. The main contribution of the paper is a novel vertex placement
algorithm, which is discussed in detail in Section 2.

Vertex Placement Algorithm

The approach is extremely simple and, in theory, is user dependent. In analogy
to hole filling, the user defines an area where enrichment may be done; that
is, the user selects a processing area with no crenellations. In practice, the whole object
surface can be considered as the processing area (as has been done in the
example shown in Fig. 1).

Fig. 2. Scheme of new point generation. p_1, p_2, ..., p_i are points of the neighborhood.
The dashed line is the bisector of the empty sector; g is the generated point.

After that, the algorithm works as follows:

1. Define a radius of sampling Rs as an analog of the point density; for instance,
for the technical data set the radius equals 1. This can be done automatically
by calculating the average point density.
2. For each point of the user-defined domain, select the K nearest points p that are
in the Rs neighborhood. If K is less than or equal to the user-predefined number of
neighborhood points (in our experiments, 6) and the maximum angle of the
empty area is larger than or equal to the user-predefined angle (in our experiments,
90°), generate a new point g with the initial guess provided by the bisector of
the empty sector, as shown in Fig. 2.
3. Select a new neighborhood of the point g; it can be slightly different from the
initial set of points. This is done in the tangent plane (projective plane) defined
by the neighborhood points.
4. Perform a local Delaunay triangulation.
5. Find the points forming a set of triangles with the common node g (a star), as
shown in Fig. 3(a). Calculate the new placement of the center of the star g using
the technique described below (Fig. 3(b)).



6. At the lifting stage, calculate the local z-coordinate of g by Shepard
interpolation. In our implementation, we use a compactly supported radial
basis function [14] as the weight function.
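Step 6 can be sketched as follows; the local 2D frame, the `support` radius, and the specific Wendland weight function are assumptions made for illustration.

```python
import numpy as np

def shepard_lift(g_xy, nbr_xy, nbr_z, support):
    """Shepard interpolation of the local z-coordinate at a generated
    point g (sketch of the lifting stage).

    Weights use Wendland's compactly supported C^2 function in place of
    the classical 1/d^2 weights; `support` is the radius beyond which
    neighbors receive zero weight.
    """
    d = np.linalg.norm(nbr_xy - g_xy, axis=1)
    t = np.clip(d / support, 0.0, 1.0)
    w = (1 - t) ** 4 * (4 * t + 1)         # Wendland C^2 weight, zero at t = 1
    if w.sum() == 0.0:
        return float(np.mean(nbr_z))       # fallback: no neighbor inside support
    return float(np.dot(w, nbr_z) / w.sum())
```

Because the weights are normalized, the interpolant reproduces constant height fields exactly and stays within the range of the neighbor z-values.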



Fig. 3. (a) An initial star. (b) The final star.

The key idea of the fifth step is to progressively adapt the newly created points
through a few iterations. That is, an area with low sampling density will be filled
in accordance with points generated in the previous steps. In order to obtain
a good set of new (approximated) point coordinates, we need a measure
of the "goodness" of triangulations arising from randomly coming points. It is
natural to use a mesh quality parameter, the AR of the elements of a star, for such
a measure. In the case of a triangular mesh, the AR can be defined as the ratio of the
maximum edge length to the minimum edge length of an element. Nevertheless,
according to our experiments it is much better to use the information M_i (M_i is
the AR of the i-th triangle of the star) associated with a point g (Fig.
3) of the star, in analogy with the Shannon entropy [7], which defines the
uncertainty of a random variable and can serve as a natural measure for the criterion
used in the enrichment algorithm. Shannon defined the entropy of an ensemble
of messages: if there are N possible messages that can be sent in one package,
and message m is transmitted with probability p_m, then the entropy is
as follows:

    H = - \sum_{m=1}^{N} p_m \log(p_m) .    (1)


Intuitively, we can use an AR-based entropy with respect to the point g as follows:

    - \sum_{i=1}^{N} (M_i / M_t) \log(M_i / M_t) ,    (2)

where M_t is the summed AR value of the star and N is the number of faces of
the star. From the statistical point of view, a strict definition of the Shannon
entropy for a mesh, which we denote A-entropy and use in our algorithm,
is provided as follows: consider a discrete random variable \xi with distribution


    \xi:  x_1  x_2  ...  x_n
          p_1  p_2  ...  p_n

where the probabilities p_i = P{\xi = x_i}. Then divide the interval 0 \le x < 1 into
intervals \Delta_i such that the length of \Delta_i equals p_i. The random variable defined as
\xi = x_i when x \in \Delta_i has this distribution. Suppose we have a set of empirically
received numbers \eta_1 = a_1, ..., \eta_n = a_n, written in increasing order, where a_i
is the AR of the i-th element of the neighborhood with the point g as its center.
Let these numbers define a division of the interval a_1 \le x < a_n into \Delta_i = a_i - a_{i-1}.
In our case, the parameter a_i has a minimal possible value of 1, which
is not necessarily achieved in the given sampling data. Constructing the one-to-one
correspondence between 1 \le x < a_n and 0 \le x < 1, the following probabilities
can be written:

    p_1 = (a_1 - 1)/(a_n - 1),  p_2 = (a_2 - a_1)/(a_n - 1),  ...,  p_n = (a_n - a_{n-1})/(a_n - 1) .

Thus, we can define the random variable \eta with the distribution

    \eta:  a_1  a_2  ...  a_n
           p_1  p_2  ...  p_n

Its probability values are used in formula (3) for the A-entropy:

    A = - \sum_{i} p_i \log(p_i), \qquad p_i = (a_i - a_{i-1})/(a_n - 1), \; a_0 = 1 .    (3)


The value A of the A-entropy depends on the coordinates of the center of the star
(point g in Fig. 3). Thus, the problem of maximizing the value A is reduced
to the problem of finding new coordinates for this center (Fig. 3(b)) and is
treated as an optimization problem. For solving this optimization problem,
we use the downhill simplex method of Nelder and Mead [15].
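The A-entropy of a star can be computed directly from the sorted aspect ratios; maximizing it over the coordinates of g (for example with a Nelder-Mead routine) then gives the new placement. A sketch, assuming the probabilities defined above with a_0 = 1:

```python
import numpy as np

def a_entropy(aspect_ratios):
    """A-entropy of a star from the ARs of its triangles (sketch).

    Implements p_i = (a_i - a_{i-1}) / (a_n - 1) with a_0 = 1, where the
    a_i are the aspect ratios sorted in increasing order.
    """
    a = np.sort(np.asarray(aspect_ratios, float))
    if a[-1] <= 1.0:                 # degenerate case: all triangles equilateral
        return 0.0
    p = np.diff(np.concatenate([[1.0], a])) / (a[-1] - 1.0)
    p = p[p > 0]                     # convention: 0 * log 0 = 0
    return float(-(p * np.log(p)).sum())
```

Equal gaps between consecutive ARs give uniform probabilities and hence the maximal entropy log(n), which is what the placement optimization drives toward.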

Experimental Results

In practice, as can be seen in Fig. 4, implementation of the algorithm discussed
above leads to a reasonable surface reconstruction of areas with initially low sampling density (see Fig. 1). The number of scattered points in the initial model
is 4100; after enrichment, the number of points increased to 12114. To
decrease the number of points we simplify this model; the final number of
points is 5261.
Our triangular mesh simplification method uses predictor-corrector steps for
predicting candidates for edge collapsing according to a bending energy [16], with
subsequent correction of the point placement in the simplified mesh. Let us note that the main idea of our simplification approach is to provide minimal local
surface deformation during an edge collapse operation.






Fig. 4. The mechanical data set. (a) Mesh after enrichment. (b) Mesh after simplification. (c) Shaded image of the final surface.

At each iteration step:

- candidate points for an edge collapse are defined according to a local decimation cost of points belonging to a shaped polygon;
- after all candidates have been selected, we contract the edge,
choosing an optimal vertex position by using the A-entropy according to the
fifth step of the algorithm (see Section 2).
To detail the features of the proposed point placement scheme, Fig. 5 presents
results of applying well-known, even classical, surface simplification algorithms
(tools can be found in [17]) and our method. We show a fragment of the Horse
model (the initial AR value is 1.74; here and below, the average AR value
is used) after mesh simplification produced by the different simplification techniques.





Fig. 5. Mesh fragments of the Horse model after simplification (13% of original
elements) by using: (a) the progressive simplification method; (b) a method based on a
global error bound; (c) a method based on a quadric error metric; (d) our method.

The volume difference between the initial model and the one simplified by our
technique is 0.8%; the final AR value is 1.5. The global error bound
method demonstrates the worst results: the volume difference is 1.3% and the final
AR value is 2.25. From the viewpoint of model visualization and volume preservation, the best method, without any doubt, is the one based on
the quadric error metric, see [18]. However, there is a tradeoff between attaining a



high-quality surface reconstruction and minimization of the AR. As a result, the
final AR value is 2, and many elongated and skinny triangles can be
observed in the mesh.

Concluding Remarks

In this paper, we introduce the notion of AR-based entropy (A-entropy), which
is an analog of the Shannon entropy. We consider an enrichment technique and
a technique for improving mesh quality which are based on this notion.
The mesh quality improvement in the presented simplification technique can be compared with smoothing methods based on averaging of coordinates, such
as Laplacian smoothing [19] or the angle-based method of Zhou and Shimada [20]. These
methods have an intrinsic drawback: the possibility of creating inverted triangles. In some non-convex domains, nodes can be pulled outside the boundary.
Implementation of the entropy-based placement in the simplification algorithm decreases the possibility that a predicted point creates an inverted triangle,
but does not guarantee that such an event never occurs. However, performing the operations in the tangent plane makes it sufficiently easy to avoid creating
inverted triangles. Interpolation based on the Shepard method produces excessive bumps; in fact, this is a well-known feature of the original Shepard method.
More sophisticated local interpolation schemes such as [21] and others can be
implemented to control the quality of interpolation. Matters related to feature-preserving shape interpolation have to be considered in the future. We have
mentioned that it might be natural to use the AR (the mesh quality parameter) of
the elements of a star as a measure for providing a reasonable vertex placement.
Nevertheless, we would like to emphasize that, according to our experiments, in
many cases it does not lead to a well-founded estimate of a point g. A
more rational approach is to use the Shannon entropy as a measure of the
inter-influence relationships between neighboring nodes of a star to calculate optimal positions of vertices. We can see in Fig. 5 that the shapes of mesh elements
after application of our method differ significantly from the results of applying
other simplification methods. The meshes in Fig. 5(a, b, c) are more suitable for
visualization than for numerical calculations. Improvement of meshes of very
low initial quality, for instance the Horse model simplified by the global error bound method, takes many iteration steps to attain an AR value close to our
result, and after applying Laplacian smoothing to the model shown in
Fig. 5(b) the shape of the model is strongly deformed. After applying
Laplacian smoothing (300 iteration steps) to the Horse model simplified by
the quadric error metric method, the AR and the volume difference between the original model and the improved one become 1.6 and 5.2%, respectively. The examples
demonstrated above show that the mesh produced by our method is closer to
a computational mesh and can be used for FEA in any field of study dealing
with isotropic meshes.

References



1. Frey, P. J.: About Surface Remeshing. Proc. of the 9th Int. Mesh Roundtable (2000)
2. Alexa, M., Behr, J., Cohen-Or, D., Fleishman, S., Levin, D., Silva, C. T.: Point
Set Surfaces. Proc. of IEEE Visualization 2001 (2001) 21-28
3. Pauly, M., Gross, M., Kobbelt, L.: Efficient Simplification of Point-Sampled Surfaces. Proc. of IEEE Visualization 2002 (2002) 163-170
4. Hoppe, H., DeRose, T., Duchamp, T., McDonald, J., Stuetzle, W.: Surface Reconstruction from Unorganized Points. Proceedings of SIGGRAPH '92 (1992) 71-78
5. Amenta, N., Choi, S., Kolluri, R.: The Powercrust. Proc. of the 6th ACM Symposium on Solid Modeling and Applications (2001) 609-633
6. Kolluri, R., Shewchuk, J. R., O'Brien, J. F.: Spectral Surface Reconstruction From
Noisy Point Clouds. Symposium on Geometry Processing (2004) 11-21
7. Blahut, R. E.: Principles and Practice of Information Theory. Addison-Wesley
8. Shepard, D.: A Two-Dimensional Interpolation Function for Irregularly Spaced
Data. Proc. of the 23rd Nat. Conf. of the ACM (1968) 517-523
9. Franke, R., Nielson, G.,: Smooth Interpolation of Large Sets of Scattered Data.
Journal of Numerical Methods in Engineering 15 (1980) 1691-1704
10. Alliez, P., de Verdière, E. C., Devillers, O., Isenburg, M.: Isotropic Surface Remeshing. Proc. of Shape Modeling International (2003) 49-58
11. Alliez, P., Cohen-Steiner, D., Devillers, O., Lévy, B., Desbrun, M.: Anisotropic
Polygonal Remeshing. Inria Preprint 4808 (2003)
12. Liepa, P.: Filling Holes in Meshes. Proc. of the 2003 Eurographics/ACM SIGGRAPH
Symp. on Geometry Processing (2003) 200-205
13. Carr, J. C., Mitchell, T. J., Beatson, R. K., Cherrie, J. B., Fright, W. R., McCallum,
B. C., Evans, T. R.: Reconstruction and Representation of 3D Objects with Radial Basis Functions. Proc. of SIGGRAPH '01 (2001) 67-76
14. Wendland, H.: Piecewise Polynomial, Positive Definite and Compactly Supported
Radial Functions of Minimal Degree. AICM 4 (1995) 389-396
15. Nelder, J. A., Mead, R.: A Simplex Method for Function Minimization. Computer
J. 7 (1965) 308-313
16. Bookstein, F. L.: Morphometric Tools for Landmark Data. Cambridge University
Press (1991)
17. Schroeder, W., Martin, K., Lorensen, B.: The Visualization Toolkit. 2nd ed., Prentice
Hall Inc. (1998)
18. Garland, M.: Multiresolution Modeling: Survey and Future Opportunities. Proc.
of EUROGRAPHICS, State of the Art Reports (1999)
19. Bossen, F. J., Heckbert, P. S.: A Pliant Method for Anisotropic Mesh Generation.
Proc. of the 5th International Meshing Roundtable (1996) 63-74
20. Zhou, T., Shimada, K.: An Angle-Based Approach to Two-Dimensional Mesh
Smoothing. Proc. of the 9th International Meshing Roundtable (2000) 373-384
21. Krysl, P., Belytschko, T.: An Efficient Linear-Precision Partition of Unity Basis
for Unstructured Meshless Methods. Communications in Numerical Methods in
Engineering 16 (2000) 239-255

Parameterization of 3D Surface Patches by

Straightest Distances
Sungyeol Lee and Haeyoung Lee
Hongik University, Dept. of Computer Engineering,
72-1 Sangsoodong Mapogu, Seoul Korea 121-791
{leesy, leeh}

Abstract. In this paper, we introduce a new piecewise linear parameterization of 3D surface patches which provides a basis for texture mapping, morphing, remeshing, and geometry imaging. To lower distortion when flattening a 3D surface patch, we propose a new method to locally calculate straightest distances with cutting planes. Our new and simple technique demonstrates results competitive with the current leading parameterizations and will help many applications that require one-to-one mappings.

Introduction
A 3D mesh parameterization provides a piecewise linear mapping between a 3D surface patch and an isomorphic 2D patch. It is a widely used or required operation for texture mapping, remeshing, morphing, and geometry imaging. Guaranteed one-to-one mappings that only require a linear solver have been researched, and many algorithms [4,5,11,8,10] have been proposed. To reduce the inevitable distortion when flattening, a whole object is usually partitioned into several genus-0 surface patches. Non-linear techniques [19] have also been presented with good results in some applications, but they require more computational time than linear methods.
Geodesics on meshes have been used in various graphics applications such as parameterization [10], remeshing [14,20], mesh segmentation [20,6], and simulations of natural phenomena [16,9]. Geodesics provide a distance metric between vertices on meshes, while the Euclidean metric cannot. The straightest geodesic path on meshes was introduced by Polthier and Schmies [15] and used for parameterization by [10]. However, their straightest geodesics may not be defined between a source and a destination, and they require special handling of the swallow tails created by conjugate vertices [16] and of triangles with obtuse angles [9].
In this paper we present a new linear parameterization of 3D surface patches. Our parameterization improves upon [10] by presenting a new way to locally calculate straightest geodesics. Our method demonstrates visually and statistically competitive results compared to the current leading methods [5,10], as shown in Figures 1, 3, 5, and Table 1.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 73-80, 2007.
(c) Springer-Verlag Berlin Heidelberg 2007



Fig. 1. Comparisons with texture-mapped models, Hat and Nefertiti: (a) is produced by Floater's method [5] with a distortion of 1.26; (b) is by our new parameterization with a distortion of 1.20, less than Floater's. The distortion is measured by the texture stretch metric [19]. (c) is by ours with a fixed boundary, and (d) is by ours with a measured boundary. We can see much less distortion in (d) than in (c).


Related Work

Parameterization. There has been an increased need for parameterization for texture mapping, remeshing, morphing, and geometry imaging. Many piecewise linear parameterization algorithms [4,5,11,8,10] have been proposed. Generally, the first step of parameterization is mapping the boundary vertices to fixed positions. Usually the boundary is mapped to a square, a circle, or any convex shape while respecting the 3D-to-2D length ratio between adjacent boundary vertices. The positions of the interior vertices in the parameter space are then found by solving a linear system. The linear system is generated with coefficients in a convex combination of the 1-ring neighbors of each interior vertex. These coefficients characterize geometric properties such as angle and/or area preservation.
Geodesic Paths. There are several algorithms for geodesic computation on meshes, mostly based on shortest paths [13,1,7]; they have been used for remeshing and parameterization [20,14]. However, special processing is still required for triangles with obtuse angles. A detailed overview of this approach can be found in [12]. Another approach is to compute the straightest geodesic path. Polthier and Schmies first introduced an algorithm for the straightest geodesic path on a mesh [15]. Their straightest geodesic path is uniquely defined with an initial condition, i.e., a source vertex and a direction, but not with boundary conditions, i.e., a source and a destination. A parameterization by straightest geodesics was first introduced in [10], which used locally calculated straightest geodesic distances for a piecewise linear parameterization. Our parameterization improves upon [10] by presenting a new way to calculate straightest geodesics.

Our Parameterization by Straightest Distances

A 3D mesh parameterization provides a piecewise linear mapping between a 3D surface patch and an isomorphic 2D patch. Generally, the piecewise linear parameterization is accomplished as follows: for every interior vertex V_i of a mesh, a linear relation between the coordinates (u_i, v_i) of this point and the coordinates (u_j, v_j) of its 1-ring neighbors {V_j}, j in N(i), is set up in the form:

    Sum_{j in N(i)} a_ij (U_j - U_i) = 0        (1)

where U_i = (u_i, v_i) are the coordinates of vertex V_i in the parameter space, and a_ij are the non-negative coefficients of a matrix A. The boundary vertices are assigned to a circle or any convex shape while respecting the 3D-to-2D length ratio between adjacent boundary vertices. The parameterization is then found by solving the resulting linear system AU = B. A is sparse because each row of the matrix contains only a few non-zero elements (as many as the number of its neighbors). A preconditioned bi-conjugate gradient (PBCG) method [17] is used to iteratively solve this sparse linear system.
As long as the boundary vertices are mapped onto a convex shape, the resulting mapping is guaranteed to be one-to-one. The core of this shape-preserving parameterization is how to determine the non-negative coefficients a_ij. In this paper, we propose a new algorithm to determine these coefficients.
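As a concrete illustration of the setup above (equation (1) plus pinned boundary rows, solved as AU = B), the following sketch builds the sparse system and solves it. It is our reconstruction, not the authors' code: SciPy's BiCGSTAB stands in for the PBCG solver of [17], and the names `coeffs` (the weights a_ij per interior vertex) and `boundary_uv` (fixed 2D boundary positions) are assumed inputs.

```python
# Illustrative solve of the linear parameterization system AU = B.
# coeffs[i][j] = a_ij for interior vertex i; boundary_uv[k] = (u, v) for
# boundary vertex k. BiCGSTAB is used as a stand-in for PBCG [17].
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import bicgstab

def solve_parameterization(n_verts, coeffs, boundary_uv):
    A = lil_matrix((n_verts, n_verts))
    Bu, Bv = np.zeros(n_verts), np.zeros(n_verts)
    for i in range(n_verts):
        if i in boundary_uv:          # boundary vertex: pin to its position
            A[i, i] = 1.0
            Bu[i], Bv[i] = boundary_uv[i]
        else:                         # interior: sum_j a_ij (U_j - U_i) = 0
            A[i, i] = -sum(coeffs[i].values())
            for j, a_ij in coeffs[i].items():
                A[i, j] = a_ij
    A = A.tocsr()
    u, _ = bicgstab(A, Bu, atol=1e-12)
    v, _ = bicgstab(A, Bv, atol=1e-12)
    return np.column_stack([u, v])
```

Each interior row encodes equation (1) and each boundary row pins the vertex, so a convex boundary plus non-negative weights gives a one-to-one map.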

Our Local Straightest Distances

The core of this piecewise linear parameterization is finding the non-negative coefficients a_ij in equation (1). Our new parameterization determines these coefficients using locally straightest paths and distances computed with local cutting planes. The work by Lee et al. [10] uses the local straightest geodesics of Polthier and Schmies [15] for these coefficients; however, the tangents of the straightest geodesics in that method are determined by the Gaussian curvature at each vertex and may not be intuitively straightest, especially when the Gaussian curvature is not zero, i.e., when the total vertex angle differs from 2π. In Figure 2, V_ps is determined by having the same left and right angles at V_i as in [10], while V_our is determined as intuitively straightest by our local cutting plane.
Our new method for local straightest paths and distances works as follows. As shown in Figure 2, a base plane B is created locally at each interior vertex. To preserve shape better, the normal Normal_B of the base plane B is calculated by area-weighted averaging of the neighboring face normals of V_i, as shown in equation (2), and normalized afterwards:

    Normal_B = Sum_{j in N(i)} w_j Normal_j        (2)

In this way, we found that the distortion is lower than with a simply averaged normal of the neighboring faces. A local cutting plane P passing through V_i and V_j is also calculated. Two planes intersect in a line as long as they are not parallel. Our cutting plane P pierces a neighboring face (for example, the j-th neighboring face) of the mesh. Therefore there is a line segment which is the straightest path by




Fig. 2. Our new local straightest path: for each interior vertex V_i, a local base plane B and a cutting plane P through V_i, V_j are created. A local straightest path is computed by cutting the face V_i V_k V_l with P. The intersection V_j' is computed on the edge V_k V_l and connected to V_i to form a local straightest path. V_ps is determined by the method of Polthier and Schmies [15] and V_our is determined by our new method.

Fig. 3. Results by our new parameterization; the models are Nefertiti, Face, Venus, Man, and Mountain, from left to right

our method. There may be multiple line intersections when the plane P pierces multiple neighboring faces. As future work, we will explore how to select a line segment in that case.
A local straightest path is computed by intersecting the face V_i V_k V_l with the cutting plane P. The tangent a of this intersecting line segment V_j V_j' can be easily calculated from the normal Normal_j of the face V_i V_k V_l and the normal Normal_P of the cutting plane P as follows:

    a = Normal_j x Normal_P        (3)

Then, the intersection vertex V_j' is computed on the edge V_k V_l and connected to V_i to form the local straightest path V_j V_i V_j'. Finally, barycentric coordinates for



the weights of V_j, V_k, V_l are computed, summed, normalized, and then used to fill the matrix A. Figure 3 shows the results of our new parameterization.
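Since the straightest direction is the intersection line of the cutting plane and the pierced face, equation (3) is a single cross product. A minimal sketch (the function name is ours):

```python
# Tangent of the line where the pierced face meets the cutting plane P,
# per eq. (3): a = Normal_face x Normal_P, normalized to unit length.
import numpy as np

def intersection_tangent(face_normal, cut_normal):
    a = np.cross(face_normal, cut_normal)
    n = np.linalg.norm(a)
    if n < 1e-12:
        raise ValueError("planes are parallel: no unique intersection line")
    return a / n
```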


Floater's method [5] is considered the most widely used parameterization, and LTD's [10] also used the straightest geodesic path algorithm of [15]. We therefore compare our method to these two existing parameterizations.
The visual results achieved by our new parameterization are shown in Figure 3. The distortion under the texture-stretch metric of [19] is also measured and shown in Table 1. Notice that our parameterization produces results competitive with the current leading linear parameterizations, especially with measured boundaries. The previous algorithms and the distortion metric (the L2-norm, i.e., the mean stretch over all directions) were all implemented by us.

Measured Boundary for Parameterization

As shown in Figure 4 (b) and (c), and in the 1st and 2nd figures of Figure 3, high distortion always occurs near the boundary. To reduce this high distortion, we derive a boundary using our straightest geodesic path algorithm.
An interior source vertex S can be specified by a user or calculated as the center vertex of the mesh relative to the boundary vertices. A virtual edge is defined as an edge between S and a vertex on the boundary. The straightest paths and distances of the virtual edges to every vertex on the boundary are measured as follows:
1. Make virtual edges connecting S to every boundary vertex of the mesh.
2. Map each virtual edge onto the base plane B by a polar map, which preserves the angles between virtual edges, as in [4]. The normal of the base plane B is calculated as in equation (2).
3. Measure the straightest distance of each virtual edge on B from S to each boundary vertex with the corresponding cutting plane.
4. Position each boundary vertex at the corresponding distance from S on B.
5. If the resulting boundary is non-convex, change it to a convex one: find the edges having a minimum angle with their consecutive edge (i.e., the concave parts of the boundary) and move the boundary vertices to form a convex shape.
In the linear system AU = B, each boundary vertex in B is simply set to its measured position (u_i, v_i), and (0, 0) is used for the interior vertices. Then the PBCG solver mentioned above is used to find the positions in the parameter space.
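Steps 2-4 above amount to a polar map: lay the virtual edges around S on the base plane B with their original angle ratios, then place each boundary vertex at its straightest distance. A sketch under that reading, where the input names `angles` (angle between consecutive virtual edges at S) and `distances` (their measured straightest lengths) are our assumptions:

```python
# Place boundary vertices on the base plane B by a polar map of the
# virtual edges: angles are rescaled to sum to 2*pi (preserving ratios),
# and each vertex sits at its measured straightest distance from S.
import numpy as np

def measured_boundary(angles, distances):
    angles = np.asarray(angles, dtype=float)
    angles = angles * (2.0 * np.pi / angles.sum())
    theta = np.cumsum(angles)
    d = np.asarray(distances, dtype=float)
    return np.column_stack([d * np.cos(theta), d * np.sin(theta)])
```

Step 5 (convexification) is omitted here; it only adjusts vertices on concave parts of the resulting polygon.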
Figure 4 (d) and (e) clearly show the utility of our straightest geodesic paths on the simple models Testplane (top) and Testplane2 (bottom). With a circular boundary, the previous parameterizations [5,10] produce the same results in (b) for the two different models. In (c), there is also high distortion in the texture mapping based on (b). Our straightest path algorithm derives two distinct measured boundaries and results in very low distortion in (d) and much better texture mapping in (e).



(a) Models   (b) Circular   (c) Textured by (b)   (d) Measured   (e) Textured by (d)

Fig. 4. Comparison between parameterizations with a fixed boundary and with a measured boundary by our new method: with a circular boundary, the previous parameterizations [5,10] produce the same results in (b) for the two different models in (a). Notice in (c) that there is a lot of distortion in the texture mapping based on the results in (b). Our straightest path algorithm creates a measured boundary that reduces distortion, giving distinct results in (d) and much better texture mapping in (e).

Fig. 5. Results by our new parameterization with different boundaries. The models are Face in the two left columns and Venus in the two right columns. The tip of the nose of each model is chosen as S.

Results with more complex models are demonstrated in Figure 5. Notice that there is always a high level of distortion near the fixed boundary but a low level of distortion near the measured boundary obtained with our method. The straightest



distances to the boundary vertices actually depend on the selection of the source vertex S. We simply use a vertex centered on the mesh relative to the boundary as the source S. As future work, we will explore how to select the vertex S.


The visual results of our method are shown in Figures 1, 3, and 5. The statistical results comparing our parameterization with the other methods are listed in Table 1. Notice that, both visually and statistically, our method produces results competitive with the previous methods.
Table 1. Comparison of distortion measured by the texture stretch metric [19]. The boundary is fixed to a circle. Combined with measured boundaries from our straightest path algorithm, our new parameterization (last column) produces better results than the current leading methods.

No. of Vertices | Floater's [5] (fixed bound.) | LTDs [10] (fixed bound.) | Our Param. (fixed bound.) | Our Param. (measured bound.)
The performance complexity of our algorithm is linear in the number of vertices, i.e., O(V). The longest processing time among the models in Table 1 is 0.53 sec, required for the Mountain model, which has the highest number of vertices. The processing time was measured on a laptop with a Pentium M 2.0 GHz and 1 GB RAM.

Conclusion and Future Work

In this paper, we introduced a new linear parameterization by locally straightest distances. We also demonstrated the utility of our straightest path algorithm in deriving a measured boundary for parameterizations with better results.
Future work will extend the utility of our straightest path algorithm by applying it to other mesh processing techniques such as remeshing or subdivision.

Acknowledgements. This work was supported by grant No. R01-2005-000-10120-0 from the Korea Science and Engineering Foundation in the Ministry of Science & Technology.



References

1. Chen J., Han Y.: Shortest Paths on a Polyhedron; Part I: Computing Shortest Paths, Int. J. Comp. Geom. & Appl. 6(2), 1996.
2. Desbrun M., Meyer M., Alliez P.: Intrinsic Parameterizations of Surface Meshes,
Eurographics 2002 Conference Proceeding, 2002.
3. Floater M., Gotsman C.: How To Morph Tilings Injectively, J. Comp. Appl.
Math., 1999.
4. Floater M.: Parametrization and smooth approximation of surface triangulations, Computer Aided Geometric Design, 1997.
5. Floater M.: Mean Value Coordinates, Comput. Aided Geom. Des., 2003.
6. Funkhouser T., Kazhdan M., Shilane P., Min P., Kiefer W., Tal A., Rusinkiewicz S., Dobkin D.: Modeling by Example, ACM Transactions on Graphics, 2004.
7. Kimmel R., Sethian J.A.: Computing Geodesic Paths on Manifolds, Proc. Natl.
Acad. Sci. USA Vol.95 1998, 1998.
8. Lee Y., Kim H., Lee S.: Mesh Parameterization with a Virtual Boundary, Computers & Graphics 26 (2002), 2002.
9. Lee H., Kim L., Meyer M., Desbrun M.: Meshes on Fire, Computer Animation
and Simulation 2001, Eurographics, 2001.
10. Lee H., Tong Y., Desbrun M.: Geodesics-Based One-to-One Parameterization of 3D Triangle Meshes, IEEE Multimedia 12(1), January-March 2005.
11. Meyer M., Lee H., Barr A., Desbrun M.: Generalized Barycentric Coordinates to
Irregular N-gons, Journal of Graphics Tools, 2002.
12. Mitchell J.S.B.: Geometric Shortest Paths and network optimization, In Handbook of Computational Geometry, J.-R. Sack and J. Urrutia, Eds. Elsevier Science
13. Mitchell J.S.B., Mount D.M., Papadimitriou C.H.: The Discrete Geodesic Problem, SIAM J. of Computing 16(4), 1987.
14. Peyre G., Cohen L.: Geodesic Re-meshing and Parameterization Using Front
Propagation, In Proceedings of VLSM03, 2003.
15. Polthier K., Schmies M.: Straightest Geodesics on Polyhedral Surfaces, Mathematical Visualization, 1998.
16. Polthier K., Schmies M.: Geodesic Flow on Polyhedral Surfaces, Proceedings of
Eurographics-IEEE Symposium on Scientic Visualization 99, 1999.
17. Press W., Teukolsky S., Vetterling W., Flannery B.: Numerical Recipes in C, 2nd edition, Cambridge University Press, New York, USA, 1992.
18. Riken T., Suzuki H.: Approximate Shortest Path on a Polyhedral Surface Based on Selective Refinement of the Discrete Graph and Its Applications, Geometric Modeling and Processing 2000 (Hong Kong), 2000.
19. Sander P.V., Snyder J., Gortler S.J., Hoppe H.: Texture Mapping Progressive
Meshes, Proceedings of SIGGRAPH 2001, 2001.
20. Sifri O., Sheffer A., Gotsman C.: Geodesic-based Surface Remeshing, In Proc. of the 12th Intl. Meshing Roundtable, 2003.

Facial Expression Recognition Based on Emotion Dimensions on Manifold Learning
Young-suk Shin
School of Information and Telecommunication Engineering, Chosun University,
#375 Seosuk-dong, Dong-gu, Gwangju, 501-759, Korea

Abstract. This paper presents a new approach to recognizing facial expressions in various internal states using manifold learning (ML). Manifold learning of facial expressions reflects the local features of facial deformations such as concavities and protrusions. We developed a representation of facial expression images based on manifold learning for feature extraction of facial expressions. First, we propose a zero-phase whitening step for illumination-invariant images. Second, a facial expression representation based on locally linear embedding (LLE) is developed. Finally, classification of facial expressions in emotion dimensions is performed on a two-dimensional structure of emotion with a pleasure/displeasure dimension and an arousal/sleep dimension. The proposed system maps facial expressions in various internal states into the embedding space described by LLE. We explore the locally linear embedding space as a facial expression space in the continuous dimensions of emotion.

1 Introduction
A challenging problem in automatic facial expression recognition is detecting the change of facial expressions in various internal states. Facial expressions are continuous because the expression image varies smoothly as the expression changes. The variability of expression images can be represented as subtleties of manifolds, such as concavities and protrusions, in the image space. Thus automatic facial expression recognition has to detect subtleties of manifolds in the expression image space, and it also requires continuous dimensions of emotion, because expression images consist of several other emotions and many combinations of emotions.
The dimensions of emotion can overcome the problem of a discrete recognition space because discrete emotions can be treated as regions in a continuous space. The two most common dimensions are arousal (calm/excited) and valence (negative/positive). Russell argued that the dimensions of emotion can be applied to emotion recognition [1]. Peter Lang has assembled an international archive of imagery rated by arousal and valence with image content [2]. To recognize facial expressions in various internal states, we work with dimensions of emotion instead of basic emotions or discrete emotion categories. The proposed dimensions of emotion are a pleasure/displeasure dimension and an arousal/sleep dimension.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 81-88, 2007.
(c) Springer-Verlag Berlin Heidelberg 2007



Many methods [3, 4, 5, 6, 7] for representing facial expression images have been proposed, such as optic flow, EMG (electromyography), geometric tracking, Gabor representation, PCA (Principal Component Analysis), and ICA (Independent Component Analysis). In a recent study, Seung and Lee [8] proposed generating image variability as low-dimensional manifolds embedded in image space. Roweis and Saul [9] showed that the locally linear embedding algorithm is able to learn the global structure of nonlinear manifolds, such as the pose and expression of an individual's face. But there have been no reports on how the intrinsic features of the manifold contribute to facial expression recognition in various internal states.
We explore the global structure of nonlinear manifolds over various internal states using the locally linear embedding algorithm. This paper develops a representation of facial expression images based on locally linear embedding for feature extraction of various internal states. This representation consists of two steps, described in Section 3. First, we present a zero-phase whitening step for illumination-invariant images. Second, a facial expression representation based on locally linear embedding is developed. Classification of facial expressions in various internal states is performed on the emotion dimensions (pleasure/displeasure and arousal/sleep) using 1-nearest neighborhood. Finally, we discuss the locally linear embedding space and the facial expression space on the dimensions of emotion.

2 Database on Dimensions of Emotion

The facial expression images used for this research were a subset of the Korean facial expression database based on the dimension model of emotion [10]. The dimension model explains that emotion states are not independent of one another but related to each other in a systematic way. This model was proposed by Russell [1]. The dimension model also has cultural universals, as shown by Osgood, May & Miron and Russell, Lewicka & Niit [11, 12].
The data set with the dimension structure of emotion contained 498 images of 3 females and 3 males, each image 640 by 480 pixels. Expressions were divided into two dimensions according to the study of internal states through the semantic analysis of words related to emotion by Kim et al. [13], using 83 expressive words. The two dimensions of emotion are the pleasure/displeasure dimension and the arousal/sleep dimension. Each female and male expressor posed 83 internal emotional state expressions as the 83 words of emotion were presented. 51 experimental subjects rated the pictures on the degree of expression in each of the two dimensions on a nine-point scale. The images were labeled with a rating averaged over all subjects. Examples of the images are shown in Figure 1. Figure 2 shows a result of the dimension analysis of 44 emotion words related to internal emotion states.

Fig. 1. Examples from the facial expression database in various internal states



Fig. 2. The dimension analysis of 44 emotion words related to internal emotion states

3 Facial Expression Representation from Manifold Learning

This section develops a representation of facial expression images based on locally linear embedding for feature extraction. This representation consists of two steps. In the first step, we perform a zero-phase whitening step for illumination-invariant images. In the second step, a facial expression representation based on locally linear embedding is developed.
3.1 Preprocessing
The face images used for this research were centered with coordinates for eye and mouth locations, then cropped and scaled to 20x20 pixels. The luminance was normalized in two steps. First, the rows of each image were concatenated to produce 1 x 400 dimensional vectors. The row means are subtracted from the dataset, X. Then X is passed through the zero-phase whitening filter, V, which is the inverse square root of the covariance matrix:

    V = E{XX^T}^(-1/2),  Z = XV        (1)

This means that the mean is set to zero and the variances are equalized to unit variance. Second, we subtract the local mean gray-scale value from each sphered patch. Through this process, Z removes much of the variability due to lighting.
Fig. 3(a) shows original images before preprocessing and Fig. 3(b) shows images
after preprocessing.





Fig. 3. (a) original images before preprocessing (b) images after preprocessing
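Equation (1) is the standard zero-phase (ZCA) whitening. A compact illustrative sketch (the small `eps` guarding against near-zero eigenvalues is our addition, not from the paper):

```python
# ZCA whitening: V = E{XX^T}^(-1/2), Z = XV. Rows of X are images with
# their mean removed; after whitening, the covariance of Z is the identity.
import numpy as np

def zca_whiten(X, eps=1e-8):
    X = X - X.mean(axis=0)                  # center each pixel
    cov = X.T @ X / X.shape[0]              # covariance estimate
    evals, evecs = np.linalg.eigh(cov)
    V = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return X @ V
```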

3.2 Locally Linear Embedding Representation

The locally linear embedding algorithm [9] preserves the local neighbor structure of the data in both the embedding space and the observation space, and maps a given set of high-dimensional data points into a surrogate low-dimensional space. Similar expressions in the continuous dimensions of emotion can lie in the same local neighborhood on the manifold, and the mapping from the high-dimensional data points to the low-dimensional points on the manifold is very important for dimensionality reduction. LLE overcomes the problem of nonlinear dimensionality reduction, and its algorithm does not involve local minima [9]. Therefore, we applied the locally linear embedding algorithm to the feature extraction of facial expressions.
The LLE algorithm is used to obtain the corresponding low-dimensional data Y of the training set X. X, a D by N matrix, consists of N data items in D dimensions. Y, a d by N matrix, consists of d < D dimensional embedding data for X. The LLE algorithm can be described as follows.
Step 1: compute the K nearest neighbors of each data point x_i.
Step 2: compute the weights W that best reconstruct each data point from its neighbors, minimizing the cost in eq. (2) under two constraints:

    e(W) = Sum_i | x_i - Sum_{j=1..K} W_ij x_ij |^2        (2)

First, each data point x_i is reconstructed only from its neighbors, enforcing W_ij = 0 if x_i and x_j are not in the same neighborhood. Second, the rows of the weight matrix have the sum-to-one constraint Sum_{j=1..K} W_ij = 1. These constraints yield the optimal W_ij in the least-squares sense. K is the number of nearest neighbors per data point.

Step 3: compute the vectors Y best reconstructed by the weights W, minimizing the
quadratic form in eq.(3) by its bottom nonzero eigenvectors.



    F(Y) = Sum_i | y_i - Sum_{j=1..K} W_ij y_ij |^2        (3)

This optimization is performed subject to two constraints. Since the cost F(Y) is invariant to translation in Y, the constraint Sum_i y_i = 0 removes this degree of freedom by requiring the coordinates to be centered on the origin. The constraint (1/N) Sum_i y_i y_i^T = I avoids the degenerate solution Y = 0. Under these constraints, eq. (3) reduces to an eigenvector decomposition problem:

    F(Y) = Sum_i | y_i - Sum_j W_ij y_ij |^2 = ||(I - W)Y||^2,
    Y* = arg min_Y Y^T (I - W)^T (I - W) Y

The optimal solution Y* is given by the smallest eigenvectors of (I - W)^T (I - W). The eigenvector with eigenvalue zero is discarded, because discarding it enforces the centering constraint. Thus we need to compute the bottom (d+1) eigenvectors of the matrix.
We therefore obtain the corresponding low-dimensional data set Y in the embedding space from the training set X. Figure 4 shows facial expression images reconstructed from the bottom (d+1) eigenvectors corresponding to the d+1 smallest eigenvalues discovered by LLE, with K=3 neighbors per data point. In particular, the first eight components (d=8) discovered by LLE represent the features of facial expressions well. Facial expression images of various internal states were mapped into the embedding space described by the first two components of LLE (see Fig. 5). From Figure 5, we can explore the structural nature of facial expressions in various internal states in the embedding space modeled by LLE.
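The three LLE steps can be condensed into a short NumPy sketch. This is an illustrative reimplementation of the Roweis-Saul algorithm, not the authors' code; the regularizer in step 2 is a standard numerical-stability addition not mentioned in the text.

```python
# Minimal LLE: steps 1-3 for an N x D data matrix X, returning the
# d-dimensional embedding (bottom d+1 eigenvectors minus the constant one).
import numpy as np

def lle_embed(X, k=3, d=2, reg=1e-3):
    N = X.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        # Step 1: k nearest neighbors of X[i], excluding X[i] itself.
        nbrs = np.argsort(np.linalg.norm(X - X[i], axis=1))[1:k + 1]
        # Step 2: reconstruction weights, regularized, rows summing to one.
        Z = X[nbrs] - X[i]
        C = Z @ Z.T + np.eye(k) * reg * np.trace(Z @ Z.T)
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()
    # Step 3: bottom eigenvectors of M = (I - W)^T (I - W); eigh returns
    # eigenvalues in ascending order, so we skip the zero (constant) one.
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    _, evecs = np.linalg.eigh(M)
    return evecs[:, 1:d + 1]
```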




Fig. 4. Facial expression images reconstructed from bottom (d+1) eigenvectors (a) d=1,
(b) d=3, and (c) d=8



Fig. 5. 318 facial expression images of various internal states mapped into the embedding space
described by the first two components of LLE

The further a point is from the center, the higher the intensity of the displeasure and arousal dimensions. Facial expression images of various internal states coexist at the center.
4 Result and Discussion

Facial expression recognition in various internal states, with features extracted by the LLE algorithm, was evaluated by 1-nearest neighborhood on the two-dimensional structure of emotion with the pleasure/displeasure and arousal/sleep dimensions. 252 images were used for training, and 66 images excluded from the training set were used for testing. The 66 test images include 11 expression images of each of the six people. The class labels to be recognized consist of four sections on the two-dimensional structure of emotion. Fig. 6 shows the sections of each class label.
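The evaluation rule can be sketched as follows. The quadrant split at the scale midpoint 5 is our assumption for illustration only; the actual class regions C1-C4 are those of Fig. 6, and `train_feats` would be the LLE features of the 252 training images.

```python
# 1-nearest-neighbor classification into the four emotion-plane classes.
# The midpoint-5 quadrant rule is a hypothetical stand-in for Fig. 6.
import numpy as np

def quadrant_label(pd, arousal, mid=5.0):
    if pd <= mid:
        return "C1" if arousal > mid else "C2"
    return "C3" if arousal > mid else "C4"

def nn_classify(query, train_feats, train_labels):
    dists = np.linalg.norm(train_feats - query, axis=1)
    return train_labels[int(np.argmin(dists))]
```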
Table 1 gives results of facial expression recognition by the proposed algorithm on the two dimensions of emotion (a part of the full results). The recognition rate on the test set was 90.9% in the pleasure/displeasure dimension and 56.1% in the arousal/sleep dimension. In Table 1, the first column indicates the emotion words of the 11 expression images used for testing, the second and third columns give the value on each bipolar dimension of the test data, the fourth column indicates the class label (C1, C2, C3, C4) of the test data, and the classification results of the proposed algorithm are shown in the fifth column.










Fig. 6. The class region on two dimensional structure of emotion

Table 1. Result data of facial expression recognition by the proposed algorithm (abbreviations: P-D, pleasure/displeasure; A-S, arousal/sleep); word and class-label entries not recovered from the scan are left blank

word              P-D    A-S
pleasantness (a)  1.40   5.47
depression (a)    6.00   4.23
                  7.13   6.17
                  5.90   3.67
                  6.13   6.47
                  2.97   5.17
                  2.90   4.07
                  7.80   5.67
                  6.00   1.93
                  2.07   4.27
                  1.70   5.70
gloomy (b)        6.60   3.83
strangeness (b)   6.03   5.67
proud (b)         2.00   4.53
confident (b)     2.47   5.27
despair (b)       6.47   5.03
sleepiness (b)    6.50   3.80
                  1.83   4.97
                  2.10   5.63
boredom (b)       6.47   5.73
tedious (b)       6.73   4.77
jealousy (b)      6.87   6.80

This paper explores two problems. One is a new approach to recognizing facial expressions in various internal states using the locally linear embedding algorithm. The other is the structural nature of facial expressions in various internal states in the embedding space modeled by LLE.



Regarding the first problem, the recognition results for each dimension through 1-nearest neighborhood were significant: 90.9% in the pleasure/displeasure dimension and 56.1% in the arousal/sleep dimension. The two-dimensional structure of emotion appears to be a stable structure for facial expression recognition, the pleasure/displeasure dimension being more stable than the arousal/sleep dimension. Regarding the second, facial expressions in the continuous dimensions of emotion showed a cross structure in the locally linear embedding space. The further a point is from the center, the higher the intensity of the displeasure and arousal dimensions. From these results, we see that the facial expression structure in the continuous dimensions of emotion is very similar to the structure represented by the manifold model.
Thus the relationship of facial expressions in various internal states can be facilitated by the manifold model. In future work, we will consider learning invariant manifolds of facial expressions.
Acknowledgements. This work was supported by the Korea Research Foundation
Grant funded by the Korean Government (KRF-2005-042-D00285).

References

1. Russell, J. A.: Evidence of convergent validity on the dimension of affect. Journal of Personality and Social Psychology, 30, (1978) 1152-1168
2. Lang, P. J.: The emotion probe: Studies of motivation and attention. American Psychologist, 50(5) (1995) 372-385
3. Donato, G., Bartlett, M., Hager, J., Ekman, P. and Sejnowski, T.: Classifying facial actions, IEEE PAMI, 21(10) (1999) 974-989
4. Schmidt, K., Cohn, J.: Dynamics of facial expression: Normative characteristics and individual differences, Intl. Conf. on Multimedia and Expo, 2001
5. Pantic, M., Rothkrantz, L.J.M.: Towards an Affect-Sensitive Multimodal Human-Computer Interaction, Proc. of the IEEE, 91, 1370-1390
6. Shin, Y., An, Y.: Facial expression recognition based on two dimensions without neutral
expressions, LNCS(3711) (2005) 215-222
7. Bartlett, M.: Face Image analysis by unsupervised learning, Kluwer Academic Publishers
8. Seung, H. S., Lee, D.D.:The manifold ways of perception, Science (290), (2000) 22682269
9. Roweis, S.T., Saul, L.K..:Nonlinear Dimensionality reduction by locally linear embedding,
Science (290), (2000) 2323-2326
10. Bahn, S., Han, J., Chung, C.: Facial expression database for mapping facial expression
onto internal state. 97 Emotion Conference of Korea, (1997) 215-219
11. Osgood, C. E., May, W.H. and Miron, M.S.: Cross-curtral universals of affective meaning.
Urbana:University of Illinoise Press, (1975)
12. Russell, J. A., Lewicka, M. and Nitt, T.: A cross-cultural study of a circumplex model of
affect. Journal of Personality and Social Psychology, 57, (1989) 848-856
13. Kim, Y., Kim, J., Park, S., Oh, K., Chung, C.: The study of dimension of internal states
through word analysis about emotion. Korean Journal of the Science of Emotion and Sensibility, 1 (1998) 145-152

AI Framework for Decision Modeling in Behavioral Animation of Virtual Avatars

A. Iglesias1 and F. Luengo2

1 Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. de los Castros, s/n, 39005, Santander, Spain
2 Department of Computer Science, University of Zulia, Post Office Box #527, Maracaibo, Venezuela

Abstract. One of the major current issues in Artificial Life is the decision modeling problem (also known as goal selection or action selection). Recently, some Artificial Intelligence (AI) techniques have been proposed to tackle this problem. This paper introduces a new Artificial-Intelligence-based framework for decision modeling. The framework is applied to generate realistic animations of virtual avatars evolving autonomously within a 3D environment and able to follow intelligent behavioral patterns from the point of view of a human observer. Two examples of its application to different scenarios are also briefly reported.


1 Introduction

The realistic simulation and animation of the behavior of virtual avatars emulating human beings (also known as Artificial Life) has attracted much attention during the last few years [2,5,6,7,8,9,10,11,12,13]. A major goal in behavioral animation is the construction of an intelligent system able to integrate the different techniques required for the realistic simulation of the behavior of virtual humans. The challenge is to provide the virtual avatars with a high degree of autonomy, so that they can evolve freely, with minimal input from the animator. In addition, this animation is expected to be realistic; in other words, the virtual avatars must behave according to reality from the point of view of a human observer.
Recently, some Artificial Intelligence (AI) techniques have been proposed to tackle this problem [1,3,4,8]. This paper introduces a new Artificial-Intelligence-based framework for decision modeling. In particular, we apply several AI techniques (such as neural networks, expert systems, genetic algorithms and K-means) in order to create a sophisticated behavioral system that allows the avatars to take intelligent decisions by themselves. The framework is applied to generate realistic animations of virtual avatars evolving autonomously within a 3D environment and able to follow intelligent behavioral patterns from the point of view of a human observer. Two examples of the application of this framework to different scenarios are briefly reported.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 89–96, 2007.
© Springer-Verlag Berlin Heidelberg 2007



The structure of this paper is as follows: the main components of our behavioral system are described in detail in Section 2. Section 3 discusses the performance of this approach by means of two simple yet illustrative examples. Conclusions and future lines of work in Section 4 close the paper.

2 Behavioral System

In this section the main components of our behavioral system are described.

2.1 Environment Recognition

In the first step, a virtual world is generated and the virtual avatars are placed within it. In the examples described in this paper, we have chosen a virtual park and a shopping center, carefully chosen environments that exhibit many potential object-avatar interactions. In order to interact with the 3D world, each virtual avatar is equipped with a perception subsystem that includes a set of individual sensors to analyze the environment and capture relevant information. This analysis includes the determination of distances and positions of the different objects of the scene, so that the agent can move in this environment, avoid obstacles, identify other virtual avatars and take decisions accordingly. Further, each avatar has a predefined vision range (given by a distance threshold value determined by the user): an object is considered visible only if its distance to the avatar is less than this threshold value; otherwise, the object is invisible.
All this information is subsequently sent to an analyzer subsystem, where it is processed by using a representation scheme based on genetic algorithms. This scheme has proved to be extremely useful for pattern recognition and identification. Given a pair of elements A and B and a sequence j, there is a distance function that determines how near these elements are. It is defined as

dist(j, A, B) = (1/k) Σ_{i=1}^{k} |A_i^j − B_i^j|,

where A_i^j denotes the ith gene at sequence j for the chromosome A, and k denotes the number of genes of such a sequence. Note that we can think of sequences in terms of levels in a tree. The sequence j is simply the level j down the tree at which it appears, with the top of the tree as sequence 1. A and B are similar at sequence (or at level) j if dist(j, A, B) = 0. Note that this hierarchical structure implies that an arbitrary object is nearer to the one minimizing the distance at earlier sequences. This simple expression provides a quite accurate procedure to classify objects at a glance, by simply comparing them sequentially at each depth level.
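The sequence-by-sequence comparison described above can be sketched as follows; the encoding of a chromosome as a list of gene sequences and the function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the hierarchical distance described above.
# Assumption: a chromosome is a list of gene sequences, where index 0
# corresponds to sequence 1 (the top of the tree).

def seq_distance(j, a, b):
    """dist(j, A, B) = (1/k) * sum_i |A_i^j - B_i^j| for sequence j (1-based)."""
    genes_a, genes_b = a[j - 1], b[j - 1]
    k = len(genes_a)
    return sum(abs(x - y) for x, y in zip(genes_a, genes_b)) / k

def classify(query, candidates):
    """Pick the candidate nearest to the query, comparing sequence by
    sequence: distances at earlier sequences dominate later ones."""
    def key(c):
        return tuple(seq_distance(j, query, c) for j in range(1, len(query) + 1))
    return min(candidates, key=key)
```

Because the key is a tuple of per-level distances, Python's lexicographic tuple comparison reproduces the rule that an object minimizing the distance at earlier sequences is considered nearer.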

2.2 Knowledge Acquisition

Once new information is attained and processed by the analyzer, it is sent to the
knowledge motor. This knowledge motor is actually the brain of our system. Its
main components are depicted in Figure 1(left). Firstly, the current information



Fig. 1. (left) Knowledge motor scheme; (right) goal selection subsystem scheme

is temporarily stored in the knowledge buffer, until new information is attained. At that time, the previous information is sent to the knowledge updater (KU), the new one being stored in the knowledge buffer, and so on. This KU updates both the memory area and the knowledge base.
The memory area is a neural network applied to learn from data (in our problem, the information received from the environment through the perception subsystem). In this paper we consider unsupervised learning, and hence we use an autoassociative scheme, since the inputs themselves are used as targets. To update the memory area, we employ the K-means least-squares partitioning algorithm for competitive networks, which are formed by an input and an output layer, connected by feed-forward connections. Each input pattern represents a point in the configuration space (the space of inputs) where we want to obtain classes. This type of architecture is usually trained with a winner-takes-all algorithm, so that only those weights associated with the output neuron with the largest value (the winner) are updated. The basic algorithm consists of two main steps:
value (the winner) are updated. The basic algorithm consists of two main steps:
(1) compute cluster centroids and use them as new cluster seeds and (2) assign
each chromosome to the nearest centroid. The basic idea behind this formulation is to overcome the limitation of having more data than neurons by allowing each neuron to store more than one data item at the same time.
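A minimal sketch of such a winner-takes-all training loop, with each output neuron's weight vector acting as a cluster centroid (names and the initialization are illustrative; the concrete network of the paper is not reproduced here):

```python
import random

def kmeans_competitive(patterns, n_neurons, iters=10, seed=0):
    """K-means-style training of a competitive layer: each output neuron's
    weight vector is a centroid, and only the winning neuron is updated.
    Illustrative sketch, not the authors' implementation."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(patterns, n_neurons)]
    for _ in range(iters):
        # step (2): assign each pattern to the nearest centroid (the winner)
        clusters = [[] for _ in range(n_neurons)]
        for p in patterns:
            winner = min(range(n_neurons),
                         key=lambda i: sum((a - b) ** 2
                                           for a, b in zip(p, centroids[i])))
            clusters[winner].append(p)
        # step (1): recompute cluster centroids and use them as new seeds
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(col) / len(members) for col in zip(*members)]
    return centroids
```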
The knowledge base is actually a rule-based expert system, containing both concrete knowledge (facts) and abstract knowledge (inference rules). Facts include complex relationships among the different elements (relative positions, etc.) and personal information about the avatars (personal data, schedule, hobbies or habits), i.e. what we call the avatars' characteristic patterns. Additional subsystems for tasks like learning, coherence control, action execution and others have also been incorporated. This deterministic expert system is subsequently modified by means of probabilistic rules, for which new data are used to update the probability of a particular event. Thus, a neuron does not exhibit a deterministic output but a probabilistic one: what is actually computed is the probability of a neuron storing a particular data item at a particular time. This probability is continuously updated in order to adapt our recalls to the most recent data. This leads to the concept of reinforcement, based on the fact that the repetition of a particular event over time increases the probability of recalling it.



Of course, some particular data are associated with high-relevance events whose influence does not decrease over time. A learning rate parameter introduced in our scheme is intended to play this role.
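As an illustration only (the paper does not spell out its update rule), a probabilistic recall update driven by a learning rate might look like:

```python
def reinforce(prob, observed, learning_rate=0.1):
    """Exponential moving update of the recall probability of an event:
    repetition pushes prob toward 1, absence decays it toward 0.
    A smaller learning_rate makes the memory more persistent, which is
    one way to model high-relevance events. Names are assumptions."""
    target = 1.0 if observed else 0.0
    return prob + learning_rate * (target - prob)
```

Repeated observation of the same event monotonically raises its recall probability, which matches the notion of reinforcement described above.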
Finally, the request manager is the component that, on the basis of the information received from the previous modules, provides the information requested by the goal selection subsystem described in the next section.

2.3 Decision Modeling

A central issue in behavioral animation is the adequate choice of appropriate mechanisms for decision modeling. Those mechanisms take a decision about the next action to be carried out from a set of feasible actions. The fundamental task of any decision modeling module is to determine a priority-sorted list of goals to be performed by the virtual agent. A goal's priority is calculated as a combination of the avatar's different internal states (given by mathematical functions not described in this paper because of space limitations) and external factors (which determine the goals' feasibility).
Figure 1(right) shows the architecture of our goal selection subsystem, comprised of three modules and a goal database. The database stores a list of arrays (associated with each of the available goals at each time) comprised of: the goal ID, its feasibility rate (determined by the analyzer subsystem), the priority of the goal, the wish rate (determined by the emotional analyzer), the time at which the goal is selected and its success rate.
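The goal arrays and the priority-sorted goal list could be sketched as follows; field and function names are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class GoalRecord:
    """One array of the goal database, with the fields listed above."""
    goal_id: str
    feasibility: float   # from the analyzer subsystem
    priority: float      # from the intention planning module
    wish: float          # from the emotional analyzer, in [0, 100]
    selected_at: float = 0.0
    success: float = 0.0

def sorted_goals(goals):
    """Priority-sorted list of feasible goals for the virtual agent."""
    feasible = [g for g in goals if g.feasibility > 0]
    return sorted(feasible, key=lambda g: g.priority, reverse=True)
```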
The emotional analyzer (EA) is the module responsible for updating the wish rate of a goal (regardless of its feasibility). Such a rate takes values in the interval [0, 100] according to some mathematical functions (not described here) that simulate human reactions in a very realistic way (as shown in Section 3).
The intention planning (IP) module determines the priority of each goal. To this aim, it uses information such as the feasibility and wish rates. From this point of view, it is rather similar to the intention generator of [13] except that decisions in that system are exclusively based on rules. This module also comprises a buffer to store temporarily those goals interrupted for a while, so that the agent exhibits a certain persistence of goals. This feature is especially valuable to prevent the oscillatory behavior that appears when the current goal changes continuously.
The last module is the action planning (AP), a rule-based expert system that gets information from the environment (via the knowledge motor), determines the sequence of actions to be carried out in order to achieve a particular goal and updates the goals' status accordingly.

2.4 Action Planning and Execution

Once the goals and priorities are defined, this information is sent to the motion
subsystem to be transformed into motion routines (just as the orders of our brain
are sent to our muscles) and then animated in the virtual world. Currently, we



Fig. 2. Example 1: screenshots of the virtual park environment

have implemented routines for path planning and obstacle avoidance. In particular, we have employed a modification of the A* path-finding algorithm, based on the idea of preventing path recalculation until a new obstacle is reached. This simple procedure has yielded substantial savings in time in all our experiments. In addition, sophisticated collision avoidance algorithms have been incorporated into this system (see the examples described in Section 3).
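A toy illustration of the lazy-recalculation idea on a grid (a simplified stand-in, not the authors' algorithm): the A* path is reused step by step, and a new path is computed from the current position only when the next step is blocked by a newly discovered obstacle.

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid; cells equal to 1 are obstacles."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set, came, g = [(h(start), start)], {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt], came[nxt] = ng, cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

def follow(grid, start, goal):
    """Lazy replanning: reuse the current path and recompute it only when
    the next step hits a newly discovered obstacle."""
    path, pos, visited = astar(grid, start, goal), start, [start]
    while pos != goal:
        nxt = path[path.index(pos) + 1]
        if grid[nxt[0]][nxt[1]]:           # new obstacle on the stored path
            path = astar(grid, pos, goal)  # replan from the current position
            nxt = path[1]
        pos = nxt
        visited.append(pos)
    return visited
```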

3 Two Illustrative Examples

In this section, two illustrative examples are used to show the good performance of our approach. The examples are available on the Internet at the URLs: (x = 1, 2).
Figure 2 shows some screenshots from the first movie. In picture (a) a woman and her two children go into the park. The younger kid runs after some birds. After failing to capture them, he gets bored and joins his brother. Then, the group moves towards the wheel, avoiding the trees and the seesaw (b). Simultaneously, other people (the husband and a girl) enter the park. In (c) a kid is playing with the wheel while his brother gets frustrated after expecting to play with the seesaw (in fact, he was waiting for his brother beside the seesaw).
After a while, he decides to join his brother and play with the wheel anyway. Once her children are safely playing, the woman relaxes and goes to meet her husband, who is seated on a bench (d). The girl is seated in front of them, reading a newspaper. Two more people go into the park: a man and a kid. The kid goes directly towards the playground, while the man sees the girl, becomes attracted by her and decides to sit down on the same bench, looking for a chat. As she does not want to chat with him, she stands up and leaves. The new kid goes to play with the wheel while the two brothers decide to play with the seesaw. The playground has two seesaws, so each brother goes towards the nearest one (e). Suddenly, they realize they must use the same one, so one brother changes his trajectory and moves towards the other seesaw. The mother is coming back in order to look after her children. Her husband also comes behind her and they


Fig. 3. Temporal evolution of the internal states (top) and the available goals' wish rates (bottom) for the second example in this paper

start to chat again (f). The man on the bench is now alone and getting upset, so he decides to take a walk and look for the girl again. Simultaneously, she starts to do physical exercises (g). When the man realizes she is busy and hence will not likely pay attention to him, he changes his plans and walks towards the couple, who are still chatting (g). The man realizes they are not interested in chatting with him either, so he finally leaves the park.
It is interesting to point out that the movie includes a number of remarkable motion and behavioral features. For instance, pictures (a)-(b)-(g) illustrate several of our motion algorithms: persecution, obstacle avoidance, path finding, interaction with objects (wheel, seesaw, bench) and other avatars, etc. People in the movie exhibit a remarkable ability to capture information from the environment and change their trajectories in real time. On the other hand, they also exhibit a human-like ability to realize what is going on around others and change their plans accordingly. Each virtual avatar has previous knowledge of neither the environment nor the other avatars, as happens in real life when people enter a new place for the first time or meet new people.
The second scene consists of a shopping center in which the virtual avatars can perform a number of different actions, such as eat, drink, play videogames, sit down to rest and, of course, do some shopping. We consider four virtual avatars: three kids and a woman. The pictures in Figure 3 are labelled with eight numbers indicating the different simulation milestones (the corresponding animation screenshots for those time units are displayed in Figure 4): (1) at the initial



Fig. 4. Example 2: screenshots of the shopping center environment

step, the three kids go to play with the videogame machines, while the woman moves towards the eating area (indicated by the tables in the scene). Note that the internal state with the highest value for the avatar analyzed in this example is energy, so the avatar is going to perform some kind of dynamic activity, such as playing; (2) the kid keeps playing (and his energy level goes down) until his satisfaction reaches the maximum value. At that time, the anxiety increases, and the avatar's wish turns to performing a different activity. However, the goal "play videogame" still has the highest wish rate, so it remains in progress for a while; (3) at this simulation step, the anxiety reaches a local maximum again, meaning that the kid is getting bored with playing videogames. Simultaneously, the goal with the highest value is "drink water", so the kid stops playing and looks for a drink machine; (4) at this time, the kid reaches the drink machine, buys a can and drinks. Consequently, the internal state function "thirst" decreases as the agent drinks, until the status of this goal becomes "goal attained"; (5) once this goal is satisfied, the goal "play videogames" is the new current goal, so the kid comes back towards the videogame machines; (6) however, the energy level is very low, so the goal "play videogames" is interrupted, and the kid looks for a bench to sit down and have a rest; (7) once seated, the energy level rises and the goal "have a rest" no longer applies; (8) since the previous goal "play videogames" is still in progress, the agent comes back and plays again.
Figure 3 shows the temporal evolution of the internal states (top) and the goals' wish rates (bottom) for one of the kids. Similar graphics can be obtained for the other avatars (they are not included here because of space limitations). The picture on the top displays the temporal evolution of the five internal state functions (valued on the interval [0, 100]) considered in this example, namely, energy, shyness, anxiety, hunger and thirst. On the bottom, the wish rate (also valued on the interval [0, 100]) of the feasible goals ("have a rest", "eat something", "drink water", "take a walk" and "play videogame") is depicted.



4 Conclusions and Future Work

The core of this paper is the realistic simulation of the human behavior of virtual avatars living in a virtual 3D world. To this purpose, the paper introduces a behavioral system that uses several Artificial Intelligence techniques so that the avatars can behave in an intelligent and autonomous way. Future lines of research include the determination of new functions and parameters to reproduce human actions and decisions and the improvement of both the interaction with users and the quality of the graphics. Financial support from the Spanish Ministry of Education and Science (Project Ref. #TIN2006-13615) is acknowledged.

References

1. Funge, J., Tu, X., Terzopoulos, D.: Cognitive modeling: knowledge, reasoning and planning for intelligent characters, SIGGRAPH'99 (1999) 29-38
2. Geiger, C., Latzel, M.: Prototyping of complex plan based behavior for 3D actors, Fourth Int. Conf. on Autonomous Agents, ACM Press, NY (2000) 451-458
3. Granieri, J.P., Becket, W., Reich, B.D., Crabtree, J., Badler, N.I.: Behavioral control for real-time simulated human agents, Symposium on Interactive 3D Graphics, ACM, New York (1995) 173-180
4. Grzeszczuk, R., Terzopoulos, D., Hinton, G.: NeuroAnimator: fast neural network emulation and control of physics-based models. SIGGRAPH'98 (1998) 9-20
5. Iglesias, A., Luengo, F.: New goal selection scheme for behavioral animation of intelligent virtual agents. IEICE Trans. on Inf. and Systems, E88-D(5) (2005)
6. Luengo, F., Iglesias, A.: A new architecture for simulating the behavior of virtual agents. Lecture Notes in Computer Science, 2657 (2003) 935-944
7. Luengo, F., Iglesias, A.: Framework for simulating the human behavior for intelligent virtual agents. Lecture Notes in Computer Science, 3039 (2004) Part I: Framework architecture, 229-236; Part II: Behavioral system, 237-244
8. Monzani, J.S., Caicedo, A., Thalmann, D.: Integrating behavioral animation techniques. EUROGRAPHICS'2001, Computer Graphics Forum, 20(3) (2001) 309-318
9. Raupp, S., Thalmann, D.: Hierarchical model for real time simulation of virtual human crowds. IEEE Trans. Visual. and Computer Graphics, 7(2) (2001) 152-164
10. Sanchez, S., Balet, O., Luga, H., Duthen, Y.: Autonomous virtual actors. Lecture Notes in Computer Science, 3015 (2004) 68-78
11. de Sevin, E., Thalmann, D.: The complexity of testing a motivational model of action selection for virtual humans, Proceedings of Computer Graphics International, IEEE CS Press, Los Alamitos, CA (2004) 540-543
12. Thalmann, D., Monzani, J.S.: Behavioural animation of virtual humans: what kind of law and rules? Proc. Computer Animation 2002, IEEE CS Press (2002) 154-163
13. Tu, X., Terzopoulos, D.: Artificial fishes: physics, locomotion, perception, behavior. Proceedings of ACM SIGGRAPH'94 (1994) 43-50

Studies on Shape Feature Combination and Efficient Categorization of 3D Models

Tianyang Lv1,2, Guobao Liu1, Jiming Pang1, and Zhengxuan Wang1

1 College of Computer Science and Technology, Jilin University, Changchun, China
2 College of Computer Science and Technology, Harbin Engineering University, Harbin, China

Abstract. In the field of 3D model retrieval, the combination of different kinds of shape features is a promising way to improve retrieval performance, and the efficient categorization of 3D models is critical for organizing models. The paper proposes a combination method which automatically decides the fixed weights of different shape features. Based on the combined shape feature, the paper applies cluster analysis to efficiently categorize 3D models according to their shape. The standard 3D model database, the Princeton Shape Benchmark, is adopted in the experiments, and our method shows good performance not only in improving retrieval performance but also in categorization.
Keywords: Shape-based 3D model retrieval; feature combination; categorization; clustering.

1 Introduction
With the proliferation of 3D models and their wide spread through the Internet, 3D model retrieval emerges as a new field of multimedia retrieval and has great application value in industry, the military, etc. [1]. Similar to the studies in image or video retrieval, research in 3D model retrieval concentrates on content-based retrieval [2], especially shape-based retrieval. The major problem of shape-based retrieval is extracting the models' shape features, which should satisfy good properties, such as rotation invariance, representing various kinds of shapes, and describing similar shapes with similar features.
Although many methods for extracting shape features have been proposed [3], research shows that none is the best for all kinds of shapes [4, 5, 6, 7]. To solve this problem, it is effective to combine different shape features [5, 6, 7]. The critical step of the combination is determining the weights of the shape features. For instance, ref. [5] determines the fixed weights based on the user's experience, gained from numerous experiments; it also decides dynamic weights based on the retrieval result and the categorization of 3D models.
However, the shortcomings of these methods are: they need user experience to decide appropriate fixed weights and cannot appoint a weight for a new feature; and it is too time-consuming to compute the dynamic weights, while their performance is only a little better than the fixed-weight way.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 97–104, 2007.
© Springer-Verlag Berlin Heidelberg 2007



Moreover, it is still an open problem to categorize 3D models. Nowadays, the categorization of 3D models depends on manual work, as in the Princeton Shape Benchmark (PSB) [4]. Even if the drawback of being time-consuming is not taken into consideration, the manual way also results in the following mistakes: first, models with similar shapes are classified into different classes, like the One Story Home class and the Barn class of PSB; second, models with apparently different shapes are classified into the same class, like the Potted Plant class and the Stair class. Table 1 states the details. This is because humans categorize 3D models according to their semantics in real life, instead of their shape.
Table 1. Mistakes of manual categorization of PSB

[The table originally showed example models from the One Story Home and Potted Plant classes; the images are not recoverable from the extraction.]

To solve these problems, the paper conducts research in two aspects: first, we analyze the influence of the weight values on the combination performance and propose a method which automatically decides the values of the fixed weights; second, we introduce an efficient way of categorizing 3D models based on clustering.
The rest of the paper is organized as follows: Section 2 introduces the automatic combination method; Section 3 states the categorization based on the clustering result; Section 4 gives the experimental results on PSB; and Section 5 summarizes the paper.

2 An Automatic Decision Method for the Features' Fixed Weights

When combining different shape features with fixed weights, the distance d_com between model q and model o is computed as follows:

d_com(q, o) = Σ_{i=1}^{l} w_i · d_i(q, o) / max(d_i(q))    (1)

where l is the number of different shape features, w_i is the fixed weight of the ith shape feature, d_i(q, o) is the distance between q and o under the ith shape feature vector, and max(d_i(q)) is the maximum distance between q and the other models. Previous research shows that the Manhattan distance performs better than the Euclidean distance, thus the paper adopts the Manhattan distance in computing d_i(q, o).
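Formula (1) can be sketched as follows, assuming precomputed per-feature Manhattan distance matrices (the data layout and names are illustrative):

```python
def manhattan(u, v):
    """Manhattan (L1) distance between two feature vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def combined_distance(q_index, o_index, dist_matrices, weights):
    """Formula (1): d_com(q,o) = sum_i w_i * d_i(q,o) / max(d_i(q)).
    dist_matrices[i][q][o] holds the Manhattan distance d_i(q, o)."""
    total = 0.0
    for w, d in zip(weights, dist_matrices):
        row = d[q_index]
        m = max(row)           # max distance from q under feature i
        if m:
            total += w * row[o_index] / m
    return total
```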



In this paper, four kinds of feature extraction methods are adopted and 5 sets of feature vectors are obtained from PSB. The details are as follows: (1) the shape feature extraction method based on the depth buffer [11], termed DBD, which obtains a feature vector with 438 dimensions; (2) the method based on EDT [12], termed NEDT, which obtains a vector with 544 dimensions; (3) the method based on spherical harmonics [13], which obtains two sets of vectors with 32 dimensions and 136 dimensions, termed RSH-32 and RSH-136 respectively; (4) the method performing the spherical harmonic transformation on the voxelized models, termed SHVF, which obtains a feature vector with 256 dimensions.
We conduct experiments on PSB to analyze the influence of the value of w_i on the combination performance. Table 2 evaluates the performance of combining any two out of the 5 different features of PSB. The weight of each feature is equal and the criterion R-precision is adopted [8]. It can be seen that there coexist good cases, like combining DBD and NEDT, and bad cases, like combining DBD and RSH-32. But if the fixed weights (4:4:2:1:1) decided according to our experience are adopted, the performance is much better.
Table 2. Combination performance comparison under different fixed weights

[The table body is not recoverable from the extraction; only scattered R-precision values survive.]

This experiment shows that appropriate values of w_i can greatly improve the combination performance. Although the w_i decided by experience perform well, they have limitations: deciding them is time-consuming and the approach is hard to generalize.
Thus, it is necessary to decide w_i automatically. To accomplish this task, we suppose that if a feature is the best for most models, its weight should be the highest. If one feature is the best for a model, its weight is incremented by 1/N, where N is the total number of models. As for a set of classified 3D models, we follow the winner-take-all rule: if the ith feature is the best for the jth class C_j of models, w_i is incremented by n_j/N, where n_j is the size of C_j.
Finally, the automatic decision formula for the fixed weights w_i is as follows:

w_i = Σ_{j=1}^{nClass} f_i(C_j) · n_j / N    (2)

where nClass is the number of classes of models; f_i(C_j) = 1 iff the R-precision of the ith feature is the highest for C_j, otherwise f_i(C_j) = 0. Note that Σ_{i=1}^{l} w_i = 1.

Obviously, the proposed method can automatically decide the fixed weight for a new shape feature by recomputing Formula (2). During this process, the weights of the existing features are also adjusted.
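Formula (2) reduces to a simple accumulation; a sketch under the assumption that the winning feature per class has already been determined by its R-precision (input names are illustrative):

```python
def automatic_weights(best_feature_per_class, class_sizes, n_features):
    """Formula (2): w_i = sum_j f_i(C_j) * n_j / N, where f_i(C_j) = 1 iff
    feature i has the highest R-precision on class C_j.
    best_feature_per_class[j] is that winning feature index for class j."""
    N = sum(class_sizes)
    w = [0.0] * n_features
    for best_i, n_j in zip(best_feature_per_class, class_sizes):
        w[best_i] += n_j / N
    return w
```

By construction the weights sum to 1, as required by the constraint above.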

3 Efficient Categorization of 3D Models Based on Clustering

As an unsupervised technique, cluster analysis is a promising candidate for categorizing 3D models. It is good at discovering how the feature vectors concentrate, without prior knowledge. Since models with similar features are grouped together and their features reflect their shape, the clustering result can be considered a categorization of 3D models based on their shape. Ref. [10] performs research in this field. However, it relies on just one kind of shape feature, so the clustering result is highly sensitive to the performance of that shape feature.
In contrast, the paper adopts the proposed fixed-weight feature combination method and achieves a much better and more stable shape descriptor of a 3D model. The distance among models is computed according to Formulas (1) and (2).
The X-means algorithm is selected to analyze the shape feature set of the 3D models. X-means is an important improvement of the well-known K-means method. To overcome the high dependency of K-means on the pre-decided number k of clusters, X-means requires but does not restrict itself to the parameter k. Its basic idea is that, in an iteration of clustering, it splits the center of selected clusters into two children and decides whether a child survives. During clustering, the formula BIC(c | x) = L(x | c) − k(m+1)/2 · log n is used to decide the appropriate moment to stop clustering, where L(x | c) is the log-likelihood of the dataset x according to the model c, m is the dimensionality of the data, and n is the number of data points. In this way, the appropriate number of clusters can be decided automatically.
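The split test can be sketched as follows (an illustrative reading of the BIC formula above, with the log-likelihoods assumed to be computed elsewhere):

```python
import math

def bic_score(log_likelihood, k, m, n):
    """BIC(c|x) = L(x|c) - k*(m+1)/2 * log(n): the log-likelihood penalized
    by the number of free parameters of a model with k centroids of
    dimension m, fitted on n data points."""
    return log_likelihood - k * (m + 1) / 2.0 * math.log(n)

def should_split(ll_parent, ll_children, k_parent, k_children, m, n):
    """Accept a centroid split when the children's BIC exceeds the parent's."""
    return bic_score(ll_children, k_children, m, n) > \
           bic_score(ll_parent, k_parent, m, n)
```

A split must therefore raise the log-likelihood by more than the extra parameter penalty, which is how X-means stops adding clusters.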
Although X-means is efficient in classifying models according to their shape, there still exist mistakes in the clustering result, for two reasons:

(1) Due to the complexity and diversity of the models' shapes, it is very difficult to describe all shapes. The combination of different shape features can partially solve this problem, but still has its limits.
(2) X-means may make clustering mistakes. The clustering process ensures that most data are clustered into the right groups, but not every data point.

Thus, we introduce human correction to fix the mistakes in the clustering result. To avoid mistakes caused by manual intervention, like those in Table 1, we impose the restriction that a user can only delete some models from a cluster or delete a whole cluster. The pruned models are considered wrongly categorized and are labeled as unclassified.
Finally, the refined clustering result is treated as the categorization of 3D models.



In comparison with purely manual work, the categorization based on the clustering result is much more efficient and objective. The clustering technique not only shows the number of classes according to the models' shapes, but also states the members of each class.

4 Experiment and Analysis

We conduct a series of experiments on the standard 3D model database, the Princeton Shape Benchmark (PSB), which contains 1,814 models. The 5 sets of shape feature vectors introduced in Section 2 are used for combination.
First, we analyze the performance of the automatic fixed-weight combination. According to Formula (2), the automatic fixed weights for the 5 features are DBD=0.288, NEDT=0.373, RSH-136=0.208, RSH-32=0.044, and SHVF=0.087. Table 3 states the R-Precision and the improvement after combining any two features for the PSB. Compared with Table 2, the performance of the automatic fixed-weight combination is much better. The highest improvement is 24% = (0.208 − 0.168)/0.168, while the best combination improves by 9.6% = (0.388 − 0.354)/0.354.
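The fixed-weight combination itself reduces to a weighted sum of per-feature distances. A minimal sketch with the weights reported above; the function `combined_distance` and the assumption that inputs are normalized to [0, 1] are our own, since Formula (2) is not reproduced in this excerpt:

```python
# Automatic fixed weights reported in the text (they sum to 1)
weights = {"DBD": 0.288, "NEDT": 0.373, "RSH-136": 0.208,
           "RSH-32": 0.044, "SHVF": 0.087}

def combined_distance(dists):
    # dists: feature name -> distance between two models, normalized to [0, 1]
    return sum(weights[f] * d for f, d in dists.items())

d = combined_distance({"DBD": 0.2, "NEDT": 0.1, "RSH-136": 0.4,
                       "RSH-32": 0.3, "SHVF": 0.5})

# The improvement figures quoted above
best_single = (0.388 - 0.354) / 0.354    # about 9.6%
highest = (0.208 - 0.168) / 0.168        # about 24%
```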
Table 3. The performance of combining two features based on the automatic fixed weight














Fig. 1 shows the Precision-Recall curves, along with the R-Precision, of the 5 individual features, the combination of the 5 features based on equal fixed weights (Equal Weight), the combination using fixed weights (4:4:2:1:1) decided by experience (Experience Weight), and the combination adopting the proposed automatic fixed weights (Automatic Weight).
It can be seen that the proposed method is the best under all criteria. It achieves the best R-Precision, 0.4046, which is much better than that of Equal Weight, 0.3486, and also slightly better than Experience Weight, 0.4021. Its performance improves by 14.5% over the best single feature, DBD.
After combining the 5 features based on the proposed method, we adopt X-means to analyze the PSB, and 130 clusters are finally obtained. In scanning these clusters, we found that most clusters are formed by models with similar shapes, like clusters C70, C110, C112, and C113 in Table 4. However, there also exist mistakes, such as C43 in Table 4. After analyzing the combined feature of those wrong models, we find that the mistakes are mainly caused by the shape features, rather than by the clustering.


T. Lv et al.

Fig. 1. Performance comparison adopting Precision-Recall and R-Precision

Table 4. Detail of some result clusters of PSB






Table 4. (continued)



Then, we select 3 students who had never worked with these models to refine the clustering result. At least two of them must reach an agreement on each deletion. In less than 2 hours, including the time spent on arguments, they labeled 202 models out of 1,814 as unclassified, viz. 11.13%, and pruned 6 clusters out of 130, viz. 4.6%.
Obviously, the clustering result is a valuable reference for categorizing 3D models. Even if the refinement time is included, categorization based on the clustering result is much faster than purely manual work, which usually costs days and is exhausting.

5 Conclusions
The paper proposes a combination method that automatically decides the fixed weights of different shape features. Based on the combined feature, the paper categorizes 3D models according to their shape. Experimental results show that the proposed method performs well not only in improving retrieval performance but also in categorization. Future work will concentrate on the study of clustering ensembles to achieve a more stable clustering result for 3D models.

This work is sponsored by the Foundation for the Doctoral Program of the Chinese Ministry of Education under Grant No. 20060183041 and by the Natural Science Research Foundation of Harbin Engineering University under Grant No. HEUFT05007.

[1] T. Funkhouser, et al. A Search Engine for 3D Models. ACM Transactions on Graphics, 22(1), (2003) 85-105.
[2] Yubin Yang, Hui Li, Qing Zhu. Content-Based 3D Model Retrieval: A Survey. Chinese Journal of Computers, (2004), Vol. 27, No. 10, pp. 1298-1310.



[3] Chenyang Cui, Jiaoying Shi. Analysis of Feature Extraction in 3D Model Retrieval. Journal of Computer-Aided Design & Computer Graphics, Vol. 16, No. 7, July (2004), pp. 882-889.
[4] Shilane, P., Min, P., Kazhdan, M., Funkhouser, T. The Princeton Shape Benchmark. In Proceedings of Shape Modeling International 2004 (SMI'04), Genova, Italy, June 2004, pp. 388-399.
[5] Feature Combination and Relevance Feedback for 3D Model Retrieval. The 11th International Conference on Multi Media Modeling (MMM 2005), 12-14 January 2005, Melbourne, Australia. IEEE Computer Society 2005, pp. 334-339.
[6] Ryutarou Ohbuchi, Yushin Hata. Combining Multiresolution Shape Descriptors for Effective 3D Similarity Search. Proc. WSCG 2006, Plzen, Czech Republic, (2006).
[7] Atmosukarto, I., Wee Kheng Leow, Zhiyong Huang. Feature Combination and Relevance Feedback for 3D Model Retrieval. Proceedings of the 11th International Multimedia Modelling Conference, (2005).
[8] R. Baeza-Yates, B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley.
[9] Dan Pelleg, Andrew Moore. X-means: Extending K-means with Efficient Estimation of the Number of Clusters. In Proc. 2000 Int. Conf. on Data Mining. (2000).
[10] Tianyang Lv, et al. An Auto-Stopped Hierarchical Clustering Algorithm for Analyzing 3D Model Database. The 9th European Conference on Principles and Practice of Knowledge Discovery in Databases. In: Lecture Notes in Artificial Intelligence, Vol. 3801, pp. 601-608.
[11] M. Heczko, D. Keim, D. Saupe, and D. Vranic. Methods for similarity search on 3D databases. Datenbank-Spektrum, 2(2):54-63, (2002). In German.
[12] H. Blum. A transformation for extracting new descriptors of shape. In W. Wathen-Dunn, editor, Proc. Models for the Perception of Speech and Visual Form, pages 362-380, Cambridge, MA, November 1967. MIT Press.
[13] Kazhdan, Michael, Funkhouser, Thomas. Harmonic 3D shape matching. In: Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH Technical Sketch, San Antonio, Texas, (2002)

A Generalised-Mutual-Information-Based Oracle
for Hierarchical Radiosity
Jaume Rigau, Miquel Feixas, and Mateu Sbert
Institut d'Informàtica i Aplicacions
Campus Montilivi P-IV, 17071-Girona, Spain

Abstract. One of the main problems in the radiosity method is how to discretise a scene into mesh elements that allow us to accurately represent illumination. In this paper we present a new refinement criterion for hierarchical radiosity based on the continuous and discrete generalised mutual information measures between two patches or elements. These measures, derived from the generalised entropy of Havrda-Charvát-Tsallis, express the information transfer within a scene. The results obtained improve on the ones based on kernel smoothness and Shannon mutual information.

The radiosity method solves the problem of illumination in an environment with diffuse surfaces by using a finite element approach [1]. The scene discretisation has to represent the illumination accurately by trying to avoid unnecessary subdivisions that would increase the computation time. A good meshing strategy will balance the requirements of accuracy and computational cost.
In the hierarchical radiosity algorithms [2] the mesh is generated adaptively: when the constant radiosity assumption on a patch is not valid for the radiosity received from another patch, the refinement algorithm will subdivide it into a set of subpatches or elements. A refinement criterion, called an oracle, informs us if a subdivision of the surfaces is needed, bearing in mind that the cost of the oracle should remain acceptable. In [3,4], the difficulty in obtaining a precise solution for the scene radiosity has been related to the degree of dependence between all the elements of the adaptive mesh. This dependence has been quantified by the mutual information, which is a measure of the information transfer in a scene.
In this paper, a new oracle based on the generalised mutual information [5], derived from the generalised entropy of Havrda-Charvát-Tsallis [6], is introduced. This oracle is obtained from the difference between the continuous and discrete generalised mutual information between two elements of the adaptive mesh and expresses the loss of information transfer between two patches due to the discretisation. The results obtained show that this oracle improves on the kernel smoothness-based [7] and the mutual information-based [8,9] ones, confirming the usefulness of the information-theoretic approach in dealing with the radiosity problem.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 105-113, 2007.
© Springer-Verlag Berlin Heidelberg 2007



J. Rigau, M. Feixas, and M. Sbert


The radiosity method uses a finite element approach, discretising the diffuse environment into patches and considering the radiosities, emissivities and reflectances constant over them. With these assumptions, the discrete radiosity equation [1] is given by

B_i = E_i + ρ_i Σ_{j∈S} F_ij B_j,  (1)

where S is the set of patches of the scene, B_i, E_i, and ρ_i are, respectively, the radiosity, emissivity, and reflectance of patch i, B_j is the radiosity of patch j, and F_ij is the patch-to-patch form factor, defined by

F_ij = (1/A_i) ∫_{S_i} ∫_{S_j} F(x, y) dA_y dA_x,  (2)

where A_i is the area of patch i, S_i and S_j are, respectively, the surfaces of patches i and j, F(x, y) is the point-to-point form factor between x ∈ S_i and y ∈ S_j, and dA_x and dA_y are, respectively, the differential areas at points x and y. Using Monte Carlo computation with area-to-area sampling, F_ij can be calculated as

F_ij ≈ (A_j / |S_ij|) Σ_{(x,y)∈S_ij} F(x, y),  (3)

where the computation accuracy depends on the number of random segments between i and j (|S_ij|).
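The estimator (3) can be sketched for two directly opposed unit patches, for which F(x, y) = cos θ_x cos θ_y / (π r²) with full visibility; the geometric setup, sample count and seed are our own choices, not from the paper:

```python
import math, random

def point_form_factor(x, y):
    # Point-to-point form factor between parallel patches on z=0 (normal +z)
    # and z=d (normal -z): F(x,y) = cos(tx) cos(ty) / (pi r^2), visibility 1.
    dz = y[2] - x[2]
    r2 = (y[0] - x[0]) ** 2 + (y[1] - x[1]) ** 2 + dz ** 2
    cos_t = dz / math.sqrt(r2)          # same angle at both endpoints here
    return cos_t * cos_t / (math.pi * r2)

def estimate_Fij(d=1.0, samples=20000, seed=1):
    # Eq. (3): F_ij ~ (A_j / |S_ij|) * sum of F(x, y) over random segments
    rng = random.Random(seed)
    A_j = 1.0                           # unit receiver patch
    acc = sum(point_form_factor((rng.random(), rng.random(), 0.0),
                                (rng.random(), rng.random(), d))
              for _ in range(samples))
    return A_j * acc / samples

F = estimate_Fij()   # close to the analytic value ~0.1998 for this setup
```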
To solve the system (1), a hierarchical refinement algorithm is used. The efficiency of this algorithm depends on the election of a good refinement criterion. Many refinement oracles have been proposed in the literature (see [10] for details). For comparison purposes, we review here the oracle based on kernel smoothness (KS), proposed by Gortler et al. [7] in order to drive hierarchical refinement with higher-order approximations. When applied to constant approximations, this refinement criterion is given by

ρ_i max{F_ij^max − F_ij^avg, F_ij^avg − F_ij^min} A_j B_j < ε,  (4)

where F_ij^max = max{F(x, y) | x ∈ S_i, y ∈ S_j} and F_ij^min = min{F(x, y) | x ∈ S_i, y ∈ S_j} are, respectively, the maximum and minimum radiosity kernel values, estimated by taking the maximum and minimum value computed between pairs of random points on both elements, and F_ij^avg = F_ij / A_j is the average radiosity kernel value.
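Criterion (4) can be sketched directly; the list of kernel samples standing in for the random point-pair evaluations, and the estimation of F_avg by the sample mean, are our own simplifications:

```python
def ks_oracle(rho_i, kernel_samples, A_j, B_j, eps):
    # KS criterion (4): refine when
    # rho_i * max(Fmax - Favg, Favg - Fmin) * A_j * B_j is not below eps.
    f_max, f_min = max(kernel_samples), min(kernel_samples)
    f_avg = sum(kernel_samples) / len(kernel_samples)
    deviation = max(f_max - f_avg, f_avg - f_min)
    return rho_i * deviation * A_j * B_j >= eps   # True -> subdivide

smooth = ks_oracle(0.5, [0.20, 0.21, 0.19], 1.0, 1.0, 0.05)   # nearly constant kernel
rough = ks_oracle(0.5, [0.05, 0.60, 0.10], 1.0, 1.0, 0.05)    # strongly varying kernel
```

A smooth kernel stays below the threshold and is left alone, while a strongly varying one triggers subdivision.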

HCT Entropy and Generalised Mutual Information

In 1967, Havrda and Charvát [6] introduced a new generalised definition of entropy. In 1988, Tsallis [11] used this entropy in order to generalise the Boltzmann-Gibbs entropy in statistical mechanics.

Definition 1. The Havrda-Charvát-Tsallis entropy (HCT entropy) of a discrete random variable X, with |X| = n and p_X as its probability distribution, is defined by

H_α(X) = k (1 − Σ_{i=1}^n p_i^α) / (α − 1),  (5)

where k is a positive constant (by default k = 1) and α ∈ ℝ \ {1} is called the entropic index. This entropy recovers the Shannon discrete entropy when α → 1, H_1(X) = −k Σ_{i=1}^n p_i ln p_i, and fulfils good properties such as non-negativity and concavity.
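Definition 1 can be checked numerically; the probability vector and the tolerance below are arbitrary choices of ours:

```python
import math

def hct_entropy(p, alpha, k=1.0):
    # HCT entropy: H_a(X) = k (1 - sum p_i^a) / (a - 1), for a != 1
    return k * (1.0 - sum(pi ** alpha for pi in p)) / (alpha - 1.0)

def shannon_entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
h_sub = hct_entropy(p, 0.5)               # subextensive index, positive
h_sup = hct_entropy(p, 2.0)               # superextensive index, positive
near_shannon = hct_entropy(p, 1.000001)   # approaches H_1 as a -> 1
```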
On the other hand, Taneja [5] and Tsallis [12] introduced the generalised mutual information.

Definition 2. The generalised mutual information between two discrete random variables (X, Y) is defined by

I_α(X, Y) = (1/(1 − α)) (1 − Σ_{i=1}^n Σ_{j=1}^m p_ij^α / (p_i q_j)^{α−1}),  (6)

where |X| = n, |Y| = m, p_X and q_Y are the marginal probability distributions, and p_XY is the joint probability distribution between X and Y.
The transition of I_α(X, Y) to the continuous generalised mutual information is straightforward. Using entropies, an alternative form is given by

I_α(X, Y) = H_α(X) + H_α(Y) + (1 − α) H_α(X) H_α(Y) − H_α(X, Y).  (7)

Shannon mutual information (MI) is obtained when α → 1. Some alternative ways of defining the generalised mutual information can be seen in [13].
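Two properties of (6) can be verified numerically: the measure vanishes for independent variables at any entropic index, and it tends to the Shannon MI as α → 1. The joint tables below are toy data of our own:

```python
import math

def gmi(joint, px, qy, alpha):
    # Definition 2 (sum form): I_a = (1 - sum p_ij^a (p_i q_j)^(1-a)) / (1 - a)
    s = sum(joint[i][j] ** alpha * (px[i] * qy[j]) ** (1 - alpha)
            for i in range(len(px)) for j in range(len(qy)) if joint[i][j] > 0)
    return (1.0 - s) / (1.0 - alpha)

def shannon_mi(joint, px, qy):
    return sum(joint[i][j] * math.log(joint[i][j] / (px[i] * qy[j]))
               for i in range(len(px)) for j in range(len(qy)) if joint[i][j] > 0)

px, qy = [0.6, 0.4], [0.3, 0.7]
indep = [[px[i] * qy[j] for j in range(2)] for i in range(2)]  # independent
dep = [[0.25, 0.35], [0.05, 0.35]]   # same marginals, correlated
```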

Generalised Mutual Information-Based Oracle

We will see below how the generalised mutual information can be used to build a refinement oracle within a hierarchical radiosity algorithm. Our strategy will be based on the estimate of the discretisation error from the difference between the continuous and discrete generalised mutual information (6) between two elements of the adaptive mesh. The discretisation error based on Shannon mutual information was introduced by Feixas et al. [8] and applied to hierarchical radiosity with good results.
In the context of a discrete scene information channel [4], the marginal probabilities are given by p_X = q_Y = {a_i} (i.e., the distribution of the relative areas of patches: a_i = A_i / A_T, where A_T is the total area of the scene) and the joint probability is given by p_XY = {a_i F_ij}. Then,

Definition 3. The discrete generalised mutual information of a scene is given by

I_α = (1/(1 − α)) (1 − Σ_{i∈S} Σ_{j∈S} (a_i F_ij)^α / (a_i a_j)^{α−1}) = Σ_{i∈S} Σ_{j∈S} δ_α(a_i F_ij, a_i a_j),  (8)


where, using 1 = Σ_{i∈S} Σ_{j∈S} a_i a_j and δ_α(p, q) = (q − p^α q^{1−α}) / (1 − α), the last equality is obtained.
This measure quantifies the discrete information transfer in a discretised scene.

The term δ_α(a_i F_ij, a_i a_j) can be considered as an element of the generalised mutual information matrix I_α, representing the information transfer between patches i and j.
To compute I_α, the Monte Carlo area-to-area sampling (3) is used, obtaining for each pair of elements

Î_ij = δ_α(a_i F_ij, a_i a_j) ≈ (1/(1 − α)) (A_i A_j / A_T²) (1 − ((A_T / |S_ij|) Σ_{(x,y)∈S_ij} F(x, y))^α).  (9)

The information transfer between two patches can be obtained more accurately using the continuous generalised mutual information between them. From the discrete form (8) and using the pdfs p(x) = q(y) = 1/A_T and p(x, y) = F(x, y)/A_T, we define

Definition 4. The continuous generalised mutual information of a scene is given by

I_α^c = ∫_S ∫_S δ_α(F(x, y)/A_T, 1/A_T²) dA_y dA_x.  (10)

This represents the continuous information transfer in a scene. We can split the integration domain, and for two surface elements i and j we have

I_ij^c = ∫_{S_i} ∫_{S_j} δ_α(F(x, y)/A_T, 1/A_T²) dA_y dA_x,  (11)

that, analogously to the discrete case, expresses the information transfer between two patches.
Both continuous expressions, (10) and (11), can be solved by Monte Carlo integration. Taking again area-to-area sampling (i.e., pdf 1/(A_i A_j)), the last expression (11) can be approximated by

Î_ij^c ≈ (A_i A_j / |S_ij|) Σ_{(x,y)∈S_ij} δ_α(F(x, y)/A_T, 1/A_T²) = (1/(1 − α)) (A_i A_j / A_T²) (1 − (A_T^α / |S_ij|) Σ_{(x,y)∈S_ij} F(x, y)^α).  (12)


Now, we define

Definition 5. The generalised discretisation error of a scene is given by

Δ_α = I_α^c − I_α = Σ_{i∈S} Σ_{j∈S} Δ_ij,  (13)

where Δ_ij = Î_ij^c − Î_ij.
While Δ_α expresses the loss of information transfer in a scene due to the discretisation, the term Δ_ij gives us this loss between two elements i and j. This difference is interpreted as the benefit to be gained by refining and can be used as the basis of the new oracle.
From (13), using (9) and (12), we obtain

Δ_ij = (1/(1 − α)) (A_i A_j / A_T²) (((A_T / |S_ij|) Σ_{(x,y)∈S_ij} F(x, y))^α − (A_T^α / |S_ij|) Σ_{(x,y)∈S_ij} F(x, y)^α).  (14)


According to the radiosity equation (1), and in analogy to classic oracles like KS, we consider the oracle structure ρ_i κ B_j < ε, where κ is the geometric kernel [14]. Now, we propose to take the generalised discretisation error between two patches as the kernel (κ = Δ_ij) for the new oracle based on generalised mutual information (GMI). To simplify the expression of this oracle, we multiply the inequality by the scene constant A_T² (1 − α).

Definition 6. The hierarchical radiosity oracle based on the generalised mutual information is defined by

ρ_i A_i A_j Δ̄_ij B_j < ε,  (15)

where Δ̄_ij denotes Δ_ij rescaled by A_T² (1 − α) / (A_i A_j).
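A sketch of evaluating this oracle from kernel samples, following our reading of (14) after the A_T²(1 − α) rescaling; the variable names, the absolute value (which sidesteps the sign flip at α > 1), and the sample data are illustrative only:

```python
def gmi_kernel(F_samples, A_T, alpha):
    # Rescaled per-pair discretisation error: for kernel samples F(x,y),
    # (A_T * mean(F))^a - A_T^a * mean(F^a); zero for a constant kernel.
    n = len(F_samples)
    mean_F = sum(F_samples) / n
    mean_Fa = sum(f ** alpha for f in F_samples) / n
    return (A_T * mean_F) ** alpha - A_T ** alpha * mean_Fa

def gmi_oracle(rho_i, A_i, A_j, B_j, F_samples, A_T, alpha, eps):
    # Definition 6 shape: refine when rho_i A_i A_j |kernel| B_j reaches eps
    return rho_i * A_i * A_j * abs(gmi_kernel(F_samples, A_T, alpha)) * B_j >= eps

flat = gmi_kernel([0.2] * 10, A_T=10.0, alpha=0.5)        # constant kernel -> 0
bumpy = gmi_kernel([0.1, 0.4, 0.1, 0.4], A_T=10.0, alpha=0.5)
```

A constant kernel yields no information loss and no refinement; a varying kernel yields a positive error and triggers subdivision once it exceeds the threshold.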


In this section, the GMI oracle is compared with the KS and MI ones. Other comparisons, with a more extended analysis, can be found in [14]. All oracles have been implemented on top of the hierarchical Monte Carlo radiosity method.
In Fig. 1 we show the results obtained for the KS (a) and GMI oracles with their Gouraud shaded solutions and meshes. In the GMI case, we show the results obtained with the entropic indexes 1 (b) (note that GMI1 = MI) and 0.5 (c). For the sake of comparison, adaptive meshes of identical size have been generated with the same cost for the power distribution: around 19,000 patches and 2,684,000 rays, respectively. To estimate the form factor, the number of random lines has been fixed to 10.
In Table 1, we show the Root Mean Square Error (RMSE) and Peak Signal Noise Ratio (PSNR) measures for the KS and GMI (for 5 different entropic indexes) oracles for the test scene. These measures have been computed with respect to the corresponding converged image, obtained with a path-tracing algorithm


Fig. 1. (a) KS and GMI (entropic indexes (b) 1 and (c) 0.5) oracles. By columns, (i) Gouraud shaded solution of view1 and (ii) mesh of view2 are shown.

with 1,024 samples per pixel in a stratified way. For each measure, we consider a uniform weight for every colour channel (RMSEa and PSNRa) and a perceptual one (RMSEp and PSNRp) in accordance with the sRGB system.
Observe in view1, obtained with GMI (Fig. 1.i.b-c), the finer details of the shadow cast on the wall by the chair on the right-hand side, and also the better-defined shadow on the chair on the left-hand side and the one cast by the desk. In view2 (Fig. 1.ii) we can also see how our oracle outperforms the KS, especially in the much more defined shadow of the chair on the right. Note the superior quality of the mesh created by our oracle.



Table 1. The RMSE and PSNR measures of the KS and GMI oracles applied to the test scene of Fig. 1, where the KS and GMI{0.5,1} results are shown. The oracles have been evaluated with 10 random lines between every two elements.


13.128 25.339
11.280 26.628
10.173 27.405
9.232 28.133
8.786 28.526
8.568 28.696

25.767 15.167
27.084 13.046
27.982 11.903
28.825 10.438
29.254 10.010


14.354 24.513
12.473 25.821
11.279 26.618
9.709 27.758
9.257 28.122
8.740 28.533



Fig. 2. GMI0.50 oracle: (i) Gouraud shaded solution and (ii) mesh are shown

Table 2. The RMSE and PSNR measures of the GMI oracle applied to the scene of Fig. 2, where the GMI0.5 result is shown. The oracle has been evaluated with 10 random lines between every two elements.
oracle RMSEa
GMI1.50 16.529
GMI1.25 15.199
GMI1.00 14.958
GMI0.75 14.802
GMI0.50 14.679




In general, the improvement obtained with the GMI oracle is significant. Moreover, its behaviour denotes a tendency to improve towards subextensive entropic indexes (α < 1). To observe this tendency, another test scene is shown in Fig. 2 for an entropic index of 0.5. Its corresponding RMSE and PSNR measures are presented in Table 2. The meshes are made up of 10,000 patches with 9,268,000 rays to distribute the power, and we have kept 10 random lines to evaluate the oracle between elements.




We have presented a new generalised-mutual-information-based oracle for hierarchical radiosity, calculated from the difference between the continuous and discrete generalised mutual information between two elements of the adaptive mesh. This measure expresses the loss of information transfer between two patches due to the discretisation. The objective of the new oracle is to reduce the loss of information, obtaining an optimum mesh. The results achieved improve on the classic methods significantly, being better even than the version based on the Shannon mutual information. In all the tests performed, the best behaviour is obtained with subextensive indexes.

Acknowledgments. This report has been funded in part with grant numbers IST-2-004363 of the European Community - Commission of the European Communities, and TIN2004-07451-C03-01 and HH2004-001 of the Ministry of Education and Science (Spanish Government).

1. Goral, C.M., Torrance, K.E., Greenberg, D.P., Battaile, B.: Modelling the interaction of light between diffuse surfaces. Computer Graphics (Proceedings of SIGGRAPH 84) 18(3) (July 1984) 213-222
2. Hanrahan, P., Salzman, D., Aupperle, L.: A rapid hierarchical radiosity algorithm. Computer Graphics (Proceedings of SIGGRAPH 91) 25(4) (July 1991) 197-206
3. Feixas, M., del Acebo, E., Bekaert, P., Sbert, M.: An information theory framework for the analysis of scene complexity. Computer Graphics Forum (Proceedings of Eurographics 99) 18(3) (September 1999) 95-106
4. Feixas, M.: An Information-Theory Framework for the Study of the Complexity of Visibility and Radiosity in a Scene. PhD thesis, Universitat Politècnica de Catalunya, Barcelona, Spain (December 2002)
5. Taneja, I.J.: Bivariate measures of type α and their applications. Tamkang Journal of Mathematics 19(3) (1988) 63-74
6. Havrda, J., Charvát, F.: Quantification method of classification processes. Concept of structural α-entropy. Kybernetika (1967) 30-35
7. Gortler, S.J., Schröder, P., Cohen, M.F., Hanrahan, P.: Wavelet radiosity. In Kajiya, J.T., ed.: Computer Graphics (Proceedings of SIGGRAPH 93). Volume 27 of Annual Conference Series. (August 1993) 221-230
8. Feixas, M., Rigau, J., Bekaert, P., Sbert, M.: Information-theoretic oracle based on kernel smoothness for hierarchical radiosity. In: Short Presentations (Eurographics 02). (September 2002) 325-333
9. Rigau, J., Feixas, M., Sbert, M.: Information-theory-based oracles for hierarchical radiosity. In Kumar, V., Gavrilova, M.L., Tan, C., L'Ecuyer, P., eds.: Computational Science and Its Applications - ICCSA 2003. Number 2669-3 in Lecture Notes in Computer Science. Springer-Verlag (May 2003) 275-284
10. Bekaert, P.: Hierarchical and Stochastic Algorithms for Radiosity. PhD thesis, Katholieke Universiteit Leuven, Leuven, Belgium (December 1999)
11. Tsallis, C.: Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics 52(1/2) (1988) 479-487
12. Tsallis, C.: Generalized entropy-based criterion for consistent testing. Physical Review E 58 (1998) 1442-1445
13. Taneja, I.J.: On generalized information measures and their applications. In: Advances in Electronics and Electron Physics. Volume 76. Academic Press Ltd. (1989) 327-413
14. Rigau, J.: Information-Theoretic Refinement Criteria for Image Synthesis. PhD thesis, Universitat Politècnica de Catalunya, Barcelona, Spain (November 2006)

Rendering Technique for Colored Paper Mosaic

Youngsup Park, Sanghyun Seo, YongJae Gi, Hanna Song,
and Kyunghyun Yoon
CG Lab., CS&E, ChungAng University,
221, HeokSuk-dong, DongJak-gu, Seoul, Korea

Abstract. The work presented in this paper shows a way to generate colored paper mosaics using computer graphics techniques. The following two tasks need to be done to generate a colored paper mosaic. The first one is to generate colored paper tiles, and the other is to arrange the tiles. Voronoi Diagrams and Random Point Displacement have been used in this paper to come up with the shape of the tiles. And the energy value that a tile has depending on its location is the factor that determines the best positioning of the tile. This paper focuses on representing the overlap among tiles, the maintenance of the edges of the input image, and various shapes of tiles in the final output image by solving the two tasks mentioned above.

Keywords: Colored paper mosaic, Tile generation and Tile arrangement.


Mosaic is an artwork formed by lots of small pieces called tile. It can be expressed in many dierent ways depending on the type and the position of tile.
Photomosaics[1] shows the big image formed with small square tiles that are laid
out on a grid pattern. Distinctive output was driven from the process of combining multiple images into one image. While Photomosaics shows the arrangement
of tiles in a grid pattern, Simulated Decorative Mosaic[2] has tiles arranged in
the direction of the edge of input image. This shows the similar pattern found
in ancient Byzantine period. This pattern can also be found in Jigsaw Image
Mosaics[3]. The only dierence is to use various shape of image tiles instead
of a single-colored square tiles. In this paper, we show especially how colored
paper mosaic among various styles of mosaic artworks can be represented using
computer graphics techniques.
To generate a colored paper mosaic, the following two issues need to be taken care of. The first issue is to decide on the shape of the colored paper tiles, and the second one is to arrange the colored paper tiles. Voronoi Diagrams[9] and Random Fractals have been used in this paper to come up with the shape of the colored paper tiles. But the problem of using a Voronoi Diagram is that it makes the form of the tile too plain, since it generates only convex polygons. Therefore, the method presented in this paper uses predefined data of colored paper as a database, like Photomosaics.

Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 114-121, 2007.
© Springer-Verlag Berlin Heidelberg 2007

Then, it creates small pieces of colored paper tiles by clipping Voronoi polygons repetitively from the data of the colored paper. Many different shapes of tiles, like concave polygons, can be expressed, since a tile is made by repetitive tearing of one piece of colored paper. And the energy value that a colored paper tile has depending on its location is calculated to find the best positioning of the colored paper tile. The location that has the biggest sum of energy values is defined as the best position. Tiles are placed at the point where the summation of energy values is the biggest, by being moved and rotated toward the nearest edge.
Related Work

Existing mosaic studies focus on the selection, the generation, and the arrangement of tiles. We compare the existing studies by classifying them into two groups.
The studies of the first group focus on the selection and the arrangement of tiles, since they use fixed or predefined shapes of tiles. Photomosaics[1] creates an image formed with various small pieces of image tiles. It is an algorithm that lays out images selected from the database in a grid pattern. It proposes an effective method of tile selection from a database. But it is hard to keep the edges of the image, since the tiles in Photomosaics are all square. In the study of Simulated Decorative Mosaic[2], Hausner shows the patterns and techniques used in the Byzantine era by positioning single-colored square tiles in the direction of the edges of the input image. It uses the methods of Centroidal Voronoi Diagram (CVD) and Edge Avoidance to arrange tiles densely. Jigsaw Image Mosaics (JIM)[3] shows an extended technique that uses arbitrary shapes of image tiles, while Simulated Decorative Mosaic uses single-colored square tiles. It proposes a solution for tile arrangement with an Energy Minimization Framework.
The studies of the second group propose methods only for the generation of tiles. Park[5] proposes a passive colored paper mosaic generating technique in which the shape and arrangement of the tiles are all done by the user's input. The proposed method uses the Random Fractal technique for generating torn-shaped colored paper tiles. However, it gives the user too much work to do. To solve the problem the passive technique has, automatic colored paper mosaic[6] using Voronoi Diagrams is proposed. The majority of the work is done by the computer, and the only part the user needs to do is to input a few parameters. It reduces the heavy load of work on the user's side; however, it cannot maintain the edges of the image, since it arranges tiles without considering the edges. In order to solve this problem, another new technique[7] was suggested. In this new technique, Voronoi sites are arranged using a Quad-Tree, and the tiles are clipped according to the edge of the image once they go out of the edge. Even though this technique can keep the edges of images, it cannot express the real texture and shape of colored paper tearing, since the polygons created using Voronoi Diagrams are convex and the polygons are not overlapped. Therefore, the existing studies do not show various shapes of tiles or the overlap among them.



Y. Park et al.

Data Structure of Colored Paper

The data structure of colored paper is organized in 2 layers that contain information such as the texture image and vertices, as shown in figure 1. The upper layer represents the visible space of the colored paper that has the color value. And the lower layer represents the white paper revealed on the torn portion. Defining the data structure of colored paper in advance gives two good aspects. The first one is that it can express various shapes of colored paper tiles, like concave polygons besides convex ones. This is because the previously used colored paper is stored in the buffer and polygon clipping is repeated using Voronoi Diagrams as necessary. The other one is that different types of paper mosaic can be easily produced by modifying the data structure. If images from magazines, newspapers, and so on are used instead of colored paper, then it will be possible to come up with a paper mosaic like a collage.
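As a sketch, the two-layer object might be modelled as below; the class and field names are our own invention, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class PaperLayer:
    color: tuple     # RGB colour of this layer
    vertices: list   # polygon outline of the remaining (unclipped) paper

@dataclass
class ColoredPaper:
    upper: PaperLayer   # visible coloured surface of the paper
    lower: PaperLayer   # white backing revealed along torn edges

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
paper = ColoredPaper(upper=PaperLayer((200, 30, 30), list(square)),
                     lower=PaperLayer((255, 255, 255), list(square)))
```

Swapping the upper layer's color for an image texture would give the magazine or newspaper variant mentioned above.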

Fig. 1. The data structure of colored paper object


Image Segmentation

At first, the necessary image processing operations[11], like blurring, are performed on the input image, and the image is divided into several regions that have similar color in LUV space by using the Mean-Shift image segmentation technique[8]. We call each region a container. The proposed mosaic algorithm is performed per container. However, Mean-Shift segmentation can create small containers. If mosaic processing is performed at this stage, colored paper tiles will not be attached to these small containers, so there will be lots of grout spaces in the result image, as shown in figure 4. Therefore, another step is needed to integrate these small containers. To give flexibility and to allow individual expression, the process of integration of small containers is controlled by the user's input.
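The idea behind Mean-Shift can be illustrated with a toy 1-D flat-kernel version on scalar intensities; the paper itself segments pixels in LUV space, so this is only a simplified analogue with parameters of our own choosing:

```python
def mean_shift_modes(values, bandwidth, iters=30):
    # Flat-kernel mean shift: repeatedly move each mode estimate to the mean
    # of the input values falling inside its window.
    modes = list(values)
    for _ in range(iters):
        new_modes = []
        for m in modes:
            nbrs = [v for v in values if abs(v - m) <= bandwidth]
            new_modes.append(sum(nbrs) / len(nbrs))
        modes = new_modes
    clusters = []                      # merge modes that converged together
    for m in modes:
        if all(abs(m - c) > bandwidth for c in clusters):
            clusters.append(m)
    return clusters

# Two well-separated groups of intensities -> two containers
modes = mean_shift_modes([0.10, 0.12, 0.15, 0.80, 0.82, 0.85], bandwidth=0.2)
```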


The Generation of Colored Paper Tile

Determination of Size and Color

To determine the size and the color of a tile, the initial position where the tile is attached is determined in advance by a Hill-Climbing algorithm[4]. The Hill-Climbing algorithm keeps changing the position till the function value converges to an optimal point. Since in real life big tiles are normally applied first, starting from the boundary rather than with small ones, the function is determined as in equation 1 with the following two factors: size and boundary. The size factor is defined by D(x, y), the minimum distance between pixel (x, y) and the boundary. And the boundary factor is defined by D(x, y) − D(i, j), whose sum gives the difference from the neighboring pixels. Therefore, the position that has the largest value of L(x, y) is regarded as the initial position.

L(x, y) = Σ_{i=x−1}^{x+1} Σ_{j=y−1}^{y+1} (D(x, y) − D(i, j)), (i, j) ≠ (x, y).  (1)

The size of the colored paper tile is determined by the distance from the boundary. At first, we divide the boundary pixels into two groups. The first group has smaller values than the y of the initial position, and the second group has larger values than y. Then, of the two minimum distance values of the groups, the smaller value is set as the minimum size and the larger value is set as the maximum size.
Colored paper that has a color similar to the one at the initial position is selected. The colored paper is defined as single-colored. First of all, a square area is built around the initial position with the size of the tile, and then the average value of the RGB colors in that area is selected.
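A sketch of the hill-climbing search, using our reconstruction of equation 1 over a toy distance map D; the grid size and the tie-breaking details are our own assumptions:

```python
def make_D(w, h):
    # Toy distance-to-boundary map on a w x h container
    return lambda x, y: min(x, y, w - 1 - x, h - 1 - y)

def make_L(D, w, h):
    # Eq. 1 (as reconstructed): sum of D(x,y) - D(i,j) over the 3x3 neighbourhood
    def L(x, y):
        total = 0
        for i in range(x - 1, x + 2):
            for j in range(y - 1, y + 2):
                if (i, j) != (x, y) and 0 <= i < w and 0 <= j < h:
                    total += D(x, y) - D(i, j)
        return total
    return L

def hill_climb(L, start, w, h):
    # Move to the best neighbouring pixel until no neighbour improves L
    x, y = start
    while True:
        best = (x, y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and L(nx, ny) > L(*best):
                    best = (nx, ny)
        if best == (x, y):
            return best
        x, y = best

w = h = 9
L = make_L(make_D(w, h), w, h)
pos = hill_climb(L, (1, 1), w, h)   # converges to a local maximum of L
```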

Determination of Shape

There are two steps in determining the shape of a colored paper tile: determining the overall outline of the tile to be clipped, and expressing the torn effect. A Voronoi diagram is applied to decide the overall outline of the tile. First, the area of the colored paper is divided into a grid according to the size of the tile to be torn. Then a Voronoi diagram is created by placing one Voronoi site in each grid cell, as shown in Figure 2(b). The generated diagram contains multiple Voronoi polygons, so it must be decided which polygon to clip. Since people start tearing from the boundary of the paper in real mosaic work, a polygon located near the boundary is torn first: as there is always a vertex on the boundary of the colored paper, as shown in Figure 2(c), one of the polygons containing such a vertex is chosen at random. Once the outline of the tile is determined by the Voronoi polygon, a torn effect is applied to the boundary of that outline. This torn effect is produced by applying Random Point Displacement, one of the random fractal techniques, to each colored paper layer individually. The Random Point Displacement algorithm is applied to those edges of the selected Voronoi polygon that do not coincide with the boundary of the colored paper. The irregularity of the torn surface and the white-colored portion are expressed by repeatedly perturbing random points of each edge in the perpendicular direction. Lastly, the Voronoi polygon modified by the Random Point Displacement algorithm is clipped, as shown in Figure 2(d).
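The torn-edge idea can be sketched with a simple midpoint-displacement routine. The recursion depth and roughness values below are illustrative assumptions, not values from the paper:

```python
# Sketch of the torn effect: a random point displacement (midpoint
# displacement) variant that subdivides each polygon edge and jitters the
# new points perpendicular to the edge.

import random

def torn_edge(p0, p1, depth=3, roughness=0.25, rng=None):
    """Recursively subdivide the segment p0-p1, jittering each midpoint
    along the segment normal; returns the polyline including endpoints."""
    rng = rng or random.Random(0)

    def subdivide(a, b, d):
        if d == 0:
            return [a]
        mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
        nx, ny = -(b[1] - a[1]), (b[0] - a[0])   # edge normal (unnormalised)
        t = rng.uniform(-roughness, roughness)
        m = (mx + t * nx, my + t * ny)
        return subdivide(a, m, d - 1) + subdivide(m, b, d - 1)

    return subdivide(p0, p1, depth) + [p1]

edge = torn_edge((0.0, 0.0), (8.0, 0.0))
print(len(edge))   # 2**depth + 1 = 9 points
```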


Y. Park et al.

Fig. 2. The process of paper tearing: (a) colored paper, (b) Voronoi diagram, (c) torn effect, (d) clipping

The Arrangement of Colored Paper Tile

Two things must be considered when arranging colored paper tiles: maintaining the edges of the input image, and removing empty spaces between tiles or between a tile and an edge of the image. To maintain the edges of the input image, a technique similar to the energy minimization framework of Jigsaw Image Mosaics is used in this paper. An energy function is defined depending on the position of the tile, and its sum E(x, y) is calculated as in Equation (2).
E(x, y) = Σ Pi − Σ Po − Σ Pt    (2)

where
Pi = Tmax/2 − D(x, y)  for (x, y) inside the container C
Po = Wo · D(x, y)      for (x, y) outside the container
Pt = Wt · D(x, y)      for (x, y) in the overlap region T
Pi, Po and Pt shown in the expression above are accumulated over the pixels located inside the container, outside the container, and in the area overlapped with other tiles, respectively, and Wo and Wt are weights depending on the location of the pixel. The larger the sum E(x, y), the better the position maintains the edges of the input image; therefore the tile is placed where the sum E(x, y) is greatest. To remove empty spaces among tiles and between a tile and an edge of the image, the tile is moved and rotated toward the nearest edge. This movement and rotation continue until the sum of E(x, y) from Equation (2) reaches a convergence value or is not getting bigger

Fig. 3. Positioning of colored paper tile: (a) the best case, (b) the worst case, (c) less overlapping, (d) edge keeping

Rendering Technique for Colored Paper Mosaic


any longer. Figure 3 shows four different situations of tile arrangement. Figure 3(b) shows a tile positioned outside the edge of the image; its location must be corrected, since it prevents the tile from keeping the edge of the image. The two tiles in Figure 3(c) overlap too much and also need to be modified. Figure 3(d) shows the optimal arrangement of the tile. We can control this by adjusting the values of Wo and Wt.
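The arrangement step amounts to a hill climb on the energy: keep moving the tile while E increases, and stop when it no longer does. A minimal one-dimensional sketch (the energy function here is made up for illustration, not the paper's full 2-D search):

```python
# Sketch of the arrangement loop: nudge the tile while the energy of
# Equation (2) keeps increasing, then stop.

def settle(energy, x0, step=1):
    """Hill-climb: move while the energy strictly increases, then stop."""
    x = x0
    while energy(x + step) > energy(x):
        x += step
    return x

# Toy energy peaking at x = 7: the tile should come to rest there.
peak = 7
best = settle(lambda x: -(x - peak) ** 2, 0)
print(best)  # 7
```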


Figures 4, 5 and 6 show result images rendered with the tile size set between 4 and 100. Figure 4 shows a colored paper mosaic produced by applying only the segmentation algorithm to the source image. Since the minimum tile size is set to 4, grout spaces appear wherever a segment is smaller than that size. These smaller containers have to be merged into a neighboring container in order to remove the grout spaces. The result of the colored paper mosaic including this container integration step is shown in Figure 5: the grout spaces visible in Figure 4(a) have disappeared, and many small segments are removed by the integration, so the number of small tiles is reduced. We can also apply a texture effect to the result image using texture mapping, a height map [10], and alpha blending, as shown in Figure 6. These effects make the mosaic image more realistic.

Fig. 4. Examples that have many grout spaces

Fig. 5. The result of colored paper mosaic

Fig. 6. The result of colored paper mosaic with a height map

Discussion and Future Work

The work presented in this paper is a new method for generating colored paper tiles with computer graphics techniques. Its distinguishing features are that it maintains the edges of the input image and that it expresses various tile shapes and overlaps among tiles. These achievements are shown in Figures 4, 5 and 6.
The proposed method still has some problems. First, too many small tiles are filled in between large tiles in the results, because grout spaces appear between the tile and the edge during the process of arranging the tile. This degrades the quality of the result image and needs to be improved: an additional step that considers the edge of the image during tile generation would reduce the generation of grout spaces among tiles and between tiles and image edges. Second, the performance of the whole process is low, since the tile arrangement is performed per pixel; the GPU or other algorithms need to be applied to improve the performance.
This paper also has the following benefits. First, the proposed method can express various tile shapes and overlaps between tiles. Second,



if other types of paper, such as newspaper, are used instead of colored paper, another type of mosaic, such as collage, becomes possible. Other types of mosaic are easy to express in computer graphics by modifying the data structure, provided a more detailed and elaborate tile selection algorithm is applied.

References

1. Silver, R., Hawley, M. (eds.): Photomosaics. Henry Holt, New York (1997)
2. Hausner, A.: Simulating Decorative Mosaics. In: SIGGRAPH 2001, pp. 573-580 (2001)
3. Kim, J., Pellacini, F.: Jigsaw Image Mosaics. In: SIGGRAPH 2002, pp. 657-664 (2002)
4. Allen, C.: A Hillclimbing Approach to Image Mosaics. UW-L Journal of Undergraduate Research (2004)
5. Park, Y.-S., Kim, S.-Y., Jho, C.-W., Yoon, K.-H.: Mosaic Techniques Using Color Paper. In: Proceedings of the KCGS Conference, pp. 42-47 (2000)
6. Seo, S.-H., Park, Y.-S., Kim, S.-Y., Yoon, K.-H.: Colored Paper Mosaic Rendering. In: SIGGRAPH 2001 Abstracts and Applications, p. 156 (2001)
7. Seo, S.-H., Kang, D.-U., Park, Y.-S., Yoon, K.-H.: Colored Paper Mosaic Rendering Based on Image Segmentation. In: Proceedings of the KCGS Conference, pp. 27-34 (2001)
8. Comaniciu, D., Meer, P.: Mean Shift: A Robust Approach Toward Feature Space Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(5), 603-619 (2002)
9. de Berg, M., van Kreveld, M., Overmars, M., Schwarzkopf, O.: Computational Geometry: Algorithms and Applications. Springer, pp. 145-161 (1997)
10. Hertzmann, A.: Fast Paint Texture. In: NPAR 2002 (2002)
11. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. Prentice Hall (2002)

Real-Time Simulation of Surface Gravity Ocean
Waves Based on the TMA Spectrum

Namkyung Lee^1, Nakhoon Baek^2,*, and Kwan Woo Ryu^1

^1 Dept. of Computer Engineering, Kyungpook National Univ., Daegu 702-701, Korea
^2 School of EECS, Kyungpook National Univ., Daegu 702-701, Korea

Abstract. In this paper, we present a real-time method to display ocean surface gravity waves for various computer graphics applications. Starting from a precise surface gravity model in oceanography, we derive its implementation model, and our prototype implementation shows more than 50 frames per second on Intel Core2 Duo 2.40 GHz PCs. Our major contributions are the improved expression power of ocean waves and more user-controllable parameters for various wave shapes.

Keywords: Computer graphics, Simulation, Ocean wave, TMA.


Realistic simulation of natural phenomena is one of the interesting and important issues in computer graphics related areas, including computer games and animations. In this paper, we focus on ocean waves, for which there are many research results but no complete solution yet [1].
Waves on the surface of the ocean are primarily generated by winds and gravity. Although ocean waves include internal waves, tides, edge waves and others, it is clear that we should display at least the surface gravity waves on the screen to represent the ocean. In oceanography, there are many research results that mathematically model the surface waves of the ocean. Simple sinusoidal or trochoidal expressions can approximate a simple ocean wave; real-world waves are composites of these simple waves, called wave trains.
In computer graphics, the related results fall into two categories. The first uses fluid dynamics equations in a way similar to the scientific simulation field. There are a number of results capable of producing realistic animations of complex water surfaces [2,3,4,5,6]. However, these results are hard to apply to large scenes of water such as oceans, mainly due to their heavy computation.
The other category is based on ocean wave models from oceanography and consists of three approaches. The first group uses the Gerstner swell model. Fournier [7] concentrated on shallow water waves and surf along a shore line. He started from parametric equations and added control parameters to simulate
* Corresponding author.

Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 122-129, 2007.
© Springer-Verlag Berlin Heidelberg 2007

Real-Time Simulation of Surface Gravity Ocean Waves


various shapes of shallow water waves, but not large-scale ocean scenes or deep-water ocean waves. More complex parametric equations to represent the propagation of water waves were introduced by Gonzato [8]. This model is well suited for modeling a propagating wave front, but its equations are too complex for large-scale ocean waves.
Another group regards the ocean surface as a height field with a prescribed spectrum based on experimental observations from oceanography. Mastin [9] introduced an effective simulation of wave behavior using the Fast Fourier Transform (FFT): the height field is constructed through an inverse FFT of the frequency spectrum of real-world ocean waves. It can produce complex wave patterns similar to real ocean waves. Tessendorf [10] showed that dispersive propagation can be managed in the frequency domain and that the resulting field can be modified to yield trochoidal waves. However, the negative aspect of FFT-based methods is homogeneity: local properties such as refraction, reflection, and others cannot be handled.
The last one is the hybrid approach: the spectrum synthesized by a spectral approach is used to control the trochoids generated by the Gerstner model. Hinsinger [11] presented an adaptive scheme for the animation and display of ocean waves in real time. It relied on a procedural wave model that expresses surface point displacements as sums of wave trains. In this paper, we aim to construct an ocean wave model with the following characteristics:
- Real-time capability: Applications usually display a large-scale ocean scene, possibly with special effects added, so the ocean wave must be generated in real time.
- More user-controllable parameters: We provide more parameters to generate a variety of ocean scenes, including deep and shallow oceans, windy and calm oceans, etc.
- Focus on surface gravity waves: Since we target the large-scale ocean, minor details of the ocean wave are not our major interest. In fact, such minor details can easily be superimposed on the surface gravity waves if needed.
In the following sections, we present a new hybrid approach that finally achieves real-time surface gravity wave simulation. Being a hybrid approach, it can generate large-scale oceans without difficulty and works in real time, so it can be used for computer-generated animations and other special effects. Additionally, we use a more precise wave model with more controllable parameters, including depth of sea, fetch length and wind speed, in comparison with previous hybrid approaches. We start from the theoretical ocean wave models in the following section and build up our implementation model; our implementation results and conclusions follow.

The Ocean Wave Model

N. Lee, N. Baek, and K.W. Ryu

The major generating force for waves is the wind acting on the interface between the air and the water. From the mathematical point of view, the surface is made up of many sinusoidal waves generated by the wind, traveling through the ocean. One of the fundamental models for the ocean wave is the Gerstner swell model, in which the trajectory of a water particle is expressed as a circle of radius r around its reference location at rest, (x0, z0), as follows [11]:
x = x0 + r sin(ωt − kx0)
z = z0 + r cos(ωt − kx0),    (1)

where (x, z) is the actual location at time t, ω = 2πf is the pulsation with frequency f, and k = 2π/λ is the wave number with respect to the wavelength λ.
Equation (1) is a two-dimensional representation of the ocean wave, assuming that the x-axis coincides with the direction of wave propagation. The surface of an ocean is actually made up of a finite sum of these simple waves, and the height z of the water surface at grid point (x, y) at time t can be expressed as:
z(x, y, t) = Σ_{i=1}^{n} Ai cos(ki (x cos θi + y sin θi) − ωi t + φi),


where n is the number of wave trains, Ai is the amplitude, ki is the wave number, θi is the direction of wave propagation on the xy-plane and φi is the phase. In Hinsinger [11], all these parameters were selected manually, so the user may have difficulty choosing proper values for them.
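The wave-train sum above translates directly into code. The following sketch evaluates the height field for a hand-picked list of wave trains (all parameter values are illustrative assumptions, not values from the paper):

```python
# Sketch of the height field z(x, y, t) as a sum of wave trains.

import math

def height(x, y, t, trains):
    """trains: list of (A, k, theta, omega, phi) tuples, one per wave train."""
    return sum(A * math.cos(k * (x * math.cos(th) + y * math.sin(th))
                            - w * t + phi)
               for A, k, th, w, phi in trains)

trains = [(0.5, 0.8, 0.0, 1.2, 0.0),
          (0.2, 1.6, 0.6, 1.7, 1.0)]
z = height(10.0, 5.0, 0.0, trains)
print(abs(z) <= 0.7)  # the summed amplitudes bound the height: True
```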
In contrast, Thon [12] uses a spectrum-based method to find reasonable parameter sets. They used the Pierson-Moskowitz (PM) model [13], which empirically expresses a fully developed sea in terms of the wave frequency f as follows:

E_PM(f) = (0.0081 g^2 / ((2π)^4 f^5)) exp(−(5/4) (fp/f)^4),

where E_PM(f) is the spectrum, g is the gravity constant and fp = 0.13 g/U10 is the peak frequency depending on the wind speed U10 at a height of 10 meters above the sea surface.
Although Thon used the PM model to produce some impressive results, the PM model itself assumes an infinite ocean depth and thus may fail for shallow seas. To overcome this drawback, the JONSWAP model and the TMA model were introduced. The JONSWAP (Joint North Sea Wave Project) model [14] was developed for fetch-limited seas such as the North Sea and is expressed as follows:

E_JONSWAP(f) = (α g^2 / ((2π)^4 f^5)) exp(−(5/4) (fp/f)^4) γ^r,  r = exp(−(f/fp − 1)^2 / (2σ^2)),

where α is the scaling parameter, γ is the peak enhancement factor, and σ is evaluated as 0.07 for f ≤ fp and 0.09 otherwise. Given the fetch length F, the frequency at the spectral peak fp is calculated as follows:

fp = 3.5 (g^2 / (U10 F))^0.33



The Texel-MARSEN-ARSLOE (TMA) model [15] extends the JONSWAP model to include the water depth h as one of its implicit parameters:

E_TMA(f) = E_JONSWAP(f) Φ(f*, h),

where Φ(f*, h) is the Kitaigorodskii depth function:

Φ(f*, h) = s(f*)^−2 [1 + 2K / sinh(2K)]^−1,

with f* = f √(h/g), K = (2πf*)^2 s(f*), and s(f*) defined implicitly by s(f*) tanh[(2πf*)^2 s(f*)] = 1.
The TMA model shows good empirical behavior even at a water depth of 6 meters, so it can represent waves on the surface of a lake or a small pond in addition to ocean waves. It also includes the fetch length as a parameter, inherited from the JONSWAP model. Thus the expression power of the TMA model is much greater than that of the PM model previously used by other researchers. We use this improved wave model to finally achieve more realistic ocean scenes with more user-controllable parameters.
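The spectrum chain PM → JONSWAP → TMA summarized above can be sketched as follows. The constants follow the equations as reconstructed in this section, and the depth function is solved by a damped fixed-point iteration; treat this as an illustration under those assumptions rather than a validated oceanographic implementation:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def jonswap(f, U=6.0, F=10000.0, alpha=0.0081, gamma=3.3):
    """JONSWAP spectrum for wind speed U (m/s) and fetch length F (m)."""
    fp = 3.5 * (G * G / (U * F)) ** 0.33          # peak frequency
    sigma = 0.07 if f <= fp else 0.09
    r = math.exp(-((f / fp - 1.0) ** 2) / (2.0 * sigma * sigma))
    return (alpha * G * G / ((2.0 * math.pi) ** 4 * f ** 5)
            * math.exp(-1.25 * (fp / f) ** 4) * gamma ** r)

def kitaigorodskii(f, h):
    """Depth factor Phi; s solves s * tanh(w * s) = 1 with
    w = (2*pi*f)^2 * h / g (damped fixed-point iteration)."""
    w = (2.0 * math.pi * f) ** 2 * h / G
    s = 1.0
    for _ in range(100):
        s = 0.5 * (s + 1.0 / math.tanh(w * s))
    K = w * s
    return 1.0 / (s * s * (1.0 + 2.0 * K / math.sinh(2.0 * K)))

def tma(f, h, **kw):
    """TMA spectrum: JONSWAP attenuated by the depth function."""
    return jonswap(f, **kw) * kitaigorodskii(f, h)

# Shallow water attenuates the spectrum relative to deep water:
print(tma(0.3, 5.0) < tma(0.3, 100.0))  # True
```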

The Implementation Model

To derive implementation-related expressions, we need to extend the TMA spectrum to the two-dimensional world as follows [14]:

E(f, θ) = E_TMA(f) D(f, θ),

where D(f, θ) is a directional spreading factor that weights the spectrum at angle θ from the downwind direction. The spreading factor is expressed as follows:

D(f, θ) = Np cos^{2p}(θ/2),

where p = 9.77 (f/fp)^μ, Np = 2^{1−2p} π Γ(2p+1) / Γ^2(p+1) with Euler's Gamma function Γ, and

μ = 4.06 if f < fp, −2.34 otherwise.
For convenience of implementation, we derive evaluation functions for the parameters, including frequency, amplitude, wave direction, wave number and pulsation. The frequency of each wave train is determined from the peak frequency fp and a random offset that simulates the irregularity of ocean waves. The pulsation and wave number are then naturally calculated from their definitions.
According to random linear wave theory [16,17,18,19,20], the directional wave spectrum E(f, θ) is given by

E(f, θ) = Ψ(k(f), θ) k(f) dk(f)/df,    (3)




where k(f) = 4π^2 f^2 / g and Ψ(k(f), θ) is a wave number spectrum. The second and the third factors in Equation (3) can be computed as:

k(f) dk(f)/df = 32 π^4 f^3 / g^2.

This allows us to rewrite Equation (3) as follows [17]:

E(f, θ) = Ψ(k(f), θ) · 32 π^4 f^3 / g^2.

From random linear wave theory [17,19], the wave number spectrum Ψ(k(f), θ) can be approximated as:

Ψ(k(f), θ) = (κ / (4π^2)) A(f)^2,

where κ is a constant. Finally, the amplitude A(f) of a wave train is evaluated as

A(f) = sqrt( E_TMA(f) D(f, θ) g^2 / (8 κ π^2 f^3) ).
Using all these derivations, we can calculate the parameter values for Equation (2). We then evaluate the height of each grid point (x, y) to construct a rectangular mesh representing the ocean surface.
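The amplitude evaluation can be sketched as a one-line function of the spectrum and spreading values; κ and the sample inputs below are placeholders, not values from the paper:

```python
# Sketch of A(f) = sqrt(E_TMA(f) * D(f, theta) * g^2 / (8 * kappa * pi^2 * f^3)).

import math

def amplitude(E, D, f, g=9.81, kappa=1.0):
    """Amplitude of one wave train from spectrum value E = E_TMA(f) and
    directional spreading value D = D(f, theta)."""
    return math.sqrt(E * D * g * g / (8.0 * kappa * math.pi ** 2 * f ** 3))

print(round(amplitude(E=0.05, D=0.5, f=0.4), 2))  # 0.69
```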

Implementation Results

Figures 1, 2 and 3 show some outputs from the prototype implementation. We implemented the ocean wave generation program based on the TMA model presented in the previous section. It uses the plain OpenGL library and does not use any multi-threading or hardware-based acceleration techniques. At this time, we focused on the expression power of our TMA model-based implementation, and thus our prototype lacks some acceleration and optimization factors. Even so, it shows more than 50 frames per second on a PC with an Intel

Fig. 1. Ocean waves with different water depths: even with the same wind speed, different water depths result in very different waves. The fetch length is 5 km for these images. (a) wind speed 3 m/s, water depth 5 m; (b) wind speed 3 m/s, water depth 100 m



Fig. 2. Ocean waves with different wind velocities: changes in wind speed generate calmer or choppier waves. The fetch length is 10 km for these images. (a) wind speed 3 m/s, water depth 100 m; (b) wind speed 6 m/s, water depth 100 m

Fig. 3. An animated sequence of ocean waves



Core2 Duo 6600 2.40 GHz processor and a GeForce 7950GT graphics card. We expect the frame rate to be much better in the next version.
In Figure 1, we control the depth of the ocean to show very different waves even with the same wind speed and the same fetch length. Notably, changes in the water depth are handled only by the TMA model; the previous PM model cannot handle them. Figure 2 shows the effect of changing the wind speed: as expected, a higher wind speed generates choppier waves. Figure 3 is a sequence of images captured during the real-time animation of a windy ocean. All examples are executed with a mesh resolution of 200 × 200. More examples are available on our web page.


In this paper, we present a real-time surface gravity wave simulation method derived from a precise ocean wave model in oceanography: the TMA model, which, to our knowledge, had not previously been used for a graphics implementation. Since we use a more precise ocean wave model, users can control more parameters to create various ocean scenes. The two major improvements of our method in comparison with previous works are:
- Enhanced expression power: Our method can display visually plausible scenes even for shallow seas.
- Improved user controllability: Our method provides more parameters, such as fetch length and depth of water, in addition to wind velocity.
We implemented a prototype system and showed that it can generate animated sequences of ocean waves in real time. We plan to integrate our implementation into large-scale applications such as games and maritime training simulators. Detailed variations of the ocean waves can also be added to our implementation with minor modifications.

This research was supported by the Regional Innovation Industry Promotion Project conducted by the Ministry of Commerce, Industry and Energy (MOCIE) of the Korean Government (70000187-2006-01).

References

1. Iglesias, A.: Computer graphics for water modeling and rendering: a survey. Future Generation Comp. Syst. 20(8) (2004) 1355-1374
2. Enright, D., Marschner, S., Fedkiw, R.: Animation and rendering of complex water surfaces. In: SIGGRAPH '02 (2002) 736-744
3. Foster, N., Fedkiw, R.: Practical animation of liquids. In: SIGGRAPH '01 (2001)
4. Foster, N., Metaxas, D.N.: Realistic animation of liquids. CVGIP: Graphical Model and Image Processing 58(5) (1996) 471-483
5. Foster, N., Metaxas, D.N.: Controlling fluid animation. In: Computer Graphics International '97 (1997) 178-188
6. Stam, J.: Stable fluids. In: SIGGRAPH '99 (1999) 121-128
7. Fournier, A., Reeves, W.T.: A simple model of ocean waves. In: SIGGRAPH '86 (1986) 75-84
8. Gonzato, J.C., Saec, B.L.: On modelling and rendering ocean scenes. J. of Visualization and Computer Animation 11(1) (2000) 27-37
9. Mastin, G.A., Watterberg, P.A., Mareda, J.F.: Fourier synthesis of ocean scenes. IEEE Comput. Graph. Appl. 7(3) (1987) 16-23
10. Tessendorf, J.: Simulating ocean water. In: SIGGRAPH '01 Course Notes (2001)
11. Hinsinger, D., Neyret, F., Cani, M.P.: Interactive animation of ocean waves. In: SCA '02: Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2002) 161-166
12. Thon, S., Dischler, J.M., Ghazanfarpour, D.: Ocean waves synthesis using a spectrum-based turbulence function. In: Computer Graphics International '00 (2000) 65
13. Pierson, W., Moskowitz, L.: A proposed spectral form for fully developed wind seas based on the similarity theory of S.A. Kitaigorodskii. J. Geophysical Research (69) (1964) 5181-5190
14. Hasselmann, D., Dunckel, M., Ewing, J.: Directional wave spectra observed during JONSWAP 1973. J. Physical Oceanography 10(8) (1980) 1264-1280
15. Bouws, E., Günther, H., Rosenthal, W., Vincent, C.L.: Similarity of the wind wave spectrum in finite depth water: Part 1. Spectral form. J. Geophysical Research 90 (1985) 975-986
16. Crawford, F.: Waves. McGraw-Hill (1977)
17. Krogstad, H., Arntsen, Ø.: Linear Wave Theory. Norwegian Univ. of Sci. and Tech. (2006)
18. Seyringer, H.: Nature Wizard (2006)
19. Sorensen, R.: Basic Coastal Engineering. Springer-Verlag (2006)
20. US Army Corps of Engineers Internet Publishing Group: Coastal Engineering Manual Part II (2006)

Determining Knots with Quadratic Polynomial Precision

Zhang Caiming^1,2, Ji Xiuhua^1, and Liu Hui^1

^1 School of Computer Science and Technology, University of Shandong Economics, Jinan 250014, China
^2 School of Computer Science and Technology, Shandong University, Jinan 250061, China

Abstract. A new method for determining knots in parametric curve interpolation is presented. The determined knots have quadratic polynomial precision, in the sense that an interpolation scheme that reproduces quadratic polynomials will reproduce parametric quadratic polynomials if the new method is used to determine the knots in the interpolation process. Testing results on the efficiency of the new method are also included.

Keywords: parametric curves, knots, polynomials.


The problem of constructing parametric interpolating curves is of fundamental importance in CAGD, CG, scientific computing and so on. The constructed curve is often required to have good approximation precision as well as the shape suggested by the data points.
The construction of an ideal parametric interpolating curve requires not only a good interpolation method, but also an appropriate choice of the parameter knots. In parametric curve construction, chord length parametrization is a widely accepted and used method to determine knots [1][2]. Two other useful methods are the centripetal model [3] and the adjusted chord length method ([4], referred to as Foley's method). When these three methods are used, the constructed interpolant can only reproduce straight lines. In paper [5], a new method for determining knots is presented (referred to as the ZCM method). The knots are determined by a global method, and can be used to construct interpolants that reproduce parametric quadratic curves if the interpolation scheme reproduces quadratic polynomials.
A new method for determining knots is presented in this paper. The knots associated with the points are computed by a local method, and have quadratic polynomial precision. Experiments show that curves constructed using the knots from the new method generally have better interpolation precision.
The remaining part of the paper is arranged as follows. The basic idea of the new method is described in Section 2. The choice of knots by constructing a parametric quadratic interpolant to four data points is discussed in Section 3. The comparison of the new method with four other methods is performed in Section 4. Conclusions and future work are given in Section 5.

Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 130-137, 2007.
© Springer-Verlag Berlin Heidelberg 2007

Determining Knots with Quadratic Polynomial Precision
Section 4. The Conclusions and Future Works is given in Section 5.

Basic Idea

Let Pi = (xi, yi), 1 ≤ i ≤ n, be a given set of distinct data points satisfying the condition that every point Pi, 1 < i < n, belongs to at least two sets of four consecutive convex data points. As an example, for the data points in Figure 3, the point Pi+1 belongs to the two sets of consecutive convex data points {Pi−2, Pi−1, Pi, Pi+1} and {Pi, Pi+1, Pi+2, Pi+3}, respectively.
The goal is to construct a knot ti for each Pi, 1 ≤ i ≤ n. The constructed knots satisfy the following condition: if the set of data points is taken from a parametric quadratic polynomial, i.e.,
Pi = A τi^2 + B τi + C,    (1)

where A = (a1, a2), B = (b1, b2) and C = (c1, c2) are 2D points, then

ti − ti−1 = α (τi − τi−1),    (2)

for some constant α > 0.

Such a set of knots ti , 1 i n, is known to have a quadratic polynomial precision. Obviously, using the knots satisfying equation (2), an interpolation scheme which reproduces quadratic polynomials will reproduce parametric
quadratic polynomials.
Following, the basic idea in determining the knots ti , 1 i n, will be described. If the set of data points is taken from a parametric quadratic polynomial,
P () = (x(), y()) dened by
x() = X2 2 + X1 + X0 ,
y() = Y2 2 + Y1 + Y0 ,


then there is a rotating transformation that transforms it to the following parabola form, as shown in Figure 1:

ỹ = a1 t^2 + b1 t + c1,  x̃ = t.    (4)

Then the knots ti, 1 ≤ i ≤ n, can be defined by

ti = x̃i,  i = 1, 2, 3, ..., n,

which has quadratic polynomial precision. Assume that the transformation

x̃ = x cos θ + y sin θ
ỹ = −x sin θ + y cos θ

transforms P(τ) (3) to the parabola (4); then we have the following Theorem 1.


C. Zhang, X. Ji, and H. Liu









Fig. 1. A standard parabola in the x̃–ỹ coordinate system

Theorem 1. If the set of data points is taken from a parametric quadratic polynomial P(τ) (3), then the knots ti, i = 1, 2, 3, ..., n, which have quadratic polynomial precision, can be defined by

t1 = 0,
ti = ti−1 + (xi − xi−1) cos θ + (yi − yi−1) sin θ,  i = 2, 3, ..., n,    (5)

where

sin θ = −X2 / sqrt(X2^2 + Y2^2),
cos θ = Y2 / sqrt(X2^2 + Y2^2).    (6)



Proof. In the x̃–ỹ coordinate system, it follows from (3) that

x̃ = (X2 τ^2 + X1 τ + X0) cos θ + (Y2 τ^2 + Y1 τ + Y0) sin θ
ỹ = −(X2 τ^2 + X1 τ + X0) sin θ + (Y2 τ^2 + Y1 τ + Y0) cos θ.    (7)

If sin θ and cos θ are defined by (6), then the first expression of (7) becomes

x̃ = A τ + B,  with A = X1 cos θ + Y1 sin θ,  B = X0 cos θ + Y0 sin θ.    (8)

Substituting (8) into the second expression of (7) and rearranging, a parabola is obtained, defined by

ỹ = a1 x̃^2 + b1 x̃ + c1,

where a1, b1 and c1 are defined by

a1 = (Y2 cos θ − X2 sin θ) / A^2
b1 = −2 a1 B + (Y1 cos θ − X1 sin θ) / A
c1 = a1 B^2 − (Y1 cos θ − X1 sin θ) B / A + Y0 cos θ − X0 sin θ.

Thus ti can be defined by x̃i, i.e., the knots ti, i = 1, 2, 3, ..., n, can be defined by (5), which has quadratic polynomial precision.
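Theorem 1 can be checked numerically: sampling a quadratic at uneven parameter values and applying (5) and (6) should reproduce the parameter spacing up to one global factor α. A sketch with made-up coefficients:

```python
# Numerical check of Theorem 1: knots built via the rotation of Equation (6)
# reproduce the tau spacing up to a single scale factor alpha.

import math

def knots(points, X2, Y2):
    n = math.hypot(X2, Y2)
    sin_t, cos_t = -X2 / n, Y2 / n            # Equation (6)
    t = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        t.append(t[-1] + (x1 - x0) * cos_t + (y1 - y0) * sin_t)
    return t

# Sample a quadratic at uneven parameter values tau:
X2, X1, X0 = 1.0, 0.5, 0.0
Y2, Y1, Y0 = 2.0, -1.0, 3.0
taus = [0.0, 0.3, 0.45, 0.9, 1.0]
pts = [(X2*u*u + X1*u + X0, Y2*u*u + Y1*u + Y0) for u in taus]

t = knots(pts, X2, Y2)
ratios = [(t[i+1] - t[i]) / (taus[i+1] - taus[i]) for i in range(4)]
print(all(abs(r - ratios[0]) < 1e-9 for r in ratios))  # True: constant alpha
```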



The discussion above shows that the key point in determining knots is to construct the quadratic polynomial P(τ) = (x(τ), y(τ)) (3) from the given data points. This is discussed in Section 3.

Determining Knots

In this section, we first discuss how to construct a quadratic polynomial with four points, and then discuss the determination of knots using the quadratic polynomial.

Constructing a Quadratic Polynomial with Four Points

Let Qi(s) be a parametric quadratic polynomial which interpolates Pi−1, Pi and Pi+1. Qi(s) can be defined on the interval [0, 1] as follows:
Qi(s) = φ1(s)(Pi−1 − Pi) + φ2(s)(Pi+1 − Pi) + Pi    (9)

with

φ1(s) = (s − si)(s − 1) / si,
φ2(s) = s (s − si) / (1 − si),    (10)

where 0 < si < 1.

Expressions (9) and (10) show that four data points are needed to determine a parametric quadratic polynomial uniquely.
Let Pj = (xj, yj), i−1 ≤ j ≤ i+2, be four points of which no three lie on a straight line. The point Pi+2 will be used to determine si in (10).
Without loss of generality, the coordinates of Pi−1, Pi, Pi+1 and Pi+2 are supposed to be (0, 1), (0, 0), (1, 0) and (xi+2, yi+2), respectively, as shown in Figure 2. In the xy coordinate system, Qi(s) defined by (9) becomes

x = s (s − si) / (1 − si)
y = (s − si)(s − 1) / si.    (11)


Let si+2 be the knot associated with the point (xi+2, yi+2). As the point (xi+2, yi+2) is on the curve, we have

xi+2 = si+2 (si+2 − si) / (1 − si)
yi+2 = (si+2 − si)(si+2 − 1) / si.    (12)

It follows from (12) that

si+2 = xi+2 + (1 − xi+2 − yi+2) si.    (13)

Substituting (13) into (12), one gets the following equation:

si^2 + A(xi+2, yi+2) si + B(xi+2, yi+2) = 0    (14)




Fig. 2. Pi+2 is in the dotted region


with

A(xi+2, yi+2) = −2 xi+2 / (xi+2 + yi+2),
B(xi+2, yi+2) = (1 − xi+2) xi+2 / [(1 − xi+2 − yi+2)(xi+2 + yi+2)].

As si+2 > 1, the root of (14) is

si = (sqrt(xi+2) / (xi+2 + yi+2)) (sqrt(xi+2) − sqrt(yi+2 / (xi+2 + yi+2 − 1))).    (15)


It follows from (9) and (10) that if the given data points are taken from a parametric quadratic polynomial Q(t), then there is a unique si satisfying 0 < si < 1 that makes the curve Qi(s) (9) pass through the given data points. Since si is determined uniquely by (15), Qi(s) is equivalent to Q(t).
Substituting si+2 > 1 into (11), one obtains

xi+2 > 1 and yi+2 > 0,    (16)

that is, the point (xi+2, yi+2) should be in the dotted region of Figure 2.
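Equation (15) can be verified by substituting the computed si back into the normalized curve (11): the curve must pass through the fourth point. A sketch (the test point is a made-up value in the admissible region):

```python
# Check of Equation (15): compute s_i from the fourth point (x, y) in the
# normalised frame P_{i-1}=(0,1), P_i=(0,0), P_{i+1}=(1,0), then confirm the
# curve of Equation (11) passes through (x, y) at s_{i+2} from Equation (13).

import math

def interior_knot(x, y):
    return (math.sqrt(x) / (x + y)) * (math.sqrt(x)
                                       - math.sqrt(y / (x + y - 1)))

def curve(s, si):
    """Q_i(s) in the normalised frame, Equation (11)."""
    return (s * (s - si) / (1 - si),          # x(s)
            (s - si) * (s - 1) / si)          # y(s)

si = interior_knot(1.5, 1.0)
s4 = 1.5 + si * (1 - 1.5 - 1.0)               # Equation (13): s_{i+2}
x4, y4 = curve(s4, si)
print(round(si, 3), round(x4, 3), round(y4, 3))  # 0.2 1.5 1.0
```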

Determining Knots

After si is determined, Qi(s) (9) can be written as

xi(s) = Xi,2 s^2 + Xi,1 s + Xi,0,
yi(s) = Yi,2 s^2 + Yi,1 s + Yi,0,    (17)

where

Xi,2 = (xi−1 − xi)/si + (xi+1 − xi)/(1 − si)
Xi,1 = −(xi−1 − xi)(si + 1)/si − (xi+1 − xi) si/(1 − si)
Xi,0 = xi−1
Yi,2 = (yi−1 − yi)/si + (yi+1 − yi)/(1 − si)
Yi,1 = −(yi−1 − yi)(si + 1)/si − (yi+1 − yi) si/(1 − si)
Yi,0 = yi−1.



It follows from Theorem 1 that for i = 2, 3, ..., n−2, the knot interval tj+1 − tj = Δj^i between Pj and Pj+1, j = i−1, i, i+1, can be defined by

Δj^i = (xj+1 − xj) cos θi + (yj+1 − yj) sin θi,  j = i−1, i, i+1,    (18)

where cos θi and sin θi are defined by (see (6))

sin θi = −Xi,2 / sqrt(Xi,2^2 + Yi,2^2),
cos θi = Yi,2 / sqrt(Xi,2^2 + Yi,2^2).    (19)
Based on the definition (18), for the pair P1 and P2 there is one knot interval, Δ1^2; for the pair P2 and P3 there are two knot intervals, Δ2^2 and Δ2^3; for the pair Pi and Pi+1, 3 ≤ i ≤ n−2, there are three knot intervals, Δi^{i−1}, Δi^i and Δi^{i+1}; the remaining cases are similar. Now the knot interval Δi for the pair Pi and Pi+1, i = 1, 2, ..., n−1, is defined by

Δ1 = Δ1^2
Δ2 = Δ2^2 + ε2
Δi = Δi^i + 2 εi1 εi2 / (εi1 + εi2),  i = 3, 4, ..., n−3
Δn−2 = Δn−2^{n−2} + εn−2
Δn−1 = Δn−1^{n−2},    (20)

with

εi1 = |Δi^i − Δi^{i−1}|,  εi2 = |Δi^i − Δi^{i+1}|.

If the given set of data points is taken from a parametric quadratic polynomial, then εi1 = εi2 = 0. The terms ε2, 2 εi1 εi2 / (εi1 + εi2) and εn−2 are corrections to Δ2, Δi (i = 3, 4, ..., n−3) and Δn−2, respectively.







Fig. 3. Example 1 of the data points

For the data points shown in Figure 3, as the data points change their convexity, the knot interval between P_i and P_{i+1} is defined by the combination of
Λ_i^{i-1} and Λ_i^{i+1}, i.e., by

Λ_i = (Λ_i^{i-1} + Λ_i^{i+1})/2.

While for data points as shown in Figure 4, the knot intervals are determined by
subdividing the data points at point P_{i+1} into two sets of data points. The first





Fig. 4. Example 2 of the data points

set of data points ends at P_{i+1}, while the second set of data points starts at P_{i+1}.
If P_{i-1}, P_i and P_{i+1} are on a straight line, then setting t_i - t_{i-1} = |P_{i-1}P_i|,
t_{i+1} - t_i = |P_i P_{i+1}| makes the quadratic polynomial Q_i(t)
which passes through P_{i-1}, P_i and P_{i+1} a straight line with the magnitude of the first
derivative constant. Such a straight line is the best-defined curve
one can get in this case.


The new method has been compared with the chord length, centripetal, Foley
and ZCM methods. The comparison is performed by using the knots computed
by these methods in the construction of a parametric cubic spline interpolant.
For brevity, the cubic splines produced using these methods are called chord
spline, centripetal spline, Foley's spline, ZCM spline and new spline, respectively.
The data points used in the comparison are taken from the following ellipse:

x = x(τ) = 3 cos(2πτ),
y = y(τ) = 2 sin(2πτ).    (22)


The comparison is performed by dividing the interval [0, 1] into 36 subintervals
to define the data points, i.e., τ_i is defined by

τ_i = (i + λ sin((36 - i)i))/36,    i = 0, 1, 2, ..., 36,

where 0 <= λ <= 0.25.
To avoid having the maximum error occur near the end points (x_0, y_0) and (x_{20}, y_{20}),
the tangent vectors of F(τ) at τ = 0 and τ = 1 are used as the end
conditions to construct the cubic splines.
The five methods are compared by the absolute error curve E(t), defined by

E(t) = min{|P(t) - F(τ)|}
     = min{|P_i(t) - F(τ)| : τ_i <= τ <= τ_{i+1}},    i = 0, 1, 2, ..., 19,

where P(t) denotes one of the chord spline, centripetal spline, Foley's spline,
ZCM's spline or the new spline, P_i(t) is the corresponding part of P(t) on the
subinterval [t_i, t_{i+1}], and F(τ) is defined by (22). For the point P(t), E(t) is the
shortest distance from the curve F(τ) to P(t).
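A rough numerical stand-in for E(t), assuming the parameterization of (22) with period 2π over τ ∈ [0, 1]; the sampling density and the test points are arbitrary choices:

```python
import math

def F(tau):
    # Ellipse (22): x = 3 cos(2*pi*tau), y = 2 sin(2*pi*tau).
    return (3*math.cos(2*math.pi*tau), 2*math.sin(2*math.pi*tau))

# Dense samples of F(tau) to approximate the shortest-distance measure.
F_samples = [F(k / 5000.0) for k in range(5000)]

def E(p):
    return min(math.hypot(p[0]-fx, p[1]-fy) for fx, fy in F_samples)

print(E(F(0.123)) < 1e-3)   # a point on the ellipse: near-zero error
print(E((3.5, 0.0)) > 0.4)  # an offset point: clearly nonzero error
```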



Table 1. Maximum absolute errors

λ      chord     centripetal   Foley     ZCM       New
.00    5.29e-5   5.29e-5       5.29e-5   5.29e-5   5.29e-5
.05    1.67e-4   3.71e-3       2.39e-3   1.58e-4   1.60e-4
.10    3.17e-4   8.00e-3       5.33e-3   2.93e-4   2.89e-4
.15    5.08e-4   1.30e-2       8.88e-3   4.58e-4   4.37e-4
.20    7.41e-4   1.86e-2       1.31e-2   6.55e-4   6.04e-4
.25    1.02e-3   2.49e-2       1.79e-2   8.86e-4   7.88e-4

The maximum values of the error curve E(t) generated by these methods
are shown in Table 1. The five methods have also been compared on data points
which divide [0, 1] into 18, 72, etc. subintervals. The results are basically similar
to those shown in Table 1.

Conclusions and Future Work

A new method for determining knots in parametric curve interpolation is presented.
The determined knots have quadratic polynomial precision. This means
that, from the approximation point of view, the new method is better than the
chord length, centripetal and Foley's methods in terms of the error evaluation in the
associated Taylor series. The ZCM method also has quadratic polynomial
precision, but it is a global method, while the new method is a local one.
The new method works well on data points whose convexity does not
change sign; our next work is to extend it to data points whose
convexity changes sign.
Acknowledgments. This work was supported by the National Key Basic Research 973 Program of China (2006CB303102) and the National Natural Science
Foundation of China (60533060, 60633030).

References

1. Ahlberg, J. H., Nilson, E. N. and Walsh, J. L., The theory of splines and their
applications, Academic Press, New York, NY, USA, 1967.
2. de Boor, C., A practical guide to splines, Springer-Verlag, New York, 1978.
3. Lee, E. T. Y., Choosing nodes in parametric curve interpolation, CAD, Vol. 21, No.
6, pp. 363-370, 1989.
4. Farin, G., Curves and surfaces for computer aided geometric design: A practical
guide, Academic Press, 1988.
5. Zhang, C., Cheng, F. and Miura, K., A method for determining knots in parametric
curve interpolation, CAGD 15 (1998), pp. 399-416.

Interactive Cartoon Rendering and Sketching of Clouds and Smoke
Eduardo J. Álvarez1, Celso Campos1, Silvana G. Meire1, Ricardo Quirós2,
Joaquín Huerta2, and Michael Gould2

Departamento de Informática, Universidad de Vigo, Spain
Departamento de Lenguajes y Sistemas Informáticos, Universitat Jaume I, Spain
{quiros, huerta, gould }

Abstract. We present several techniques to generate clouds and smoke with a
cartoon and sketching style, obtaining interactive speed for the graphical results.
The proposed method allows abstracting the visual and geometric complexity of
the gaseous phenomena using a particle system. The abstraction process is
done using implicit surfaces, which are later used to calculate the silhouette and
obtain the resulting image. Additionally, we add detail layers that improve the appearance and provide the sensation of greater volume for the
gaseous effect. Finally, we also include in our application a simulator that generates smoke animations.

1 Introduction
The automatic generation of cartoons requires the use of two basic techniques in expressive rendering: a specific illumination model for this rendering style and the visualization of the objects' silhouettes. This style is known as cartoon rendering and its
use is common in the production of animation films and in the creation of television
contents. The use of cartoon rendering techniques in video games is also growing, as they can
produce more creative detail than techniques based on realism.
There are several techniques to automatically calculate the silhouette (outline) and cel shading [1][2][3]. Shadowing and self-shadowing, along with the silhouettes, are
fundamental effects for expressing volume, position and limits of objects. Most of
these techniques require general meshes and do not allow representation of
amorphous shapes, which are modeled by particle systems as in the case of clouds and smoke.
Our objective is to create cartoon vignettes for interactive entertainment applications, combining cartoon techniques with a particle system simulator that allows representation of amorphous shapes such as clouds and smoke. Special attention should
be paid to the visual complexity of this type of gaseous phenomena, therefore we use
implicit surfaces in order to abstract and simplify this complexity [4][5]. To obtain the
expressive appearance, we introduce an algorithm that enhances silhouette visualization, within a cartoon rendering. For the simulation of smoke, we use a particle
system based on Selle's [6] hybrid model.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 138-145, 2007.
Springer-Verlag Berlin Heidelberg 2007



2 Previous Work
Clouds are important elements in the modeling of natural scenes, whether we want to
obtain high-quality images or interactive applications. Clouds and smoke are gaseous
phenomena that are very complicated to represent because of several issues: their fractal
nature, the intrinsic difficulty of their animation, and local illumination differences.
The representation of cloud shapes has been treated by three different strategies:
volumetric clouds (explicit form [7] or procedural [8]), using billboards [9] [10], and
by general surfaces [12][13]. The approach based on volume, in spite of the improvements of graphics hardware, is not yet possible at interactive speed because of
the typical scene size and the level of detail required to represent the sky.
The impostors and billboards approach is the most widely used solution in video
games and, although the results are suitable, their massive use slows the visualization
due to the great number of pixels that must be rendered.
On the other hand, the use of general surfaces allows efficient visualization; however, it generates overly coarse models for representing volumetric forms. Bouthors
[11] extends Gardner's model [12][13] by using a hierarchy of almost-spherical particles related to an implicit field that defines a surface. This surface is later rendered to
create a volumetric characteristic that provides realistic clouds.
In expressive rendering, the relevant works on gaseous phenomena are scarce in
the literature. The first works published in this field are from Di Fiore [14] and Selle
[6] trying to create streamlined animations of these phenomena. The approach of Di
Fiore combines a variant of second order particle systems to simulate the gaseous
effect movement using 2D billboards drawn by artists, which are called basic visualization components.
Selle introduces a technique that facilitates the animation of cartoon rendering
smoke. He proposes to use a particle system whose movement is generated with the
method presented by Fedkiw [15] for the simulation of photorealistic smoke. To
achieve the expressive appearance, each particle is rendered as a disc in the depth
buffer creating a smoke cloud. In a second iteration of the algorithm, the silhouette of
the whole smoke cloud is calculated reading the depth buffer and applying the depth
differences. This method obtains approximately one image per second and has been
used by Deussen [16] for the generation of illustrations of trees.
McGuire [17] presents an algorithm for the real-time generation of cartoon rendering smoke. He extends Selle's model incorporating shading, shadows, and nailboards
(billboards with depth maps). Nailboards are used to calculate intersections between
smoke and geometry, and to render the silhouette without using the depth buffer. The
particle system is based on work recently presented by Selle, Rasmussen, and Fedkiw
[18], which introduces a hybrid method that generates synergies using Lagrangian
vortex particle methods and Eulerian grid based methods.

3 Scene Modeling
The rendering process necessarily requires an abstraction and simplification of the
motif. This is made evident in the generation of hand-drawn sketches, even more so
when representing gases. By means of several strokes the artist adds detail to the


E.J. Álvarez et al.

scene creating a convincing simplification of the object representation which can be

easily recognized by the viewer. Our method provides the user with complete freedom
to design the shape and the aspect (appearance) of the cloud.
In a first approach, we propose modeling clouds as static elements in
the scene, as normally happens in animation films. The process of modeling clouds begins with the user's definition of the particles p_i that comprise the
cloud, each one having a center c_i, a radius r_i and a mass m_i.
Once the set of particles is defined, we perform the simplification and abstraction of the geometric model of the clouds. To calculate the implicit surface described by
the total particle set, we use the density function proposed by Murakami and Ichihara [19], and later used by Luft and Deussen [5] for the real-time illustration of plants
in a watercolor style.
The influence of a particle p_i at a point q is described by a density function D_i(q)
defined as:

D_i(q) = (1 - ||q - c_i||^2 / r_i^2)^2

for ||q - c_i|| <= r_i; otherwise D_i(q) = 0.

In our model we include in the density function the mass m_i of each particle, which
allows the user to weight the influence of each particle in the calculation of the implicit surface. The modified density function is expressed as:

D_i(q) = m_i (1 - ||q - c_i||^2 / r_i^2)^2

The implicit surface is generated from the summation of the density functions of all
particles:

F(q) = Σ_i D_i(q) - T.

Therefore, the implicit surface F(q) = 0 is defined as those points q where the summation of the density functions equals the threshold T. The influence of the radius r_i and the
mass m_i of the particles, as well as the threshold T, are chosen empirically as they depend
on the number and density of particles. Finally we triangulate the implicit surface and
then optimize it according to the level of subdivisions s_i chosen by the user. Fig. 1
and Table 1 provide a comparison between different levels of simplification.
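A minimal sketch of the density field and implicit function F(q) described above; the particle centers, radii, masses and the threshold T are illustrative values, not those of the paper:

```python
def density(q, c, r, m):
    # Mass-weighted Murakami-Ichihara density: m * (1 - d^2/r^2)^2 inside r.
    d2 = sum((a - b) ** 2 for a, b in zip(q, c))
    if d2 > r * r:
        return 0.0
    return m * (1.0 - d2 / (r * r)) ** 2

# Example particle set: (center, radius, mass) triples and a threshold T.
particles = [((0.0, 0.0, 0.0), 1.0, 1.0),
             ((0.8, 0.0, 0.0), 1.0, 0.5)]
T = 0.25

def field(q):
    # F(q) = sum_i D_i(q) - T; the implicit surface is F(q) = 0.
    return sum(density(q, c, r, m) for c, r, m in particles) - T

print(field((0.0, 0.0, 0.0)) > 0)  # True: inside the cloud surface
print(field((2.5, 0.0, 0.0)) < 0)  # True: outside
```

A polygonizer (e.g. marching cubes) would then extract the F(q) = 0 surface for silhouette computation.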



Fig. 1. Abstraction and simplification of two clouds models



Table 1. Comparison of the triangle count and the parameters used for the implicit surfaces

4 Rendering
Using an implicit surface allows calculation of the silhouette and application of an illumination model for rendering. For silhouette detection, and to achieve the cartoon appearance, we use our previously published method [4]. Next, we describe the proposed
method and discuss the visual results obtained thus far.
The detection algorithm allows silhouette extraction as an analytical representation
at interactive frame rates. As opposed to the methods proposed by other authors [14] [17] [6], the analytical description of the silhouette can be used to
create new textured polygons which improve the appearance of the model. Our system allows us to define the height and the scale of the new polygons that form the silhouette.
The main drawback of this algorithm is that we need to remove those polygons that
have been generated for interior hidden edges of the original polygon. A simple solution to this problem draws the mesh during a second iteration of the algorithm, once
the hidden polygons have been removed, as shown by Fig. 2, left.

Fig. 2. Composing the final image for silhouette-based rendering

Finally, we select the texture to apply to the polygons of the silhouette and the
background image to compose the final image. In Fig. 2, right, we show the sketch of
a cloud using this technique.
The illumination model used for the cartoon rendering allows a maximum of 32
gradations, which are applied on the mesh generated from the implicit surface as a 1D
texture. The process of obtaining the final image is similar to the one described previously, however in this case we do not use the mask but instead the polygonal mesh
textured with cartoon style, as shown in Fig. 3 left.
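The banded 1D-texture shading can be sketched as a simple quantization of the diffuse term; the band count here is an illustrative value (the paper only states a maximum of 32 gradations):

```python
def toon_shade(n_dot_l, bands=4):
    # Map a diffuse term (N dot L) to one of `bands` discrete intensity
    # levels, mimicking a lookup into a banded 1D texture.
    t = max(0.0, min(1.0, n_dot_l))   # clamp to [0, 1]
    return min(int(t * bands), bands - 1) / (bands - 1)

print(toon_shade(0.1), toon_shade(0.6), toon_shade(0.95))
```

In a real renderer the same quantization would live in a fragment shader or a precomputed 1D texture indexed by the diffuse term.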



Fig. 3. Left, cartoon rendering image. Right, cartoon rendering with transparency.
Given the nature of gaseous phenomena it may be interesting to be able to define
transparency levels at the same time that cartoon rendering is applied. In this case it is
necessary to generate the mask of the cloud and to introduce it in a third step, as it is
shown in Fig. 3, right.

5 Details Layer for Clouds

Once the general aspect of the cloud is defined, it may be interesting to incorporate
greater level of detail to improve its appearance and to provide the sensation of
greater volume. With this purpose, we propose to calculate a second implicit surface.
The calculation of the second implicit surface is made from particles pi defined by
the user in the scene modeling process (section 3). We reduce the value of the radius
ri and the mass mi of each particle and we apply the density function Di(q) again,
creating an inner cloud.
We use this new implicit surface to calculate its silhouette. Since the positions of
particles used for its creation are the same for both surfaces, the second surface as
well as its silhouette will be contained initially within the first surface.
Our system allows the user to independently modify the calculation parameters of
both surfaces, making it feasible to triangulate the surfaces with different numbers
of polygons. Moreover, the height and scale parameters of the silhouette can also be
changed for each surface. Thus the polygons that form the silhouette of the inner surface may be visible and cover part of the outer surface, enhancing the appearance of
the final image. The result obtained for the example cloud can be observed in Fig. 4.
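The detail layer can be sketched as reusing the particle set with reduced radius and mass; the shrink factors below are assumptions, standing in for parameters the user chooses in the actual system:

```python
def inner_layer(particles, r_scale=0.7, m_scale=0.8):
    # Same centers, reduced radius r_i and mass m_i: the resulting implicit
    # surface (and its silhouette) is contained within the outer one.
    return [(c, r * r_scale, m * m_scale) for c, r, m in particles]

outer = [((0.0, 0.0, 0.0), 1.0, 1.0), ((0.8, 0.0, 0.0), 1.2, 0.5)]
inner = inner_layer(outer)
print(inner[0][1], inner[1][2])  # 0.7 0.4
```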

Fig. 4. A cloud with two layers of detail



Because the second surface is only needed to add detail through the outline of
the silhouette, and it lies inside the outer surface, it is not necessary to visualize it or to
use it as a mask.

6 Smoke Simulation
Each particle of real smoke has very little mass or volume. Therefore, the smoke simulation is, in fact, the simulation of the instability of the air that contains the smoke particles. Expressive rendering is aimed at obtaining, first of all, a convincing shape of the
object. In the case of the amorphous shape of the smoke, as with clouds, we use a particle set that is the base for calculating the surface that is used for rendering this effect.
Cloud models can be static, however in the case of smoke it is necessary to have a
dynamic particle system. Our model uses a simplified version of the proposal made
recently by Selle et al [18] for the particle system. It allows the user to fit the parameters pertinent to wind, turbulence, environmental forces and vortices, among others.
The positions of particles are calculated interactively according to the initial configuration defined by the user. Once the new positions are computed, we recalculate
the implicit surface using the method described in section 3. Then we calculate the
silhouette and we render it as we have described in the previous sections.
In the real world, smoke particles dissipate according to their speed. Although
speed is a more objective criterion, it is more convincing to base the animation on
time. This approach allows us to keep the number of particles steady during the
simulation process. In this way the speed of the visualization process
of the smoke remains more or less constant.
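The time-based dissipation policy sketched below respawns particles that exceed an assumed lifetime, which keeps the particle count constant as described; the lifetime and source position are illustrative values:

```python
LIFETIME = 3.0                  # assumed particle lifetime (seconds)
SOURCE = (0.0, 0.0, 0.0)        # assumed emitter position

def step(particles, dt):
    # particles: list of (position, age); aged-out particles are respawned
    # at the source instead of being deleted, so len() never changes.
    out = []
    for pos, age in particles:
        age += dt
        if age > LIFETIME:
            pos, age = SOURCE, 0.0
        out.append((pos, age))
    return out

ps = [((1.0, 2.0, 0.0), 2.9), ((0.5, 0.1, 0.2), 1.0)]
ps = step(ps, 0.2)
print(len(ps), ps[0])  # 2 ((0.0, 0.0, 0.0), 0.0)
```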

Fig. 5. Time evolution of cartoon smoke

Fig. 6. Time evolution of sketch smoke



7 Results
The results obtained show a convincing imitation of hand-drawn sketches and drawings, although our approach is not strict in its physical foundations. We have given
priority to the visual appearance, with the purpose of simplifying the amount of information to represent while keeping the overall aspect and the user's ability to
identify the amorphous objects. With our approach we have obtained good results for
interactive model rendering. Still, to obtain high-resolution images intended for printing with good quality, we must further optimize the algorithms developed.
The performance of our method has been demonstrated on a PC platform with an
AMD Athlon 64 X2 3800+ processor and a GeForce 7950 512 MB graphics card,
running Windows XP. Once we calculate the geometry of the objects to render, we set
up different shape parameters for the clouds.
up different shape parameters for the clouds.
Different models have been created and different parameters have been applied,
which entails executing a different number of iterations of the algorithm
according to the desired target.
Table 2. Rendering times of clouds and smoke

#tri implicit surface
1600 < #tri < 2100
1400 < #tri < 1700

8 Conclusions and Future Work

We present several techniques that allow representation of clouds and smoke with
cartoon rendering and sketching. In contrast to existing methods, our method provides results at interactive frame rates. The appearance of the gaseous
phenomena is highly stylized and incorporates a greater level of detail depending on the
user's preferences; the user can change several parameters affecting the results.
The temporal cost can be further improved by programming our functions on the
GPU, which would also allow greater realism in the smoke simulation. It would also be interesting to incorporate a behavior model to generate cloud particles, with the purpose of generating animated sequences of their
movement and metamorphosis. Finally, this method could also be enhanced by introducing multiresolution features that would improve the massive application of gaseous effects in computer graphics.

Acknowledgements. This work was partially supported by grant 05VI-1C02 of the University of Vigo, grant
TIN2005-08863-C03 of the Spanish Ministry of Education and Science and by
STREP project GameTools (IST-004363).



References

1. J. Buchanan and M. Sousa. The edge buffer: a data structure for easy silhouette rendering. In Proceedings of NPAR '00, (2000) 39-42
2. L. Markosian, M. Kowalski, D. Goldstein, S. Trychin, and J. Hughes. Real-time nonphotorealistic rendering. In Proceedings of SIGGRAPH '97, (1997) 415-420
3. R. Raskar and M. Cohen. Image precision silhouette edges. In Proceedings of I3D, (1999) 135-140
4. C. Campos, R. Quirós, J. Huerta, E. Camahort, R. Vivó, J. Lluch. Real Time Tree Sketching. Lecture Notes in Computer Science, Springer Berlin / Heidelberg, ISSN 0302-9743, (2004) 197-204
5. T. Luft and O. Deussen, Real-Time Watercolor Illustrations of Plants Using a Blurred Depth Test. In Proceedings of NPAR '06, (2006)
6. A. Selle, A. Mohr and S. Chenney, Cartoon Rendering of Smoke Animations. In Proceedings of NPAR '04, (2004) 57-60
7. T. Nishita, E. Nakamae, Y. Dobashi. Display of clouds taking into account multiple anisotropic scattering and sky light. In Proceedings of SIGGRAPH '96, (1996) 379-386
8. J. Schpok, J. Simons, D. S. Ebert, C. Hansen, A real-time cloud modeling, rendering, and animation system. Symposium on Computer Animation '03, (2003) 160-166
9. Y. Dobashi, K. Kaneda, H. Yamashita, T. Okita, T. Nishita, A simple, efficient method for realistic animation of clouds. In Proceedings of ACM SIGGRAPH '00, (2000) 19-28
10. M. J. Harris, A. Lastra, Real-time cloud rendering. Computer Graphics Forum 20, 3, (2001) 76-84
11. A. Bouthors and F. Neyret, Modeling clouds shape. In Proceedings of Eurographics '04,
12. G. Y. Gardner, Simulation of natural scenes using textured quadric surfaces. In Computer Graphics, Proceedings of SIGGRAPH '84, 18, (1984) 11-20
13. G. Y. Gardner, Visual simulation of clouds. In Computer Graphics, SIGGRAPH '85, Barsky B. A., 19, (1985) 297-303
14. F. Di Fiore, W. Van Haevre, and F. Van Reeth, "Rendering Artistic and Believable Trees for Cartoon Animation", Proceedings of CGI 2003, (2003)
15. R. Fedkiw, J. Stam, and H. W. Jensen, Visual simulation of smoke. In Proceedings of SIGGRAPH '01, ACM Press, (2001) 15-22
16. O. Deussen and T. Strothotte, Computer-generated pen-and-ink illustration of trees. In Proceedings of SIGGRAPH '00, (2000) 13-18
17. M. McGuire, A. Fein. Real-Time Rendering of Cartoon Smoke and Clouds. In Proceedings of NPAR '06, (2006)
18. A. Selle, N. Rasmussen, R. Fedkiw, A vortex particle method for smoke, water and explosions. ACM Trans. Graph., (2005) 910-914
19. S. Murakami and H. Ichihara, On a 3d display method by metaball technique. Transactions of the Institute of Electronics, Information and Communication Engineers J70-D, 8, (1987) 1607-1615

Spherical Binary Images Matching

Liu Wei and He Yuanjun
Department of Computer Science and Engineering, Shanghai Jiaotong University,
Shanghai, 200240, P.R. China

Abstract. In this paper a novel algorithm is presented to match spherical
binary images by measuring the maximal superposition degree between them.
Experiments show that our method can match spherical binary images in an
accurate way.
Keywords: spherical binary image, matching, icosahedron, subdivide.

1 Introduction
Spherical images play an important role in optics, spatial remote sensing, computer
science, etc. In the computer graphics community, spherical images also frequently
find applications in photorealistic rendering [1], 3D model retrieval [2], virtual reality [3]
and digital geometry processing [4-5].
Compared with the various algorithms for planar image matching and retrieval, there
is nearly no analogue for spherical images up to now. But on some occasions, one cannot
avoid the task of matching spherical images. Since spherical binary images
(SBIs) are used in the majority of cases, we focus our attention on them. In this
paper we propose an effective method to match SBIs by measuring the maximal
superposition degree between them.

2 Our Method
In our research, we assume that the similarity between SBIs can be measured by the
maximal superposition degree between them, which also accords with the visual
perception of human beings. Since a spherical surface is a finite unbounded region, it has
no border information, nor a start and an end, which complicates the matching. The key to
this problem is to obtain the result in a finite search space; the error is certain to decrease
as the search space grows. That is to say, we need to be capable of explicitly controlling
the error according to practical requirements. In this paper, we divide the SBIs into a
number of equivalent regions and compare the difference between the corresponding regions of
two SBIs, the sum of which is the superposition in one orientation. Then we rotate one
SBI around its center according to a given rule and make another analogous calculation.
After finitely many comparisons, their similarity can be obtained by choosing the case with the
most superposition.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 146-149, 2007.
Springer-Verlag Berlin Heidelberg 2007



As we know, regular polyhedra are uniform and have facets which are all of
one kind of regular polygon, and thus good tessellations may be found by projecting
regular polyhedra onto the unit sphere after bringing their center to the center of the
sphere. A division obtained by projecting a regular polyhedron has the desirable
property that the resulting cells all have the same shapes and areas. Also, all cells have
the same geometric relationship to their neighbors. So if we adopt a regular polyhedron
as the basis to divide the SBIs, their distribution will also satisfy the requirements of
uniformity and isotropy. Since the icosahedron has the most facets of all five types of
regular polyhedra, we adopt it as the basis in our realization.
The division for an SBI has a random orientation, which makes comparison difficult.
For simplicity, we first investigate the method to compare two SBIs in the same
orientation. As the SBIs have been divided into 20 equivalent spherical triangles, our
method is to subdivide each into four smaller ones, following the well-known
geodesic dome construction, several times (Fig. 1) to form more compact trigonal
grids. If the gray value of the SBI at the position of a grid's center is 1, we tag this grid with
1, and otherwise with 0. Then we count the superposed grids with the same tags and
calculate the proportion as the metric for similarity.

Fig. 1. The grids for a SBI in three resolutions

Suppose that the spherical icosahedron basis has been subdivided N times and let m
be the number of superposed grids with the same tags; then the similarity S between
the SBIs in this orientation can be defined as

S = m / (20 * 4^N).
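The per-orientation similarity S = m / (20 * 4^N) can be sketched directly; the tag arrays below are illustrative stand-ins for discretized SBIs:

```python
def similarity(tags_a, tags_b, N):
    # tags_a, tags_b: 0/1 tags of the 20 * 4**N triangular cells of two SBIs
    # in the same orientation; S is the fraction of cells whose tags agree.
    cells = 20 * 4 ** N
    assert len(tags_a) == cells and len(tags_b) == cells
    m = sum(1 for a, b in zip(tags_a, tags_b) if a == b)
    return m / cells

N = 1
a = [1] * 40 + [0] * 40   # 20 * 4**1 = 80 cells
b = [1] * 60 + [0] * 20
print(similarity(a, b, N))  # 0.75: 60 of 80 cells agree
```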

Comparison of SBIs in the same orientation only reflects the superposition degree
in one situation; therefore, to obtain an all-around matching, we have to search over
more orientations to get the actual similarity. The varieties of orientation are achieved
through two steps in our realization:
Firstly, we rotate the icosahedron basis of one SBI around its center to obtain
another posture which is superposed with the prior one. Because the icosahedron has a
symmetry group (SG) of order 60, we need only rotate it 60 times to
get the most superposition in the group, that is, the maximal superposition in one SG,
S_G = max S_i, i = 0, 1, ..., 60. Fig. 2 shows three orientations in an SG of an SBI as an
example. Notice that the SBI is rotated along with the basis, too.


W. Liu and Y. He

Fig. 2. Three orientations in one symmetrical group, from which we can observe that the
trigonal meshes are superposed ( N = 2 )

Secondly, though rotations and comparison within one symmetry group
give an approximate matching, the problem has not been completely solved
yet. An undoubted fact is that the relation between the SBI and its icosahedron basis is
fixed arbitrarily at the beginning, which may create a non-optimal discretization of the SBI
for matching. To solve this problem, we experiment with other relations, which
will perhaps alter the shape or distribution of the discrete SBIs. Our method is to rotate
the icosahedron basis while keeping the SBI fixed, to obtain another SG. To ensure
that all SGs are distributed uniformly and able to cover different angles, so as to solve the
rotation problem effectively, we adopt the relaxation mechanism proposed by Turk [6],
which intuitively has each direction of an SG push the other ones around on the sphere by
repelling neighboring directions; the most important step is to choose a repulsive
force and a repulsive radius for the interval.
Suppose we need L different SGs; there are in total 60L rotations between the
two SBIs. The average maximum error of the rotation angle A for two SBIs in longitude
and latitude can be roughly estimated from

(360 / A)(180 / A) = 60L,    i.e., A = sqrt(1080 / L) (in degrees).

The calculation is acceptable. Then we can decide the actual similarity between the
SBIs as S_max = max S_Gj, j = 0, 1, ..., L-1. Also, we can easily analyze and
conclude that the whole time complexity of our algorithm is O(L * 4^N).
Foremost, we evaluate the retrieval performance of these combinations. Assume
that the query SBI belongs to the class Q containing k SBIs. The performance of
each combination of parameters (N, L) can be evaluated using the percentage of the
SBIs from the class Q that appear in the top (k - 1) matches. As the query SBI is
excluded from the computation, the success rate is 100% if (k - 1) SBIs from the
class Q appear in the top (k - 1) matches. In our experiments, two parameters
need to be decided: N and L. To balance all the influencing factors, we test 18 cases to
decide the most appropriate combination, in which N ∈ {2, 3, 4} and
L ∈ {10, 11, 12, 13, 14, 15}. Thus the number of grids for an SBI ranges over 320,
1280 and 5120.



In the experiments, we test various kinds of SBIs, and from each kind we choose
five SBIs as queries and test the results. Table 1 lists the average performance of
the 18 cases, from which we find that when N = 3 and L = 15, the algorithm
obtains the best performance. Of course, since there is nearly no acknowledged
benchmark and there are no related reports on SBI matching up to now, our experiments
and performance results can only be an attempt.
Table 1. Average Performances (%)
As for the results, we give a terse analysis. Generally speaking, discretization at
a high resolution will approach the original SBIs in a more accurate way, but too fine a
division will cause an explosion of information and data and also result in a general loss
of description; as a result, a compromise must be considered. In addition, the
performance is certain to improve as L increases, since a bigger L leads to
more adequate matching.

3 Conclusion and Further Work

In this paper, a tentative method is proposed to match spherical binary images based
on the superposition degree. Experiments show fair performance, which preliminarily
validates the idea. As for further work, global descriptors or feature vectors
analogous to those for planar images should be extracted in advance to further
support off-line matching for mass retrieval.
Acknowledgements. This research has been funded by The National Natural Science
Foundation of China (60573146).

References

1. Sing K. A Survey of Image-based Rendering Techniques. Technical Report Series, CRL 97/4, Cambridge Research Laboratory
2. Kazhdan M, Funkhouser T. Harmonic 3D shape matching. Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH Technical Sketch, Texas, (2002)
3. Shigang L, Norishige C. Estimating Head Pose from Spherical Image for VR Environment. The 3rd IEEE Pacific Rim Conference on Multimedia, Morioka, (2002), 1169-1176
4. Wu Y, He Y, Tian H. Relaxation of spherical parametrization meshes. The Visual Computer, (2005), 21(8): 897-904
5. Rhaleb Z, Christian R, Hans S. Curvilinear Spherical Parameterization. The 2006 IEEE International Conference on Shape Modeling and Applications, Matsushima, 11-18
6. Turk G. Generating Textures on Arbitrary Surfaces Using Reaction-Diffusion. Computer Graphics, (1991), 25(4): 289-298

Dynamic Data Path Prediction in Network Virtual Environment

Sun-Hee Song, Seung-Moon Jeong, Gi-Taek Hur, and Sang-Dong Ra

Digital Contents Cooperative Research Center, Dongshin University
Dept. of Digital Contents, Dongshin University
Dept. of Computer Engineering, Chosun University

Abstract. This research studies real-time interaction and dynamic data sharing
through 3D scenes in networked virtual environments. In a distributed virtual
environment of client-server structure, consistency is maintained by the
exchange of state information; as jerks occur through packet delay when update
messages for dynamic data exchanges are broadcast in a disorderly manner, the
network bottleneck is reduced by predicting the movement path using the
Dead-reckoning algorithm. The shared dynamic data of the 3D virtual environment is
implemented using the VRML EAI.
Keywords: net-VE, Dead-reckoning, Consistency.

1 Introduction
A Net-VE (Networked Virtual Environment) [1][2] is a system that connects
distributed networks to virtual reality technology and offers a 3D space in which
distributed multi-users cooperate and interact through real-time networking.
Consistency in a distributed virtual environment [3] of client-server structure is
maintained by the continuous exchange of state information among the distributed clients.
The cyclic transfer of state information brings network traffic overhead. The
precise way for network users to know the others' state is to transfer a packet by
handshaking for each frame, which incurs synchronization overhead and
decreases speed.
Based on the roles by which the dynamic data of the distributed multi-users is
processed through multicast communication via the client-server and peer-to-peer server,
the network system in this study is composed of a message server and an application server,
and it distributes service loads by allocating real-time data to
the dynamic data server and non-real-time data to the static data server. When a new
client is connected to a 3D scene of the network virtual space, it interpolates the prior
location with the Dead-reckoning [4] path prediction algorithm of DIS (Distributed
Interactive Simulation) to maintain consistency and presents the dynamic data
sharing scene of the 3D virtual space.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 150–153, 2007.
© Springer-Verlag Berlin Heidelberg 2007


2 Dynamic Data Path Prediction

2.1 Path Prediction Using Dead-Reckoning Algorithm
When the current location x(t) is known, the location at time t + Δt, after
movement for one cycle interval from time t at an average velocity, can be
calculated as in expression (1). Based on the location of the shared object in
expression (1), the object location at the current time can be estimated. The previous
location is interpolated if the error between the estimated and the actual state values
is over the predetermined threshold.

x(t + Δt) = x(t) + Δx Δt
y(t + Δt) = y(t) + Δy Δt                                      (1)
Δx = V cos θ
Δy = V sin θ

V: average velocity over the interval [t, t + Δt]
θ: average direction angle over the interval [t, t + Δt]
Fig. 1. Dead-reckoning Convergence

Fig. 1 shows the Dead-reckoning convergence process. We can get more precise
estimates by increasing the order of the estimate function in expression (2), but this results in
more complex calculation. Therefore, we use low-order terms such as the first
and second derivatives. We adjust the threshold of the Dead-reckoning
convergence and thereby control the state-information transfer rate. The client that
receives the discrete dynamic-data state information creates a continuous shared
state using the shared-state location convergence expression (2):

x(t) = x(t0) + (t − t0) dx(t)/dt |t=t0 + ((t − t0)²/2) d²x(t)/dt² |t=t0 + ⋯        (2)

Fig. 2 shows the measurement of the actual location, the convergence location, and
the estimated location error when the path of the dynamic data predicted by the
Dead-reckoning convergence, from the initial value (x, y, θ) = (1.5, 1.8, 70.0) to the value
(xn, yn, θn) = (4.62, 5.64, 70.0), is set with the velocity 2.4, the acceleration 0, the time
stamp 2.0, and the DR Intervals 0.75, 0.50, 0.10, and 0.05. When the actual location in
Table 1 and the location prediction error by convergence width are measured at the
point (xn, yn, θn) = (4.62, 5.64, 70.0), the estimated error rate gets smaller and it
becomes possible to predict a location closer to the actual path when the
Dead-reckoning convergence width is adjusted between 0.05 and 0.5, as shown in
Fig. 2. As the location prediction interpolation error is 0 or −0.01 for (b) DR Interval =
0.10 and (c) DR Interval = 0.05, the dynamic-data movement is not noticeable in the
client rendering. Although real-time rendering becomes more feasible as the consistency is
higher, a DR Interval of 0.10 makes it possible to send the location change information of the shared object to
the other clients while maintaining a proper transfer rate, because the server
load and frequent updates cause network bandwidth delays.

(a) DR Interval=0.50

(b) DR Interval=0.10

(c) DR Interval=0.05

Fig. 2. Position Prediction of Dead-reckoning Convergence

The location interpolation includes checking the error between the estimated and
the real values and interpolating the previous location when the error is over the
predetermined threshold of the Dead-reckoning convergence. If the threshold is big,
the average transfer rate of state information is low, even though the error of the
shared state gets bigger. If the threshold is small, the average transfer rate and
bandwidth usage get higher, even though the error of the shared state gets smaller.
Pt0 and Vr denote the ESPDU location and velocity, respectively. Expression (3)
interpolates the previous location of the entity, using the initial location value at
time t0 and the time-stamped velocity over the linear block dn, into the location
estimate at time t1:

Pt1 = Pt0 + Vr (t1 − t0).                                     (3)
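The estimate of expression (3) together with the threshold test can be sketched as below. This is our own illustrative Java with hypothetical names, assuming 2D positions:

```java
// Sketch of the DR threshold test: extrapolate from the last ESPDU state and
// send a new update only when the estimate drifts past the convergence threshold.
public class DrThreshold {
    /** Pt1 = Pt0 + Vr (t1 - t0), expression (3), applied per coordinate. */
    public static double[] estimate(double[] pt0, double[] vr, double t0, double t1) {
        double dt = t1 - t0;
        return new double[] { pt0[0] + vr[0] * dt, pt0[1] + vr[1] * dt };
    }

    /** True when the estimated and actual positions differ by more than the threshold. */
    public static boolean needsUpdate(double[] estimated, double[] actual, double threshold) {
        double ex = estimated[0] - actual[0];
        double ey = estimated[1] - actual[1];
        return Math.sqrt(ex * ex + ey * ey) > threshold;
    }
}
```

A bigger threshold means fewer updates (lower transfer rate, larger shared-state error); a smaller threshold means the opposite, matching the trade-off described above.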


3 Conclusion
The dynamic data whose path was predicted by the Dead-reckoning algorithm
interpolates the previous location with an interpolation node, transfers
the shared-object state information, and maintains consistency with the other clients.
In the networked 3D virtual space, the movement path was predicted using the
Dead-reckoning algorithm at the client buffer, because the congested broadcast of
interaction and state information caused network delay and jerks. The error between
the estimated and the actual state values, when it exceeded the threshold based on
the shared-object location, required interpolation of the prior location using the
Dead-reckoning estimate function and multicasting of the ESPDU packet of the DIS.
Fig. 3 shows the 3D scene output through the client rendering engine in the
network virtual space. The actual path of the dynamic data agent_A is the 'Actual Path',
and the Dead-reckoning estimated location path is the 'DR Path'. Because the dynamic data
would move abruptly when a user who received the shared state updates the information,
it does not change immediately to the client cache value, but moves along the 'Interpolation
Path' over the convergence interval.

Fig. 3. Dead-reckoning Applied to a 3D Graphics Scene

References

[1] Singhal, S., Zyda, M.: Networked Virtual Environments: Design and
Implementation. ACM Press (1999) [ISBN 0-201-32557-8]
[2] Bouras, C., Triantafillou, V., Tsiatsos, T.: Aspects of collaborative learning
environments using distributed virtual environments. In: Proceedings of ED-MEDIA,
Tampere, Finland, June 25-30 (2001) 173-178
[3] Bouras, C., Psaltoulis, D., Psaroudis, C., Tsiatsos, T.: Multi user layer in the EVE
distributed virtual reality platform. In: Proceedings of the Fifth International Workshop on
Multimedia Network Systems and Applications (MNSA 2003), Providence, Rhode Island,
USA, May 19-22 (2003) 602-607
[4] Cai, W., Lee, F.B.S., Chen, L.: An auto-adaptive Dead-reckoning algorithm for distributed
interactive simulation. In: Proceedings of the Thirteenth Workshop on Parallel and
Distributed Simulation (1999) 82-89

Modeling Inlay/Onlay Prostheses with Mesh Deformation Techniques

Kwan-Hee Yoo, Jong-Sung Ha, and Jae-Soo Yoo

Dept. of Computer Education, Chungbuk National University, Korea
Dept. of Game and Contents, Woosuk University, Korea
School of EECE, Chungbuk National University, Korea

Abstract. This paper presents a method for effectively modeling the outer
surfaces of inlay/onlay prostheses restoring teeth that are partially destroyed.
We exploit two 3D mesh deformation techniques, direct manipulation free-form
deformation (DMFFD) [9] and multiple wires deformation (MWD) [10], with
three kinds of information: standard teeth models, scanned mesh data from the
plaster cast of a patient's tooth, and a functionally guided plane (FGP) measuring the
occlusion of the patient's teeth. Our implementation can design inlay/onlay
prostheses by setting up various parameters required in dentistry while
visualizing the generated mesh models.
Keywords: Prostheses modeling, inlay/onlay, mesh deformation.

1 Introduction
Many artificial tooth prostheses are composed of cores and crowns [1]: the cores
directly contact the abutment to increase the adhesive strength of the crowns, which are
revealed to outside sight when the artificial tooth prostheses are put in.
Inlays/onlays can be regarded as a kind of single crown, used for
reconstructing a single tooth that is partially destroyed. In general, a tooth adjoins
adjacent teeth and also contacts other teeth on the opposite side when the upper
and lower jaws occlude. The adjoining surfaces on the adjacent side are called
adjacent surfaces, and the contact surfaces on the opposite side during occlusion
are called occlusal surfaces. An inlay is a prosthesis fabricated when little dental
caries or established prostheses are on the two surfaces, while an onlay is a prosthesis
fabricated when the cusp on the tongue side remains sound but other parts are destroyed.
In modeling inlays/onlays with CAD/CAM and computer graphics techniques, the
most important subject is how to model their 3D shapes exactly as dentists want to
form them. That is, the adhesive strength to the abutment must be maximized. Furthermore,
appropriate adjacency with neighboring teeth and accurate occlusal contact with
the opposite tooth have to be guaranteed. Previous research on modeling
inlays/onlays can be divided into two categories: 2D image-based [2-5] and 3D mesh-based [6,7].
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 154–157, 2007.
© Springer-Verlag Berlin Heidelberg 2007



Our method adopts a mesh-based modeling approach similar to the GN-1
system [7]. In this paper, however, instead of taking the side view of 3D scanners for
producing mesh models as in the GN-1 system, an inlay/onlay is modeled by dividing its
surface into two parts: an inner surface adhering to the abutment and an outer surface
revealed to outside sight. The inner surfaces of inlays/onlays are modeled in the same way as
in Yoo et al. [8] with the 2D Minkowski sum: a new model is computed as
the expansion of a terrain model with expansion values given by users. This paper
focuses on modeling the outer surface, which is the union of two subparts, the
adjacent and occlusal surfaces, by deforming the corresponding standard tooth
according to the inherent features of each tooth.

2 Modeling the Outer Surfaces of Inlay/Onlays

The standard teeth models include the information of axes and geometric features for all
teeth in the upper and lower jaws. First, the standard teeth are transformed and aligned
to the patient's teeth by referencing the arrangement information of the former, such as
adjacent points, tongue-side points, and lingual-side points, and the positional information of
the latter. Then, adjacent surfaces are generated by applying the technique of direct
manipulation free-form deformation (DMFFD) [9] to a standard tooth, considering
the contact points. On the other hand, occlusal surfaces are generated by applying the
technique of multiple wires deformation (MWD) [10] to the two corresponding
polygonal lines that are, respectively, extracted from a standard tooth and the FGP.
The DMFFD [9] is an extended version of free-form deformation (FFD) [11],
which directly controls the points on the mesh for the deformation. For an arbitrary
point X and a set P of control points, they define the deformation equation in the
matrix form X = BP. Here the matrix B is obtained from the B-spline
blending functions with the three parametric values that are determined from the
given X. The transformed point X' is then represented as B(P + ΔP), that is,
ΔX = BΔP. For moving a given point X by the amount ΔX, the amount ΔP for
moving the control points can be inversely computed as

ΔP = B⁺ ΔX.                                                   (1)

In the above equation, B⁺ is a pseudo-inverse of the matrix B. If we apply FFD
to X with the computed ΔP, X is transformed into X'. Hence, it is possible to
deform a mesh intentionally if we apply FFD to all vertices of the mesh after
computing ΔP for each vertex in the same way.
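For the special case of a single constrained point, B reduces to one row b of blending weights, and the pseudo-inverse takes the closed form B⁺ = bᵀ/(b·bᵀ). The following Java sketch (our own illustration under that assumption, not the paper's code) computes the least-norm control-point displacements ΔP for a desired ΔX:

```java
// DMFFD sketch for one constrained point: ΔP = bᵀ ΔX / (b·bᵀ).
// Re-applying FFD then yields b·ΔP = ΔX, i.e. the point moves exactly by ΔX.
public class DmffdSolve {
    /** b: blending weights of the n control points; deltaX: desired 3D displacement. */
    public static double[][] solve(double[] b, double[] deltaX) {
        double bbT = 0.0; // b·bᵀ = Σ bᵢ²
        for (double w : b) bbT += w * w;
        double[][] deltaP = new double[b.length][3];
        for (int i = 0; i < b.length; i++)
            for (int k = 0; k < 3; k++)
                deltaP[i][k] = b[i] * deltaX[k] / bbT; // spread ΔX over the control points
        return deltaP;
    }
}
```

With several constrained points, B has one row per constraint and a general pseudo-inverse (e.g. via SVD) would replace this closed form.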
The deformation technique of multiple wires deformation (MWD) [10] is used for
more naturally deforming the wired curves representing the geometric features of cusp,
ridge, fissure, and pit, which are extracted after scanning the FGP. A wired curve is
represented as a tuple <W, R, s, r, f>, where W and R are free-form parametric
curves that are identical in the initial state, s is a scalar for adjusting the radial size of
the curve circumference, r is a value representing the range of effect around the curve,
and f is a scalar function f : R⁺ → [0, 1]. The function f
guarantees at least C¹-continuity, and satisfies the properties
f(0) = 1, f(x) = 0 for x ≥ 1, and f'(1) = 0. Our implementation uses the C¹-continuous
function f(x) = (x² − 1)², x ∈ [0, 1], as in [10]. As R is deformed into W, an arbitrary
point p near R is deformed accordingly. Let pR be the point on R nearest to p, and
pW be the corresponding point on W. Then pR and pW have the same curve
parameter value. When W is deformed, the point p moves to p' as

p' = p + (pW − pR) f(x).                                      (2)


In Equation (2), f(x) is a function of three parameters R, p, and r, where r
represents a range. Generally, we define

x = ||p − pR|| / r.                                           (3)

We can move the point p to p' by deforming W with an expansion parameter s for
changing the wire size:

p' = p + (s − 1)(p − pR) f(x) + (pW − pR) f(x).               (4)

Evidently, the above equation has the property that the expansion parameter s
moves the point p in the direction p − pR. This principle of wire deformation is
extended for deforming multiple wires. Let Δpi be the variation of p when
the wire Wi is deformed. Then the deformed point p' obtained by deforming all
the wires Wi, i = 1, ..., n, is written as

p' = p + ( Σ_{i=1}^{n} Δpi fi(x)^m ) / ( Σ_{i=1}^{n} fi(x)^m ).   (5)

The parameter m is used for locally controlling the shapes of the multiple wires, i.e.,
it controls the effects of Wi and si during the deformation. For example, the effects
of Wi and si rapidly increase with an increasing value of m when fi(x)
approaches 1.
In modeling the occlusal surfaces, Ri is the curve interpolating all points of the
geometric feature lines of cusp, ridge, fissure, and pit. The wired curve Wi
corresponding to Ri is determined by more complicated computations: for each
segment Li of the polygonal lines, we compute the intersection line segment Li'
between the FGP and a z-axis-parallel plane passing through Li, cut Li' so that it has the same x- and
y-coordinates as the end points of Li, and finally obtain a curve interpolating
all points of the cut line segments. Since the two curves interpolate the same number
of points, we can get the parameter value of the curves for any point on Wi. In our
implementation, we use the Catmull-Rom curve for the interpolating curves, and the
function suggested by Singh et al. [10] for f. Our implementation assigns the
values 1, 5, and 1 to si, ri, and m, respectively. For all points p on the
standard tooth, W, and R, we compute pR and pW and then obtain Δpi with
Equation (4). By applying Equation (5) to the Δpi of all wired curves and f, we can get
the finally deformed point q'.
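A minimal 2D sketch of the single-wire update with expansion parameter s, assuming pR and pW are already known for the point p (our own illustrative Java, with illustrative names):

```java
// Wire deformation sketch: p' = p + (s-1)(p - pR) f(x) + (pW - pR) f(x),
// with x = |p - pR| / r and the C1 falloff f(x) = (x^2 - 1)^2 on [0, 1].
public class WireDeform {
    static double f(double x) {
        if (x >= 1.0) return 0.0; // no influence outside the range r
        double t = x * x - 1.0;
        return t * t;             // f(0) = 1, f(1) = 0, f'(1) = 0
    }

    static double[] deform(double[] p, double[] pR, double[] pW, double s, double r) {
        double dx = p[0] - pR[0], dy = p[1] - pR[1];
        double fx = f(Math.sqrt(dx * dx + dy * dy) / r);
        return new double[] {
            p[0] + (s - 1.0) * dx * fx + (pW[0] - pR[0]) * fx,
            p[1] + (s - 1.0) * dy * fx + (pW[1] - pR[1]) * fx
        };
    }
}
```

A point lying on R (x = 0, f = 1) follows the wire displacement exactly, while points farther than r are untouched; the multiple-wire case averages such per-wire displacements weighted by fi(x)^m.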



3 Experiments and Future Work

Our system for modeling inlays/onlays is implemented with the Microsoft
Foundation Classes (MFC) 6.0 and the OpenGL graphics library on a PC. Fig. 1
illustrates the designed outer surface for an onlay.





Fig. 1. Designing the outer surface for an onlay; (a) a standard tooth model, (b) a scanned FGP
model, (c) a finally designed onlay, and (d) an onlay put in on the abutment

For more accurate modeling, several parameters can be set up through a simple
interface while the designed inlays/onlays are visualized. Currently, our implementation
takes the adjacent points manually for designing the adjacent surfaces of inlays/onlays.
In the future, an automatic method for determining such adjacent points needs to be
developed. Another research subject is to simulate the teeth occlusion by using the
FGP and the geometric features of teeth.
Acknowledgements. This work was partially supported by the Korea Research
Foundation Grant funded by the Korean Government (MOEHRD) (KRF-2006-D00413).

References

1. Yoon, C.G., Kang, D.W., Chung, S.M.: State-of-arts in Fixed Prosthodontics. Jongii Press,
Korea (1999)
2. Myszkowski, K., Savchenko, V.V., Kunii, T.L.: Computer modeling for the occlusal
surface of teeth. In: Proc. of Conf. on Computer Graphics International (1996)
3. Savchenko, V.V., Schmitt, L.M.: Reconstructing occlusal surfaces of teeth using a genetic
algorithm with simulated annealing type selection. In: Proc. of Solid Modeling (2001) 39-46
4. Yoo, K.Y., Ha, J.S.: User-Steered Methods for Extracting of Geometric Features in 3D
Meshes. Computer-Aided Design and Applications, Vol. 2 (2005)
5. Sirona Corporation, (1985)
6. Nobel Digital Process Corporation: Procera Systems. Nobel Digital Process Corporation,
Sweden (2001)
7. GC Corporation: GN-I Systems. GC Corporation, Japan (2001)
8. Yoo, K.Y., Ha, J.S.: An Effective Modeling of Single Core Prostheses using Geometric
Techniques. Journal of Computer-Aided Design, Vol. 37, No. 1 (2005)
9. Hsu, W.M., Hughes, J.F., Kaufman, H.: Direct manipulation of free-form deformations. In:
Computer Graphics (SIGGRAPH '92), Vol. 26 (1992) 177-184
10. Singh, K., Fiume, E.: Wires: A Geometric Deformation Technique. In: SIGGRAPH (1998)
11. Sederberg, T., Parry, S.: Free-form deformation of solid geometric models. In: Computer
Graphics (SIGGRAPH '86) (1986) 151-160

Automatic Generation of Virtual Computer Rooms on the Internet Using X3D

Aybars Ugur and Tahir Emre Kalaycı

Ege University, 35100 Bornova-Izmir, Turkey

Abstract. In this paper, some natural links between virtual reality and
interactive 3D computer graphics are specified, and Web3D technologies, especially VRML and X3D, are briefly introduced. A web-based tool
called EasyLab3D was designed and implemented using X3D.
This web-based tool is used to automatically generate 3D virtual computer rooms that can be navigated. It is important for the introduction
of departments, companies and organizations which have computer laboratories, and for planning new computer rooms. Finally, state-of-the-art
technologies and methods in the development of automatic 3D scene and
model generation tools are discussed.


1 Introduction

Sherman and Craig [1] define VR (Virtual Reality) as a medium composed of
interactive computer simulations that sense the participant's position and actions and replace or augment the feedback to one or more senses, giving the
feeling of being mentally immersed or present in the simulation. According to
them, the key elements of experiencing VR are a virtual world, immersion, sensory feedback, and interactivity. It is the use of computer graphics systems in
combination with various display and interface devices to provide the effect of
immersion in the interactive 3D computer-generated environment. We call such
an environment a virtual environment [2].
High-quality interactive 3D content on the web without bandwidth and platform limitations allows internet users to become fully immersed in realistic 3D
worlds. Many Web3D technologies have been developed to give people real-time,
three-dimensional and interactive computer graphics on the web.
In this study, we designed and implemented an X3D-based tool (EasyLab3D)
which is used to automatically generate, and navigate in, virtual computer rooms
on the web using state-of-the-art technologies.

2 Extensible 3D (X3D)

Web3D is simply 3D graphics on the web. VRML-NG (X3D) arose in 1999
as a result of efforts to carry 3D to all environments. X3D, which is developed
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 158–161, 2007.
© Springer-Verlag Berlin Heidelberg 2007



by the Web3D Consortium, is the ISO standard for real-time 3D computer graphics.
XML was adopted as the syntax for X3D in order to solve a number of real problems
of VRML. According to Yumetech president Alan Hudson, the reasons to use
XML as the syntax are to interoperate with the Web and to incorporate new graphics
technologies in a standardized way. The main X3D features are extensions to VRML
(e.g., Humanoid Animation, NURBS, GeoVRML, etc.), the ability to encode the
scene using an XML syntax as well as the Open Inventor-like syntax of VRML97,
and enhanced application programmer interfaces.
Blais et al. [3] present a Web3D application for military education and training.
Patel et al. [4] describe an innovative system designed for museums to create,
manage and present multimedia-based representations of museum artifacts in
virtual exhibitions both inside and outside museums. Barbieri et al. [5] developed
a computer science virtual museum which can be visited online and which contains
simple interactive games that illustrate basic principles of computer science.
Some other projects based on X3D are explained in [6] and [7].

3 Automatic Computer Room Generation Tool

We developed a web-based automatic virtual computer room generation tool
(EasyLab3D) using Java and Xj3D. Features such as navigation in 3D
room scenes and realistic presentation are important for introducing
computer laboratories to visitors or internet users. This tool can also be used for
designing new labs by providing 3D previews. The 3D model of a computer laboratory
(Ege Lab) is generated using EasyLab3D in a few seconds (Fig. 1).

Fig. 1. Ege Lab in our department generated by EasyLab3D (Perspective View)

The 3D models (table/desk, room) required by the program were developed using Flux
Studio 2.0 Beta. All models were developed as prototypes to reduce the file size
and to use an object many times without coding the same things repeatedly. Created
model prototypes are stored on web space for online use.
Scenes are generated using our layout algorithms, and prototype instances are
created with calculated transformations and put in a temporary file on the fly. Thus the
user can save the generated scene as a file and publish it on a web site. The user
can also examine these files later using other browsers and plug-ins. Scenes
created on the fly are shown in the Xj3D browser using the Java programming language.
Xj3D makes it possible to change X3D scenes dynamically using the SAI (Scene Access
Interface). Java Web Start technology is used to make the program easier to run
and to download all required libraries automatically.
Automatic generation algorithms for the most popular rectangular computer
room layouts are implemented in this project. Computers are placed on the tables
in the same order. Computer Room Layout 1 has equally sized gaps
between computer tables, and Layout 2 includes one corridor with two computer
blocks, one at the left side of the room and one at the right side, as shown in Fig. 2.
Room parameters given by the user are width (sizeX), length (sizeY), height
and distance to board (gapY, the minimum feasible distance between board and
tables). Table/desk parameters are width (sizeX, width of table), length (sizeY)
and height. Some other parameters are Desk Count (room.tableCount), Sitting
Gap (table.gapY, the feasible distance between tables, along the Y axis, at which a computer user works
easily), and Desk Gap (table.gapX, the gap between tables along the X axis).

Fig. 2. Computer Room Layout 1 (on the left) and Layout 2 (on the right)

The following Java-like algorithm generates the 3D model of a computer room for Layout 1:

// Calculate the table capacity along the room width (X)
room.tCountX = (room.sizeX + table.gapX) / (table.sizeX + table.gapX);
// Calculate the table capacity along the room length (Y)
room.usableLength = room.sizeY - room.gapY;
room.tCountY = (room.usableLength + table.gapY) / (table.sizeY + table.gapY);
room.maxTableCount = room.tCountX * room.tCountY;
// Placement
foreach (table in room) calculatePosition(table);

Tables and computers exceeding the room capacity are not inserted into the
scene, as specified in the algorithm.
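A runnable version of the capacity part of the algorithm might look like the following (our own Java; the integer division models the fact that tables exceeding the capacity are dropped):

```java
// Capacity calculation for Layout 1: how many tables fit along X and Y.
public class RoomCapacity {
    public static int maxTables(double roomSizeX, double roomSizeY, double roomGapY,
                                double tableSizeX, double tableSizeY,
                                double tableGapX, double tableGapY) {
        // Tables per row: each table needs its width plus one desk gap.
        int tCountX = (int) ((roomSizeX + tableGapX) / (tableSizeX + tableGapX));
        // Length left after reserving the distance to the board.
        double usableLength = roomSizeY - roomGapY;
        int tCountY = (int) ((usableLength + tableGapY) / (tableSizeY + tableGapY));
        return tCountX * tCountY;
    }
}
```

For example, a 10 x 8 m room with a 2 m board gap, 1.5 x 1 m tables, a 1 m desk gap and a 1 m sitting gap holds 4 x 3 = 12 tables.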


4 Conclusion

New-generation 3D languages, APIs and key Web3D technologies (X3D, Java
3D) offer possibilities for creating and manipulating 3D objects and scenes
easily. X3D provides support to insert 3D shapes, text and objects into a scene, to
transform objects, to modify attributes of objects, and more, such as grouping,
animation, illumination, etc. X3D also uses XML's advantages such as rehostability, page integration, integration with next-generation web technologies, and
extensive tool-chain support. Developments in Web3D technologies and authoring tools are important for the future of VR.
An increasing number of easy-to-use web-based 3D graphics projects like EasyLab3D will make 3D graphics a natural part of the web and will improve web quality.
A 3D interactive model of a computer lab is more than a series of 2D pictures and is
also more enjoyable and realistic. Companies and organizations can use this tool
to generate 3D models of their labs by giving only a few parameters. Internet
users can access and navigate in these lab models. This automatic generation
tool is also useful for the interior design of new labs by providing 3D previews.

References

1. Sherman, W.R., Craig, A.B.: Understanding Virtual Reality: Interface, Application,
and Design. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (2002)
2. Sowizral, H.A., Deering, M.F.: The Java 3D API and virtual reality. IEEE Computer
Graphics and Applications 19 (1999) 12-15
3. Blais, C., Brutzman, D., Horner, D., Nicklaus, S.: Web-based 3D technology for
scenario authoring and visualization: The SAVAGE project. In: I/ITSEC (2001)
4. Patel, M., White, M., Walczak, K., Sayd, P.: Digitisation to presentation - building
virtual museum exhibitions. In: VVG (2003) 189-196
5. Barbieri, T., Garzotto, F., Beltrame, G., Ceresoli, L., Gritti, M., Misani, D.: From
dust to stardust: A collaborative 3D virtual museum of computer science. In: ICHIM
(2) (2001) 341-345
6. Hetherington, R., Farrimond, B., Presland, S.: Information-rich temporal virtual
models using X3D. Computers & Graphics 30 (2006) 287-298
7. Yan, W.: Integrating web 2D and 3D technologies for architectural visualization: applications of SVG and X3D/VRML in environmental behavior simulation. In: Web3D '06:
Proceedings of the Eleventh International Conference on 3D Web Technology, New
York, NY, USA, ACM Press (2006) 37-45

Stained Glass Rendering with Smooth Tile Boundary

SangHyun Seo, HoChang Lee, HyunChul Nah, and KyungHyun Yoon

ChungAng University,
221, HeokSuk-dong, DongJak-gu, Seoul, Korea

Abstract. We introduce a new glass tile generation method for simulating stained glass, using a region segmentation algorithm and cubic spline
interpolation. We apply the Mean Shift segmentation algorithm
to a source image to extract the shapes of glass tiles. We merge regions
according to user input and use morphological operations to remove invalid
shapes. To make the shape of a glass tile, we apply cubic spline interpolation and obtain the leading and regions with smooth boundaries. Next,
we re-segment the regions using the spline curves. Finally, we apply
transformed colors to each region to create the whole glass tile.


1 Introduction

This study makes a stained glass image that looks as if manually produced
by artists, using a 2D image as the input. Stained glass rendering is a
field of NPR (Non-Photorealistic Rendering) and is very different from
traditional realistic rendering. While the rendering primitive of realistic
rendering is a pixel, that of stained glass rendering is a region, a collection
of pixels. Therefore the output image may vary according to the size and
shape of the regions. In this paper, we introduce a new method
to create glass tiles. Stained glass is made by cutting and pasting glass;
therefore the unit of stained glass is a glass tile. As the conventional algorithm
creates glass tiles simply using region segmentation, it cannot achieve
a stained-glass feeling if the region segmentation is not appropriate.
In order to resolve this problem, we interpolate the boundaries between the
regions and re-segment each segmented area to create the regions that compose glass tiles.

2 Related Work

Although many different studies have been performed since the studies on
NPR were started by Strothotte [1], recently there have been many attempts to
simulate stained glass using computer technology.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 162–165, 2007.
© Springer-Verlag Berlin Heidelberg 2007



In Photoshop, the Stained Glass filter is one of them. The Stained Glass
filter of Photoshop basically makes the image using a Voronoi diagram with
random Voronoi sites (Ho [5]).
Mould [2] approached the study of stained glass in a different way.
This method divided the input image using a color-based region segmentation
method, and then mitigated the segmented regions by applying the
morphological operations. However, the regions created by Mould's [2] method are
far from formative structures.


3 Glass Tile Generation for the Stained Glass

3.1 Region Generation of Glass Tiles

It is very important to abstract the segments that can be expressed as glass tiles.
In this study, we used the Mean Shift segmentation algorithm (Comaniciu [4]) to
generate the basic segmented regions for creating the glass tiles (Fig. 1(b)).
Additionally, as regions with unexpected shapes can be created by applying the
segmentation algorithm, we made it possible for the user to
merge regions through input (Fig. 1(c)).

Fig. 1. Stained glass rendering process (a) Input image (b) Mean shift segmentation
(c) Region merge (d) Morphological operation (e) Region boundary interpolation (f)
Region re-division (g) Rendered image after color transform


3.2 Interpolation of the Region Boundaries

In order to refine each segment, we used morphological operations (Fig. 1(d)) and
cubic spline interpolation (Fig. 1(e)), because the abstracted regions have
improperly shaped parts created by the Mean Shift segmentation algorithm and
rough boundaries in the form of noise.
Although Mould [2] tried to resolve these two problems using the morphological operations only, this study used them to remove the improperly shaped segments
and used cubic spline interpolation to smooth the rough boundaries.


S. Seo et al.

Next, to smooth the boundaries of the cleaned-up regions, we applied cubic spline interpolation. To calculate the cubic spline interpolation we need to select control points. To do so, we defined a control-point distance and then selected the point on the boundary at the pre-set distance from the current control point (Fig. 2(a)).
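The control-point selection and the boundary smoothing can be sketched as follows. The `spacing` walk implements the pre-set control-point distance described above; for the cubic itself we substitute a Catmull-Rom cubic (a local interpolating cubic) in place of the paper's cubic spline, purely for brevity. Points must be passed as NumPy arrays.

```python
import numpy as np

def pick_control_points(boundary, spacing):
    """Keep a boundary point each time the accumulated arc length
    reaches `spacing` -- the pre-set control-point distance."""
    ctrl, acc = [boundary[0]], 0.0
    for a, b in zip(boundary, boundary[1:]):
        acc += np.linalg.norm(np.subtract(b, a))
        if acc >= spacing:
            ctrl.append(b)
            acc = 0.0
    return np.array(ctrl)

def catmull_rom(p0, p1, p2, p3, t):
    """Local interpolating cubic through p1..p2 (stand-in for the
    paper's cubic spline); t runs from 0 (at p1) to 1 (at p2)."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
ctrl = pick_control_points(square, 1.0)   # every vertex is kept here
```

Evaluating the cubic between consecutive control points yields the smooth tile boundary.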

Fig. 2. (a) Search for control points for spline interpolation (b) Region re-segmentation process (c) Re-segmentation result

During the interpolation process we applied the same method to create the leading that fills the gaps between glass pieces in actual stained glass. The parts that did not form a segment are extracted as leading by creating and applying spline curves smaller than the basic segments.

Re-segmentation of the Region

To secure the formative beauty seen in actual stained glass, we re-segmented the large segment regions (Fig. 1(f)).
In the re-segmentation process, we made a random curve and re-segmented the regions based on that curve.
First, we select a random point in the region and identify two points on the boundary near the selected point to use as control points. Based on the three selected points, we create a curve using cubic spline interpolation (Fig. 2(b)). We limited the number of control points to three because using too many points on a segment would bend the curve too much, and it is improper to apply an over-bent curve to stained glass rendering (Fig. 2(c)).

Determination of the Colors for Each Region

To designate the colors of the glass tiles, Mould [2] converted the colors in the input image to colors that could have been used in the Middle Ages. We applied the same method in this study. Through this process, we obtained strong color contrast effects (Fig. 1(g)).
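A minimal sketch of such a color transform: map every region color to its nearest neighbor in a restricted palette. The palette values below are illustrative stand-ins chosen by us, not Mould's actual medieval glass colors.

```python
import numpy as np

# Illustrative stand-in palette -- NOT Mould's actual medieval colors.
palette = np.array([[178, 34, 34],    # deep red
                    [25, 25, 112],    # cobalt blue
                    [218, 165, 32],   # amber
                    [34, 139, 34]])   # green

def to_glass_color(rgb):
    """Replace an input color by the nearest palette color
    (Euclidean distance in RGB)."""
    d = np.linalg.norm(palette - np.asarray(rgb, dtype=float), axis=1)
    return palette[int(np.argmin(d))]
```

Snapping every tile to a small palette is what produces the strong color contrast described above.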




In this study, we simulated stained glass images by creating smooth glass tile shapes similar to those in actual stained glass, interpolating the boundaries between the segments, and then creating the frame of leading with irregular thickness that is found between the glass tiles. Additionally, we created the formative characteristic of composing a meaningful segment from gathered small glass tiles through re-segmentation of the segments. We also expressed strong color contrast through the color conversion process (Fig. 3). To emphasize the formative shapes, we highlighted the boundaries before re-segmentation with thick lines. As stained glass is mainly used in windows, we applied a round light-source effect to the image to obtain lighting effects. Fig. 3 shows a comparison between the image after the light-source effect and the image from Mould [2]. Actual stained glass is made of pieces of colored glass.

Fig. 3. Result Images

References

1. Strothotte, T., Schlechtweg, S.: Non-Photorealistic Computer Graphics: Modeling, Rendering and Animation. Morgan Kaufmann (2002)
2. Mould, D.: A Stained Glass Image Filter. In: Proceedings of the 14th EUROGRAPHICS Workshop on Rendering (2003) 20-25
3. Grodecki, L., Brisac, C.: Gothic Stained Glass. Thames and Hudson, London (1985)
4. Comanicu, D., Meer, P.: Mean shift: a robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Machine Intell. 24(4) (2002) 603-619
5. Hoff, K., Keyser, J., Lin, M., Manocha, D., Culver, T.: Fast Computation of Generalized Voronoi Diagrams Using Graphics Hardware. In: Proceedings of SIGGRAPH 99 (1999) 277-286
6. Gonzalez, R., Woods, R.: Digital Image Processing. Addison Wesley (1993)
7. Finkelstein, A., Range, M.: Image Mosaics. Technical Report, Princeton Univ. (1998)

Guaranteed Adaptive Antialiasing Using Interval Arithmetic

Jorge Flórez, Mateu Sbert, Miguel A. Sainz, and Josep Vehí

Institut d'Electrònica, Informàtica i Automàtica,
Universitat de Girona, Girona 17071, Spain

Abstract. Interval arithmetic has been used to create guaranteed intersection tests in ray tracing algorithms. Although those algorithms improve the reliability of the visualization of implicit surfaces, they do not provide an alternative that avoids point sampling inside the pixel. In this paper, we develop an interval adaptive antialiasing algorithm (IAA) based on studying the coherence of sets of rays crossing a pixel (instead of individual rays) in order to detect variations over the hit surface. This method allows us to obtain better visualizations than traditional interval ray tracing algorithms.


Ray tracing of implicit surfaces suffers from accuracy problems related to thin features that disappear when certain surfaces are rendered. This occurs because computers cannot guarantee the robustness of floating-point operations during the intersection test [3,6]. Many authors have proposed reliable ray tracing algorithms that perform guaranteed intersection tests based on interval arithmetic [2,7,8].
However, those authors do not propose a reliable way to reduce aliasing in the visualization of the surfaces. An alternative is to use adaptive sampling [9]. In this technique, rays are traced through every corner of the pixel. If the values are too different, the pixel is subdivided and new rays are traced through the new corners. Because it is still possible to miss thin parts of the surface, this method uses bounding boxes for small objects. If a ray intersects a bounding box, the sampling rate is increased to guarantee that view rays do not miss the object. Although effective in most cases, this technique does not work very well with long thin objects [4].
Other approaches are based on gathering information from adjacent rays, such as cone tracing [1] and beam tracing [5]. The main disadvantage of those proposals is that they require computationally complex intersection tests.
This paper introduces a method called interval adaptive antialiasing (IAA)
with the following characteristics:
- This method examines areas of the pixel instead of points, as in point sampling. Interval arithmetic is used to guarantee that small parts of the surface



inside the pixel are not missed. All the rays that cover an area of the pixel are treated as a unique ray.
- The information obtained from sets of rays is studied to determine whether the area covered by the rays presents too much variation over the surface.
- This method does not require bounding boxes to detect small features, as adaptive sampling does. Also, the complexity of the intersection test is almost the same as in traditional interval ray tracing.

Interval Adaptive Antialiasing (IAA)

The intersection between the implicit function f(x, y, z) = 0 and a ray defined by

x = sx + t(xp - sx);  y = sy + t(yp - sy);  z = sz + t(zp - sz)

is given by the function

f(sx + t(xp - sx), sy + t(yp - sy), sz + t(zp - sz))

where (sx, sy, sz) are the coordinates of the origin or view point, (xp, yp, zp) are the coordinates of a point on the screen, and t indicates the magnitude in the direction of the ray. If the parameter t is replaced with an interval T, a set of real values can be evaluated instead of a unique value. To cover pixel areas instead of points, the real values xp and yp on the screen must be considered as interval values too. The function including the new interval values can be defined as follows:

F(Xp, Yp, T) = f(sx + T(Xp - sx), sy + T(Yp - sy), sz + T(zp - sz))   (1)


To perform the evaluation with equation 1, the intervals Xp and Yp must be fixed to a range of values inside the pixel, and a bisection process must be started over the parameter T. Every interval generated by the subdivision of T is evaluated to determine whether the set of rays intersects the surface. If the set of rays does not intersect any part of the implicit surface, then the result of the evaluation of equation 1 does not contain zero (0 ∉ F(Xp, Yp, T)). Otherwise, it is possible that one or more rays in that pixel area intersect the surface. In that case, the parameter T must be subdivided until machine precision is achieved.
To save the values of T near the intersection of the set of rays, the following process is performed: when 0 ∈ F(Xp, Yp, T) and F(Xp, Yp, T.Inf) > 0, the infimum value of the result (the least positive) is saved in a vector. Also, if 0 ∈ F(Xp, Yp, T) and F(Xp, Yp, T.Sup) < 0, the maximum value is saved in another vector. When the subdivision process is over, the smallest of the positive values and the biggest of the negative values are taken to create the interval of the final value of T.
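The bisection over T can be sketched in a few lines of Python. The interval arithmetic here is hand-rolled, and the test scene is a single ray against a unit sphere (the paper evaluates whole pixel areas via interval Xp, Yp); function names and the termination threshold are our own choices.

```python
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def imul(a, b):
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(ps), max(ps))
def isq(a):
    """Tight interval square: never negative."""
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

def F(T, s, d):
    """Interval extension of the unit sphere x^2 + y^2 + z^2 - 1
    along the ray x = s + T d."""
    acc = (-1.0, -1.0)
    for si, di in zip(s, d):
        acc = iadd(acc, isq(iadd((si, si), imul(T, (di, di)))))
    return acc

def first_hit(s, d, T=(0.0, 10.0), eps=1e-6):
    """Bisect T, pruning subintervals whose evaluation excludes zero
    (guaranteed misses); return the leftmost surviving sliver."""
    stack, best = [T], None
    while stack:
        t = stack.pop()
        lo, hi = F(t, s, d)
        if not (lo <= 0.0 <= hi):
            continue                       # 0 not in F: no hit in t
        if t[1] - t[0] < eps:
            if best is None or t[0] < best[0]:
                best = t
            continue
        m = 0.5 * (t[0] + t[1])
        stack += [(m, t[1]), (t[0], m)]
    return best

hit = first_hit((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))   # sphere hit near t = 2
```

The pruning step is what makes the test guaranteed: an interval discarded because its evaluation excludes zero provably contains no intersection.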
The interval T is used to detect variations over the surface in the following way: using T, the interval values of X, Y and Z are calculated. Those values correspond to the set of all intersections of the set of rays. Also, the interval normal is calculated using the derivative F′(X, Y, Z), which is the derivative of the implicit function evaluated with the interval values of X, Y, Z. Finally, the interval dot product between the set of normals and the view rays


is calculated. If the width of the interval containing the dot products between the set of rays and the normals is bigger than a predefined threshold, or if the surface is not monotonic for the values of T, the surface varies too much in the evaluated area.
The interval adaptive antialiasing is performed for every pixel as follows: the whole area of the pixel is evaluated using the process described in Section 2.1. If the surface varies too much inside the pixel, the pixel is divided into four subpixels and the process is repeated on them. Otherwise, the pixel or subpixel under evaluation is shaded using the average of the normals. If the pixel was divided, the average of the shade values of the subpixels is used to obtain the final shade value of the pixel.
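The per-pixel recursion above reads naturally as a small function. The variation test and shading are passed in as callables because the paper's versions (interval dot-product width, monotonicity over T, normal averaging) depend on machinery not shown here; `max_depth` is our addition to bound the recursion.

```python
def shade_pixel(area, varies_too_much, shade, depth=0, max_depth=4):
    """Evaluate the whole (sub)pixel area; split into four subpixels
    while the surface varies too much, then average the shades."""
    x0, y0, x1, y1 = area
    if depth == max_depth or not varies_too_much(area):
        return shade(area)
    mx, my = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    quads = [(x0, y0, mx, my), (mx, y0, x1, my),
             (x0, my, mx, y1), (mx, my, x1, y1)]
    return sum(shade_pixel(q, varies_too_much, shade, depth + 1, max_depth)
               for q in quads) / 4.0

# toy predicates: "varies" while wider than 0.5; constant shade 1.0
value = shade_pixel((0.0, 0.0, 1.0, 1.0),
                    lambda a: (a[2] - a[0]) > 0.5,
                    lambda a: 1.0)
```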

Experimentation and Results

The IAA method was tested on the surfaces presented in Figure 1. The comparisons were performed between an adaptive algorithm using a traditional ray tracing algorithm and our interval adaptive algorithm. Figures 1a and 1b show a twist with a shadow. The problems in the visualization in (a) occur because shadow rays miss the thin details of the twist. Also, the visualization of Figure 1a takes 27 minutes; Figure 1b takes 20 minutes. The time difference

Fig. 1. Experimentation images. (a) Fine details of the shadow are not well visualized using traditional interval ray tracing. (b) Using IAA, those details are better visualized. (c) A blobby surface rendered by the IAA algorithm. (d) A tri-trumpet surface, in which some sections appear separated although interval arithmetic is used for the intersection test, as shown in (e). Using IAA, the surface is rendered correctly (f).



is due to IAA detecting pixels without too much variation inside, using only one intersection test. With the traditional ray tracing algorithm, at least four rays are traced for every pixel.


In this paper we have presented an interval adaptive antialiasing method (IAA) for the interval ray tracing of implicit surfaces. It can be adapted to the traditional interval algorithms used to ray trace implicit surfaces without increasing the complexity of the intersection tests. Also, the proposed technique generates better visualization results than methods based on point sampling, especially for surfaces with thin features. IAA is completely based on interval arithmetic, which guarantees the reliability of the algorithm. In this paper, sets of rays are traced for view and shadow rays; as future work, we plan to apply our method to reflections and refractions.

Acknowledgments. This work has been partially funded by the European Union (European Regional Development Fund) and the Spanish Government (Plan Nacional de Investigación Científica, Desarrollo e Innovación Tecnológica, Ministerio de Ciencia y Tecnología) through the co-ordinated research projects DPI2002-04018-C02-02, DPI2003-07146-C02-02, DPI2004-07167-C02-02, DPI2005-08668-C03-02 and TIN2004-07451-C03-01, and by the government of Catalonia through SGR00296.

References

1. Amanatides, J.: Ray Tracing with Cones. Computer Graphics 18 (1984) 129-135
2. Capriani, O., Hvidegaard, L., Mortensen, M., Schneider, T.: Robust and efficient ray intersection of implicit surfaces. Reliable Computing 1(6) (2000) 9-21
3. Flórez, J., Sbert, M., Sainz, M., Vehí, J.: Improving the interval ray tracing of implicit surfaces. Lecture Notes in Computer Science 4035 (2006) 655-664
4. Genetti, J., Gordon, D.: Ray Tracing With Adaptive Supersampling in Object Space. Graphics Interface (1993) 70-77
5. Heckbert, P., Hanrahan, P.: Beam Tracing Polygonal Objects. Computer Graphics 18 (1984) 119-127
6. Kalra, D., Barr, A.: Guaranteed ray intersection with implicit surfaces. Computer Graphics (Siggraph proceedings) 23 (1989) 297-306
7. Mitchell, D.: Robust ray intersection with interval arithmetic. In: Proceedings of Graphics Interface '90 (1990) 68-74
8. Sanjuan-Estrada, J., Casado, L., García, I.: Reliable Algorithms for Ray Intersection in Computer Graphics Based on Interval Arithmetic. XVI Brazilian Symposium on Computer Graphics and Image Processing (2003) 35-44
9. Whitted, T.: An Improved Illumination Model for Shaded Display. Communications of the ACM 23 (1980) 343-349

Restricted Non-cooperative Games

Seth J. Chandler
University of Houston Law Center

Abstract. Traditional non-cooperative game theory has been an extraordinarily powerful tool in modeling biological and economic behavior, as well as the effect of legal rules. But although it contains plausible concepts of equilibrium behavior, it does not contain a theory of dynamics describing how equilibria are to be reached. This paper on Restricted Non-Cooperative Games inserts dynamic content into traditional game theory, and thus permits modeling of more realistic settings, by imposing topologies that restrict the strategies available to players. It uses Mathematica to show how the payoff array used in conventional game theory, coupled with these strategy topologies, can construct a "game network", which can be further visualized, analyzed, and "scored" for each of the players. The paper likewise uses Mathematica to analyze settings in which each player has the ability to engineer its own strategy topology, and suggests other potential extensions of Restricted Non-Cooperative Games.1

Keywords: non-cooperative game theory, Mathematica, law, Nash Equilibrium, game network, New Kind of Science, directed graphs.


In conventional non-cooperative game theory, each player can see and can instantaneously select any element of its strategy set in response to the other players' strategy selections [1]. In real settings, however, the strategies available to a player at any given time will often be a function of the strategy it selected at a prior time [2]. It may, for example, be possible to change only one aspect of a strategy at a time. Alternatively, as in earlier work by the author on "Foggy Game Theory" [3], the strategies may be placed in some cyclic topology and only changes within some distance of the current strategy are permitted. Sometimes these constraints on the dynamics of strategy selection may be the result of external circumstances or cognitive limitations on the part of the player; other times they may be deliberately engineered by the player itself [4]. Either

Mathematica code used to create the framework is available from the author on request.



way, however, the result is to overlay the strategies with a network connecting
them (a topology) in which some strategies are connected and others are not.2

From Strategy Topologies and Payoff Arrays to Game Networks


The left panel of Figure 1, produced using Mathematica's GraphPlot command, shows a sample strategy network sD for an imaginary driver who has strategies labeled A through E (each strategy representing, perhaps, some combination of care and driving frequency). Notice that while the driver can continue the strategy of B, once it abandons that strategy it cannot return. This is the sort of realism permitted by this extension of conventional game theory. Notice further that the strategies differ in their ability to immediately access other strategies: D can access C and E immediately, while A can access A and E immediately. Another player (perhaps a pedestrian) might, for example, have strategies labeled 1-3, again perhaps some combination of care when walking and frequency of walks. The pedestrian's strategy network sP is shown in the right panel of Figure 1, in which the dashed arrows show the strategy connections.

Fig. 1. Strategy topologies for the driver (left) and the pedestrian (right)

We can now use Mathematica's structural operations to create a new network (directed graph, or digraph) that is the Graph Cartesian Product of the networks sD and sP. Thus, given the strategy topologies shown in Figure 1, if the existing strategy combination is C1, the next strategy combinations could be A1, C1, D1 or E1 (if the driver moves) or C1, C2 or C3 (if the pedestrian moves).
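The Graph Cartesian Product step can be sketched without Mathematica using plain adjacency dictionaries. The edges below are partly guessed: the C and 1 rows are chosen to reproduce the C1 example in the text, and the rest are merely illustrative.

```python
def cartesian_product(g1, g2):
    """Graph Cartesian product of two strategy topologies given as
    adjacency dicts: one player moves while the other stays put."""
    return {(u, v): [(u2, v) for u2 in g1[u]] + [(u, v2) for v2 in g2[v]]
            for u in g1 for v in g2}

# Edges guessed from the text: row C and row 1 reproduce the C1 example;
# B keeps its self-loop but nothing leads back to it.
sD = {'A': ['A', 'E'], 'B': ['B', 'C'], 'C': ['A', 'C', 'D', 'E'],
      'D': ['C', 'E'], 'E': ['A', 'E']}
sP = {'1': ['1', '2', '3'], '2': ['2', '3'], '3': ['1', '3']}
game = cartesian_product(sD, sP)
```

The nodes of the product are strategy combinations such as ('C', '1'), and its edges are exactly the single-player moves described above.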
In conventional game theory, the players get different payoffs depending on which strategy combination is selected. So too here. I assume that players are greedy (and perhaps not terribly clever) in that, in selecting their next move, they

All strategies have at least one outgoing edge, though that edge can be a self-loop. Otherwise a player would not know what to do on its next move. One can imagine a yet more general case in which the strategies available to each player are a function not simply of the strategy employed by that particular player on the prior move, but of the strategy combination used on the prior move, or of the move history. Conventional non-cooperative game theory may be thought of as a special case of restricted game theory in which each player has a complete graph as its strategy topology.



Driver payoffs (rows: driver strategies A-E; columns: pedestrian strategies 1-3):

0.878 0.958 0.967
0.057 0.976 0.223
0.778 0.863 0.448
0.764 0.349 0.877
0.944 0.491 0.603

Pedestrian payoffs (same layout):

0.108 0.691 0.015
0.054 0.139 0.439
0.811 0.549 0.12
0.589 0.097 0.257
0.469 0.608 0.687

Fig. 2. Game network for driver and pedestrian based on payoff array

choose the one whose associated strategy combination (given the complementary strategy selections of the other players) offers them the highest payoff.3 This modification results in a thinning of the network created above so that, in an n-player game, only n edges generally emerge from each node. Each edge represents the best move for one of the n players.
To use more formal notation, suppose there are n players in some restricted game, the strategy topology (graph) of player i ∈ {1, . . . , n} is denoted s_i, and the set of strategy combinations in the restricted game is S (the Cartesian product of the vertex sets of the s_i). The moves potentially available to player i from some strategy combination u may then be written as Equation 1, where V is a function listing the vertices of a graph and E is a function listing the edges of a graph. One can then write the moves potentially available to all players from u as Equation 2, and the set of moves potentially available to all players from all strategy combinations as Equation 3.

More elaborate behavioral assumptions are certainly possible. Professor Steven Brams, for example, relies on a similar dynamic structure in his celebrated Theory of Moves [2]. He assumes, however, that the players can foresee and optimally negotiate an entire tree (acyclic directed network) of moves among strategy combinations. Restricted Non-Cooperative Game Theory avoids some of the issues associated with the construction of trees in the Theory of Moves in that, among other things, there are no "terminals." Instead, players confront a cyclic network of strategy combinations.



M[u, s1, . . . , sn, i] = { {{u, v}, i} : v ∈ S, {u_i, v_i} ∈ E[s_i], and u_j = v_j for all j ≠ i }   (1)

M[u, s1, . . . , sn] = ∪_{i ∈ {1,...,n}} M[u, s1, . . . , sn, i]   (2)

M[s1, . . . , sn] = ∪_{u ∈ S} M[u, s1, . . . , sn]   (3)

One now uses the payoffs to narrow the set of moves to a subset of the set of potential moves. A plausible way of doing so is, as in conventional game theory, to create a payoff function mapping each strategy combination to a set of payoffs, one for each of the players. One can conveniently represent that function as the n × |s1| × . . . × |sn| array P (dimensionality n + 1), where |s| assumes its conventional meaning of the cardinality of a set s. Pi is in turn the n-dimensional array in which element {u1, . . . , un} represents the payoff to the ith player resulting from strategy combination u. One can then denote the restricted game G as having the following set of moves:

G[P, s1, . . . , sn] = { {{u, v}, i} ∈ M[s1, . . . , sn] | Pi[v] = max over {{u, m}, i} ∈ M[u, s1, . . . , sn, i] of Pi[m] }   (4)

This mathematical formalism is visualized in Figure 2. The top panel represents the payoffs associated with each strategy combination in the sample driver-pedestrian game. The bottom panel shows the new "Game Network." Moves by the driver are shown as medium-width solid lines, while moves by the pedestrian are thick dashed lines. Very faint, thin, dotted lines represent moves that were potentially available but were discarded because they were not the best move for either player.
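The greedy thinning itself is a one-liner over the product graph: from each strategy combination, keep only each player's payoff-maximizing move. The 2x2 example and payoff numbers below are made up for illustration, not the driver-pedestrian data.

```python
# moves[u][i] = the strategy combinations player i can move to from u
moves = {
    ('A', '1'): ([('A', '1'), ('B', '1')], [('A', '1'), ('A', '2')]),
    ('A', '2'): ([('A', '2'), ('B', '2')], [('A', '1'), ('A', '2')]),
    ('B', '1'): ([('A', '1'), ('B', '1')], [('B', '1'), ('B', '2')]),
    ('B', '2'): ([('A', '2'), ('B', '2')], [('B', '1'), ('B', '2')]),
}
payoff = {  # payoff[i][u]: made-up numbers
    0: {('A', '1'): 3, ('A', '2'): 0, ('B', '1'): 1, ('B', '2'): 2},
    1: {('A', '1'): 0, ('A', '2'): 2, ('B', '1'): 1, ('B', '2'): 3},
}

# the game network: each player keeps only its best outgoing move,
# so at most n edges leave each node (here n = 2)
net = {u: [max(opts, key=lambda v: payoff[i][v])
           for i, opts in enumerate(ms)]
       for u, ms in moves.items()}
```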

Scoring Game Networks

We can now assign a score to each of the players from the game network described above. A player's score is simply a weighted average of the payoffs the player receives from each strategy combination. A plausible weighting scheme assumes that a random walk is taken by the players on the game network (starting from some random node) and that the weights are thus the stationary probabilities of being at each node (strategy combination). Thus, if there were, as in some conventional non-cooperative games, a single strategy combination to which all the players inexorably converged regardless of the "starting point" (a classic Nash Equilibrium), that strategy combination would receive a weight of one. If there were several such nodes, each would receive a weight corresponding to the size (number of nodes) of its basin of attraction. These weights can be computed readily by creating a Markov transition matrix associated with the game network and then computing the stationary values.4 The scores can be normalized by dividing each score by the score that would result if all nodes of

Mathematica has iterative constructs such as Nest and built-in functions such as Eigenvectors that make this process quite simple.



the network were weighted equally. In formal notation, the normalized score for player i in game g = G[P, s1, . . . , sn] is equal to

score[P, g, i] = ( Σ_{u ∈ V[g]} w_u[g] Pi[u] ) / ( Σ_{u ∈ V[g]} Pi[u] / |V[g]| ),

where w_u[g] is the weight accorded strategy combination u in game g. In our sample driver-pedestrian game, the normalized score is 1.6345 for the driver and 1.214 for the pedestrian.5
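The scoring procedure can be sketched with a power iteration on the transition matrix of the random walk. The 3-node game network and payoffs below are invented for illustration; real weights would come from the thinned driver-pedestrian network.

```python
import numpy as np

nodes = ['u', 'v', 'w']
edges = {'u': ['v'], 'v': ['w'], 'w': ['w']}   # 'w' absorbs the walk
pay = {'u': 0.2, 'v': 0.4, 'w': 0.9}           # one player's payoffs

# row-stochastic transition matrix of the random walk on the network
P = np.zeros((3, 3))
for a, outs in edges.items():
    for b in outs:
        P[nodes.index(a), nodes.index(b)] = 1.0 / len(outs)

w = np.full(3, 1.0 / 3.0)      # uniform random starting node
for _ in range(200):           # power iteration -> stationary weights
    w = w @ P

score = w @ np.array([pay[n] for n in nodes])
baseline = np.mean([pay[n] for n in nodes])    # equal-weight score
normalized = score / baseline
```

Here the walk is absorbed at node 'w' (its basin of attraction is the whole graph), so the stationary weight concentrates there and the normalized score exceeds one.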

Driver payoffs (rows: driver strategies A-E; columns: pedestrian strategies 1-3):

0.878 0.958 0.967
0.057 0.976 0.223
0.778 0.863 0.448
0.764 0.349 0.877
0.444 0.491 0.603

Pedestrian payoffs (same layout):

0.108 0.691 0.015
0.054 0.139 0.439
0.811 0.549 0.12
0.589 0.097 0.257
0.969 0.608 0.687

Fig. 3. Modification of payoff array generates new game network and scores

Just as in conventional game theory one can study how changes in the mapping from strategy combinations to payoffs alter the Nash Equilibria and/or the payoffs at the equilibria, in restricted non-cooperative game theory one can study how changes in that mapping alter the game network, which in turn alters the players' scores. Figure 3 shows the new payoff array (top panel) that results from requiring the driver to pay the pedestrian an amount of 0.5 if strategy combination E1 is employed, and the resulting new game network

In traditional game theory, the ability to find at least one "Nash equilibrium" for all games, and the accompanying payoffs, is preserved by permitting the players to use probabilistic strategies [1] and then indulging contested assumptions about the behavior of players under uncertainty. Probabilistic strategy selection is not permitted here.



(bottom).6 The new normalized scores are 1.49 for the driver and 1.60 for the pedestrian.

A New Kind of Science Approach to Game Networks

With this framework in place we can also undertake studies not possible in conventional non-cooperative game theory. We can examine how, given a payoff array, changing the strategy topologies affects the associated game network, which in turn affects the weights received by each strategy combination, which in turn affects the scores received by each player. We can examine properties of the game network itself, such as the lengths of its maximal cycles. We can also, in effect, create a metagame in which the strategies are not just things such as A or B but also choices about whether to permit oneself to transition from A to B. Physical sabotage of otherwise existing transition possibilities can create such restrictions; so can economic relationships with third parties such that various strategy transitions become sufficiently unprofitable (dominated) and thus disregarded.
I now begin to examine this last proposition systematically using ideas from Stephen Wolfram's A New Kind of Science involving the consideration of very simple cases and the enumeration of possibilities [6]. Consider a game of n players indexed over i, with each player having |s| strategies available to it. Each player now has (2^|s| - 1)^|s| possible strategy topologies. This is so because each strategy can connect with the other strategies in 2^|s| ways, but one of those ways, the empty set, is prohibited, as each player must always have a next move, and there are |s| strategies to be considered. There are thus ((2^|s| - 1)^|s|)^n strategy combination topologies that can exist. Although the number of strategy topologies can readily become quite large, if there are two players and each player has three strategies available to it, each player has 343 strategy topologies and there are 117649 strategy combination topologies that can be enumerated, along with an identical number of associated game networks.7 It is well within the ability of the framework we have created and today's computers to extract the maximal cycle lengths and the scores for each of these game networks.
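The counting argument checks out numerically:

```python
def topologies(s):
    """Number of strategy topologies for one player with s strategies:
    each strategy gets a non-empty out-neighborhood among s strategies."""
    return (2 ** s - 1) ** s

assert topologies(3) == 343                   # one player, |s| = 3
assert topologies(3) ** 2 == 117649           # two players
assert topologies(4) ** 3 == 129746337890625  # three players, |s| = 4 (footnote 7)
```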
I can create a random payoff array and then create adjacency-list representations of each possible strategy topology. I can then invoke a command that creates a losslessly compressed representation of the game network for all pairs of strategy topologies. On a 2006 MacBook Pro, the computation for all game networks with n = 2, |s| = 3 takes about 30 minutes and produces an expression consuming five megabytes. (Mathematica permits a monitor to be attached

Legal rules often change payoff arrays in just the fashion shown here by requiring one player in a "game" to pay another player if certain strategy combinations are employed [5]. Total payoffs are generally conserved within a strategy combination, unless there are transaction costs or payoffs to non-parties, in which event the total payoff can diminish.
If there were three players and each player had four strategies, there would be 129746337890625 game networks to consider, which shows a limitation of a "brute force" approach on contemporary computers.



to the computation to watch its progress, and the data can be stored for later use.)
With this listing of all possible networks and a decompression algorithm, I can then compute the length of the maximal cycle for each of the 343² = 117649 game networks. It turns out to be most common for game networks to have the potential to cycle through at most three strategy combinations before returning to a previously played strategy combination. Maximal cycles of 7, 8 and even 9 strategy combinations prove possible, however. Indeed, we can focus on the smaller number of game networks that show complex dynamics with long cycles in which, depending on the sequence of player moves, there is at least the potential for many strategy combinations to be pursued before returning to a previously played combination.
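Because each game network here has at most nine nodes, the maximal cycle length can be found by brute-force DFS over simple cycles (infeasible for large graphs, fine at this scale). The toy graph is our own; the paper's networks come from the enumeration above.

```python
def longest_cycle(adj):
    """Length of the longest simple cycle, by exhaustive DFS.
    Exponential in general -- acceptable for <= 9-node game networks."""
    best = 0
    def dfs(start, node, path):
        nonlocal best
        for nxt in adj[node]:
            if nxt == start:
                best = max(best, len(path))
            elif nxt not in path:
                dfs(start, nxt, path + [nxt])
    for s in adj:
        dfs(s, s, [s])
    return best

ring = {0: [1], 1: [2], 2: [0, 3], 3: [3]}   # 3-cycle plus a self-loop
```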
Alternatively, I can take the game networks and compute a mapping between pairs of strategy topologies and the scores for each player. This computation takes only 10 minutes, much of which is spent decompressing the game networks.
One can do several things at this point. One can examine which strategy topology tends to have the highest average score for a particular payoff array. Figure 4 shows the results of this experiment. It shows that both players tend to do best when they have significant choices available to them regardless of their current strategy choice. "Pre-commitment strategies," which attempt to restrict strategy selections, tend not to do well when one does not know the strategy topology of the opposing player.

Fig. 4. Strategy topologies yielding the highest average payoffs (left: Player 1; right: Player 2)

One can also examine the character of any pure traditional Nash equilibrium of the "meta-game" created by this process, in which the "strategies" are now strategy topologies and the payoffs are the "scores." When one runs this experiment on the sample game shown above, one finds that there are eight Nash Equilibria.8 Figure 5 shows a sample Nash equilibrium.9

All the equilibria have the following characteristics: the second player always chooses strategy topologies in which it must move to strategy 1 no matter what strategy it has played before; the first player never permits a move to strategy 3 no matter what player 1 does and no matter what it has done on any prior move.
One could create meta-metagames by imagining that the players can alter not only their strategy topologies but also their ability to transition among strategy topologies. Because this would create over 5.7 × 10^70831 possible game networks, however, any exhaustive study of the possibilities is, for the foreseeable future, impractical.




















Fig. 5. A sample Nash Equilibrium set of strategy topologies and the associated game network


Mathematica successfully creates a useful and flexible framework for the study of n-player Restricted Non-Cooperative Games in which the players have potentially different strategy topologies. The paper does not purport to study this extension of game theory exhaustively. The intent is to develop a general set of tools from which further study can be profitably pursued.
Acknowledgments. The author thanks Jason Cawley of Wolfram Research, Professor Darren Bush of the University of Houston Law Center and Professor Steven Brams of New York University for extraordinarily helpful comments on a draft of this paper, as well as Yifan Hu of Wolfram Research for designing Mathematica's GraphPlot package, used extensively here.

References

1. Gintis, H.: Game Theory Evolving: A Problem-Centered Introduction to Modeling Strategic Behavior. Princeton University Press, Princeton, NJ (2000)
2. Brams, S.J.: Theory of Moves. Cambridge University Press, Cambridge/New York (1994)
3. Chandler, S.J.: Foggy game theory and the structure of legal rules. In: Tazawa, Y. (ed.): Symbolic Computations: New Horizons. Tokyo Denki University Press, Tokyo (2001) 31-46
4. Dixit, A.K.: Strategic Behavior in Contests. American Economic Review 77(5) (1987)
5. Baird, D.G., Gertner, R.H., Picker, R.C.: Game Theory and the Law. Harvard University Press, Cambridge, MA (1994)
6. Wolfram, S.: A New Kind of Science. Wolfram Media, Champaign, IL (2002)

A New Application of CAS to LATEX Plottings

Masayoshi Sekiguchi, Masataka Kaneko, Yuuki Tadokoro, Satoshi Yamashita,
and Setsuo Takato
Kisarazu National College of Technology,
Kiyomidai-Higashi 2-11-1, Chiba 292-0041, Japan

Abstract. We have found a new application of a Computer Algebra System (CAS): KETpic, which has been developed as a macro package for
a CAS. One aspect of its philosophy is CAS-aided visualization in
LaTeX documents. We aim to extend KETpic to other CASs, and derive from this basic idea the
necessary conditions for a CAS to accept it, i.e., I/O
functions with external files and the manipulation of numerical or string data.
Finally, we describe KETpic for Maple as a successful example. By using
KETpic we can draw fine pictures in LaTeX documents.
Keywords: CAS, LaTeX.


In many cases, mathematicians or mathematics teachers, as well as other scientists, need to prepare good illustrations or educational materials. In general, a CAS
gives us a set of highly accurate numerical data. Therefore, it is quite natural
to utilize a CAS for the purpose of creating fine pictures. CASs support beautiful
and impressive graphics, and some of them can output pictures in graphical
formats (EPS, JPEG, GIF, BMP, etc.). However, the authors have not been satisfied with the printed matter obtained as a direct output from CASs, or
from CAD (Computer Aided Design) systems or data/function-plotting programs
like Gnuplot. The reason is that the mathematical lettering in their pictures is not
clear. We need to optimize their outputs so as to be suitable for mathematical
textbooks or academic papers.
On the other hand, LaTeX has lettering of high enough quality to satisfy
us, but no abilities of symbolic or numerical computation (see [5]).
It has the picture environment and the ability to display graphical data files in
EPS format. By using Tpic, a graphical extension of LaTeX, we can draw various
pictures based on 2D numerical data (see [2,4]). However, it is cumbersome to
handle numerical data directly and to generate tpic special commands. It is better to write a program generating tpic special commands from numerical plotting
data. The program can be either a separate piece of software or a macro package for
a CAS.
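The kind of converter described here is short in any language. The following Python fragment is an illustration only (it is not part of KETpic): it turns a list of 2D sample points into tpic special commands, emitting path points with \special{pa x y} and flushing the path with \special{fp}, in the style of the KETpic output shown later in this paper. The scale factor unit=394 is an assumption, chosen because tpic coordinates are in milli-inches and 394 milli-inches is roughly 1 cm; the sign flip on y reflects the downward-pointing DVI y axis.

```python
import math

def tpic_path(points, unit=394):
    """Convert 2D sample points to a tpic path.

    `unit` is a hypothetical scale factor (milli-inches per axis unit);
    tpic coordinates are integers, and the DVI y axis points downward,
    hence the sign flip on y.
    """
    specials = ["\\special{pa %d %d}" % (round(x * unit), round(-y * unit))
                for (x, y) in points]
    specials.append("\\special{fp}")  # flush (draw) the accumulated path
    return "".join(specials)

# A sine curve sampled on [0, 5], as in the Maple example g1 := plot(sin(x), x=0..5)
pts = [(i * 0.05, math.sin(i * 0.05)) for i in range(101)]
print(tpic_path(pts)[:60], "...")
```

A real implementation would also clip to the window set by setwindow and split long paths, but the core translation from numerical data to tpic specials is no more than this.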
The authors have developed KETpic for Maple, a Maple macro package. It
generates tpic special commands, and enables us to draw complicated but 2D
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 178-185, 2007.
© Springer-Verlag Berlin Heidelberg 2007



mathematical objects in LaTeX documents with the highest accuracy. A detailed description of KETpic for Maple is given in [7]. Recently we organized a project
in which we aim to extend KETpic to other CASs. We consider the necessary conditions for CASs to accept KETpic. The philosophy behind the design of KETpic includes a
basic idea, CAS-aided visualization in LaTeX documents, which we call CAS-aided
LaTeX plottings. The requirements are naturally derived from this basic idea.
Section 2 is devoted to the construction of necessary conditions for a CAS to
generate graphic files which we can include in LaTeX documents. In Section 3, we
show that Maple satisfies the requirements, describe how KETpic realizes the
idea, and illustrate its outputs.

Requirements for CAS-Aided LaTeX Plottings

In order to realize the idea of CAS-aided LaTeX plottings, we decided to develop
a macro package for a known CAS. We did not select the other ways of development:
a new CAS designed to realize the idea, or a separate piece of software
which calls the kernel of a CAS as an external computing engine. We believe that it
is best to develop a macro package for a known CAS.
Hereafter, we suppose that a standard CAS is equipped with the abilities of symbolic and numerical computing, programming, and generating graphical images.
In addition, we require the following necessary conditions for a CAS.

R1. Loadability of macro packages from external files,
R2. Writability of numerical data and strings with formats to text files,
R3. Accessibility to raw numerical data in 2D/3D coordinates,
R4. Ability to manipulate numerical values or strings to generate graphic
codes, e.g., tpic special commands, PostScript, or EPS.

If a CAS satisfies writability without formats instead of the condition R2,
it can write a sequence of raw data in a text format. In this case, it is necessary
to translate the unformatted data into formatted data. The translation may
be done by a post-processor. The condition R4 and the ability of programming
enable us to handle a lot of data collectively or iteratively, and to optimize the
outputs.
For KETpic, we have chosen to generate tpic special commands. Our choice,
Tpic, allows us to obtain rich graphical expressions. We believe that Tpic is best
because it is widespread. Unfortunately one previewer, Mxdvi on Mac OS X, does
not support Tpic. We offer a particular version of KETpic for this previewer. It
generates eepic [4] commands instead of tpic special commands, and is downloadable from our web site [6]. Another way to provide rich expressions is to
generate PostScript or EPS (Encapsulated PostScript) files. Many versions of
LaTeX allow EPS files to be inserted in documents. This is still realistic if we are
familiar with the grammar of the EPS format.
A graph with many curves or items is a powerful expression in mathematical documents. Producing such a graph becomes easier with collective or iterative
operations, e.g., list processing, DO-loops, WHILE-loops, and so on.


M. Sekiguchi et al.

Optimization of graphics means fine-tuning of outputs and customization of
graph accessories, e.g., tickmarks, labels of axes, and legends of curves, which
make pictures more appealing. For fine-tuning we have added commands to
KETpic, by which we can draw various hatchings, dashed lines, and projections of 3D objects. For customizing graph accessories we have added other
commands to KETpic. These operations are realized by the programming facilities of CASs.


A Successful Example: KETpic for Maple

Maple Satisfies the Requirements

The condition R1 is satisfied by the read command equipped in Maple. We
can use it as follows.
> read UsersFolder/ketpicw.m;
The condition R2 is satisfied by the Maple commands fopen, fclose, and fprintf.
They define the KETpic commands openfile and closefile. The usage is as follows.
> openfile(UsersFolder/figure1.tex):
> closefile():
These also satisfy the condition R4. The KETpic commands openpicture,
closepicture, and setwindow return \begin{picture} and \end{picture}
with options indicating the window size and a unit length. For instance, the
following set of commands,
> setwindow(0..5,-1.5..1.5):
> openpicture("1cm"):
> closepicture():
returns a set of commands of the picture environment as follows.
The following Maple command gives plotting data to a variable g1.
> g1:=plot(sin(x),x=0..5):
If we execute the command above ending with a semicolon instead of a colon,
we can see the internal expression of g1, which takes the form of a list


> g1:=plot(sin(x),x=0..5);
This operation satisfies the condition R3. This internal expression can be constructed through DO-loop operations and string manipulation. The Maple command
op returns the n-th operand of its argument.
> op(g1);
> op(3,g1);
> op(1,op(1,g1))
> op(1,op(1,op(1,g1)))
The Maple commands sscanf and convert can translate characters into numerical
data, and vice versa. Other commands cat, substring, and length can be used
for string manipulation, concatenation, and so on. One of the drawing commands
of KETpic is drwline. The usage is as follows.
of KETpic is drwline. The usage is as follows.
> drwline(g1):
This command returns a set of tpic special codes as follows.
> drwline(g1);
\special{pa 0 0}\special{pa 43 -43}...\special{pa 164 -160}%
\special{pa 1844 394}...\special{pa 1969 378}%
Commands for hatching areas, drawing dashed curves, and customizing graph
accessories are as follows.


A double backslash (\\) returns a single backslash because a single backslash is a control code in Maple. The first command setax defines the axes, the origin, and their
names. In this case, the name of the horizontal axis is , and the vertical one
. The command hatchdata returns a set of stripes inside a closed curve obtained



from g1 and g2. Its third argument [g1,"s"] indicates the region to the south of
curve g1. Similarly, the fourth argument [g2,"n"] indicates a region. The first
argument ["ii"] indicates the areas inside both. The second argument [3,0]
defines a reference point. The command dashline(g1,g2) returns plotting data
of g1 and g2 with dashed lines. The last command puts a legend, an integral of
sin x − cos x from π/4, at a point slightly to the northeast of (2, 1). The resulting figure is given in Fig. 1.



Fig. 1. Output of 3.1


Special Functions or Functions Defined by Integrals

Using Maple, we can call special functions, calculate their values, and plot their
graphs. The Chi-square distribution with n degrees of freedom is defined via the
gamma function Γ(x) as

f_n(x) = x^(n/2-1) e^(-x/2) / (2^(n/2) Γ(n/2)).
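As a quick numerical check of this density (an illustration in Python, not part of KETpic), note that for n = 2 the formula reduces to f_2(x) = e^(-x/2)/2, which is easy to verify directly:

```python
import math

def chi2_pdf(n, x):
    """Chi-square density f_n(x) = x^(n/2-1) e^(-x/2) / (2^(n/2) Gamma(n/2))."""
    return x ** (n / 2 - 1) * math.exp(-x / 2) / (2 ** (n / 2) * math.gamma(n / 2))

# For n = 2 the density reduces to exp(-x/2)/2
print(chi2_pdf(2, 1.0))  # ≈ 0.3033
```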

Curves of the distributions can be obtained by Maple, and can be included in
this document by KETpic (see Fig. 2 (left)). The corresponding sequence of
KETpic commands is given below.



Fig. 2. Chi-square distributions for degrees of freedom n = 1, 2, ..., 9 (left) and their
corresponding definite integrals (right)



> f:=(n,x)->x^(n/2-1)*exp(-x/2)/2^(n/2)/GAMMA(n/2);
> tmp:=[]:
for i from 1 to 9 do
The internal expression of g4 is
One can find the definition of the Chi-square distribution in the first line. This is
an advantage of a CAS. Another advantage is the iterative operation, which one can
find in the last three lines. To define a function, one can also use an integral form.
The following function is the definite integral of the Chi-square distribution (see
Fig. 2 (right)).


F_n(x) = ∫_0^x f_n(t) dt.


The corresponding sequence of KETpic commands is given below.

> F:=(n,x)->int(f(n,t),t=0..x);
> tmp:=[]:
for i from 1 to 9 do

Curves Defined by Implicit Functions or with Parameters

Using Maple, we can draw a curve defined by implicit functions. In general, contours are obtained in the same way. Fig. 3 (left) shows contours of the following
Coulomb potential,

φ(x, y) = 1/√((x + 1)² + y²) + 1/√((x − 1)² + y²),

where two electric charges are placed at (±1, 0). The corresponding sequence of KETpic commands is given below.
> g6:=contourplot(((x+1)^2+y^2)^(-1/2)+((x-1)^2+y^2)^(-1/2),
There are no technical difficulties in this case. Likewise, it is not difficult to plot
parametric curves. Conformal mappings of complex functions consist of a set of
parametric curves. Fig. 3 (right) shows a conformal mapping of the following
complex function,

g(z) = 1/z.



In this case, we emphasize the different images of Re(z) and Im(z). The images
of Im(z) in Fig. 3 (right) are plotted in bold curves. We explain the technique
briefly. The corresponding sequence of KETpic commands is given below.
> g7:=conformal(1/z,z=-1-I..1+I):
> g8:=[]:
for i from 1 to 11 do
> g9:=[]:
for i from 12 to 22 do
The first line simply gives plotting data to a variable g7. The second argument in conformal, z=-1-I..1+I, indicates the range of z, i.e., |Re(z)| ≤ 1 and
|Im(z)| ≤ 1. The value of g7 consists of 11 (default) curves of g(Im(z)) and 11
of g(Re(z)), in this order. The second (resp. third) line collects the first
(resp. remaining) 11 curves and saves them in a variable g8 (resp. g9). We obtain Fig. 3 (right) by writing g8 with doubled width and g9 with the default
width in a text file. The corresponding KETpic commands are as follows.
> drwline(g8,2):
> drwline(g9):
The option, 2 after g8 in the first command, is a multiplier of the line width.



Fig. 3. Contours of a Coulomb potential in (3) (left) and a conformal mapping of a
complex function in (4) (right)



Conclusions and Future Work

We have clarified the requirements R1 to R4 for a CAS to accept our
new application, CAS-aided LaTeX plottings. We suppose the CAS to be standard,
which means the CAS is equipped with the abilities of symbolic computing,
numerical computing, programming, and showing graphical images.
Our first example is a macro package for Maple, which we call KETpic for
Maple. The package is able to produce accurate and richly expressive pictures
with minimal input and reasonable effort. KETpic for Maple is available
on the major platforms: Windows, Macintosh, and Linux. Its minimal configuration
is a combination of Maple V Release 5 (see [1,3]) and a DVI driver supporting
Tpic (see [4]). Anyone interested in KETpic can download the latest version
with its command reference and some examples from our web site [6],
completely free of charge.
KETpic is powerful for creating LaTeX plottings but is relatively weak in the
following respects. First, it does not support a GUI. Therefore, users might have
difficulties handling KETpic. However, this is a necessary consequence of the text-based user interface of a CAS, which is what realizes accurate plottings. Second, curve
fitting is one of the remaining problems of KETpic, because GUI environments are
best for fitting a curve. Third, at present, KETpic is not good at 3D drawings,
especially surface plottings. Finally, there are no versions for other CASs yet. We
have several plans to extend KETpic to other CASs, e.g., Mathematica or free
CASs. We are running a project to improve KETpic. In addition, we are
preparing its user manual.

References

1. Char, B.W., Geddes, K.O., et al.: Maple V Library Reference Manual (1991)
2. Goossens, M., Rahtz, S., Mittelbach, F.: The LaTeX Graphics Companion (1997), Addison-Wesley
3. Heal, K.M., Hansen, M.L., Rickard, K.M.: Maple V Learning Guide (1996)
4. Kwok, K.: EEPIC: Extensions to epic and LaTeX Picture Environment, Version 1.1
5. Mittelbach, F., Goossens, M., et al.: The LaTeX Companion (2004), Addison-Wesley
6. Sekiguchi, M.:
7. Sekiguchi, M., Yamashita, S., Takato, S.: Development of a Maple Macro Package Suitable for Drawing Fine TeX-Pictures (2006), Lecture Notes in Computer Science 4151 (eds. A. Iglesias & N. Takayama), pp. 24-34, Springer-Verlag

JMathNorm: A Database Normalization Tool

Using Mathematica
Ali Yazici1 and Ziya Karakaya2

Computer Engineering Department, TOBB University of Economics & Technology,

Ankara - Turkey
Computer Engineering Department, Atilim University, Ankara - Turkey

Abstract. This paper is about the design of a complete interactive tool,
named JMathNorm, for relational database (RDB) normalization using Mathematica. It is an extension of the prototype developed by the
same authors [1], with the inclusion of Second Normal Form (2NF) and
Boyce-Codd Normal Form (BCNF) in addition to the existing Third
Normal Form (3NF) module. The tool developed in this study is complete and can be used for real-time database design as well as an aid
in teaching the fundamental concepts of DB normalization to students with
limited mathematical background. JMathNorm also supports the interactive
use of modules for experimenting with the fundamental set operations such
as closure and full closure, together with modules to obtain the minimal
cover of the functional dependency set and to test an attribute for a candidate key. JMathNorm's GUI is written in Java and utilizes
Mathematica's JLink facility to drive the Mathematica kernel.


The design of an RDB system consists of four main phases, namely, (i) determination
of user requirements, (ii) conceptual design, (iii) logical design, and finally, (iv)
physical design [2]. During the conceptual design phase, a set of business rules is
transformed into a set of entities with a set of attributes and relationships among
them. The Extended Entity Relationship (EER) modeling tool can be utilized for the
graphical representation of this transformation. The entity set in the EER model
is then mapped into a set of relation schemas {R1, R2, R3, ..., Rn} where each Ri
represents one of the relations of the DB schema. A temporary primary key is
designated, and a set of functional dependencies (FDs) among the attributes of
each schema is established as an outcome of this phase.
As a side product of the logical design phase, each Ri is transformed into
well-formed groupings such that one fact in one group is connected to other
facts in other groups through relationships [3]. The ultimate aim of this article is
to perform this rather mechanical transformation process, called normalization,
efficiently in an automatic fashion.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 186-193, 2007.
© Springer-Verlag Berlin Heidelberg 2007



Commercial DB design tools do not provide a complete solution for automatic
normalization, and existing normalization tools for this purpose require high-level
programming skills and complex data structures. Two such implementations in
the Prolog language are discussed in [4,5]. Another study on automatic transformation is given in [6], in which UML is used to access the Object Constraint Language
(OCL) to construct expressions that encode FDs using classes at a meta-level.
An alternative approach to normalization is given in [3], which focuses on addressing FDs to normalize a DB schema in place of relying on the formal definitions
of normal forms. The impact of this method on IS/IT students' perceptions is
also measured in the same study. It appears that this approach is only useful
for small sets of FDs in a classroom environment and in particular not suited
for automatic normalization. A web-based tool for automatic normalization is
given in [7], which can normalize a DB schema up to 3NF for a maximum of 10
FDs only.
This article is an extension of the work in [1] and discusses a complete normalization tool called JMathNorm which implements 2NF, 3NF, and BCNF using the
abstract algorithms found in the literature [2,9,13]. JMathNorm's normalization
modules are written in Mathematica [8] using the basic list/set operations, the
user interface is designed in the Java language, and finally, execution of the Mathematica modules is accomplished by employing Mathematica's Java Link (JLink)
utility. The design approach in this study is similar to the Micro tool given in
[9]. However, JMathNorm provides additional aspects for educational purposes
and is implemented efficiently without using any complex data structures such
as pointers.
The remainder of this article is organized as follows. Section 2 briefly reviews
DB normalization and some of the basic functions used in normalization
algorithms. In Section 3 the Mathematica implementation of the BCNF algorithm is
given. The JMathNorm tool is demonstrated in Section 4. Remarks about the tool
and a discussion of future work are provided in the final section.

A Discussion on Normalization Algorithms

A functional dependency (FD) is a constraint on sets of attributes of a relation Ri in the DB schema. A FD between two sets of attributes X and Y,
denoted by X → Y, specifies that there exists at most one value of Y for every
value of X (the determinant) [2,10,11]. In this case, one asserts that X determines Y,
or Y is functionally dependent on X.
For example, for a DB schema PURCHASE-ITEM = {orderNo, partNo,
partDescription, quantity, price}, with PK = {orderNo, partNo}, using a set of
business rules one can specify the following FDs:
FD1: {orderNo, partNo} → {partDescription, quantity, price}
FD2: partNo → partDescription
For a given schema, other FDs among the attributes can be inferred from
Armstrong's inference rules [2]. Alternatively, for an attribute set X, one can


A. Yazici and Z. Karakaya

deduce the others, known as the closure of X, written X+, determined by X based on the FD
set F of the schema. Set closure is one of the fundamental functions for the
normalization algorithms and will be referred to as ClosureX [1] in the sequel.
FullClosureX, X++, is another function, similar to ClosureX, which returns
all attributes that are fully dependent on X with respect to the FD set. This function
is used to remove partial dependencies when transforming a relation into 2NF. An
algorithm for the full closure function is given below [9]:
Algorithm FullClosureX(X: attribute set; F: FD set): return closure in tempX;
1. tempX := X;
2. repeat
     oldX := tempX;
     for each FD Y → Z in F do
       if Y ⊆ tempX then
         if not (Y ⊆ X) then tempX := Z ∪ tempX
         else if Y = X then tempX := Z ∪ tempX;
   until (length(oldX) = length(tempX));
3. return tempX;
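For readers who prefer executable code, the closure and full-closure computations can be sketched as follows in Python. This is an illustrative re-expression of the abstract algorithms, not the actual JMathNorm code; FDs are represented as (Y, Z) pairs of frozensets meaning Y → Z.

```python
def closure(X, fds):
    """X+: all attributes determined by X under the FD set `fds`."""
    temp = set(X)
    changed = True
    while changed:
        changed = False
        for Y, Z in fds:
            if Y <= temp and not Z <= temp:
                temp |= Z
                changed = True
    return temp

def full_closure(X, fds):
    """X++: attributes *fully* dependent on X.

    As in the FullClosureX algorithm above, FDs whose left-hand side is a
    proper subset of X are skipped."""
    X = set(X)
    temp = set(X)
    changed = True
    while changed:
        changed = False
        for Y, Z in fds:
            if Y <= temp and (not Y <= X or Y == X):
                if not Z <= temp:
                    temp |= Z
                    changed = True
    return temp

# PURCHASE-ITEM example from the text
fds = [(frozenset({"orderNo", "partNo"}),
        frozenset({"partDescription", "quantity", "price"})),
       (frozenset({"partNo"}), frozenset({"partDescription"}))]
print(closure({"partNo"}, fds))
print(full_closure({"orderNo", "partNo"}, fds))
```

Here closure({"partNo"}) yields {partNo, partDescription}, reflecting the partial dependency FD2 from the example above.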
Given a set of FDs F, an attribute B is said to be extraneous [11] in X → A
with respect to F if X = ZB, X ≠ Z, and A ∈ Z+. A set of FDs H is called a
minimal cover [1,5] for a set F if each dependency in H has exactly one attribute
on the right-hand side, if no attribute on the left-hand side is extraneous, and
if no dependency in H can be derived from the other dependencies in H. Actually, the calculation of a minimal cover consists of the elimination of extraneous
attributes followed by the elimination of redundant dependencies. The normalization algorithms considered in this study make use of the minimal cover of
a given set of FDs. Moreover, they are computationally efficient, with at most
O(n²) operations where n is the number of FDs in the schema.
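Under these definitions, a minimal-cover computation can be sketched directly. The following Python fragment is illustrative only (not the JMathNorm implementation); FDs are given with single-attribute right-hand sides as (lhs_frozenset, attribute) pairs, matching the first condition of a minimal cover.

```python
def closure(X, fds):
    """X+ under `fds`, a list of (lhs_frozenset, rhs_attribute) pairs."""
    temp = set(X)
    changed = True
    while changed:
        changed = False
        for Y, a in fds:
            if Y <= temp and a not in temp:
                temp.add(a)
                changed = True
    return temp

def minimal_cover(fds):
    """Eliminate extraneous attributes, then redundant dependencies."""
    fds = [(frozenset(Y), a) for Y, a in fds]
    # 1. Elimination of extraneous attributes: B in Y is extraneous in
    #    Y -> a if a is already in the closure of Y - {B}.
    reduced = []
    for Y, a in fds:
        Z = set(Y)
        for B in sorted(Y):
            if len(Z) > 1 and a in closure(Z - {B}, fds):
                Z.discard(B)
        reduced.append((frozenset(Z), a))
    # 2. Elimination of redundant dependencies: drop Y -> a if a is still
    #    derivable from the remaining FDs.
    result = list(dict.fromkeys(reduced))  # also drops exact duplicates
    for fd in list(result):
        rest = [g for g in result if g != fd]
        if fd[1] in closure(fd[0], rest):
            result = rest
    return result

# {A -> B, B -> C, AB -> C}: A is extraneous in AB -> C, which then
# duplicates B -> C, leaving the cover {A -> B, B -> C}
print(minimal_cover([({"A"}, "B"), ({"B"}, "C"), ({"A", "B"}, "C")]))
```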
Normalization is a step-by-step process to transform the DB schema into a set
of subschemas. For the normal forms used in this study (2NF, 3NF and BCNF)
this is achieved by decomposing each Ri into a set of relations by removing
certain kinds of redundancies in the relation. Lack of normalization in a DB
schema causes update anomalies [13] which may destroy the integrity of the DB.
If a relation has no repeating groups, it is said to be in the first normal form
(1NF). In this study, it is assumed that all relations satisfy this condition. A
relation is in the second normal form (2NF) if no part of a PK determines nonkey
attributes of the relation. Note that, for the example above, because of FD2, the
relation is not in 2NF. 3NF relations prohibit transitive dependencies among their
attributes. And, finally, in Boyce-Codd Normal Form (BCNF) a nonkey attribute
cannot determine a prime attribute (any part of a PK).
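The 2NF condition can be tested mechanically: compute the closure of each proper subset of the primary key and check whether it reaches a nonkey attribute. The following Python fragment is an illustration of this test, not the JMathNorm code:

```python
from itertools import combinations

def closure(X, fds):
    """X+ under `fds`, a list of (lhs, rhs) pairs of attribute sets."""
    temp = set(X)
    changed = True
    while changed:
        changed = False
        for Y, Z in fds:
            if set(Y) <= temp and not set(Z) <= temp:
                temp |= set(Z)
                changed = True
    return temp

def partial_dependencies(pk, attrs, fds):
    """Nonkey attributes determined by a proper subset of the primary key."""
    nonkey = set(attrs) - set(pk)
    found = set()
    for r in range(1, len(pk)):
        for sub in combinations(sorted(pk), r):
            found |= closure(sub, fds) & nonkey
    return found

# PURCHASE-ITEM example: FD2 (partNo -> partDescription) violates 2NF
attrs = {"orderNo", "partNo", "partDescription", "quantity", "price"}
pk = {"orderNo", "partNo"}
fds = [({"orderNo", "partNo"}, {"partDescription", "quantity", "price"}),
       ({"partNo"}, {"partDescription"})]
print(partial_dependencies(pk, attrs, fds))  # {'partDescription'}
```

An empty result means the relation is already in 2NF with respect to the given primary key.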
A 2NF algorithm with the attribute preservation property is given in [9].
JMathNorm uses a slightly modified version of this algorithm to remove partial
dependencies and hence transform the DB schema into 2NF. Bernstein's synthesis algorithm [1,12] is implemented to provide 3NF relations directly for a
given set of attributes and a set of FDs F. The original dependencies are preserved;
however, the lossless-join property [1] is not guaranteed by this algorithm.



In certain 3NF DB schemas, a FD from a nonprime attribute to a prime one
may exist. The Boyce-Codd Normal Form (BCNF) of a 3NF relation is achieved by
removing such dependencies. A sketch of the BCNF algorithm with the lossless-join property [2,12] is given below.
Algorithm BCNF(R: attribute set in 3NF; F: FD set): return Q in BCNF;
1. D := R;
2. while there is a left-hand side X of a FD X → Y in F do
     if X → Y violates BCNF then
       decompose R into two schemas Rm := D − Y; and Rn := X ∪ Y;
3. return Q := {Rm, Rn};
The function violatesBCNF tests whether a given FD violates the BCNF condition by
calculating the closure X+. If it includes all the attributes of R, then the FD does
not violate the BCNF constraint; otherwise R violates the constraint and needs
to be decomposed into Rm and Rn as given above.
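The decomposition step can also be sketched in Python (an illustrative re-implementation of the abstract algorithm, not the Mathematica code of Section 3): a FD X → Y violates BCNF when X+ does not cover all attributes of R, in which case R is split into D − Y and X ∪ Y.

```python
def closure(X, fds):
    """X+ under `fds`, a list of (lhs, rhs) pairs of attribute sets."""
    temp = set(X)
    changed = True
    while changed:
        changed = False
        for Y, Z in fds:
            if set(Y) <= temp and not set(Z) <= temp:
                temp |= set(Z)
                changed = True
    return temp

def bcnf_decompose(R, fds):
    """One pass of the BCNF algorithm: split R on the first FD X -> Y
    whose closure X+ does not cover all of R."""
    R = set(R)
    for X, Y in fds:
        if not closure(X, fds) >= R:        # X -> Y violates BCNF
            Rn = set(X) | set(Y)            # Rn := X ∪ Y
            Rm = R - (set(Y) - set(X))      # Rm := D − Y
            return [Rm, Rn]
    return [R]                              # already in BCNF

# CLIENT-INTERVIEW example used later in this section
R = {"clientno", "interviewdate", "interviewtime", "staffno", "roomno"}
fds = [({"clientno", "interviewdate"}, {"interviewtime", "staffno", "roomno"}),
       ({"staffno", "interviewdate", "interviewtime"}, {"clientno"}),
       ({"staffno", "interviewdate"}, {"roomno"})]
print(bcnf_decompose(R, fds))
```

Running this reproduces the decomposition Q shown later in the paper: {staffno, interviewdate, roomno} and {clientno, interviewdate, interviewtime, staffno}.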


Mathematica Implementation
BCNF with Mathematica

In Fig. 1, a use case diagram is given to demonstrate the functions and modules
used in the tool.
The tasks in Fig. 1 are effectively implemented as Mathematica modules by utilizing only Mathematica's list structure and the well-known set operations
[8]. These operations are Union[], Complement[], Intersection[], MemberQ[],
Extract[], Append[], Length[], and Sort[].
A FD set F of a schema is represented by two lists, one for the left-hand
sides (FL) and the other for the right-hand sides of F (FR). Obviously, the
order of attributes in such a list is important and should be maintained with care
throughout the normalization process.
For the example above, the FD set is represented in Mathematica as follows:
FL = {{orderNo, partNo}, {orderNo, partNo}, {orderNo, partNo}, partNo}
FR = {partDescription, quantity, price, partDescription}
Accordingly, FL[[i]] → FR[[i]], for i = 1, 2, 3, 4, as specified by the FD set F.
As an illustration, the Mathematica code for the BCNF algorithm is given
below. Given a FD set and a 3NF relation R, the BCNF algorithm first looks for a
BCNF violation using the function violatesBCNF. When one is found, it returns in Q
two subrelations satisfying the BCNF constraint.
BCNF[FL_, FR_, R_] := Module[{i, X, D, Q, DIF, REL}, D = R; Q = {};
For[i = 1, i <= Length[FL], i++,
If[Length[FL[[i]]] > 1, X = Sort[FL[[i]]], X = {FL[[i]]}];
flag = violatesBCNF[FL, FR, X, FR[[i]], U];



If[flag == 1, REL = Union[X, {FR[[i]]}];

Q = Union[Q, {REL}]; RC = Complement[R, {FR[[i]]}];
DIF = Intersection[R, RC]; Q = Union[Q, {DIF}];];];Return[Q];];
violatesBCNF[FL_, FR_, X_, Y_, U_] := Module[{XP, flag},
XP = Sort[ClosureX[FL, FR, X]]; (* closure of X, computed with ClosureX [1] *)
If[XP == Sort[U], flag = 0, flag = 1]; Return[flag];];








[Use case diagram showing the normalization modules, including Minimal Cover,
Elimination of Extraneous Attributes, and Elimination of Redundant Dependencies]



Fig. 1. Use Case Diagram for Normalization Modules

An example of a relation with a BCNF violation and its decomposition by the
code above is given below. Consider a relation CLIENT-INTERVIEW = {clientno,
interviewdate, interviewtime, staffno, roomno} with the following FDs:
FD1: {clientno, interviewdate} → {interviewtime, staffno, roomno}
FD2: {staffno, interviewdate, interviewtime} → {clientno}
FD3: {staffno, interviewdate} → {roomno}
In this relation {clientno, interviewdate} and {staffno, interviewdate, interviewtime} are both
candidate keys, and they share common attributes. The BCNF constraint is violated
by FD3, since {staffno, interviewdate} is not a superkey. The result of running the BCNF module with the required
parameters produces the following decomposition Q:
Q = {{interviewdate, roomno, staffno},
{clientno, interviewdate, interviewtime, staffno}}

JMathNorm User Interface

An interactive tool, JMathNorm, with a GUI written in Java was designed to
implement the system given in Fig. 1. Each algorithm is implemented as a Mathematica module. The JLink (Java Link) facility of Mathematica is utilized to load the



Fig. 2. JMathNorms menu options

Fig. 3. Dialog box to dene FDs

Mathematica kernel and execute these modules as required. JMathNorm starts
with a dialogue box asking for the path of the Mathematica kernel. The Mathematica functions for normalization are loaded afterwards. Consequently,
only the calling statement of a module is passed to Mathematica, which returns
a result string. The result returned is just the set representation in a string, which
is then parsed into the desired data structure. In JMathNorm, results are
stored in Java's Vector data structure.
The interface offers a menu-driven interaction with the system. The main and
Operations submenus are displayed in Fig. 2. JMathNorm's FD pull-down menu
can be used to set up a new set of FDs, open an existing one, and save or edit FDs
using a data entry dialog box. One can experiment with the basic normalization
tasks, namely, set closure, set full closure, elimination of redundant attributes,



Fig. 4. A sample run for 3NF decomposition

elimination of redundant dependencies, testing for a primary key, and obtaining a
minimal cover by utilizing the Basic Operations submenu. These set-theoretic
operations form the basis of all of the normalization algorithms discussed in the
preceding sections. Moreover, because of their symbolic nature, manual verification of
the result returned from each is rather cumbersome. JMathNorm overcomes this problem by providing a verification mechanism as a background for
teaching normalization theory effectively in a classroom environment. Database
schemas can be transformed into the required normal form directly from the
NForm submenu. As a result of normalization, the original relations of the DB
schema are decomposed into subrelations, which are displayed systematically by
the Results submenu. In Fig. 3, the dialog box for defining FDs is shown. A
sample run decomposing a relation into 3NF is displayed in Fig. 4.

Tests and Discussions

Several benchmark tests found in the literature are successfully applied with
varying number of FDs having dierent initial normal forms. Normalization
algorithms used in the tool possess at most quadratic time complexity providing
in the number of FDs and are computationally eective.
JMathNorm was also used in a classroom environment during a Database Systems course oered to about 25 third year computer engineering majors during
the Spring semester of 2006-07 academic year. Students are requested to form
project teams and design a medium size database system involving 8-10 relations. During the design process, they ended up with normalizing the relational
schema. Students usually preferred using JMathNorm to support or validate the
normalization process. It was reported that each team used JMathNorm on average of four times. In addition to the use in the project, students utilized the
tool to understand the normalization process and the underlying theory based
on the set theoretic operations discussed earlier.



In the course evaluation forms, the majority of the students indicated that
the tool was quite useful for checking their manual work in studying the normalization algorithms and for normalizing schemas for the database design project of the
course.
The modules of JMathNorm were written in Mathematica utilizing only basic
list/set operations as the fundamental data structure. These operations, empowered by the symbolic nature of Mathematica, resulted in an effective normalization tool. Currently, the tool does not have the ability to create SQL statements for
the normalized schema. A table creation facility geared towards a specific DBMS
is to be added to JMathNorm.

References

1. Yazici, A., Karakaya, Z.: Normalizing Relational Database Schemas Using Mathematica. LNCS, Vol. 3992, Springer-Verlag (2006) 375-382
2. Elmasri, R., Navathe, S.B.: Fundamentals of Database Systems, 5th Ed., Addison-Wesley (2007)
3. Kung, H., Case, T.: Traditional and Alternative Database Normalization Techniques: Their Impacts on IS/IT Students' Perceptions and Performance. International Journal of Information Technology Education, Vol. 1, No. 1 (2004) 53-76
4. Ceri, S., Gottlob, G.: Normalization of Relations and Prolog. Communications of the ACM, Vol. 29, No. 6 (1986)
5. Welzer, W., Rozman, I., Györkös, J.G.: Automated Normalization Tool. Microprocessing and Microprogramming, Vol. 25 (1989) 375-380
6. Akehurst, D.H., Bordbar, B., Rodgers, P.J., Dalgliesh, N.T.G.: Automatic Normalization via Metamodelling. Proc. of the ASE 2002 Workshop on Declarative Meta Programming to Support Software Development (2002)
7. Kung, H-J., Tung, H-L.: A Web-based Tool to Enhance Teaching/Learning Database Normalization. Proc. of the 2006 Southern Association for Information Systems Conference (2006) 251-258
8. Wolfram, S.: The Mathematica Book, 4th Ed., Cambridge University Press (1999)
9. Du, H., Wery, L.: Micro: A Normalization Tool for Relational Database Designers. Journal of Network and Computer Applications, Vol. 22 (1999) 215-232
10. Manning, M.V.: Database Design, Application Development and Administration, 2nd Ed., McGraw-Hill (2004)
11. Diederich, J., Milton, J.: New Methods and Fast Algorithms for Database Normalization. ACM Trans. on Database Systems, Vol. 13, No. 3 (1988) 339-365
12. Ozkarahan, E.: Database Management: Concepts, Design and Practice, Prentice Hall (1990)
13. Bernstein, P.A.: Synthesizing Third Normal Form Relations from Functional Dependencies. ACM Trans. on Database Systems, Vol. 1, No. 4 (1976) 277-298

Symbolic Manipulation of B-spline Basis Functions with Mathematica
A. Iglesias1, R. Ipanaque2, and R.T. Urbina2

1 Department of Applied Mathematics and Computational Sciences,
University of Cantabria, Avda. de los Castros,
s/n, E-39005, Santander, Spain
2 Department of Mathematics, National University of Piura,
Urb. Miraflores s/n, Castilla, Piura, Perú

Abstract. B-spline curves and surfaces are the most common and most important geometric entities in many fields, such as computer-aided design and manufacturing (CAD/CAM) and computer graphics. However, to the best of our knowledge, no computer algebra package includes specialized symbolic routines for dealing with B-splines so far. In this paper, we describe a new Mathematica program to compute the B-spline basis functions symbolically. The performance of the code, along with the description of the main commands, is discussed by using some illustrative examples.


1 Introduction

B-spline curves and surfaces are the most common and most important geometric entities in many fields, such as computer-aided design and manufacturing (CAD/CAM) and computer graphics. In fact, they have become the standard for computer representation, design and data exchange of geometric information in the automotive, aerospace and ship-building industries [1]. In addition, they are very intuitive, easy to modify and manipulate, thus allowing designers to modify the shape interactively. Moreover, the algorithms involved are quite fast and numerically stable and, therefore, well suited for real-time applications in a variety of fields, such as CAD/CAM [1,7], computer graphics and animation, geometric processing [5], artificial intelligence [2,3] and many others.
Although there is a wealth of powerful algorithms for B-splines (see, for instance, [6]), they usually perform in a numerical way. Surprisingly, although there is a large collection of very powerful general-purpose computer algebra systems, none of them includes specific commands or specialized routines for dealing with B-splines symbolically. The present work is aimed at bridging this gap. This paper describes a new Mathematica program for computing B-spline basis functions in a fully symbolic way. Because these basis functions are at the core of almost any algorithm for B-spline curves and surfaces, their efficient manipulation is a critical step we have accomplished in this paper. The program is also able to deal with B-spline curves and surfaces. However, this paper focuses on the computation of B-spline basis functions because of limitations of space. The program has been
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 194-202, 2007.
© Springer-Verlag Berlin Heidelberg 2007



implemented in Mathematica v4.2 [8], although later releases are also supported. The program provides the user with a highly intuitive, mathematical-looking output consistent with Mathematica's notation and syntax [4].
The structure of this paper is as follows: Section 2 provides some mathematical background on B-spline basis functions. Then, Section 3 introduces the new Mathematica program for computing them and describes the main commands implemented within. The performance of the code is also discussed in this section by using some illustrative examples.

2 Mathematical Preliminaries

Let T = {u0, u1, u2, . . . , ur−1, ur} be a nondecreasing sequence of real numbers called knots. T is called the knot vector. The i-th B-spline basis function Ni,k(t) of order k (or equivalently, degree k − 1) is defined by the recurrence relations

    Ni,1(t) = 1 if ui ≤ t < ui+1, and 0 otherwise,    i = 0, 1, 2, . . . , r − 1        (1)

    Ni,k(t) = [(t − ui)/(ui+k−1 − ui)] Ni,k−1(t) + [(ui+k − t)/(ui+k − ui+1)] Ni+1,k−1(t)        (2)


for k > 1. Note that the i-th B-spline basis function of order 1, Ni,1(t), is a piecewise constant function with value 1 on the interval [ui, ui+1), called the support of Ni,1(t), and zero elsewhere. This support can be either an interval or reduce to a point, as the knots ui and ui+1 are not necessarily different. If necessary, the convention 0/0 = 0 in eq. (2) is applied. The number of times a knot appears in the knot vector is called the multiplicity of the knot and has an important effect on the shape and properties of the associated basis functions. Any basis function of order k > 1, Ni,k(t), is a linear combination of two consecutive functions of order k − 1, where the coefficients are linear polynomials in t, such that its order (and hence its degree) increases by 1. Simultaneously, its support is the union of the (partially overlapping) supports of the former basis functions of order k − 1 and, consequently, it usually enlarges.
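As a cross-check of eqs. (1)-(2), the recurrence is also easy to prototype numerically. The following Python sketch is our own illustration (the paper's implementation is symbolic, in Mathematica) and uses the 0/0 = 0 convention described above:

```python
def bspline_basis(i, k, knots, t):
    """Evaluate the B-spline basis function N_{i,k}(t) of order k
    (degree k - 1) on the knot vector `knots` via the recurrence (1)-(2)."""
    if k == 1:
        # Eq. (1): a step function on the (possibly empty) span [u_i, u_{i+1})
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    # Eq. (2), with the convention that any 0/0 term is taken as 0
    left = right = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, knots, t))
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, knots, t))
    return left + right
```

On the uniform knot vector {1, 2, 3, 4, 5}, for example, this reproduces the quadratic value N0,3(5/2) = −11/2 + 5(5/2) − (5/2)² = 3/4, and on the multiple-knot vector {0, 0, 1, 1, 2, 2, 2} it returns an identically null N4,2.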

3 Symbolic Computation of B-spline Basis Functions

This section describes the Mathematica program we developed to compute the B-spline basis functions in a fully symbolic way. For the sake of clarity, the program will be explained through some illustrative examples.
The main command, Ni,k[knots,var], returns the i-th B-spline basis function of order k in the variable var associated with an arbitrary knot vector knots, as defined by eqs. (1)-(2). For instance, eq. (1) can be obtained as:
In[1]:= N0,1[{ui,ui+1},t]
Out[1] := Which[t < ui, 0, ui ≤ t < ui+1, 1, t ≥ ui+1, 0]



where the output consists of several couples (condition, value) that reproduce the structure of the right-hand side of eq. (1). The command Which evaluates those conditions and returns the value associated with the first condition yielding True. Our command PiecewiseForm displays the same output with an appearance closer to eq. (1):

Out[2] := 0, t < ui;   1, ui ≤ t < ui+1;   0, t ≥ ui+1
This output shows the good performance of these commands when handling fully symbolic input. Let us now consider a symbolic knot vector of length 4 such as:
Out[3] := {x(1), x(2), x(3), x(4)}
Now, we compute the basis functions up to order 3 for this knot vector as
In[4]:= Table[Table[Ni,k[%,t] // PiecewiseForm, {i,0,3-k}], {k,1,3}]
Out[4] := (a triangular array of piecewise expressions; only the first entry of each order is reproduced here)

N0,1:  0, t < x(1);   1, x(1) ≤ t < x(2);   0, t ≥ x(2)    (and analogously N1,1, N2,1, N3,1)
N0,2:  0, t < x(1);   (t − x(1))/(x(2) − x(1)), x(1) ≤ t < x(2);   (x(3) − t)/(x(3) − x(2)), x(2) ≤ t < x(3);   0, t ≥ x(3)    (and analogously N1,2, N2,2)
N0,3:  a piecewise quadratic supported on [x(1), x(4)), whose coefficients involve denominators such as (x(1) − x(2))(x(1) − x(3)) and x(3) − x(4)    (and analogously N1,3)
Note that, according to eq. (2), the i-th basis function of order k is obtained from the i-th and (i + 1)-th basis functions of order k − 1. This means that the number of basis functions decreases as the order increases, and conversely. Therefore, for the set of basis functions up to order 3 we compute the Ni,k with i = 0, . . . , 3 − k for k = 1, 2, 3. The whole set exhibits a triangular structure of embedded lists in Out[4] for each hierarchical level (i.e., for each order value).
The knot vectors can be classified into three groups. The first one is the uniform knot vector; in it, each knot appears only once and the distance between



Fig. 1. (top-bottom, left-right) B-spline basis functions for the uniform knot vector {1, 2, 3, 4, 5} and orders 1, 2, 3 and 4, respectively

consecutive knots is always the same. As a consequence, each basis function is

similar to the previous one but shifted to the right according to such a distance.
To illustrate this idea, let us proceed with a numerical knot vector so that the
corresponding basis functions can be displayed graphically. We compute the basis
functions of order 1 for the uniform knot vector {1, 2, 3, 4, 5}:
In[5]:=Table[Ni,1[{1,2,3,4,5},t] //PiecewiseForm,{i,0,3}]



Out[5] := { 0, t < 1;  1, 1 ≤ t < 2;  0, t ≥ 2 },
          { 0, t < 2;  1, 2 ≤ t < 3;  0, t ≥ 3 },
          { 0, t < 3;  1, 3 ≤ t < 4;  0, t ≥ 4 },
          { 0, t < 4;  1, 4 ≤ t < 5;  0, t ≥ 5 }

From (2) we can see that the basis functions of order 2 are linear combinations
of these step functions of order 1 (shown in Figure 1(top-left)). The coecients
of such a linear combination are linear polynomials as well, so the resulting basis
functions are actually piecewise linear functions (see Fig. 1(top-right)):
In[6]:=Table[Ni,2[{1,2,3,4,5},t] //PiecewiseForm,{i,0,2}]

Out[6] := { 0, t < 1;  −1 + t, 1 ≤ t < 2;  3 − t, 2 ≤ t < 3;  0, t ≥ 3 },
          { 0, t < 2;  −2 + t, 2 ≤ t < 3;  4 − t, 3 ≤ t < 4;  0, t ≥ 4 },
          { 0, t < 3;  −3 + t, 3 ≤ t < 4;  5 − t, 4 ≤ t < 5;  0, t ≥ 5 }

Similarly, the basis functions of order 3 are linear combinations of the basis
functions of order 2 in Out[6] according to (2):


A. Iglesias, R. Ipanaque, and R.T. Urbina

In[7]:=Table[Ni,3[{1,2,3,4,5},t] //PiecewiseForm,{i,0,1}]

Out[7] := { 0, t < 1;  (t − 1)²/2, 1 ≤ t < 2;  −11/2 + 5t − t², 2 ≤ t < 3;  (4 − t)²/2, 3 ≤ t < 4;  0, t ≥ 4 },
          { 0, t < 2;  (t − 2)²/2, 2 ≤ t < 3;  −23/2 + 7t − t², 3 ≤ t < 4;  (5 − t)²/2, 4 ≤ t < 5;  0, t ≥ 5 }

Note that we obtain two piecewise polynomial functions of degree 2 (i.e. order 3), displayed in Fig. 1(bottom-left), both having a similar shape but shifted
by length 1 with respect to each other. Finally, there is only one basis function of
order 4 for the given knot vector (the piecewise polynomial function of degree 3
in Fig. 1(bottom-right)):
In[8]:=Ni,4[{1,2,3,4,5},t] //PiecewiseForm

Out[8] := { 0, t < 1;  (t − 1)³/6, 1 ≤ t < 2;  (31 − 45t + 21t² − 3t³)/6, 2 ≤ t < 3;  (−131 + 117t − 33t² + 3t³)/6, 3 ≤ t < 4;  (5 − t)³/6, 4 ≤ t < 5;  0, t ≥ 5 }
One of the most exciting features of modern computer algebra packages is their ability to integrate symbolic, numerical and graphical capabilities within a unified framework. For example, we can easily display the basis functions of Out[5]-Out[8] on the interval (1, 5):
{i,0,4-#}],DisplayFunction->Identity]& /@ Range[4];
Out[10] := See Figure 1
A qualitatively different behavior is obtained when any of the knots appears more than once (this case is usually referred to as a non-uniform knot vector). An example is given by the knot vector {0, 0, 1, 1, 2, 2, 2}. In this case, the basis functions of order 1 are given by:
In[11]:=Table[Ni,1[{0,0,1,1,2,2,2},t] // PiecewiseForm,{i,0,5}]

Out[11] := { 0 for all t } (N0,1),
           { 0, t < 0;  1, 0 ≤ t < 1;  0, t ≥ 1 } (N1,1),
           { 0 for all t } (N2,1),
           { 0, t < 1;  1, 1 ≤ t < 2;  0, t ≥ 2 } (N3,1),
           { 0 for all t } (N4,1),
           { 0 for all t } (N5,1)



Note that the knot spans involving the same knot (t = 0, t = 1 or t = 2) at both ends reduce to a single point. This causes some basis functions (N0,1, N2,1, N4,1 and N5,1 in Out[11]) to be zero. This behavior continues until the order reaches the multiplicity value of the multiple knot minus 2. For instance, there is an identically null basis function of order 2, namely N4,2:
In[12]:=Table[Ni,2[{0,0,1,1,2,2,2},t] // PiecewiseForm,{i,0,4}]

Out[12] := { 0, t < 0;  1 − t, 0 ≤ t < 1;  0, t ≥ 1 },
           { 0, t < 0;  t, 0 ≤ t < 1;  0, t ≥ 1 },
           { 0, t < 1;  2 − t, 1 ≤ t < 2;  0, t ≥ 2 },
           { 0, t < 1;  t − 1, 1 ≤ t < 2;  0, t ≥ 2 },
           { 0 for all t }
The basis functions of order 3 become:
In[13]:=Table[Ni,3[{0,0,1,1,2,2,2},t] // PiecewiseForm,{i,0,3}]


Out[13] := { 0, t < 0;  2t − 2t², 0 ≤ t < 1;  0, t ≥ 1 },
           { 0, t < 0;  t², 0 ≤ t < 1;  (2 − t)², 1 ≤ t < 2;  0, t ≥ 2 },
           { 0, t < 1;  2(t − 1)(2 − t), 1 ≤ t < 2;  0, t ≥ 2 },
           { 0, t < 1;  (t − 1)², 1 ≤ t < 2;  0, t ≥ 2 }
Multiple knots do influence the shape and properties of the basis functions; for instance, each time a knot is repeated, the continuity of the basis functions whose support includes this multiple knot decreases. In particular, the continuity of Ni,k at an interior knot is C^(k−m−1) [6], m being the multiplicity of the knot. To illustrate this fact, we compute the unique basis function of order 6:
In[14]:=(f6=N0,6[{0,0,1,1,2,2,2},t]) // PiecewiseForm



Out[14] := { 0, t < 0;  (10 − 7t)t⁴/8, 0 ≤ t < 1;  −(t − 2)³(23t² − 32t + 12)/8, 1 ≤ t < 2;  0, t ≥ 2 }

As we can see, m = 2 for the knot t = 1 and hence N0,6 is C³-continuous at this point. This implies that its third derivative, given by:
In[15]:=(f63=D[f6,{t,3}])//Simplify //PiecewiseForm


Out[15] := { 0, t < 0;  15t(4 − 7t)/2, 0 ≤ t < 1;  −15(23t² − 68t + 48)/2, 1 ≤ t < 2;  0, t ≥ 2 }



Fig. 2. (left) 6th-order basis function; (right) its third derivative

Fig. 3. Bspline curve and its control polygon (the set of segments connecting the control
points) for: (left) a non-periodic knot vector; (right) a uniform knot vector

is still continuous but no longer smooth (the continuity of the tangent vectors is lost at this point). Figure 2 displays both the basis function of order 6 (on the left) and its third derivative (on the right):
PlotRange->All]& /@ {f6,f63}
Out[16] := See Figure 2
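The stated C³ (but not C⁴) continuity at t = 1 can also be verified numerically from the two polynomial pieces of Out[14]. The Python sketch below is our own check; the coefficient lists are the expanded forms of those two pieces:

```python
def poly_deriv(c):
    """Derivative of a polynomial given by ascending coefficients c."""
    return [i * c[i] for i in range(1, len(c))]

def poly_eval(c, t):
    return sum(ci * t**i for i, ci in enumerate(c))

left = [0, 0, 0, 0, 10/8, -7/8]            # (10 - 7t) t^4 / 8,  0 <= t < 1
right = [12, -50, 80, -60, 170/8, -23/8]   # -(t-2)^3 (23t^2 - 32t + 12)/8

matches = []
for order in range(6):
    matches.append(abs(poly_eval(left, 1.0) - poly_eval(right, 1.0)) < 1e-9)
    left, right = poly_deriv(left), poly_deriv(right)
print(matches)   # -> [True, True, True, True, False, False]
```

Derivatives of orders 0 through 3 of the two pieces agree at the double knot t = 1, while the fourth and fifth do not, exactly as C^(k−m−1) = C³ predicts.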
The most common case of non-uniform knot vectors consists of repeating the end knots as many times as the order, while interior knots appear only once (such a knot vector is called a non-periodic knot vector). In general, a B-spline curve does not interpolate any of its control points; interpolation only occurs for non-periodic knot vectors (the B-spline curve does interpolate the end control points) [6,7]. To illustrate this property, we consider the BSplineCurve command (whose input consists of the list of control points pts, the order k, the knot vector knots and the variable var), defined as:
In[17]:=BSplineCurve[pts_List,k_,knots_List,var_]:= Module[{bs},
bs=Table[Ni,k[knots,var],{i,0,Length[pts]-1}]; bs.pts // Simplify];
For instance, let us consider a set of 2D control points and two different knot vectors (a non-periodic vector kv1 and a uniform knot vector kv2) and compute the B-spline curve of order 3:



In[20]:=BSplineCurve[cp,3,#,t]& /@ {kv1,kv2};
ParametricPlot[#1 //Evaluate,#2,PlotRange->All,
Out[22] := See Figure 3
The curve interpolates the end control points in the first case, while no control points are interpolated in the second case at all. For graphical purposes, the support of the B-spline curves is restricted to the points such that Σi Ni,k(t) = 1. The next input computes the graphical support for the curves in Fig. 3:

In[23]:= Σi Ni,3[#,t]& /@ {kv1,kv2} // PiecewiseForm
Out[23] := (two piecewise expressions, abridged here; each sum of basis functions equals 1 exactly on the intervals (0, 3) and (3, 6), respectively)
This result makes it evident that the B-spline curves in Fig. 3 must be displayed on the intervals (0, 3) and (3, 6), respectively (see the last line of In[21]).
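The endpoint-interpolation property can also be confirmed numerically. In this Python sketch (our own illustration; the control points and the clamped knot vector are made-up stand-ins for the paper's cp and kv1), the curve C(t) = Σi Ni,k(t) pi reproduces the first control point at t = 0:

```python
def basis(i, k, u, t):
    # Cox-de Boor recursion, eqs. (1)-(2), with the 0/0 = 0 convention
    if k == 1:
        return 1.0 if u[i] <= t < u[i + 1] else 0.0
    s = 0.0
    if u[i + k - 1] != u[i]:
        s += (t - u[i]) / (u[i + k - 1] - u[i]) * basis(i, k - 1, u, t)
    if u[i + k] != u[i + 1]:
        s += (u[i + k] - t) / (u[i + k] - u[i + 1]) * basis(i + 1, k - 1, u, t)
    return s

def bspline_curve(pts, k, u, t):
    """Point on the B-spline curve of order k at parameter t."""
    return tuple(sum(basis(i, k, u, t) * p[d] for i, p in enumerate(pts))
                 for d in range(len(pts[0])))

# A non-periodic (clamped) knot vector repeats the end knots k times,
# so the curve interpolates the end control points.
cp = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (5.0, 1.0), (6.0, 0.0)]
kv1 = [0, 0, 0, 1, 2, 3, 3, 3]           # order 3, 5 control points
print(bspline_curve(cp, 3, kv1, 0.0))    # -> (0.0, 0.0), the first control point
```

With a uniform knot vector in place of kv1 no control point is reproduced, matching the behavior shown in Fig. 3.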
Acknowledgements. This research has been supported by the Spanish Ministry of Education and Science, Project Ref. #TIN2006-13615.

References

1. Choi, B.K., Jerard, R.B.: Sculptured Surface Machining. Theory and Applications. Kluwer Academic Publishers, Dordrecht/Boston/London (1998)
2. Echevarría, G., Iglesias, A., Gálvez, A.: Extending neural networks for B-spline surface reconstruction. Lecture Notes in Computer Science, 2330 (2002) 305-314
3. Iglesias, A., Echevarría, G., Gálvez, A.: Functional networks for B-spline surface reconstruction. Future Generation Computer Systems, 20(8) (2004) 1337-1353
4. Maeder, R.: Programming in Mathematica, Second Edition, Addison-Wesley, Redwood City, CA (1991)
5. Patrikalakis, N.M., Maekawa, T.: Shape Interrogation for Computer Aided Design and Manufacturing. Springer Verlag (2002)



6. Piegl, L., Tiller, W.: The NURBS Book (Second Edition). Springer Verlag, Berlin
Heidelberg (1997)
7. Rogers, D.F.: An Introduction to NURBS. With Historical Perspective. Morgan
Kaufmann, San Francisco (2001)
8. Wolfram, S.: The Mathematica Book, Fourth Edition, Wolfram Media, Champaign,
IL & Cambridge University Press, Cambridge (1999)

Rotating Capacitor and a Transient Electric Network

Haiduke Sarafian1 and Nenette Sarafian2

1 The Pennsylvania State University
University College
York, PA 17403
2 Penn State Shock Trauma Center
The Milton S. Hershey Medical Center
Hershey, PA 17033

Abstract. The authors designed a rotating parallel-plate capacitor; one of the plates is assumed to turn about the common vertical axis through the centers of the square plates. We insert this capacitor in series with a resistor, forming an RC circuit. We analyze the characteristics of charging and discharging scenarios on two different parallel tracks. On the first track we drive the circuit with a DC power supply. On the second track, we drive the circuit with an AC source.
The analyses of the circuits encounter non-linear differential equations. We format the underlying equations into their generic forms. We then apply Mathematica [1] NDSolve to solve them. This work is an example showing how, with the help of Mathematica, one is able to augment the scope of the traditional studies.
Keywords: Mathematica, Electric Network, Geometry.

1 Introduction and Motivation

It is a far-fetched concept to think about a transient electrical circuit and incorporate its characteristics into a discrete and abstract geometrical problem. The authors have even taken the initiative one step further, relating these two basic concepts to the kinematics of mechanics. In other words, this article shows how these three discrete concepts are brought together and molded into one coherent and unique project. To accomplish this, one needs to think creatively; Mathematica is the tool of choice helping to explore the possibilities. This article, including the Introduction, is composed of eight sections. In Section 2, we apply Mathematica to evaluate the overlapping area of the two rotating squares about their common vertical axis. In Section 3 we incorporate the rotational kinematics and consider two different scenarios: 1) a symmetrical, uniform rotation; and 2) an asymmetrical, accelerated rotation.
In Sections 4-7, we view the overlapping squares as being two parallel metallic plates that are separated by a gap forming a parallel-plate capacitor. Since the
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 203-210, 2007.
© Springer-Verlag Berlin Heidelberg 2007



area of the overlapping plates determines the capacitance of the capacitor, the rotating plates make the capacitor a variable one. Technical literature, particularly Mathematica-based articles and reports, lacks one such view. It is the ultimate objective of this project to analyze the response of the electrical circuits to the kinematics of the rotating plates.
Specifically, in this article, we address the modifications of the basic responses of the electrical circuits composed of a resistor connected in series with our designed, time-dependent capacitor. In particular, we analyze the characteristics of the RC circuits driven with DC as well as AC sources. In conjunction with our analysis, in Section 8 we close the article suggesting a few related research-flavored circuit analysis projects.


2 Overlapping Area of the Rotating Squares

Figure 1 shows two identical overlapping squares. The bottom square, designated with non-prime vertices, is fastened to the xy coordinate system. The top square, designated with prime vertices, is rotated counterclockwise about the common vertical axis through the common origin O by an angle θ. The squares have the side length L, and the rotation angle θ is the angle between the semi-diagonals OP1 and OP1'.
To evaluate the overlapping area of these two squares we evaluate the area of the trapezoid oabco; the overlapping area then equals four times the latter. The intersecting points of the rotated sides of the top square with the sides of the bottom one are labeled a, b, and c. Utilizing the coordinates of these points, the area of the trapezoid is the sum of the areas of the two triangles abc and oac.

Fig. 1. Display of two rotated squares. The bottom square is fastened to the xy coordinate system; the top square is rotated counterclockwise by θ radians.



To evaluate the coordinates of a, b, and c we write the equations for the slanted lines, P4'P1' and P1'P2', and intersect them with the sides of the bottom square. Intersection of the former with P4P1 and P1P2 gives the coordinates of a and b, respectively. Similarly, the intersection of the latter with P1P2 yields the coordinates of c. These are: a = (L/2, (L/2) tan(θ/2)), b = ((L/2)(1 − tan(θ/2))/(1 + tan(θ/2)), L/2), and c = ((L/2) tan(θ/2), L/2).
To evaluate the areas of the needed triangles, we convert the above coordinates into Mathematica code. The inserted 1s in the third position of the coordinate triplets are for further use in the determinant-based area formula:

a[L_,θ_] = {L/2, (L/2) Tan[θ/2], 1},  b[L_,θ_] = {(L/2)(1 − Tan[θ/2])/(1 + Tan[θ/2]), L/2, 1},  c[L_,θ_] = {(L/2) Tan[θ/2], L/2, 1}.

We define two auxiliary functions, abc[L_,θ_] = {a[L,θ], b[L,θ], c[L,θ]} and oac[L_,θ_] = {o, a[L,θ], c[L,θ]}.
The needed areas are areaABC[L_,θ_] = (1/2) Det[abc[L,θ]], areaOAC[L_,θ_] = (1/2) Det[oac[L,θ]], and areaOABCO[L_,θ_] = areaOAC[L,θ] + areaABC[L,θ].
We divide the overlapping area by the area of the square, L², and plot its normalized values as a function of the rotation angle θ. Figure 2 shows the normalized area starts and ends at the same values. Its value after a π/4 radian turn drops to about 83% of the maximum value. The plot, as one anticipates, is symmetric about π/4.

Fig. 2. The normalized values of the overlapping area of the squares as a function of the rotation angle θ
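The normalized overlap can be cross-checked without any symbolic machinery. The following Python sketch is our own verification (not the authors' code): it clips the rotated square against the fixed one with the Sutherland-Hodgman algorithm and measures the result with the shoelace formula:

```python
import math

def clip(subject, a, b):
    """Keep the part of polygon `subject` on the left of directed edge a->b."""
    out = []
    for i in range(len(subject)):
        p, q = subject[i], subject[(i + 1) % len(subject)]
        side = lambda r: (b[0]-a[0])*(r[1]-a[1]) - (b[1]-a[1])*(r[0]-a[0])
        sp, sq = side(p), side(q)
        if sp >= 0:
            out.append(p)
        if sp * sq < 0:                      # edge crosses the clip line
            t = sp / (sp - sq)
            out.append((p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1])))
    return out

def shoelace(poly):
    return abs(sum(p[0]*q[1] - q[0]*p[1]
                   for p, q in zip(poly, poly[1:] + poly[:1]))) / 2.0

def normalized_overlap(theta, L=1.0):
    h = L / 2.0
    square = [(h, h), (-h, h), (-h, -h), (h, -h)]    # counterclockwise
    c, s = math.cos(theta), math.sin(theta)
    rotated = [(c*x - s*y, s*x + c*y) for x, y in square]
    for i in range(4):                       # clip by each side of the fixed square
        rotated = clip(rotated, square[i], square[(i + 1) % 4])
    return shoelace(rotated) / L**2
```

normalized_overlap(math.pi/4) evaluates to 2(√2 − 1) ≈ 0.828, the roughly 83% dip reported above, and the curve is symmetric about π/4 as anticipated.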

3 Modes of Mechanical Rotations

In this section we extend the analysis of Section 2. Here, instead of viewing the rotation as being a discrete and purely geometrical concept, we view it as a kinematic process. We set the rotation angle θ = ωt; that is, we introduce the continuous time parameter t. For ω = 2π/T with the period T = 4 s, we explore the uniform rotation. For an asymmetrical case, we consider a rotation with a constant angular acceleration. According to θ = (1/2)αt², rotating the square by π/2 in one second yields α = π rad/s². The corresponding normalized overlapping areas are displayed in Fig 3.
Show[GraphicsArray[{UniformRotation, AcceleratedRotation}], DisplayFunction -> $DisplayFunction]











Fig. 3. The graphs are the normalized values of the overlapping areas for: (a) a uniform rotation with ω = π/2 rad/s (the left graph), and (b) a uniform angular acceleration with α = π rad/s² (the right graph)

4 Electrical Networks

Now we consider an RC series circuit. One such circuit, driven by a DC power supply, is shown in Figure 4. The circuit is composed of two loops. Throwing the DPDT (Double-Pole Double-Throw) switch to the a's position charges the capacitor, while setting the switch to the b's position discharges the charged capacitor.
As we pointed out in the Introduction, in this section we view the overlapping
squares as being two parallel metallic plates that are separated by a gap forming

Fig. 4. The schematics of a DC-driven RC circuit. Throwing the DPDT switch onto the a's charges the capacitor, while throwing the switch onto the b's discharges the charged capacitor.



a parallel-plate capacitor. Since the capacitance of a parallel-plate capacitor is proportional to the overlapping area of the plates, the continuous rotation of the plates makes the capacitor time-dependent. It is the objective of this section to analyze the characteristic responses of one such time-dependent capacitor in the charging and discharging processes.

5 Characteristics of Charging and Discharging a DC Driven RC Circuit with Time-Dependent Uniformly Rotating Plates

For the charging process we apply Kirchhoff's circuit law [2]; this gives

    dQ(t)/dt + (1/τ)(A0/A(t)) Q(t) − 1/τ = 0,        (1)

For the sake of convenience, we assume V C0 = 1, where C0 is the capacitance of the parallel-plate capacitor with the plates completely overlapped; Q(t) and A(t) are the capacitor's charge and the overlapping area at time t, respectively; A0 is the area of one of the squares; and τ = RC0 is the time-constant of the circuit. For a constant capacitor A(t) ≡ A0, and eq(1) yields the standard solution Q(t) = 1 − e^(−t/τ). In this equation the maximum charge is normalized to unity.
For the rotating plates, however, eq(1) does not have an analytic solution. We apply Mathematica's NDSolve along with an appropriate initial condition and solve the equation numerically; this yields Q(t). We graphically compare its characteristics vs. the characteristics of an equivalent RC circuit; see Fig 5.
Similarly, we analyze the characteristics of the discharging process. Equation (1) for the corresponding discharging process is dQ(t)/dt + (1/τ)(A0/A(t)) Q(t) = 0. This equation for a constant capacitor, A0 = A(t), yields dQ(t)/dt + (1/τ) Q(t) = 0, and gives Q(t) = e^(−t/τ). For the rotating capacitor, however, its solution is Q(t) = e^(−(1/τ)∫₀ᵗ (A0/A(ξ)) dξ). To solve the latter we apply Mathematica's NIntegrate. This yields the needed values. The results are displayed in Fig 5.

Fig. 5. Display of charging, discharging and the overlapping area of the uniformly rotating plates. For the first two graphs from left to right, the outer and the inner curves/dots represent the constant and time-dependent capacitors, respectively. The far right graph is borrowed from Fig 3.
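The charging response of eq. (1) is easy to reproduce with a fixed-step integrator. The Python sketch below is our reconstruction (the paper uses NDSolve); the closed form for the normalized overlap A(θ)/A0 is one we derived ourselves for the octagonal intersection of the squares, and τ = 1/6 s and ω = π/2 rad/s are our reading of the text's parameters:

```python
import math

def overlap_ratio(theta):
    """Normalized overlap A(theta)/A0 of two equal squares rotated by theta
    about their common center (period pi/2); closed form derived by us."""
    th = theta % (math.pi / 2)
    return 1 - 0.5 * (1 - math.tan(th / 2)) * (1 - math.tan(math.pi / 4 - th / 2))

def charge(t_end, tau=1/6, omega=math.pi/2, dt=1e-4):
    """RK4 integration of eq. (1) for charging:
    dQ/dt = (1 - Q * A0/A(omega*t)) / tau, with Q(0) = 0."""
    def f(t, q):
        return (1 - q / overlap_ratio(omega * t)) / tau
    t, q = 0.0, 0.0
    while t < t_end - 1e-12:
        k1 = f(t, q)
        k2 = f(t + dt/2, q + dt/2 * k1)
        k3 = f(t + dt/2, q + dt/2 * k2)
        k4 = f(t + dt, q + dt * k3)
        q += dt / 6 * (k1 + 2*k2 + 2*k3 + k4)
        t += dt
    return q
```

charge(1.0) lands visibly below the constant-capacitor value 1 − e⁻⁶ ≈ 0.998, matching the slower approach to the plateau seen in Fig 5.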

It is interesting to note that the charging and discharging circuits respond differently to the time-varying capacitors; the impact of the time-dependent capacitor is more pronounced for the former. Moreover, for the chosen time-constant τ = 1/6 s, although the constant capacitor reaches its plateau within one second, it appears the variable capacitor requires a longer time span.

6 Characteristics of Charging and Discharging a DC Driven RC Circuit with Time-Dependent Accelerated Rotating Plates

One may comfortably apply the analysis of the previous section to generate the characteristic curves associated with the uniformly accelerated rotating plates. The Mathematica codes may easily be modified to yield the needed information; the associated graphic outputs are displayed in Fig 6.


Fig. 6. Display of charging, discharging and the overlapping area of the uniformly accelerated rotating plates. The graph codes are the same as in Fig 5. The far right graph is borrowed from Fig 3.

To form an opinion about the characteristics of the charging curve for the variable capacitor, one needs to view it together with the far right graph. The rotating plates in this case are accelerated, illustrating that for identical time intervals, the overlapping area at the beginning is greater than the overlapping area at the end of the interval. The effects of the asymmetrical rotation are most clearly visible at the tail of the curve. Similar to the uniform rotation (see the second plot of Fig 5), the impact of the non-uniform rotation for the discharge circuit is negligible.

7 Characteristics of Charging and Discharging an AC Driven RC Circuit with Time-Dependent Capacitor

In this section we analyze the charging and the discharging characteristics of an RC series circuit driven with an AC source. Schematically speaking, this implies that in Fig 4 we replace the DC power supply with an AC source. For this circuit, Kirchhoff's law yields

    dQ(t)/dt + (1/τ)(A0/A(t)) Q(t) − (1/τ) sin(2πf t) = 0,        (2)




In this equation f is the frequency of the signal, and the voltage amplitude is set to one volt.
Equation (2) is a non-trivial, non-linear differential equation. To solve eq(2), we apply NDSolve along with the corresponding initial condition. The response of the circuit is compared to the equivalent circuit with a constant capacitor; see Fig 7.

Fig. 7. Plot of the charge vs. time for the AC-driven circuit. The outer, inner and the dashed curves are the capacitor's charge for the constant and the time-dependent capacitors for uniform and accelerated rotations, respectively.

Utilizing the Mathematica code, one may analyze the frequency sensitivity of the circuit. As the result of one such analysis, we observe that the differences between these characteristics are pronounced, provided the frequencies are set to less than 1 Hz.
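The sub-1 Hz sensitivity can be probed with the same fixed-step approach. This Python sketch is our reconstruction of eq. (2) (the paper uses NDSolve), reusing an overlap formula we derived ourselves; it measures the largest gap between the constant-capacitor and rotating-capacitor charges at a low and a higher drive frequency:

```python
import math

def overlap_ratio(theta):
    # normalized overlap A(theta)/A0 of the rotating plates (period pi/2);
    # closed form derived by us for the octagonal intersection of the squares
    th = theta % (math.pi / 2)
    return 1 - 0.5 * (1 - math.tan(th / 2)) * (1 - math.tan(math.pi / 4 - th / 2))

def solve(f, area, t_end=2.0, tau=1/6, omega=math.pi/2, dt=1e-4):
    """RK4 for eq. (2): dQ/dt = (sin(2 pi f t) - Q * A0/A(omega t)) / tau."""
    g = lambda t, q: (math.sin(2 * math.pi * f * t) - q / area(omega * t)) / tau
    t, q, history = 0.0, 0.0, []
    for _ in range(int(round(t_end / dt))):
        k1 = g(t, q)
        k2 = g(t + dt / 2, q + dt / 2 * k1)
        k3 = g(t + dt / 2, q + dt / 2 * k2)
        k4 = g(t + dt, q + dt * k3)
        q += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        history.append(q)
    return history

def max_gap(f):
    """Peak charge difference between rotating and constant capacitors."""
    rotating = solve(f, overlap_ratio)
    constant = solve(f, lambda theta: 1.0)
    return max(abs(a - b) for a, b in zip(rotating, constant))
```

max_gap(0.25) comes out several times larger than max_gap(5.0), consistent with the pronounced differences below 1 Hz noted above.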


8 Conclusions

As indicated in the Introduction, the authors have proposed a unique research project that has brought together three different subject areas: Geometry, Mechanics, and Electrical Networks. Mathematica, with its flexible and easy-to-use intricacies, is chosen as the ideal tool to analyze the project and address the what-if scenarios. As pointed out in the text, some of the derived results are intuitively justified. And for the hard-to-predict cases, we applied Mathematica to analyze the problem and to form an opinion. As an open-ended question and research-oriented project, one may attempt to modify the presented analysis along with the accompanying codes to investigate the response of parallel RC circuits. It would also be complementary to our theoretical analysis to manufacture a rotating capacitor to supplement the experimental data.



References

1. S. Wolfram, The Mathematica Book, 5th Ed., Cambridge University Press, 2003.
2. D. Halliday, R. Resnick, and J. Walker, Fundamentals of Physics, 7th Ed., New York: John Wiley and Sons, 2005.

Numerical-Symbolic Matlab Program for the Analysis of Three-Dimensional Chaotic Systems

Akemi Gálvez

Department of Applied Mathematics and Computational Sciences,
University of Cantabria, Avda. de los Castros,
s/n, E-39005, Santander, Spain

Abstract. In this paper, a new numerical-symbolic Matlab program for the analysis of three-dimensional chaotic systems is introduced. The program provides the users with a GUI (Graphical User Interface) that allows them to analyze any continuous three-dimensional system with a minimal input (the symbolic ordinary differential equations of the system along with some relevant parameters). Such an analysis can be performed either numerically (for instance, the computation of the Lyapunov exponents, the graphical representation of the attractor or the evolution of the system variables) or symbolically (for instance, the Jacobian matrix of the system or its equilibrium points). Some examples of the application of the program to analyze several chaotic systems are also given.


1 Introduction

The analysis of chaotic dynamical systems is one of the most challenging tasks in computational science. Because these systems are essentially nonlinear, their behavior is much more complicated than that of linear systems. In fact, even the simplest chaotic systems exhibit a bulk of different behaviors that can only be fully analyzed with the help of powerful hardware and software resources. This challenging issue has motivated an intensive development of programs and packages aimed at analyzing the range of different phenomena associated with chaotic systems.
Among these programs and packages, those based on computer algebra systems (CAS) have received increasing attention during the last few years. Recent examples can be found, for instance, in [2,3,5] for Matlab, in [4,7,9,10,11,12,16] for Mathematica and in [17] for Maple, to mention just a few examples. In addition to their outstanding symbolic features, the CAS also include optimized numerical routines, nice graphical capabilities and, in a few cases such as Matlab, the possibility to generate appealing GUIs (Graphical User Interfaces).
In this paper, the abovementioned features have been successfully applied to generate a new numerical-symbolic Matlab program for the analysis of three-dimensional chaotic systems. The program provides the users with a GUI that allows them to analyze any continuous three-dimensional system with a minimal input (the symbolic ordinary differential equations of the system along with some
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 211-218, 2007.
© Springer-Verlag Berlin Heidelberg 2007



relevant parameters). Such an analysis can be performed either numerically (for instance, the computation of the Lyapunov exponents, the graphical representation of the attractor or the evolution of the system variables over time) or symbolically (for instance, the Jacobian matrix of the system or its equilibrium points). This paper describes the main components of the system as well as some of its most remarkable features. Some examples of the application of the program to analyze several chaotic systems are also given.
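As a minimal stand-in for the numerical side of such an analysis, the following Python sketch (our own illustration, not the Matlab program) integrates the classic Lorenz system with RK4 and estimates its largest Lyapunov exponent by the standard two-trajectory renormalization (Benettin) method; a positive value is the usual numerical signature of chaos:

```python
import math

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    def add(a, b, c):
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = lorenz(s)
    k2 = lorenz(add(s, k1, dt / 2))
    k3 = lorenz(add(s, k2, dt / 2))
    k4 = lorenz(add(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def largest_lyapunov(steps=40000, dt=0.005, d0=1e-8):
    a = (1.0, 1.0, 20.0)
    for _ in range(2000):              # discard the transient, land on the attractor
        a = rk4_step(a, dt)
    b = (a[0] + d0, a[1], a[2])        # companion trajectory at distance d0
    total = 0.0
    for _ in range(steps):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
        d = math.dist(a, b)
        total += math.log(d / d0)
        # renormalize the companion back to distance d0 along the separation
        b = tuple(ai + d0 * (bi - ai) / d for ai, bi in zip(a, b))
    return total / (steps * dt)
```

For the standard parameters (σ = 10, ρ = 28, β = 8/3) the estimate comes out near the well-known value of about 0.9.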

2 Program Architecture and Implementation

The program introduced in this paper is comprised of four different components:

1. a set of numerical libraries containing the implementation of the commands and functions designed for the numerical tasks. They have been generated by using the native Matlab programming language and taking advantage of the wealth of numerical routines available in this system. Usually, these Matlab routines provide full control over a number of different options (such as the absolute and relative error tolerances, stopping criteria and others) and are fully optimized to offer the highest level of performance. In fact, this is one of the major strengths of the program and one of the main reasons to choose Matlab as its optimal programming environment.
2. a set of symbolic routines and functions. They have been implemented by using the Symbolic Math Toolbox, which provides access to several Maple routines for symbolic tasks.
3. the graphical commands for representation tasks. The powerful graphical capabilities of Matlab exceed those commonly available in other CAS such as Mathematica and Maple. Although our current needs do not require applying them to full extent, they spare the users the tedious and time-consuming task of implementing many routines for graphical output by themselves. Some nice viewing features such as 3D rotation, zooming in and out, coloring and others are also automatically inherited from the Matlab windows system.
4. a GUI. Matlab provides a mechanism to generate GUIs by using the so-called guide (GUI development environment). This feature is not commonly available in many other CAS so far. Although its implementation requires, for complex interfaces, a high level of expertise, it allows the end users to apply the program with minimal knowledge and input, thus facilitating its use and dissemination.
Regarding the implementation, this program has been developed by the author
in Matlab v6.0 on a Pentium IV processor at 2.4 GHz with 512 MB
of RAM. However, the program supports many different platforms, such as PCs
(with Windows 9x, 2000, NT, Me and XP) and UNIX workstations. The figures
in this paper correspond to the PC platform version. The graphical tasks are
performed by using the Matlab GUI for the higher-level functions (windowing,
menus, or input), while the built-in Matlab graphics commands are applied for

Numerical-Symbolic Matlab Program


rendering purposes. The numerical kernel has been implemented in the native
Matlab programming language, and the symbolic kernel has been created by
using the commands of the Symbolic Math Toolbox.

Some Illustrative Examples

In this section we show some applications of the program through several illustrative examples.

Visualization of Chaotic Attractors

This example is aimed at showing the numerical and graphical features of the
program. Figure 1 shows a screenshot of a typical session for the visualization of
chaotic attractors. The different windows involved in this task have been numbered for the sake of clarity: #1 indicates the main window of the program, from
where the other windows show up when invoked. The workflow is as follows:
first, the user inputs the system equations (upper part of window #1), which
are expressed symbolically. At this stage, only continuous three-dimensional
flows - described by a system of ordinary differential equations (ODEs) - are
considered. For instance, in Fig. 1 we consider Chua's circuit, given by:

x′ = α [y − x − f(x)] ,
y′ = x − y + z ,
z′ = −β y ,

where

f(x) = b x + (1/2)(a − b) [|x + 1| − |x − 1|]

is the 3-segment piecewise-linear characteristic of the nonlinear resistor (Chua's
diode), and α, β, a and b are the circuit parameters. Then, the user declares the
system parameters and their values. In our example, we consider α = 8.9, β =
14.28, a = −1.14 and b = −0.71, for which the system exhibits chaotic behavior
[1]. In order to display the attractor and/or the evolution of the system variables
over time, some kind of numerical integration is required. The lower part of
window #1 allows the user to choose different numerical integration methods
[15], including the classical Euler and 2nd- and 4th-order Runge-Kutta methods
(implemented by the author) along with some more sophisticated methods from
the Matlab kernel, such as ode45, ode23, ode113, ode15s, ode23s, ode23t and
ode23tb (see [14] for details). Some input required for the numerical integration
(such as the initial point and the integration time) is also given at this stage. By
pressing the Numerical Integration settings button, window #2 appears and
some additional options (such as the absolute and relative error tolerances, the
initial and maximum stepsize and refinement, the computation speed and others)
can be set up. Once chosen, the user proceeds to the graphical representation
stage, where he/she can display the attractor of the dynamical system and/or
the evolution of any of the system variables over time. Such variables can



Fig. 1. Screenshots of the Matlab program for the analysis of chaotic systems: general
setup for the visualization of chaotic attractors

be depicted on the same or on different axes and windows. The Graphical
Representation settings button opens window #3, where different graphical
options such as the line width and style, markers for the equilibrium points, and
others (including some coloring options leading to window #4) can be defined.
The final result is the graphical output shown in window #5, where the double-scroll attractor is displayed.
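Although the paper's tool is a Matlab GUI, the underlying integration step is easy to reproduce independently. The following Python sketch (ours, not the program's code) integrates Chua's circuit with a classical 4th-order Runge-Kutta scheme, using the double-scroll parameter values quoted above:

```python
# A Python sketch (ours, not the paper's Matlab code): classical 4th-order
# Runge-Kutta integration of Chua's circuit with the double-scroll
# parameters quoted in the text.
alpha, beta, a, b = 8.9, 14.28, -1.14, -0.71

def diode(x):
    # 3-segment piecewise-linear characteristic of Chua's diode
    return b * x + 0.5 * (a - b) * (abs(x + 1.0) - abs(x - 1.0))

def chua(s):
    x, y, z = s
    return (alpha * (y - x - diode(x)), x - y + z, -beta * y)

def rk4_step(s, h):
    k1 = chua(s)
    k2 = chua(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = chua(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = chua(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (p + 2.0 * q + 2.0 * r + w)
                 for si, p, q, r, w in zip(s, k1, k2, k3, k4))

def trajectory(s, h=0.01, steps=5000):
    out = [s]
    for _ in range(steps):
        s = rk4_step(s, h)
        out.append(s)
    return out

traj = trajectory((0.1, 0.0, 0.0))   # orbit settling onto the double scroll
```

Plotting the x and z components of traj against each other reproduces the double-scroll shape shown in window #5.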



Fig. 2. Symbolic computation of the Jacobian matrix for the Lorenz system


Symbolic-Numerical Analysis of Chaotic Systems

An appealing feature of this program is the possibility of analyzing chaotic
systems either symbolically or numerically. Figure 2 shows an example for the
well-known Lorenz system [13], given by:

x′ = σ (y − x) ,
y′ = R x − y − x z ,
z′ = x y − b z ,

where σ, R and b are the system parameters. The program includes a module
for the computation of the Jacobian matrix and the equilibrium points of any
three-dimensional flow. The Jacobian matrix is a square matrix whose entries
are the partial derivatives of the system equations with respect to the system
variables. If no value for the system parameters is provided, the computation is
performed symbolically and the corresponding output depends on those system
parameters. Figure 2 shows the Jacobian matrix for the Lorenz system, which
depends not only on the system parameters but also on the system variables.
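The symbolic results quoted here are easy to cross-check by hand: the Jacobian entries are simple partial derivatives and the equilibria have a closed form. The following Python sketch (ours, illustrative; the program itself relies on the Symbolic Math Toolbox) hard-codes both for the Lorenz system:

```python
# A Python sketch (ours, illustrative; the program itself uses the Symbolic
# Math Toolbox): the Lorenz Jacobian written out from the hand-computed
# partial derivatives, plus the closed-form equilibrium points.
import math

def lorenz_rhs(x, y, z, sigma, R, b):
    return (sigma * (y - x), R * x - y - x * z, x * y - b * z)

def lorenz_jacobian(x, y, z, sigma, R, b):
    # rows: derivatives of (x', y', z') with respect to (x, y, z)
    return [[-sigma, sigma, 0.0],
            [R - z, -1.0, -x],
            [y, x, -b]]

def lorenz_equilibria(sigma, R, b):
    pts = [(0.0, 0.0, 0.0)]
    if R > 1.0:                      # two extra fixed points exist for R > 1
        q = math.sqrt(b * (R - 1.0))
        pts += [(q, q, R - 1.0), (-q, -q, R - 1.0)]
    return pts
```

Evaluating the right-hand side at each returned point gives (0, 0, 0), which is a quick sanity check on both formulas.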
Otherwise, the computations are performed numerically. For instance, once some
parameter values are given (σ = 10, R = 60 and b = 8/3 in this example), the
Lyapunov exponents (LE) of the system can be numerically computed. For this
purpose, a numerical integration method is applied. Figure 3 shows the window
in which the different options for this numerical integration process can be
chosen (left window), along with the graphical representation of the three
Lyapunov exponents over time (right window). As shown in the figure, the
numerical values of these LE are 1.4, 0.0022 and −15, respectively. Roughly
speaking, the LE are a generalization of the eigenvalues for nonlinear flows.
They are intensively applied to analyze the behavior of nonlinear systems, since
they indicate whether small displacements of trajectories lie along stable or
unstable directions.

Fig. 3. Numerical computation of the equilibrium points and the Lyapunov exponents
of the Lorenz system

Fig. 4. (left) Attractor and equilibrium points of the Lorenz system analyzed in
Figure 3; (right) evolution of the system variables over time

In particular, a negative LE indicates that the trajectory evolves

along the stable direction for this variable (and hence, regular behavior for that
variable is obtained), while a positive value indicates chaotic behavior. Because
in our example we find a positive LE, the system exhibits chaotic behavior. This
fact is evidenced in Figure 4 (left), where the corresponding attractor of the
Lorenz system for our choice of the system parameters is displayed. The figure
also displays the equilibrium points of the Lorenz system for this choice of
parameters. Their corresponding numerical values are shown in the main
window of Figure 3. Finally, Figure 4 (right) shows the evolution of the system
variables over time from t = 0 to t = 200.
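The text does not detail how the program computes the LE; a standard choice is the Benettin-type procedure, which the following minimal Python sketch implements (our own re-implementation with assumed step sizes, not the program's code): propagate three tangent vectors with the Jacobian and re-orthonormalize them periodically, averaging the logarithmic growth rates.

```python
# A minimal Python sketch (ours, with assumed step sizes; not the program's
# code) of the standard Benettin-type procedure for the Lyapunov spectrum:
# integrate the flow, propagate three tangent vectors with the Jacobian,
# and re-orthonormalize them periodically, averaging the log growth rates.
import math

def rhs(v, sigma=10.0, R=60.0, b=8.0 / 3.0):
    x, y, z = v
    return [sigma * (y - x), R * x - y - x * z, x * y - b * z]

def jac(v, sigma=10.0, R=60.0, b=8.0 / 3.0):
    x, y, z = v
    return [[-sigma, sigma, 0.0], [R - z, -1.0, -x], [y, x, -b]]

def rk4(v, dt):
    add = lambda u, w, s: [u[i] + s * w[i] for i in range(3)]
    k1 = rhs(v)
    k2 = rhs(add(v, k1, dt / 2.0))
    k3 = rhs(add(v, k2, dt / 2.0))
    k4 = rhs(add(v, k3, dt))
    return [v[i] + dt / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
            for i in range(3)]

def lyapunov_spectrum(v, dt=0.002, steps=30000, renorm=10):
    for _ in range(2000):                       # discard a transient
        v = rk4(v, dt)
    W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    sums = [0.0, 0.0, 0.0]
    for n in range(steps):
        J = jac(v)
        # Euler step for the tangent vectors, RK4 for the state itself
        W = [[W[k][i] + dt * sum(J[i][j] * W[k][j] for j in range(3))
              for i in range(3)] for k in range(3)]
        v = rk4(v, dt)
        if (n + 1) % renorm == 0:               # Gram-Schmidt + bookkeeping
            for k in range(3):
                for m in range(k):
                    dp = sum(W[k][i] * W[m][i] for i in range(3))
                    W[k] = [W[k][i] - dp * W[m][i] for i in range(3)]
                nrm = math.sqrt(sum(ci * ci for ci in W[k]))
                sums[k] += math.log(nrm)
                W[k] = [ci / nrm for ci in W[k]]
    return [s / (steps * dt) for s in sums]
```

With the parameters of this example the sketch yields one clearly positive, one near-zero and one large negative exponent, in qualitative agreement with the values 1.4, 0.0022 and −15 quoted above.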

Conclusions and Further Remarks

In this paper, a new numerical-symbolic Matlab program for the analysis of
three-dimensional continuous chaotic systems has been introduced. The system
allows the user to compute the Jacobian matrix, the equilibrium points and the
Lyapunov exponents of any chaotic three-dimensional flow, as well as to display
graphically the attractor and/or the system variables. Some examples of the
application of the program have also been briefly reported. Future work includes
the extension of this program to the case of discrete systems, the implementation
of specialized routines for the control of chaos [6,7,8] and the synchronization
of chaotic systems [10,11]. This research has been supported by the Spanish
Ministry of Education and Science, Project Ref. #TIN2006-13615.


References

1. Chua, L.O., Komuro, M., Matsumoto, T.: The double-scroll family. IEEE Transactions on Circuits and Systems, 33 (1986) 1073-1118
2. Dhooge, A., Govaerts, W., Kuznetsov, Y.A.: MATCONT: A Matlab package for numerical bifurcation analysis of ODEs. ACM Transactions on Mathematical Software 29(2) (2003) 141-164
3. Dhooge, A., Govaerts, W., Kuznetsov, Y.A.: Numerical continuation of fold bifurcations of limit cycles in MATCONT. Proceedings of CASA'2003. Lecture Notes in Computer Science, 2657 (2003) 701-710
4. Gálvez, A., Iglesias, A.: Symbolic/numeric analysis of chaotic synchronization with a CAS. Future Generation Computer Systems (2007) (in press)
5. Govaerts, W., Sautois, B.: Phase response curves, delays and synchronization in Matlab. Proceedings of CASA'2006. Lecture Notes in Computer Science 3992 (2006) 391-398
6. Gutierrez, J.M., Iglesias, A., Güémez, J., Matías, M.A.: Suppression of chaos through changes in the system variables through Poincaré and Lorenz return maps. International Journal of Bifurcation and Chaos, 6 (1996) 1351-1362
7. Gutierrez, J.M., Iglesias, A.: A Mathematica package for the analysis and control of chaos in nonlinear systems. Computers in Physics, 12(6) (1998) 608-619
8. Iglesias, A., Gutierrez, J.M., Güémez, J., Matías, M.A.: Chaos suppression through changes in the system variables and numerical rounding errors. Chaos, Solitons and Fractals, 7(8) (1996) 1305-1316
9. Iglesias, A.: A new scheme based on semiconductor lasers with phase-conjugate feedback for cryptographic communications. Lecture Notes in Computer Science, 2510 (2002) 135-144
10. Iglesias, A., Gálvez, A.: Analyzing the synchronization of chaotic dynamical systems with Mathematica: Part I. Proceedings of CASA'2005. Lecture Notes in Computer Science 3482 (2005) 472-481
11. Iglesias, A., Gálvez, A.: Analyzing the synchronization of chaotic dynamical systems with Mathematica: Part II. Proceedings of CASA'2005. Lecture Notes in Computer Science 3482 (2005) 482-491
12. Iglesias, A., Gálvez, A.: Revisiting some control schemes for chaotic synchronization with Mathematica. Lecture Notes in Computer Science 3516 (2005) 651-658
13. Lorenz, E.N.: Deterministic nonperiodic flow. Journal of Atmospheric Sciences, 20 (1963) 130-141
14. The MathWorks, Inc.: Using Matlab. Natick, MA (1999)
15. Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes (2nd edition). Cambridge University Press, Cambridge (1992)
16. Sarafian, H.: A closed form solution of the run-time of a sliding bead along a freely hanging slinky. Proceedings of CASA'2004. Lecture Notes in Computer Science, 3039 (2004) 319-326
17. Zhou, W., Jeffrey, D.J., Reid, G.J.: An algebraic method for analyzing open-loop dynamic systems. Proceedings of CASA'2005. Lecture Notes in Computer Science, 3516 (2005) 586-593

Safety of Recreational Water Slides:

Numerical Estimation of the Trajectory,
Velocities and Accelerations
of Motion of the Users
Piotr Szczepaniak and Ryszard Walentyński
Silesian University of Technology,
Faculty of Civil Engineering,
ul. Akademicka 5, PL 44-100 Gliwice, Poland

Abstract. The article briefly shows how to estimate the safety of recreational water slides by numerical analysis of the motion of the users. There
are presented: a mathematical description of a typical water slide's geometry, a simplified model of a sliding person, a model of contact between the
user and the inner surface of the slide, equations of motion written for
a rigid body with 6 degrees of freedom and, finally, some sample results
compared to the limitations set by the current European Standard.
Keywords: water slide, safety, water park, motion, dynamics, finite
difference, numerical integration, modeling, Mathematica.


Introduction

Water slides are one of the most popular facilities in water parks. They are
built all over the world. One of the most important problems associated with
them is safety. It is not well recognized, both mathematically and technically.
There are very few scientific papers concerning the problem. The most complete
publication [1] deals with the mathematical model of a water sliding process
under the assumption that the user has constant contact with the inner surface of
the chute. Actually, the most dangerous situations happen when the user loses
contact and cannot control the ride. The next problem is acceleration, often
called G-load. The human body, and especially the brain and heart, is sensitive to
acceleration that significantly exceeds the gravitational acceleration g. Several
accidents, in a variety of countries, have resulted in severe injuries and even death.
The contemporary practice, due to the lack of a design methodology, consists in
testing by a water-slide expert after the construction is finished; however, this
expert is described only as a fit person, dressed in a bathing suit [2]. This is not
acceptable from the point of view of modern engineering philosophy, which requires a prior analysis based on a mathematical model, which should be verified
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 219–226, 2007.
© Springer-Verlag Berlin Heidelberg 2007




Fig. 1. Basic elements of water slides

Typical Geometry of Water Slides

Most of the water slides (especially those built high above ground level) are
constructed of two basic types of elements: the straight ones, being just a simple
cylinder, and the curved ones, having the shape of a slice of a torus. All these
elements are equipped on both ends with flanges that allow connecting the following parts of the slide with screws (see Fig. 1 and 2). Because the resulting
axis of the chute consists of straight lines and circular arcs, it cannot be described
by a single simple equation; some kind of piecewise-defined function is needed.

Fig. 2. A sample shape of a chute

The best choice seems to be the InterpolatingFunction within the Mathematica system. To obtain a parametric equation of the axis, one must first
calculate a set of coordinates of discrete points, lying on the axis at small intervals. It can be done using procedures described in [3] or with the aid of any
software for 3-D graphics. The next step is to build an InterpolatingFunction
for each of the coordinates separately and finally join them into one vector function axis[l]. Having this function, one can get the parametric equation of the
surface of the slide, surface[l,phi,radius], using the following Mathematica code:



surface[l_, phi_, radius_] := axis[l] +
  radius * Module[{vecda, lda13, lda12},
    vecda = axis'[l];
    lda13 = Sqrt[Sum[vecda[[i]]^2, {i, 3}]];
    lda12 = Sqrt[Sum[vecda[[i]]^2, {i, 2}]];
    {{vecda[[1]] / lda13, -vecda[[2]] / lda12,
      -(vecda[[1]] * vecda[[3]]) / (lda13 * lda12)},
     {vecda[[2]] / lda13, vecda[[1]] / lda12,
      -(vecda[[2]] * vecda[[3]]) / (lda13 * lda12)},
     {vecda[[3]] / lda13, 0, lda12 / lda13}}
  ] . {0, Cos[phi], Sin[phi]}
Within the above code, l denotes the position of the current cross-section,
measured along the axis; phi and radius are cylindrical coordinates of the points
creating the cross-section. Figure 2 has been created with this code and the
ParametricPlot3D command.
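The frame built inside surface[...] can be checked independently of Mathematica. The following Python sketch (a hypothetical helper, not the authors' code) reproduces the same 3x3 matrix from a tangent vector and uses it to place a cross-section point:

```python
# A Python sketch (hypothetical helper, not the authors' Mathematica code):
# the same 3x3 cross-section frame built from the axis tangent. Note that
# the frame degenerates for vertical tangents (lda12 = 0), as in the original.
import math

def frame(tangent):
    v1, v2, v3 = tangent
    lda13 = math.sqrt(v1 * v1 + v2 * v2 + v3 * v3)  # full tangent length
    lda12 = math.sqrt(v1 * v1 + v2 * v2)            # horizontal projection
    return [[v1 / lda13, -v2 / lda12, -(v1 * v3) / (lda13 * lda12)],
            [v2 / lda13,  v1 / lda12, -(v2 * v3) / (lda13 * lda12)],
            [v3 / lda13,  0.0,         lda12 / lda13]]

def surface_point(axis_point, tangent, phi, radius):
    M = frame(tangent)
    c = (0.0, math.cos(phi), math.sin(phi))
    return [axis_point[i] + radius * sum(M[i][j] * c[j] for j in range(3))
            for i in range(3)]
```

Because the matrix is orthogonal, every generated point lies exactly at distance radius from the axis point, which is an easy sanity check on the reconstructed third column.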

Model of a Sliding Person

The next task is to create a model of the human body. It is obvious that a
complete bio-mechanical model with all its degrees of freedom (DOF) and parameters would be the best, but unfortunately it is almost impossible to predict
the values of dimensions, mass, moments of inertia and stiffness of all parts of
the user's body, especially at the design stage of the construction process. That
is why a simpler model is needed.
To create it, one can notice that sliding people are quite stiffened, and the
fastest users touch the surface of the chute only with their heels and shoulder
blades. This allows us to replace the sliding person by a rigid body, constrained
by 3 unilateral supports located in the vicinity of the previously mentioned parts
of the body (spheres at the vertices of the triangle representing the body in
Fig. 3). Such a body has 6 DOFs - translations x_i and rotations φ_i around the 3
axes of the local (ξ_i) or global (X_i) coordinate system - and is subjected to the
influence of the following forces: gravity F_G, normal contact forces F_Ni and
friction forces F_Ti, shown in Fig. 4. Vectors of these forces can be calculated
using the following formulae:

F_Ni = 0                          if (u_i < 0) or (k u_i + c u_i u̇_i) < 0 ,
F_Ni = (k u_i + c u_i u̇_i) n_i    otherwise ,

F_Ti = −μ |F_Ni| v_Ti / |v_Ti| ,

F_G = m g ,

F_Sum = Σ_i (F_Ni + F_Ti) + F_G ,





Fig. 3. Replacement of a user by a model of a rigid body

Fig. 4. Model of contact between the moving body (grey circles) and the inner surface
of the chute (dotted line). The dot (black) in the upper left corner denotes the center
of the current cross-section (axis of the slide).

M_Sum = Σ_i p_i × (F_Ni + F_Ti) ,

where:
i - number of the current zone of contact,
p_i - position vector of the i-th zone of contact (moment arm),
k, c - constants of the quasi viscous-elastic model of the human body,
u_i - deflection of the zone of contact,
n_i - unit vector, normal to the surface of the slide,
μ - coefficient of friction,
v_Ti - tangent component of the velocity vector,
m - mass of the user,
g - vector of gravitational acceleration,
F_Sum - resultant force,
M_Sum - resultant moment of forces.
At this stage, the main problem is calculating u_i and n_i. The best way to solve
it is to find on the axis of the slide the points that are nearest to the current
positions of the centers of the contact spheres, because the vectors n_i must lie
on the lines connecting these pairs of points. This can easily be done with the
Mathematica FindMinimum command.
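The FindMinimum step can be imitated with any one-dimensional minimizer. The following Python sketch (ours; the names are illustrative) brackets the nearest axis point by coarse sampling and refines it with a golden-section search:

```python
# A Python sketch (ours; names are illustrative) of the FindMinimum step:
# locate the axis parameter closest to a contact-sphere center by a coarse
# scan followed by golden-section refinement.
import math

def nearest_on_axis(axis, point, l0, l1, samples=200, iters=60):
    def d2(l):
        p = axis(l)
        return sum((p[i] - point[i]) ** 2 for i in range(3))
    ls = [l0 + (l1 - l0) * k / samples for k in range(samples + 1)]
    kmin = min(range(len(ls)), key=lambda k: d2(ls[k]))
    a = ls[max(kmin - 1, 0)]          # bracket around the best sample
    b = ls[min(kmin + 1, samples)]
    g = (math.sqrt(5.0) - 1.0) / 2.0  # golden-section ratio
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if d2(c) < d2(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)
```

The normal n_i is then the unit vector from the found axis point towards the sphere center, exactly as described in the text.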

Equations of Motion

The applied equations of motion are based on the well-known Newton's laws of
motion [4]:
d/dt [m ẋ(t)] = F_Sum ,

d/dt [K(t)] = M_Sum ,

K(t) = A(t) J ω(t) ,

A(t) = {{cos φ3(t), −sin φ3(t), 0}, {sin φ3(t), cos φ3(t), 0}, {0, 0, 1}} ·
       {{cos φ2(t), 0, sin φ2(t)}, {0, 1, 0}, {−sin φ2(t), 0, cos φ2(t)}} ·
       {{1, 0, 0}, {0, cos φ1(t), −sin φ1(t)}, {0, sin φ1(t), cos φ1(t)}} ,

J = diag(J1, J2, J3) ,

ω(t) = φ̇1(t) (1, 0, 0)ᵀ + φ̇2(t) (0, cos φ1(t), −sin φ1(t))ᵀ
     + φ̇3(t) (−sin φ2(t), sin φ1(t) cos φ2(t), cos φ1(t) cos φ2(t))ᵀ ,

where:
K(t) - vector of moment of momentum (angular momentum),
A(t) - matrix of transformation from the local (rotating)
       to the global (fixed) coordinate system,
J - tensor of main moments of inertia,
ω(t) - vector of angular velocity.
As one can see, the equations of motion are so complicated that it is impossible to obtain an analytical solution. In fact, even NDSolve, the numerical
solver of differential equations in Mathematica, does not work here, due to the usage of the FindMinimum command within the equations of motion (see Sect. 3).
So a special code had to be written; it follows an algorithm based on a
combination of the Taylor series and multi-step methods [5]:




Input -> totalTime, deltaT, initialcoordinates, initialvelocity,

function acceleration[coordinates, velocities];
steps = totalTime / deltaT;
coord[[1]] = initialcoordinates;
vel[[1]] = initialvelocity;
acc[[1]] = acceleration[coord[[1]],vel[[1]]];
coord[[2]] = coord[[1]] + vel[[1]] * deltaT
+ 0.5 acc[[1]] * deltaT^2;
vel[[2]] = vel[[1]] + acc[[1]] * deltaT;
Do[acc[[i]] = acceleration[coord[[i]],vel[[i]]];
coord[[i+1]] = 2 * coord[[i]] - coord[[i-1]]
+ acc[[i]] * deltaT^2;
vel[[i]] = 0.5 (coord[[i+1]] - coord[[i-1]]) / deltaT,
{i, 2, steps + 1}];
Output -> lists: coord[[i]], vel[[i]], acc[[i]].
Of course, each single element of the output lists coord[[i]], vel[[i]] and
acc[[i]] is a 6-dimensional vector. This code seems to be stable, and sufficient
precision is reached with deltaT = 0.002 [second].
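The pseudocode above translates almost line by line into other languages. The following Python version (ours, with 0-based indexing, tried on the scalar test problem x'' = -x rather than the 6-DOF model) reproduces the scheme: a position-Verlet step plus a central-difference velocity.

```python
# A Python transcription (ours, 0-based indexing) of the pseudocode above:
# a position-Verlet step with a central-difference velocity, tried here on
# the scalar test problem x'' = -x, whose exact solution is cos(t).
import math

def integrate(acc, x0, v0, dt, total_time):
    steps = int(round(total_time / dt))
    coord = [0.0] * (steps + 2)
    vel = [0.0] * (steps + 2)
    coord[0], vel[0] = x0, v0
    a0 = acc(coord[0], vel[0])
    coord[1] = coord[0] + vel[0] * dt + 0.5 * a0 * dt * dt   # Taylor start-up
    vel[1] = vel[0] + a0 * dt
    for i in range(1, steps + 1):
        ai = acc(coord[i], vel[i])
        coord[i + 1] = 2.0 * coord[i] - coord[i - 1] + ai * dt * dt
        vel[i] = 0.5 * (coord[i + 1] - coord[i - 1]) / dt    # central difference
    return coord, vel

coord, vel = integrate(lambda x, v: -x, 1.0, 0.0, 0.002, 10.0)
```

In the paper each coord[[i]] is a 6-dimensional vector; the scalar version is enough to check the second-order accuracy of the scheme at the quoted step deltaT = 0.002.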

Sample Results

Some sample results were obtained for the slide shown in Fig. 2. The trajectory
of the motion and the axis of the slide are shown in Fig. 5. The beginning
is at the point (0,0); then the body goes along the direction of the X2 axis,
next turns left and follows the curved axis of the slide, slightly swinging. More
Fig. 5. Top view of the axis of the slide (dotted line) and the trajectory of motion (bold line)



Fig. 6. Sample results (description in text)

Fig. 7. Results of the G-load calculations. Shades of grey denote different values of G.

detailed results can be read from Fig. 6. There are presented: the G-load acting
on the moving body,

G = |F_Sum − F_G| / (9.81 m) ,

the value of the velocity, and the first and second angular coordinates (longitudinal
and transversal rotations around the ξ1 and ξ2 axes; see Fig. 3); the horizontal
axes of these charts show the length along the chute.
The current design code [2] sets some limitations on the values of permissible
G-load for safety reasons. It says that G ≤ 2.6 g is safe, and 2.6 g < G ≤ 4.0 g is
acceptable, but only for less than 0.1 second. Within the presented example these
limitations are kept, but sometimes it is hard to do so. Especially the second
condition can cause some problems when designing a very steep and fast water
slide, where the high speed generates a huge centrifugal force at each bend.
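The quoted limits can be turned into a simple acceptance check over a computed G-load history. The following Python sketch encodes our reading of the rule (G ≤ 2.6 g is always safe; values in (2.6 g, 4.0 g] are tolerated only in episodes of at most 0.1 s; anything above 4.0 g is rejected); the episode logic is our interpretation, not the standard's text:

```python
# A Python sketch encoding our reading of the EN 1069-1 limits quoted above
# (an illustration, not the standard's text): G <= 2.6 g is safe, values in
# (2.6 g, 4.0 g] are tolerated only in episodes of at most 0.1 s, and any
# G > 4.0 g is rejected. G is given in units of g, sampled every dt seconds.
def gload_acceptable(g_series, dt):
    episode = 0.0
    for G in g_series:
        if G > 4.0:
            return False
        if G > 2.6:
            episode += dt
            if episode > 0.1:
                return False
        else:
            episode = 0.0
    return True
```

Applied to the sampled output of the integrator (at deltaT = 0.002 s, 0.1 s corresponds to 50 samples), this flags exactly the dangerous parts of the slide mentioned below.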


Conclusions

Numerical modelling of motion is a good method of checking the geometry of
water slides during the design process. It allows one to estimate the excitement
level of the ride, which depends on speed and acceleration, and it points out the
dangerous parts of the slide, where the value of the G-load is too high (exceeds
2.6 or 4.0 g). The applied algorithms are user-friendly, and the results of the
computations can be presented in many forms, from pure numbers to pictures
like Fig. 7 and animations of the motion.
Acknowledgments. This paper has been supported by the Polish Committee
of Scientific Research (grant No. 0416/T02/2006/31).

References

1. Joo, S.-H., Chang, K.-H.: Design for the safety of recreational water slides. Mech. Struct. & Mach. 29 (2001) 261-294
2. European Standard EN 1069-1:2000. Water slides of 2 m height and more - Part 1: Safety requirements and test methods. CEN, Brussels (2000)
3. Szczepaniak, P.: Zjeżdżalnie wodne. Obliczanie geometrii zjeżdżalni i modelowanie ruchu użytkownika (Water slides. Calculating the geometry and modelling of the motion of a user). MSc thesis. Silesian University of Technology, Gliwice (2003)
4. Borkowski, Sz.: Mechanika ogólna. Dynamika Newtonowska (General Mechanics. Newtonian Dynamics). 2nd edn. Silesian University of Technology Press, Gliwice
5. Burden, R.L., Faires, J.D.: Numerical Analysis. 5th edn. PWS Publishing Company, Boston (1993)

Computing Locus Equations for Standard

Dynamic Geometry Environments
Francisco Botana, Miguel A. Abánades, and Jesús Escribano

Departamento de Matemática Aplicada I, Universidad de Vigo,
Campus A Xunqueira, 36005 Pontevedra, Spain
Ingeniería Técnica en Informática de Sistemas, CES Felipe II (UCM),
28300 Aranjuez, Spain
Departamento de Sistemas Informáticos y Computación,
Universidad Complutense de Madrid, 28040 Madrid, Spain

Abstract. GLI (Geometric Locus Identifier), an open web-based tool
to determine equations of geometric loci specified using Cabri Geometry
and The Geometer's Sketchpad, is described. A geometric construction of
a locus is uploaded to a Java Servlet server, where two computer algebra
systems, CoCoA and Mathematica, following the Groebner basis method,
compute the locus equation and its graph. Moreover, an OpenMath
description of the geometric construction is given. GLI can be efficiently
used in mathematics education, as a supplement of the locus functions
of the standard dynamic geometry systems. The system is located at
Keywords: Interactive geometry, Automated deduction, Locus.


Introduction

Dynamic geometry programs are probably the most used computer tools in
mathematics education today, from elementary to undergraduate level. At the same
time, computer algebra systems are widely employed in the learning of scientific
disciplines, generally starting from high school. Nevertheless, the top-ranked
programs in both fields have evolved separately: although almost all of the computer algebra systems offer some specialized library for the study of geometry (cf.
Mathematica, Maple, Derive, ...), none of them uses the dynamic paradigm, that
is, in their geometric constructions free elements cannot be graphically dragged
making dependent elements behave accordingly. On the other side, standard
dynamic geometry environments, such as Cabri Geometry [11] and The Geometer's Sketchpad [10], are self-contained: whenever, albeit rarely, they need some
symbolic computation, they use their own resources.
Some attempts connecting both types of systems have been reported, mainly
coming from academia. Besides the above mentioned geometric libraries, specialized packages for geometry using the symbolic capabilities of computer algebra
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 227–234, 2007.
© Springer-Verlag Berlin Heidelberg 2007



systems exist (see, for instance, [1]). Nevertheless, they lack the dynamic approach, being more an environment for exact drawing than a tool for interactive
experiments. We can also cite [12] as an interesting work, where a dynamic geometric system is developed inside Mathematica through an exhaustive use of
The approach of using computer algebra algorithms in dynamic geometry has
been more fruitful. Two ways have been devised for dealing with this cooperation.
Some systems incorporate their own code for coping with algebraic techniques
in geometry ([14,8,9], ...), while other systems emphasize reusing software
([2,15,17], ...). Both strategies have been partially successful in solving some of
the three main points in dynamic geometry, that is, the continuity-determinism
dilemma, the proof and discovery abilities, and the complete determination of
loci (see [6] for an extensive study of the subject).
This paper describes a web-based resource allowing the remote discovery of
equations of algebraic loci specified through the well-known Cabri and The
Geometer's Sketchpad environments. Moreover, OpenMath has been chosen as
the communicating language, allowing other OpenMath-compliant systems to
make use of the tool.

Numerical vs. Symbolic Loci

An astonishing feature of dynamic geometry systems is their ability to draw the
path of an object dependent on another one while this last element is dragged
along a predetermined path. The trajectory of the first object is called its locus.
Since their earliest versions, both Cabri and The Geometer's Sketchpad have
offered graphical loci generation through their trace mode. Roughly speaking,
the strategy consists of sampling the path of the dragged object and constructing,
for each sample, the position of the point generating the locus. An interpolation of
these support points returns the locus as a graphical continuous object on the
screen. The heuristics used for the interpolation produce anomalous loci in some
border cases, and this strategy is not well suited to produce the equation of the
obtained loci. Reacting against this drawback, the newest version of Cabri incorporates a tool for computing approximate algebraic expressions of loci. Although
this new feature is not documented, as usual in the market considerations of the
Cabrilog company, the algorithm is sketched in [16] as follows: 1) random selection of about one hundred locus supporting points, and 2) calculation of the
(n + 1)(n + 2)/2 coefficients of the bivariate locus equation of degree n, from
n = 1 onwards and using a system of equations derived from the support points,
until they approximately satisfy the locus equation.
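The fitting idea sketched above can be imitated in a few lines: sample support points, then least-squares-fit the coefficients of a bivariate polynomial. The following pure-Python sketch (ours, with the constant term normalized to 1; certainly not Cabri's actual algorithm) recovers a circle from sampled points via the normal equations:

```python
# A pure-Python sketch (ours; certainly not Cabri's actual algorithm) of the
# fitting idea: least-squares-fit the coefficients of a bivariate polynomial
# to sampled locus points, with the constant term normalized to 1, so we
# solve A c = -1 through the normal equations and Gaussian elimination.
import math

def fit_conic(points):
    basis = lambda x, y: [x * x, x * y, y * y, x, y]
    n = 5
    M = [[0.0] * n for _ in range(n)]
    r = [0.0] * n
    for x, y in points:
        bv = basis(x, y)
        for i in range(n):
            r[i] -= bv[i]                     # right-hand side: -sum(b)
            for j in range(n):
                M[i][j] += bv[i] * bv[j]      # normal matrix: sum(b b^T)
    for col in range(n):                      # elimination, partial pivoting
        p = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[p] = M[p], M[col]
        r[col], r[p] = r[p], r[col]
        for k in range(col + 1, n):
            f = M[k][col] / M[col][col]
            r[k] -= f * r[col]
            for j in range(col, n):
                M[k][j] -= f * M[col][j]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):            # back substitution
        c[i] = (r[i] - sum(M[i][j] * c[j] for j in range(i + 1, n))) / M[i][i]
    return c  # coefficients of x^2, xy, y^2, x, y in p(x, y) + 1 = 0

# points on the circle x^2 + y^2 = 2 should give -x^2/2 - y^2/2 + 1 = 0
pts = [(2 ** 0.5 * math.cos(0.17 * k), 2 ** 0.5 * math.sin(0.17 * k))
       for k in range(40)]
coeffs = fit_conic(pts)
```

As the next paragraph argues, such a fit is only numerically approximate: the recovered coefficients carry no guarantee of exactness, unlike the symbolic approach used by GLI.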
There is no doubt about the interest of functions returning equations of loci, as
Cabri does, although in an approximate way. Nevertheless, no comment is made
on the exactness of the equation (hence inducing an inexpert user to take it as
an accurate one). Furthermore, some loci are described by equations of different
degree if the user exploits the dynamic character of the environment. We have
shown in [3] the superior performance of symbolic approaches for dealing with



loci: our proposal also obtains the equations of algebraic loci and rules out some
anomalous results in standard systems. The equations are sound (since the
Groebner bases method used is), and the method is fast enough to be integrated in a
dynamic environment, as tested with GDI (see [4], where the detailed algorithm
is described).

User Interface and Architecture

Although conceived with the spirit of a plug-in for Cabri and The Geometer's
Sketchpad, simplicity and convenience of use were the guiding lines in the design
of GLI. Consequently, its interface has been designed to look like a simple web
page (see Figure 1). The user makes use of an applet to create a text file that is
uploaded to the server. The equation and graph of the locus are then displayed
in a new browser window.

Fig. 1. User interface

The Cabri or The Geometer's Sketchpad file with the original specification of
the locus undergoes a double translating process. First, the original file is processed as a text file, producing an XML codification of an OpenMath element.
OpenMath is an extensible standard for representing the semantics of mathematical objects (see [13]). This OpenMath description is then translated into
webDiscovery code, which is the description that the final application is designed



to interpret. webDiscovery is a web application developed by Botana (see [5])
capable of performing a wide variety of discovery tasks, whose kernel has been
appropriately modified to be integrated in GLI as the final computational tool.
Unlike the files generated by Cabri, the files directly generated by The Geometer's Sketchpad are coded. A JavaSketchpad .htm file has to be used instead.
The decision of making the whole translating process available to the user
was thought, on one hand, as a testimonial statement to the computational
community, where lack of transparency stands in the way of attempts to use the
available tools. On the other hand, the OpenMath description of the geometric
locus is made available to other OpenMath-compliant geometric systems.
GLI is based on webMathematica [18], a Java servlet technology allowing
remote access to the symbolic capabilities of Mathematica. Once the user has
created and uploaded the appropriate text file, a Mathematica Server Page is
launched, reading the file and initializing variables. An initialization file for
CoCoA [7], containing the ideal generated by the appropriate defining polynomials,
is also written out, and CoCoA, launched by Mathematica, computes a Groebner
basis for this ideal. The returned factors are classified as points, lines, conics
or general curves. Although Mathematica provides an implementation of the
Groebner basis algorithm, CoCoA has been chosen mainly due to the experience
of better performance in several appropriate examples. Additionally, the
Mathematica graphing abilities are used to plot the locus.


An Ellipse

The old exercise of drawing an ellipse using a pencil, two pins, and a piece
of string is frequently proposed as the first one when young students begin
practising loci computation.

Fig. 2. The ellipse as a locus



It is well known that both environments, Cabri and The Geometers Sketchpad, return a surprising answer when simulating the construction, giving just
half of the ellipse. Figure 2 shows the construction made in The Geometers
Sketchpad (inside the square) and the answer returned by GLI. The plot range
in the graph produced by GLI depends on ad-hoc computations of the graph size
made by Mathematica. Despite being an independent process, the computations
of an optimum plot range can be directed by a line of code in the webDiscovery
description of the task. A default but modiable value has been included in all
webDiscovery descriptions.

Limaçon of Pascal

Given a fixed point P3 and a variable one P6 on a circle, the limaçon of Pascal
is the locus of points P14 such that P3, P6 and P14 are collinear and the distance
from P6 to P14 is a constant, specified in Figure 4 by the segment P8 P9 (see for a historical reference).

Fig. 3. The limaçon of Pascal in Cabri

As in the case of the preceding subsection, computing the locus of P14 gives
just a part of it (Figure 3, left). It is necessary to compute the locus of the other
intersection, P14', in order to get the whole locus (Figure 3, right). It seems that
Cabri is making some extra assumptions, for instance, that the point P14 is inside
the circle, whereas the imposed constraints subsume both points in just one.
Regarding the locus equation, Cabri returns two equations (instead of one!):
0.04x^2 y + 0.04y^3 + 0.22x^2 + 0.19xy + 0.82y^2 + 0.87x + 5.07y + 10 = 0
0.04x^2 y + 0.04y^3 + 0.17x^2 + 0.20xy + 0.95y^2 + 0.77x + 5.62y + 10 = 0.
Plotting these equations with Mathematica we get the curves in Figure 4, left,
while the curve returned by our system GLI is shown at the right. The equation
of the limaçon, a quartic, is included as a factor of the solution (see Figure 5).
The extraneous factor of the circle is due to a degenerate condition: note that,


F. Botana, M.A. Abánades, and J. Escribano

Fig. 4. Plots of the limaçon

since P6 is a variable point on the circle, it can coincide with P3, thus reducing
the locus to a circle centered at P3 with radius P8 P9. The generation of this
factor could be avoided in GLI by adding the condition NotEqual(P3,P6).

Fig. 5. A fragment of the equation returned for the limaçon


A Simple Locus, Different Answers

We will use a very simple example to illustrate the different behavior of standard
systems and the one proposed here. Let us consider a line P1 P2 with a variable
point on it, P4, taken as the center of a circle with radius P5 P6. Compute the
locus of a point P9 bound to the circle when P4 moves along the line. The locus
found by The Geometer's Sketchpad is, as expected, a line parallel to P1 P2
(Figure 6). Nevertheless, GLI answers that "The locus is (or is contained in) the
whole plane." This is due to the different treatment of point P9: in standard systems its definition is taken not just as a point on the circle, but other constraints
not explicitly given are assumed. However, from a strictly algebraic point of
view, P9 is any point on the circle. So, when the circle moves, the locus is a
nonlinear part of the plane, and our non-semialgebraic approach detects this
fact, answering as stated.
Note that if Cabri is used and the first line is defined through a point and
a graphically specified slope, the translation of the construction will fail, since
the current version of GLI does not support this Cabri primitive. When working
with a non-allowed locus file, an error message will appear in the corresponding
text area in the applet. The user is then instructed to inspect the Java console



Fig. 6. A simple locus

to see a sequential description of the translation process, from which one can
determine which primitive in the construction is not admitted by the current
version of GLI.

Extending the Scope of Loci Computations

Cabri and The Geometer's Sketchpad can only find loci of points that have been
effectively constructed in their systems, that is, points that parametrically depend on another one. Hence, points which are implicitly defined by multiple
simultaneous conditions cannot be used for generating loci. A classical result such
as the theorem of Wallace-Steiner (stating that the locus of points X such that
their orthogonal projections to a given triangle are collinear is the triangle's circumcircle) cannot be discovered unless the user previously knows the result! The
symbolic kernel of GLI has been designed to be easily modified to support this
type of implicit loci. Further versions of GLI are under development to support
an extended class of computable loci.


Conclusions

The work presented here, although modest in scope, shows the possibilities of the interconnection between dynamic geometry and computer algebra
systems. Moreover, we think that the generalization of broadband Internet connections will make remote access to applications the main general trend, of which
GLI is a clear example. The decision to make OpenMath the communication
language between the systems involved could be seen as secondary from a computational point of view. However, the use of a standard semantic representation
of mathematical objects is, as we see it, the main challenge in the computational community, and GLI aims to be an example of that too. Moreover, the use
of OpenMath as the intercommunication language opens the door to further connections with different geometry-related systems. As future work, twofold
ongoing research is being conducted to extend GLI's domain: to other dynamic
geometry systems on one hand, and to non-polynomial equations and inequalities on the other. This latter extension of GLI will allow a considerable increase
in the set of possible relations between the different geometric elements and
hence in its applications.



Acknowledgments. This work has been partially supported by UCM research
group ACEIA and research grants MTM2004-03175 (Botana) and MTM2005-02865 (Abánades, Escribano) from the Spanish MEC.

References

1. Autin, B.: Pure and applied geometry with Geometrica. Proc. 8th Int. Conf. on
Applications of Computer Algebra (ACA 2002), 109-110 (2002)
2. Botana, F., Valcarce, J.L.: A dynamic-symbolic interface for geometric theorem
discovery. Computers and Education, 38(1-3), 21-35 (2002)
3. Botana, F.: Interactive versus symbolic approaches to plane loci generation in
dynamic geometry environments. Proc. I Int. Workshop on Computer Graphics
and Geometric Modeling (CGGM 2002), LNCS, 2330, 211-218 (2002)
4. Botana, F., Valcarce, J.L.: A software tool for the investigation of plane loci. Mathematics and Computers in Simulation, 61(2), 141-154 (2003)
5. Botana, F.: A Web-based intelligent system for geometric discovery. Proc. I Int.
Workshop on Computer Algebra Systems and Applications (CASA 2003), LNCS,
2657, 801-810 (2003)
6. Botana, F., Recio, T.: Towards solving the dynamic geometry bottleneck via a
symbolic approach. Proc. V Int. Workshop on Automated Deduction in Geometry
(ADG 2004), LNAI, 3763, 92-110 (2006)
7. Capani, A., Niesi, G., Robbiano, L.: CoCoA, a system for doing Computations in
Commutative Algebra. Available via anonymous ftp from:
8. Gao, X.S., Zhang, J.Z., Chou, S.C.: Geometry Expert. Nine Chapters, Taiwan
10. Jackiw, N.: The Geometer's Sketchpad v 4.0. Key Curriculum Press, Berkeley
11. Laborde, J.M., Bellemain, F.: Cabri Geometry II. Texas Instruments, Dallas (1998)
12. Miyaji, C., Kimura, H.: Writing a graphical user interface for Mathematica using
Mathematica and MathLink. Proc. 2nd Int. Mathematica Symposium (IMS '97),
345-352 (1997)
14. Richter-Gebert, J., Kortenkamp, U.: The Interactive Geometry Software Cinderella. Springer, Berlin (1999)
15. Roanes-Lozano, E., Roanes-Macías, E., Villar, M.: A bridge between dynamic
geometry and computer algebra. Mathematical and Computer Modelling, 37(9-10),
1005-1028 (2003)
16. Schumann, H.: A dynamic approach to simple algebraic curves. Zentralblatt für
Didaktik der Mathematik, 35(6), 301-316 (2003)
17. Wang, D.: GEOTHER: A geometry theorem prover. Proc. 13th International Conference on Automated Deduction (CADE 1996), LNCS, 1104, 166-170 (1996)

Symbolic Computation of Petri Nets

Andrés Iglesias¹ and Sinan Kapcak²

¹ Department of Applied Mathematics and Computational Sciences,
University of Cantabria, Avda. de los Castros s/n, E-39005 Santander, Spain
² Department of Mathematics, Izmir Institute of Technology,
Urla, Izmir, Turkey

Abstract. Petri nets have been receiving increasing attention from the scientific community during the last few years. They provide the users with a
powerful formalism for describing and analyzing a variety of information
processing systems such as finite-state machines, concurrent systems,
multiprocessors and parallel computation, formal languages, communication protocols, etc. Although the mathematical theory of Petri nets
has been intensively analyzed from several points of view, the symbolic
computation of these nets is still a challenge, particularly for general-purpose computer algebra systems (CAS). In this paper, a new Mathematica package for dealing with some Petri nets is introduced.


Petri nets (PN) have been receiving increasing attention from the scientific community
during the last few years. Most of their interest lies in their ability to represent a number of events and states in a distributed, parallel, nondeterministic or
stochastic system and to simulate accurately processes such as concurrency, sequentiality or asynchronous control [1,3]. Petri nets provide the users with a very
powerful formalism for describing and analyzing a broad variety of information
processing systems, both from the graphical and the mathematical viewpoints.
Since their inception in the early 1960s, they have been successfully applied to many
interesting problems including finite-state machines, concurrent systems, multiprocessors and parallel computation, formal languages, communication protocols
and many others.
Although the mathematical fundamentals of Petri nets have been analyzed by
using many powerful techniques (linear algebraic techniques to verify properties
such as place invariants, transition invariants and reachability; graph analysis
and state equations to analyze their dynamic behavior; simulation and Markov-chain analysis for performance evaluation, etc.), and several computer programs
for PN have been developed so far, the symbolic computation of these nets is still
a challenge, particularly for general-purpose computer algebra systems (CAS).
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 235–242, 2007.
© Springer-Verlag Berlin Heidelberg 2007


A. Iglesias and S. Kapcak

In this paper, a new Mathematica package for dealing with some Petri nets
is introduced. The structure of this paper is as follows: Section 2 provides a
gentle introduction to the basic concepts and definitions of Petri nets. Then,
Section 3 introduces the new Mathematica package for computing them and
describes the main commands implemented within. The performance of the code
is also discussed in this section by using some illustrative examples. Conclusions
and further remarks close the paper.

Basic Concepts and Definitions

A Petri net (PN) is a special kind of directed graph, together with an initial
state called the initial marking (see Table 1 for the mathematical details). The
graph of a PN is a bipartite graph containing places {P1 , . . . , Pm } and transitions
{t1 , . . . , tn }. Figure 1 shows an example of a Petri net comprised of three places
and six transitions. In graphical representation, places are usually displayed as
circles while transitions appear as vertical rectangular boxes. The graph also
contains arcs either from a place Pi to a transition tj (input arcs for tj ) or from
a transition to a place (output arcs for tj ). These arcs are labeled with their
weights (positive integers), with the meaning that an arc of weight w can be
understood as a set of w parallel arcs of unity weight (whose labels are usually
omitted). In Fig. 1 the input arcs from P1 to t3 and P2 to t4 and the output arc
from t1 to P1 have weight 2, the rest having unity weight.

Fig. 1. Example of a Petri net comprised of three places and six transitions

A marking (state) assigns to each place Pi a nonnegative integer, ki . In this

case, we say that Pi is marked with ki tokens. Graphically, this idea is represented
by ki small black circles (tokens) in place Pi . In other words, places hold tokens
to represent predicates about the world state or internal state. All markings
are denoted by vectors M of length m (the total number of places in the net)
such that the i-th component of M indicates the number of tokens in place Pi .
From now on the initial marking will be denoted as M0 . For instance, the initial
marking (state) for the net in Figure 1 is {2, 1, 0}.



Table 1. Mathematical denition of a Petri net

A Petri net (PN) is an algebraic structure PN = (P, T, A, W, M0) comprised of:
a finite set of places, P = {P1, P2, ..., Pm},
a finite set of transitions, T = {t1, t2, ..., tn},
a set of arcs, A, either from a place to a transition (input arcs) or
from a transition to a place (output arcs):
A ⊆ (P × T) ∪ (T × P)
a weight function: W : A → ℕ^q (with q = #(A))
an initial marking: M0 : P → ℕ^m

If PN is a finite capacity net, we also consider:

a set of capacities: C : P → ℕ^m
a finite collection of markings (states): Mi : P → ℕ^m

The dynamical behavior of many systems can be expressed in terms of the

system states of their Petri net. Such states are adequately described by the
changes of markings of a PN according to a firing rule for the transitions: a
transition tj is said to be enabled if each input place Pi of tj is marked with at
least wi,j tokens, where wi,j is the weight of the arc from Pi to tj. For instance,
transitions t2, t3 and t5 in Figure 1 are enabled, while transitions t4 and t6 are
not. Note, for example, that the arc from P2 to t4 has weight 2 while place P2
has only 1 token, so transition t4 is disabled. If transition tj is enabled, it may
or may not be fired (depending on whether or not the event represented by such
a transition occurs). A firing of transition tj removes wi,j tokens from each input
place Pi of tj and adds wj,k tokens to each output place Pk of tj, wj,k being the
weight of the arc from tj to Pk. In other words, if transition tj is fired, all input
places of tj have their input tokens removed and a new set of tokens is deposited
in the output places of tj according to the weights of the arcs connecting those
places and tj. For instance, firing transition t3 removes two tokens from place P1
and adds one token to place P2, thus changing the previous marking of the net.
A transition without any input place is called a source transition. Note that
source transitions are always enabled. In Figure 1 there is only one source transition, namely t1 . A transition without any output place is called a sink transition.
The reader will notice that the firing of a sink transition removes tokens but does
not generate new tokens in the net. Sink transitions in Figure 1 are t2, t4 and
t6. A couple (Pi, tj) is said to be a self-loop if Pi is both an input and an output
place for transition tj. A Petri net free of self-loops is called a pure net. In this
paper, we will restrict ourselves exclusively to pure nets.
Some PN do not put any restriction on the number of tokens each place can
hold. Such nets are usually referred to as infinite capacity nets. However, in most
practical cases it is more reasonable to consider an upper limit on the number of
tokens for a given place. That number is called the capacity of the place. If all



places of a net have finite capacity, the net itself is referred to as a finite capacity
net. All nets in this paper will belong to this latter category. For instance, the
net in Figure 1 is a finite capacity net, with capacities 2, 2 and 1 for places P1,
P2 and P3, respectively.
If so, there is another condition to be fulfilled for any transition tj to be
enabled: the number of tokens at each output place of tj must not exceed its
capacity after firing tj. For instance, transition t1 in Figure 1 is initially disabled
because place P1 already has two tokens. Once firings of t2 and/or t3 remove
the two tokens of place P1, t1 becomes enabled. Note also that transition t3
cannot be fired initially more than once, as the capacity of P2 is 2.
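For concreteness, the enabling and firing rules, including the finite-capacity condition, can be sketched in a few lines of Python (the package itself is written in Mathematica). The arc list below for the net of Figure 1 is reconstructed from the description in the text, so it should be read as an assumption:

```python
# Enabling/firing rules for a finite capacity net, sketched in Python.
# Arcs are (place, transition, weight) triplets: negative weights denote
# input arcs, positive weights output arcs. The arc list for the net of
# Figure 1 is reconstructed from the text, not taken from the paper's code.
capacities = {'p1': 2, 'p2': 2, 'p3': 1}
transitions = ['t1', 't2', 't3', 't4', 't5', 't6']
arcs = [('p1', 't1',  2),                     # t1 deposits two tokens in p1
        ('p1', 't2', -1),                     # t2 removes one token from p1
        ('p1', 't3', -2), ('p2', 't3', 1),
        ('p2', 't4', -2),
        ('p2', 't5', -1), ('p3', 't5', 1),
        ('p3', 't6', -1)]

def fire(marking, t):
    """Marking after firing t (enabledness is not checked here)."""
    m = dict(marking)
    for p, tr, w in arcs:
        if tr == t:
            m[p] += w
    return m

def enabled(marking, t):
    """t is enabled iff every input place holds enough tokens and no
    output place would exceed its capacity after the firing."""
    m = fire(marking, t)
    return all(0 <= m[p] <= c for p, c in capacities.items())

m0 = {'p1': 2, 'p2': 1, 'p3': 0}
print([t for t in transitions if enabled(m0, t)])   # ['t2', 't3', 't5']
```

With the initial marking {2, 1, 0} this reproduces the enabled set {t2, t3, t5} discussed above: t1 is rejected by the capacity check, t4 and t6 by token availability.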

The Mathematica Package for Petri Nets

In this section a new Mathematica package for dealing with Petri nets is introduced. For the sake of clarity, the main commands of the package will be
described by means of their application to some Petri net examples. In particular,
in this paper we will restrict ourselves to the case of pure and finite capacity
nets, a kind of net with many interesting applications. We start our discussion
by loading the package:
In[1]:= <<PetriNets
According to Table 1, a Petri net (like that in Figure 1 and denoted onwards
as net1) is described as a collection of lists. In our representation, net1 consists
of three elements: a list of couples {place, capacity}, a list of transitions and a
list of arcs between places and transitions, along with their weights:
In[2]:= net1={{{p1,2},{p2,2},{p3,1}},{t1,t2,t3,t4,t5,t6},
Note that the arcs are represented by triplets {place, transition, weight},
where positive values for the weights mean output arcs and negative values
denote input arcs. This notation is consistent with the fact that output arcs
add tokens to the places while input arcs remove them. Now, given the initial
marking {2, 1, 0} and any transition, the FireTransition command returns
the new marking obtained by firing such a transition:
In[3]:= FireTransition[net1,{2,1,0},t2];
Out[3]:= {1,1,0}
Given a net and its initial marking, an interesting question is to determine
whether or not a transition can be fired. The EnabledTransitions command
returns the list of all enabled transitions for the given input:
In[4]:= EnabledTransitions[net1,{2,1,0}];
Out[4]:= {t2,t3,t5}
The FireTransition command allows us to compute the resulting markings
obtained by applying these transitions onto the initial marking:



In[5]:= FireTransition[net1,{2,1,0},#]& /@ %;
Out[5]:= {{1,1,0},{0,2,0},{2,0,1}}
Note that, since transition t1 cannot be fired, an error message is returned:
In[6]:= FireTransition[net1,{2,1,0},t1];
Out[6]:= FireTransition: Disabled transition: t1 cannot be fired for the given net
and the {2,1,0} marking.

Fig. 2. The reachability graph for the Petri net net1 and the initial marking {2, 1, 0}

From Out[4] and Out[5], the reader can easily realize that the successive application
of the EnabledTransitions and FireTransition commands allows us to obtain
all possible markings and all possible firings at each marking. However, this is
a tedious and time-consuming task to be done by hand. Usually, such markings
and firings are graphically displayed in what is called a reachability graph. The
next input returns the reachability graph for our Petri net and its initial marking:
In[7]:= ReachabilityGraph[net1,{2,1,0}];
Out[7]:= See Figure 2
Figure 2 can be interpreted as follows: the outer column on the left provides
the list of all possible markings for the net. Their components are sorted from the
left to the right according to the standard lexicographic order. For any marking,
the row in front gives the collection of its enabled transitions. For instance, the
enabled transitions for the initial marking {2, 1, 0} are {t2, t3, t5} (as expected
from Out[4]), while they are {t1, t4, t6} for {0, 2, 1}. Given a marking and one



Fig. 3. Example of a Petri net comprised of five places and six transitions

Fig. 4. Reachability graph for the Petri net in Figure 3

of its enabled transitions, you can determine the output marking of firing such
a transition by simply moving up/down in the transition column until reaching
the star symbol: the marking in that row is the desired output. By this simple
procedure, results such as those in Out[5] can readily be obtained.
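The exhaustive exploration that ReachabilityGraph automates amounts to a breadth-first search over markings. A Python sketch (again with the arc weights of Figure 1 reconstructed from the text, so an assumption rather than the package's actual code):

```python
# Reachability as breadth-first search over markings, sketched in Python.
# Markings are tuples (p1, p2, p3); the per-transition token changes are
# reconstructed from the description of the net of Figure 1.
from collections import deque

capacities = (2, 2, 1)
effect = {'t1': (2, 0, 0),  't2': (-1, 0, 0), 't3': (-2, 1, 0),
          't4': (0, -2, 0), 't5': (0, -1, 1), 't6': (0, 0, -1)}

def successors(m):
    """Yield (transition, marking) pairs for every enabled transition."""
    for t, delta in effect.items():
        n = tuple(a + d for a, d in zip(m, delta))
        if all(0 <= k <= c for k, c in zip(n, capacities)):
            yield t, n

def reachability(m0):
    """All markings reachable from m0, with the firings between them."""
    graph, queue, seen = {}, deque([m0]), {m0}
    while queue:
        m = queue.popleft()
        graph[m] = dict(successors(m))
        for n in graph[m].values():
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return graph

graph = reachability((2, 1, 0))
print(len(graph), 'reachable markings')
```

Each key of the returned dictionary corresponds to one row of the reachability graph, and its value lists the enabled transitions of that row together with the markings they lead to.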



Fig. 5. Reachability graph after modifying the weight of the arc from P5 to t6

A second example of a Petri net is shown in Figure 3. This net, comprised of
five places and six transitions, has many more arcs than the previous example.
Consequently, its reachability graph, shown in Figure 4, is also larger. The Mathematica code for defining the net and getting this graph is similar to that for
the first example and, hence, has been intentionally omitted.
The net in Figure 3 exhibits a number of remarkable features: for instance,
places P1, P2 and P5 have more than one output transition, leading to nondeterministic behavior. Such a structure is usually referred to as a conflict, decision or choice. On the other hand, this net has no source transitions. This fact
is reflected in the reachability graph, which has a triangular structure: entries
appear only below the diagonal. As opposed to this case, the net in Figure 1 has
one single source transition (namely, t1), the only element above the diagonal in
its reachability graph.
It is worthwhile to mention that the place P1 has only input arcs, meaning
that its number of tokens can only decrease, never increase. This means that the
capacity of P1 might be smaller without affecting the current results.



On the other hand, the reachability graph in Figure 4 has some markings onto
which no transition can be applied. Examples of such markings are {1, 3, 3, 2, 0},
{1, 2, 3, 2, 0} or {0, 4, 3, 1, 0} (although they are not the only ones). They are
sometimes called end markings. Note that no end markings appear in the first
net of this paper. Note also that the transition t6 is never fired (it never appears
in the graph of Figure 4). By simply decreasing the weight of the arc from P5
to t6 to unity, the transition becomes enabled, as shown in the new reachability
graph depicted in Figure 5.

Conclusions and Further Remarks

In this paper, a new Mathematica package for dealing with finite capacity Petri
nets has been introduced. The main features of the package have been discussed
through its application to some simple yet illustrative examples. Our future work
includes the application of this package to real problems, the extension to other
classes of Petri nets, the implementation of new commands for the mathematical
analysis of these nets and the characterization of the possible relationship (if
any) with functional networks and other networked structures [2,4,5].
This research has been supported by the Spanish Ministry of Education and
Science, Project Ref. #TIN2006-13615. The second author also thanks the Erasmus Program of the European Union for financial support during his stay at the
University of Cantabria in the period this paper was written.

References

1. Murata, T.: Petri nets: Properties, analysis and applications. Proceedings of the
IEEE, 77(4) (1989) 541-580
2. Echevarría, G., Iglesias, A., Gálvez, A.: Extending neural networks for B-spline
surface reconstruction. Lecture Notes in Computer Science, 2330 (2002) 305-314
3. German, R.: Performance Analysis of Communication Systems with Non-Markovian
Stochastic Petri Nets. John Wiley and Sons, Inc., New York (2000)
4. Iglesias, A., Gálvez, A.: A New Artificial Intelligence Paradigm for Computer-Aided
Geometric Design. Lecture Notes in Artificial Intelligence, 1930 (2001) 200-213
5. Iglesias, A., Echevarría, G., Gálvez, A.: Functional networks for B-spline surface
reconstruction. Future Generation Computer Systems, 20(8) (2004) 1337-1353

Dynaput: Dynamic Input Manipulations
for 2D Structures of Mathematical Expressions
Deguchi Hiroaki
Kobe University, 3-11 Tsurukabuto, Nada-ku, Kobe 657-8501, Japan

Abstract. This paper describes a prototype of a GUI input interface
for mathematical expressions. Expressions are treated as sets of objects.
To handle 2D structures, we have added new areas, called peripheral
areas, to the objects. Based on new operations for these areas, the system
provides a dynamic interactive environment. The internal data structure
for the interface is also presented in this paper. The tree structure of
the objects is very simple, and has high potential for being converted
to a variety of formats. Using this new dynamic input interface, users
can manipulate 2D structures directly and intuitively, and they can get
mathematical notation in the desired format.
Keywords: GUI, input interface, direct manipulation, text entry,
pen-based computing.


How to handle mathematical expressions on computer screens has been discussed since the 1960s [1]. While many systems have been proposed, there are no
easy-to-use systems for beginners. Users must choose a not-too-bad (or not-too-wrong) one from the existing systems. One of the reasons for this situation is
that the APIs of the existing systems are not designed adequately in consideration of handling 2D structures.
For example, the cursor (caret) of GUI text editors has 2D coordinates, but this
information is not used effectively. A cursor is placed on, before, or after one
of the characters. Because of text's linear structure, the position of a cursor can
be converted to 1D information. Therefore, although the cursor of GUI text
editors looks like a 2D cursor, it is a 1D cursor in practice. And many systems
which handle mathematical expressions are based on this type of cursor model.

Input Interfaces of Computer Algebra Systems

Computer algebra systems handle mathematical expressions, but their input interfaces are generally based on template models, such as the equation editor components of word processors, which are treated like subprograms. An equation editor with a
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 243–250, 2007.
© Springer-Verlag Berlin Heidelberg 2007


H. Deguchi

template model is able to handle mathematical expressions by using templates
of 2D structures. And the input interface of the system is designed under the
influence of a paradigm based on the cursor model as mentioned above.
Templates for 2D structures have a box structure, each box of which is designed
for cursor-model-based input interfaces. Box structures can be nested inside
other boxes. 2D structures are constructed from such nested boxes. Usually
these boxes are displayed on the screen. Thus there are symbols users would like
to display, and boxes these systems have to display, on the screen. Such systems
are not intuitive, because of these boxes users don't necessarily wish to display.
Therefore, the user interfaces of many computer algebra systems are not suitable
for handling 2D structures.
As for input devices, template-based systems require both pointing devices
and keyboards. Keyboards are required for inputting text, and pointing devices
are required for selecting templates of 2D structures. A system that forces users
to juggle two or more devices is not easy to use.

The Present Work

The goal of this work is to provide an easy-to-use environment for novice users,
such as students studying mathematics at the primary (or higher) level. In our
former research [2,3,4,5], we developed MathBlackBoard as a user interface for
computer algebra systems. In MathBlackBoard, only a pointing device is
required, and keyboards are not indispensable. Mathematical expressions located
in the editing area of MathBlackBoard can be dragged and dropped. The user
interface of MathBlackBoard is easy to use, at least for junior high school
students [6,8].
This paper describes a new input interface [7] which replaces template-based
interfaces. In this paper, we introduce a dynamic input interface and its data
structure. The interface has patents pending in Japan. The new version of MathBlackBoard (Fig. 1) has been developed as a prototype of the interface. In
Section 2 of this paper, the new model of GUI operations is shown, and the
operations of existing systems and of the new system are discussed. In Section 3,
the data structure for the dynamic input interface is described. Finally, Section 4
gives a conclusion.

GUI Operations

Using computers with a GUI, users can manipulate objects on screens directly.
Drag and drop operations, especially, are intuitive. However, it is hard to say
that they are effectively utilized in interfaces treating mathematical expressions.
For example, in some systems, to connect a selection to a target, users can select
and drag the selection, but can drop it only onto prepared boxes related to
the target. These boxes have to be prepared by using templates before the drag
and drop operations.



Fig. 1. MathBlackBoard


Drag and Drop

In general, drag and drop means dragging an object and dropping it at a different
place or onto another object. Each object on a GUI screen has its own selection
area, which is used to be selected or to be dropped onto. When a dragged object
is released by manipulation of the pointing device, all of the selection areas are
scanned. If the pointer lies inside the selection area of any object, the dragged
object is dropped onto that pointed object. Otherwise, the dragged object is
dropped at the place marked by the pointer.
Using these operations, users can drag and drop expressions onto a prepared
box or into an insertion point, which is used in the linear structure of texts. Insertion points have 1D information (or information convertible to 1D) about their
position, and are used with a text stream, where the text includes characters and
symbols. When a dragged object is released, all of the boxes are scanned. And
then, if a box is pointed at, all of the insertion points in the box are scanned.
In the model of templates and their box structures, which is generally used
with drag and drop operations, mathematical expressions are constructed from
nested boxes which contain other boxes or text streams as contents.

Drag and Draw

As mentioned above, existing drag and drop operations and template models are
not suitable for the 2D structure of mathematical expressions, because a mathematical formula is constructed as box-structured text.
To extend the drag and drop operation, we have added new elements to objects.
The most important element added is an area serving as a target of drag and
drop operations. Notice that these areas are used as targets of drag and drop,
not as box-structured containers. These newly added areas are called peripheral
areas. The left


Fig. 2. Left: A symbol object x with a selection area, peripheral areas, and a baseline. Right: Base points and their link relations.

hand of Fig. 2 shows the symbol object x with M as a selection area, R0-R5
as peripheral areas, and BL as a baseline.
By using objects with peripheral areas, users can drag mathematical expressions and drop them onto a peripheral area of the target, connecting the selection
to the target at the desired position. Each object of the system has information
about its connected child-nodes, as well as about the location and display size
of those child-nodes. The right hand of Fig. 2 shows the symbol object x with
P0-P5 as base points. Each base point of P0-P5 is related to the corresponding
peripheral area of R0-R5. Base points are used as link points to connect child-nodes. For each base point, the information of the link relation for its child-node
is configured (right hand of Fig. 2). The size of a dotted-line square indicates the
child-node's display size, and the relation between a dotted-line square and its
related base point gives the relative position of the child-node.
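A minimal hit-testing sketch may make the peripheral-area mechanism concrete. The paper does not fix the geometry of R0-R5, so the six named regions below (left, right, superscript, subscript, over, under), the margin width, and the screen convention of y growing downwards are all assumptions for illustration:

```python
# Hit testing for drag and draw, sketched in Python. The selection area M
# is the object's bounding box; the surrounding peripheral areas are a
# hypothetical layout (the paper does not specify R0-R5's exact geometry).
from dataclasses import dataclass

@dataclass
class SymbolObject:
    x: float   # top-left corner of the selection area M
    y: float   # (screen coordinates: y grows downwards)
    w: float
    h: float

    def regions(self, margin=8):
        """Peripheral areas as named (x, y, w, h) rectangles around M."""
        x, y, w, h, m = self.x, self.y, self.w, self.h, margin
        return {'left':        (x - m, y,     m, h),
                'right':       (x + w, y,     m, h),
                'superscript': (x + w, y - m, m, m),
                'subscript':   (x + w, y + h, m, m),
                'over':        (x,     y - m, w, m),
                'under':       (x,     y + h, w, m)}

def hit(obj, px, py):
    """Name of the peripheral area containing the drop point,
    'selection' for the object itself, or None for a miss."""
    if obj.x <= px <= obj.x + obj.w and obj.y <= py <= obj.y + obj.h:
        return 'selection'
    for name, (rx, ry, rw, rh) in obj.regions().items():
        if rx <= px <= rx + rw and ry <= py <= ry + rh:
            return name
    return None

sym = SymbolObject(100, 100, 20, 30)
print(hit(sym, 110, 115), hit(sym, 124, 96))   # selection superscript
```

During a draw, the region under the pointer would select the base point at which the dragged child-node is to be linked, which is what the feedback previews.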
The operations for objects with these new elements can still be explained
as drag and drop operations. But, in this case, drag and drop is an operation
between the selection and the peripheral area of the target, not an operation
between the selection and the target itself. Therefore it can be considered a new
GUI operation. When the selection is dropped onto a peripheral area, the GUI
operation is called drag and draw. The drag and draw operation is an operation
between the selection and the target; in this case, drag and draw means dragging
the selection and drawing it near to the target. A system with drag and draw
operations is suitable for mathematical expressions, because objects are associated with other objects in the manner natural to 2D structures, by using
peripheral areas and their related elements.

Dynaput Operation

The drag and draw operations are suitable for handling the 2D structure of mathematical expressions. And there are other elements that extend GUI operations
for beginners. The new elements are feedback from drag operations and feedback
from draw operations.

Dynaput: Dynamic Input Manipulations for 2D Structures


Feedback from drag operations should show which object is dragged. Such
information helps novice users to know which object is selected and being dragged.
MathBlackBoard has provided this kind of information since the
early stages of its development.
Feedback from draw operations also shows useful information: where the
selection object will be connected, and to which size the
selection object will be changed. With this feedback, users can
check the place where the dragged object will be connected, or preview
its size, before the dragged object is released by manipulation of the pointing device.
Thus users can check the results of various cases by moving the pointing device, without carrying out the determination operation, which in general means releasing the pressed button
of the pointing device. If the preview matches what the user wished, the determination operation is performed by manipulating the pointing device. After
the determination, the selection is connected to the target and placed at the
desired position with the previewed size.
The environment with these two kinds of feedback provides an interactive,
dynamic input interface. In the dynamic input interface of MathBlackBoard,
operations include not only an inputting aspect but also an editing aspect.
Users can input expressions and edit expressions in the same way. For example, an input operation in MathBlackBoard is performed as follows: drag an object
in the palette area of Fig. 1 and drop it onto other objects, peripheral areas,
or the blackboard area of Fig. 1. An edit operation is performed analogously:
drag an object in the blackboard area and drop it onto other objects, peripheral areas, or the blackboard area.
Since the same operations in the user interface cover both inputting and editing,
as described above, they can be regarded as new GUI operations. Because the operations
are not simple input operations but dynamic input operations, they
are called "dynaput" instead of "input". "Dynaput" is a coined word combining
"dyna" (from "dynamic") and "put" (from "input" or "put").


Data Structure
Layout Tree

Symbol objects for dynaputting are structured as a tree. Each node has information on numbered directions for linking to child-nodes (left hand of Fig. 3). In
Fig. 3, the thick lines mark the default direction; in this case, the default direction is 2. The right hand of Fig. 3 shows a layout tree of the following expression:

f(x) = Σ a_i x^i

The structure of the layout tree is based on the layout of symbols. Mathematical
semantic representation is not used in the layout tree structure. Our strategy for
handling expressions on computer screens is that "what it means appears when
you need it". The selected tree is traversed when the user invokes a command.


H. Deguchi

Fig. 3. Left: Numbered directions. (The thick line marks the default direction.) Right:
An example of a layout tree. (The numbers beside the lines indicate directions.)

Layout trees can be converted easily to presentation markup languages,
such as TeX and MathML. Fig. 4 shows tree traversal and TeX output. The
child-node in the default direction is visited after all child-nodes in other directions
have been visited. The parent-node performs pre/post processing. For example, when it has a
child-node in a non-default direction, the parenthesis "{" is output before the
visit, and "}" is output after all child-nodes in that direction have been visited. The
output of other symbols (such as "^" or "_") depends on the content
of the symbol object and the direction number of the child-node.
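The traversal just described can be sketched as follows. This Python fragment is an illustration, not the system's actual code: the node representation and the rule that an "up" direction is rendered as a TeX superscript are simplifying assumptions.

```python
# Sketch of layout-tree traversal emitting TeX. Children in non-default
# directions are emitted first, wrapped in braces; the default-direction
# child is visited last, as described in the text.
class Node:
    def __init__(self, content, children=None):
        self.content = content
        self.children = children or {}   # direction number -> child Node

DEFAULT_DIR = 2                          # the default direction from Fig. 3

def to_tex(node):
    out = node.content
    for direction, child in node.children.items():
        if direction != DEFAULT_DIR:
            # assumed mapping for illustration: direction 1 ("up") -> superscript
            out += "^{" + to_tex(child) + "}"
    if DEFAULT_DIR in node.children:     # the default direction comes last
        out += to_tex(node.children[DEFAULT_DIR])
    return out

# x^2 + 1 as a layout tree: "x" has "2" in the up direction, "+1" after it
tree = Node("x", {1: Node("2"), 2: Node("+", {2: Node("1")})})
```

Calling `to_tex(tree)` walks the tree in the order described above and yields the string `x^{2}+1`.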

Fig. 4. Tree traversal and TeX output

The MathML (Presentation Markup) output is produced by a method similar to the TeX
output. Part of a MathML output example is as follows:



Layout tree traversal with output of text characters is the common method
of both. The tasks for outputting are simple because of the simple structure of
layout trees. Thus layout trees are easily converted to presentation markup languages.

Binary Tree of Symbol Objects

Layout trees can also be converted, in another way, to mathematical expressions for computer algebra
systems to evaluate. Layout trees are converted to binary trees
of symbol objects before being converted to expressions for evaluation.
At first, a layout tree is parsed into reverse Polish notation. The layout
tree is scanned from the root in the default direction. An example of the reverse Polish
notation of the layout tree in Fig. 3 is as follows:

[f][x][apply function][Σ][a][x][i][power][invisible times][apply][=]
As to Σ and a, only child-nodes of direction 2 are visited in that scan
process. The ignored child-nodes are called after the process, if needed; that
is decided by the parser. Temporary objects (such as "power" or "apply") are
generated in the scan process.
Then, a binary tree of the symbol objects (left hand of Fig. 5) is constructed from the reverse Polish notation. The right hand of Fig. 5 shows another
example. The parser can decide the meaning of f(x).
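The construction of a binary tree from the reverse Polish notation can be sketched as follows. This is an illustration, not MathBlackBoard's code: the operator names are the temporary objects mentioned above, and treating each of them as strictly binary is an assumption of the sketch.

```python
# Sketch: rebuild a binary tree from the reverse Polish notation produced
# by scanning the layout tree, using a stack in the usual way.
OPERATORS = {"apply function", "power", "invisible times", "apply", "="}

class Node:
    def __init__(self, content, left=None, right=None):
        self.content, self.left, self.right = content, left, right

def rpn_to_tree(tokens):
    stack = []
    for tok in tokens:
        if tok in OPERATORS:
            right = stack.pop()          # operators consume two subtrees
            left = stack.pop()
            stack.append(Node(tok, left, right))
        else:
            stack.append(Node(tok))      # operands become leaves
    assert len(stack) == 1
    return stack[0]

# the RPN of f(x) = sum of a x^i from the text, with "sum" for the Sigma sign
rpn = ["f", "x", "apply function", "sum", "a", "x", "i", "power",
       "invisible times", "apply", "="]
root = rpn_to_tree(rpn)
```

The root of the resulting tree is "=", with the applied function f(x) as its left subtree and the applied sum as its right subtree, matching the left hand of Fig. 5.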

Fig. 5. Left: An example of a binary tree. Right: The left subtree of another example.

Finally, the binary tree of the symbol objects is traversed, and outputs that
depend on the contents of the symbol objects are generated. Notice that the ignored
child-nodes of Σ and a are called when the binary tree is traversed.
The binary trees of symbol objects are used internally and temporarily in
the conversion process. Users can obtain mathematical notations in the format they
wish after the process, because binary trees can be converted to various notations of mathematical expressions, including infix, prefix, and postfix.




A new dynamic input interface has been developed. We have added new elements to the objects of the system, and new GUI operations have
been defined for such objects. "Drag and draw" is a variant of drag and drop: when the
selection is dragged and drawn near to the target, the operation is called drag
and draw. The exact actions of the operation are to drag over the peripheral
areas and to drop onto these areas.
"Dynaput" has also been defined as an operation that includes input and edit,
and provides a dynamic input interface by previewing results. To provide dynaput
operations, the interface should have the "drag and draw" operation and peripheral
areas for symbol objects. Using dynaput operations, users can input and edit
mathematical expressions intuitively using only pointing devices.
In this user interface, mathematical expressions are constructed from tree-structured symbol objects. Boxes as containers and text streams as contents,
which are used in other systems with templates, are not used. The structure
of the layout tree is based on the 2D structure of symbols on computer screens, and
the mathematical meanings of objects are ignored temporarily.
The presentation markup outputs, such as TeX and MathML, are generated
easily because of the simple structure of layout trees. Mathematical notations
for evaluation by computer algebra systems are also converted from layout trees
via tree transformation. "What it means appears when you need it" is our strategy
for handling the 2D structures of expressions.
Using this new input interface with the dynaput operation, users can manipulate mathematical formulas intuitively and acquire mathematical expressions in various formats.
A demo video of MathBlackBoard is available at the following URL:

1. Kajler, N., Soiffer, N.: A Survey of User Interfaces for Computer Algebra Systems.
J. Symb. Comput. 25(2) (1998) 127–159
2. Matsushima, J.: An Easy-to-Use Computer Algebra System by Using Java. Master
Thesis, Kobe University (1998) [in Japanese]
3. Deguchi, H.: Blackboard Applet. Journal of Japan Society for Symbolic and Algebraic
Computation 9(1) (2002) 32–37 [in Japanese]
4. Deguchi, H.: MathBlackBoard. Journal of Japan Society for Symbolic and Algebraic
Computation 11(3,4) (2005) 77–88 [in Japanese]
5. Deguchi, H.: MathBlackBoard as User Interface of Computer Algebra Systems. Proceedings of the 10th Asian Technology Conference in Mathematics (2005) 246–252
6. Deguchi, H.: A Practical Lesson Using MathBlackBoard. Journal of Japan Society
for Symbolic and Algebraic Computation 12(4) (2006) 21–30 [in Japanese]
7. Deguchi, H.: A Dynamic Input Interface for Mathematical Expressions. Proceedings
of the Human Interface Symposium 2006 (2006) 627–630 [in Japanese]
8. Deguchi, H., Hashiba, H.: MathBlackBoard as Effective Tool in Classroom. ICCS
2006, Part II, Springer LNCS 3992 (2006) 490–493

On the Virtues of Generic Programming for Symbolic Computation

Xin Li, Marc Moreno Maza, and Éric Schost

University of Western Ontario, London N6A 1M8
{xli96, moreno, schost}

Abstract. The purpose of this study is to measure the impact of C-level polynomial arithmetic code on the performance of AXIOM high-level algorithms, such as polynomial factorization. More precisely, given
a high-level AXIOM package P parametrized by a univariate polynomial
domain U, we have compared the performances of P when applied to
different U's, including an AXIOM wrapper for our C-level code.
Our experiments show that when P relies on U for its univariate polynomial computations, our specialized C-level code can provide a significant speed-up. For instance, the improved implementation of square-free
factorization in AXIOM is 7 times faster than the one in Maple and
very close to the one in MAGMA. On the contrary, when P does not
rely much on the operations of U and implements its private univariate
polynomial operations, then P cannot benefit from our highly optimized
C-level code. Consequently, code which is poorly generic reduces the
speed-up opportunities when applied to highly efficient and specialized
polynomial data-types.

Keywords: Generic programming, fast arithmetic, efficient implementation, high performance, polynomials.


Generic programming, and in particular type constructors parametrized by types
and values, is a clear need for implementing computer algebra algorithms. This
has been one of the main motivations in the development of computer algebra
systems and languages such as AXIOM [10] and Aldor [15] since the 1970s.
AXIOM and Aldor have a two-level object model of categories and domains
which allows the implementation of algebraic structures (rings, fields, ...) and
their members (polynomial domains, fields of rational functions, ...). In these
languages, the user can implement domain and category constructors, that is,
functions returning categories or domains. For instance, one can implement a
function UP taking a ring R as a parameter and returning the ring of univariate
polynomials over R. This feature is known as categorical programming.
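To illustrate categorical programming outside AXIOM, here is a hedged analogy in Python (not AXIOM code; the names PF and UP merely mirror the constructors discussed in this paper): functions that take a prime or a ring and return a new domain, i.e. a class.

```python
def PF(p):
    """Domain constructor: returns a class implementing the prime field Z/pZ."""
    class Fp:
        def __init__(self, v): self.v = v % p
        def __add__(self, o): return Fp(self.v + o.v)
        def __mul__(self, o): return Fp(self.v * o.v)
        def __eq__(self, o): return self.v == o.v
    return Fp

def UP(R):
    """Domain constructor: univariate polynomials over an arbitrary ring R."""
    class Poly:
        def __init__(self, coeffs):          # coeffs[i] is the x^i coefficient
            self.coeffs = [c if isinstance(c, R) else R(c) for c in coeffs]
        def __add__(self, other):
            n = max(len(self.coeffs), len(other.coeffs))
            pad = lambda cs: cs + [R(0)] * (n - len(cs))
            return Poly([a + b for a, b in zip(pad(self.coeffs),
                                               pad(other.coeffs))])
    return Poly

F5 = PF(5)
P = UP(F5)          # the ring (Z/5Z)[x], built generically from the ring Z/5Z
```

The point of the design, as in AXIOM, is that UP knows nothing about its coefficient ring beyond the operations the ring provides.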
Another goal in implementing computer algebra algorithms is that of efficiency. More precisely, it is desirable to be able to realize successful implementations of the best algorithms for a given problem. Sometimes, this may
sound contradictory with the generic programming paradigm. Indeed, efficient
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 251–258, 2007.
© Springer-Verlag Berlin Heidelberg 2007


X. Li, M.M. Maza, and É. Schost

implementations often require specialized data-structures (e.g., primitive arrays
of machine words for encoding dense univariate polynomials over a finite field).
High performance was not the primary concern in the development of AXIOM. For
instance, until recently [11], AXIOM had no domain constructor for univariate
polynomials with dense representation.
The MAGMA [2,1] computer algebra system, developed at the University of
Sydney since the 1990s, has succeeded in providing both generic types and high
performance. As opposed to many previous systems, a strong emphasis was put
on performance: asymptotically fast state-of-the-art algorithms are implemented
in MAGMA, which has become a de facto reference regarding performance.
MAGMA's design uses the language of universal algebra as well. Users can
dynamically define and compute with structures (groups, rings, ...) that belong
to categories (e.g., permutation groups), which themselves belong to varieties
(e.g., the variety of groups); these algebraic structures are first-class objects.
However, some aspects of categorical programming available in AXIOM are not
present: users cannot define new categories, and interfacing with C seems not
to be possible either.
In this paper, we show that generic programming can contribute to high
performance. To do so, we first observe that dense univariate and multivariate
polynomials over finite fields play a central role in computer algebra, thanks
to modular algorithms. Therefore, we have realized highly optimized implementations of these polynomial data-types in C, Aldor and Lisp. This work is
reported in [5] and [12].
The purpose of this new study is to measure the impact of our C-level
polynomial arithmetic code on the performance of AXIOM high-level algorithms,
such as factorization. More precisely, given a high-level AXIOM package P (or
domain) parametrized by a univariate polynomial domain U, we have compared
the performances of P when applied to different U's, including an AXIOM wrapper for our C-level code.
Our experiments show that when P relies on U for its univariate polynomial
computations, our specialized C-level code can provide a significant speed-up.
On the contrary, when P does not rely much on the operations of U and implements its private univariate polynomial operations, then P cannot benefit from
our highly optimized C-level code. Consequently, code which is poorly generic reduces the speed-up opportunities when applied to highly efficient and specialized
polynomial data-types.

Software Overview

We present briefly the AXIOM polynomial domain constructors involved in our
experimentation. Then, we describe the features of our C code that play a central
role in this study: finite field arithmetic and fast univariate polynomial arithmetic.
We notably discuss how the choice of special primes enables us to obtain fast
algorithms for reduction modulo p.




AXIOM Polynomial Domain Constructors

Let R be an AXIOM Ring. The domain SUP(R) implements the ring of univariate
polynomials with coefficients in R. The data representation of SUP(R) is sparse,
that is, only non-zero terms are encoded. The domain constructor SUP is written
in the AXIOM language.
The domain DUP(R) implements exactly the same operations as SUP(R).
More precisely, these two domains satisfy the category UnivariatePolynomialCategory(R). However, the representation of the latter domain is dense: all
terms, null or not, are encoded. The domain constructor DUP was developed in
the AXIOM language; see [11] for details.
Another important domain constructor in our study is PF: for a prime number
p, the domain PF(p) implements the prime field Z/pZ.
Our C code is dedicated to multivariate polynomials with dense representation
and coefficients in a prime field. To make this code available at the AXIOM level,
we have implemented a domain constructor DUP2 wrapping our C code. For a
prime number p, the domains DUP2(p) and DUP(PF(p)) implement the same
category, that is, UnivariatePolynomialCategory(PF(p)).

Finite Field Arithmetic

The implementation reported here focuses on some special small finite fields. By
a small finite field, we mean a field of the form K = Z/pZ, for p a prime that
fits in a 26-bit word (so that the product of two elements reduced modulo p fits
into a double register). Furthermore, the primes p we consider have the form
k·2^l + 1, with k a small odd integer (typically k ≤ 7), which enables us to write
specific code for integer Euclidean division.
The elements of Z/pZ are represented by integers from 0 to p − 1. Additions
and subtractions in Z/pZ are performed in a straightforward way: we perform
integer operations, and the result is then reduced modulo p. Since the result of
additions and subtractions is always in −(p − 1), ..., 2(p − 1), modular reduction
requires at most a single addition or subtraction of p; for the reduction, we use
routines coming from Shoup's NTL library [9,14].
Multiplication in Z/pZ requires more work. A standard solution, present in
NTL, consists in performing the multiplication in double-precision floating-point
registers, computing numerically the quotient appearing in the Euclidean division
by p, and finally deducing the remainder.
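A minimal sketch of this floating-point technique follows. This is illustrative Python, not NTL's actual routine; the choice of prime and the single correction step are assumptions of the sketch.

```python
# Sketch of floating-point reduction: estimate the quotient of a*b by p in
# double precision, then correct the remainder by at most one addition or
# subtraction of p. In Python every float is a double; keeping a, b < 2^26
# makes a*b exactly representable, mirroring the 26-bit constraint above.
p = 7340033              # a prime below 2^26, of the form 7 * 2^20 + 1

def mulmod_float(a, b):
    q = int(a * b / p)   # floating-point estimate of floor(a*b / p)
    r = a * b - q * p    # candidate remainder; q may be off by one
    if r < 0:
        r += p
    elif r >= p:
        r -= p
    return r
```

Because the rounding error of a single double-precision division is below one unit, the estimated quotient differs from the true one by at most 1, so one correction step suffices.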
Using the special form of the prime p, we designed the following faster
approximate Euclidean division, which shares similarities with Montgomery's
REDC algorithm [13]; for another use of arithmetic modulo special primes,
see [4]. Let thus Z be in 0, ..., (p − 1)^2; in actual computations, Z is obtained
as the product of two integers less than p. The following algorithm computes an
approximation of the remainder of kZ by p, where we recall that p has the form
k·2^l + 1:
1. Compute q = ⌊Z / 2^l⌋.
2. Compute r = k(Z − q·2^l) − q.
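In Python, the two steps read as follows; this is a sketch under one concrete choice of prime of the required shape (p = 7·2^26 + 1), and it returns a value congruent to kZ modulo p rather than the reduced remainder itself:

```python
# Sketch of the approximate Euclidean division for p = k*2^l + 1.
# Since k*2^l = p - 1 is congruent to -1 mod p, we get
# k*(Z - q*2^l) - q = k*Z - q*(k*2^l + 1) = k*Z - q*p, congruent to k*Z mod p.
k, l = 7, 26
p = k * 2**l + 1                     # 469762049, a prime of this shape

def approx_reduce(Z):
    """Return a value congruent to k*Z mod p, for 0 <= Z <= (p-1)**2."""
    q = Z >> l                       # step 1: floor(Z / 2^l), a logical shift
    return k * (Z & (2**l - 1)) - q  # step 2: k*(Z - q*2^l) - q
```

Note that Z & (2^l − 1) is exactly Z − q·2^l, which is why the whole computation costs one shift, one AND, and one multiplication by the constant k.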



Proposition 1. Let r be as above and let r0 < p be the remainder of kZ by p.
Then r ≡ r0 mod p and r = r0 − λp, with 0 ≤ λ < k + 1.

Proof. Let us write the Euclidean division of kZ by p as kZ = q0·p + r0. This
implies that

q = q0 + ⌊(q0 + r0) / (k·2^l)⌋

holds. From the equality qp + r = q0·p + r0, we deduce that we have

r = r0 − λp with λ = ⌊(q0 + r0) / (k·2^l)⌋.

The assumption Z ≤ (p − 1)^2 enables us to conclude that λ < k + 1 holds.

In terms of operations, this reduction is faster than the usual algorithms, which
rely on either Montgomery's REDC or Shoup's floating-point techniques. The
computation of q is done by a logical shift; that of r requires a logical AND (to
obtain Z − q·2^l) and a single multiplication by the constant k. Classical reduction
algorithms involve 2 multiplications, plus other operations (additions and logical
operations). Accordingly, in practical terms, our approach turned out to be more
efficient.
There are however drawbacks to this approach. First, the algorithm above does
not compute Z mod p, but a number congruent to kZ modulo p (this multiplication by a constant is also present in Montgomery's approach). This is however
easy to circumvent in several cases, for instance when doing multiplications by
precomputed constants (this is the case in FFT polynomial multiplication, see
below), since a correcting factor k^(−1) mod p can be incorporated into these constants. The second drawback is that the output of our reduction routine is not
reduced modulo p. When results are reused in several computations, errors accumulate, so it is necessary to perform some error reduction at regular time steps,
which slows down the computations.

Polynomial Arithmetic

For polynomial multiplication, we use the Fast Fourier Transform (FFT) [6,
Chapter 8] and its variant, the Truncated Fourier Transform [8]. Indeed, since
we work modulo primes p of the form k·2^l + 1, Lemma 8.8 in [6] shows that Z/pZ
admits 2^l-th primitive roots of unity, so that it is suitable for FFT multiplication
for output degrees up to 2^l − 1.
Both variants feature an O(d log(d)) asymptotic complexity; the latter offers a
smoother running time, avoiding the usual abrupt jumps that occur at powers
of 2 in classical Fast Fourier Transforms.
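A minimal radix-2 FFT multiplication over such a prime can be sketched as follows. This is illustrative Python, not the paper's C code; it assumes p = 7·2^26 + 1 with 3 as a primitive root, and uses the simple recursive transform rather than the truncated variant.

```python
p = 7 * 2**26 + 1        # 469762049; Z/pZ has 2^26-th roots of unity
G = 3                    # assumed primitive root modulo p

def ntt(a, root):        # recursive decimation-in-time transform, len(a) = 2^m
    n = len(a)
    if n == 1:
        return a
    even = ntt(a[0::2], root * root % p)
    odd = ntt(a[1::2], root * root % p)
    out, w = [0] * n, 1
    for i in range(n // 2):
        t = w * odd[i] % p
        out[i] = (even[i] + t) % p
        out[i + n // 2] = (even[i] - t) % p
        w = w * root % p
    return out

def multiply(a, b):      # product in Z/pZ[x]; coefficient lists, index i = x^i
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2
    root = pow(G, (p - 1) // n, p)            # primitive n-th root of unity
    fa = ntt(a + [0] * (n - len(a)), root)
    fb = ntt(b + [0] * (n - len(b)), root)
    fc = [x * y % p for x, y in zip(fa, fb)]
    c = ntt(fc, pow(root, p - 2, p))          # inverse transform, then scale
    inv_n = pow(n, p - 2, p)
    return [x * inv_n % p for x in c[:len(a) + len(b) - 1]]
```

The abrupt jumps mentioned above come from padding to the next power of two in `multiply`; the Truncated Fourier Transform smooths them away.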
Using fast multiplication enables us to write a fast Euclidean division for
polynomials, using the Cook–Sieveking–Kung approach through power series inversion [6, Chapter 9]. Recall that this algorithm is based on the remark that the
quotient q in the Euclidean division u = qv + r in K[x] satisfies

rev_{deg u − deg v}(q) = rev_{deg u}(u) · rev_{deg v}(v)^(−1) mod x^(deg u − deg v + 1),

where rev_m(p) denotes the reverse polynomial x^m · p(1/x). Hence, computing the
quotient q is reduced to a power series division, which itself can be done in time
O(d log(d)) using Newton's iteration [6, Chapter 9].
Newton's iteration was implemented using middle product techniques [7],
which enable us to reduce the cost of a direct implementation by a constant
factor (these techniques are particularly easy to implement when using FFT
multiplication, and are already described in this case in [14]).
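Ignoring the middle-product refinement, the quotient-via-reversal idea can be sketched as follows. This is plain Python with schoolbook products standing in for the FFT; the convention that index i holds the coefficient of x^i is an assumption of the sketch.

```python
# Sketch of Cook-Sieveking-Kung division via power series inversion.
p = 469762049                      # any prime works for this sketch

def mul(a, b):                     # schoolbook product in Z/pZ[x]
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    return c

def inv_series(f, n):              # Newton iteration: g <- g*(2 - f*g)
    g = [pow(f[0], p - 2, p)]
    prec = 1
    while prec < n:
        prec *= 2
        t = [(-x) % p for x in mul(f[:prec], g)[:prec]]
        t[0] = (t[0] + 2) % p
        g = mul(g, t)[:prec]
    return g[:n]

def divmod_poly(u, v):             # u = q*v + r with deg r < deg v
    du, dv = len(u) - 1, len(v) - 1
    if du < dv:
        return [0], u
    n = du - dv + 1
    rq = mul(u[::-1][:n], inv_series(v[::-1], n))[:n]  # rev(u)*rev(v)^-1
    q = rq[::-1]
    qv = mul(q, v)
    r = [(ui - wi) % p for ui, wi in zip(u, qv)][:dv]
    return q, r
```

With FFT multiplication substituted for `mul`, this division runs in O(d log d), which is what makes the fast GCD below worthwhile.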
Our last ingredient is GCD computation. We implemented both the classical
Euclidean algorithm and its faster divide-and-conquer variant using so-called Half-GCD techniques [6, Chapter 11]. The former features a complexity
in O(d^2), whereas the latter has cost in O(d log(d)^2), but is hindered by a large
multiplicative constant hidden by the big-O notation.
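For completeness, the classical O(d^2) variant can be sketched directly. This is a Python illustration, not the C implementation; the naive remainder loop stands in for the fast division, and monic normalization makes the result unique.

```python
p = 469762049                        # arithmetic is in Z/pZ

def poly_rem(u, v):                  # naive remainder of u by v in Z/pZ[x]
    u = u[:]
    inv_lead = pow(v[-1], p - 2, p)
    while len(u) >= len(v):
        c = u[-1] * inv_lead % p
        shift = len(u) - len(v)
        for i, vi in enumerate(v):
            u[shift + i] = (u[shift + i] - c * vi) % p
        while u and u[-1] == 0:      # drop the cancelled leading terms
            u.pop()
    return u or [0]

def poly_gcd(a, b):                  # classical Euclidean algorithm
    while b != [0]:
        a, b = b, poly_rem(a, b)
    inv_lead = pow(a[-1], p - 2, p)  # normalize to a monic polynomial
    return [c * inv_lead % p for c in a]
```

The Half-GCD variant replaces this remainder chain by a divide-and-conquer scheme over 2x2 polynomial matrices, which is where the large hidden constant comes from.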

Code Connection

Open AXIOM is based on GNU Common Lisp (GCL), GCL being developed
in C [12]. We follow the GCL developers' approach to integrate our C-level
code into GCL's kernel. The crucial step is converting the different polynomial data
representations between AXIOM and those in our C library via the GCL level.
The overhead of these conversions may significantly reduce the effectiveness of
our C implementation. Thus, a good understanding of the data structures in AXIOM
and GCL is necessary to establish an efficient code connection.


In this section, we compare our specialized domain constructor DUP2 with our
generic domain constructor DUP and the standard AXIOM domain constructor
SUP. Our experimentation takes place in the polynomial rings:

Ap = Z/pZ[x],
Bp = (Z/pZ[x]/m)[y],

for a machine-word prime number p and an irreducible polynomial m in Z/pZ[x].
The ring Ap can be implemented by any of the three domain constructors DUP2,
DUP and SUP applied to PF(p), whereas Bp is implemented by either DUP or SUP
applied to Ap. In both Ap and Bp, we compare the performances of factorization
and resultant computations provided by these different constructions. These
experimentations serve two goals.
(G1) When a large proportion of the running time is spent
computing products, remainders, quotients, and GCDs in Ap, we believe that
there are opportunities for significant speed-up when using DUP2, and we
want to measure this speed-up w.r.t. SUP and DUP.
(G2) When a small proportion of the running time is spent
computing products, remainders, quotients, and GCDs in Ap, we want to check
whether using DUP2, rather than SUP and DUP, could slow down computations.



For computing univariate polynomial resultants over a field, AXIOM runs the
package PseudoRemainderSequence implementing the algorithms of Ducos [3].
This package takes R: IntegralDomain and polR: UnivariatePolynomialCategory(R) as parameters. However, this code has its private divide operation
and does not rely on the one provided by the domain polR. In fact, the only non-trivial operation that will be run from polR is addition! Therefore, if polR has
a fast division with remainder, this will not benefit resultant computations
performed by the package PseudoRemainderSequence. Hence, in this case, there
is very little opportunity for DUP2 to provide a speed-up w.r.t. SUP and DUP.
For square-free factorization over a finite field, AXIOM runs the package
UnivariatePolynomialSquareFree. It takes RC: IntegralDomain and P: UnivariatePolynomialCategory(RC) as parameters. In this case, the code relies on
the operations gcd and exquo provided by P. Hence, if P provides fast GCD computations and fast divisions, this will benefit the package UnivariatePolynomialSquareFree. In this case, there is a potential for DUP2 to speed up
computations w.r.t. SUP and DUP.



Fig. 1. Resultant computations in Ap (Time [sec] vs. degree)

Fig. 2. Square-free factorization in Ap (Time [sec] vs. degree)


We start the description of our experimental results with resultant computations in Ap = Z/pZ[x]. As mentioned above, this is not a good place for obtaining
significant performance gains. Figure 1 shows that computations with DUP2 are
just slightly faster than those with SUP. In fact, it is satisfactory to verify that
using DUP2, which implies data-type conversions between the AXIOM and C
data-structures, does not slow down computations.
We continue with square-free factorization and irreducible factorization in
Ap. Figure 2 (resp. Figure 3) shows that DUP2 provides a speed-up ratio of 8
(resp. 7) for polynomials with degrees about 9000 (resp. 400). This is due to
the combination of fast arithmetic (FFT-based multiplication, fast division,
Half-GCD) and the highly optimized code of this domain constructor.
In the case of irreducible factorization, we could have obtained a better ratio
if the code were more generic. Indeed, irreducible factorization over finite
fields in AXIOM involves a package which has its private univariate polynomial arithmetic, leading to a problem similar to that observed with resultant
computations. The package in question is ModMonic, parametrized by R: Ring
and Rep: UnivariatePolynomialCategory(R), which implements the Frobenius map.

Fig. 3. Irreducible factorization in Ap (Time [sec] vs. degree)

Fig. 4. Resultant computations in Bp (Time [sec] vs. Total Degree)

Fig. 5. Irreducible factorization in Bp (Time [sec] vs. Total Degree)

Fig. 6. Square-free factorization over Z/pZ (Time [sec] vs. Total Degree)
We conclude this section with our benchmarks in Bp = (Z/pZ[x]/m)[y]. For
resultant computations in Bp, the speed-up ratio obtained with DUP2 is better
than in the case of Ap. This is because the arithmetic operations of DUP2 (addition, multiplication, inversion) perform better than those of SUP or DUP. Finally,
for irreducible factorization in Bp, the results are quite surprising. Indeed, AXIOM uses Trager's algorithm (which reduces computations to resultants in Bp,
irreducible factorization in Ap, and GCDs in Bp) and, based on our previous
results, we could have anticipated a good speed-up ratio. Unfortunately, the
package AlgFactor, which is used for algebraic factorization, has its private
arithmetic. More precisely, it re-defines Bp with SUP and factorizes the input
polynomial over this new Bp.


The purpose of this study was to measure the impact of our C-level specialized
implementation of fast polynomial arithmetic on the performance of AXIOM
high-level algorithms. Generic programming is well designed in the AXIOM
system. The experimental results demonstrate that by replacing a few important
operations in DUP(PF(p)) with our C-level implementation, the original AXIOM
univariate polynomial arithmetic over Z/pZ has been sped up by a large factor
in general. For algorithms such as univariate polynomial square-free factorization
over Z/pZ, the improved AXIOM code is 7 times faster than the one in Maple
and very close to the one in MAGMA (see Figure 6).

1. W. Bosma, J. J. Cannon, and G. Matthews. Programming with algebraic structures: design of the Magma language. In ISSAC '94, pages 52–57. ACM Press, 1994.
2. The Computational Algebra Group in the School of Mathematics and Statistics at
the University of Sydney. The MAGMA Computational Algebra System for Algebra, Number Theory and Geometry.
3. L. Ducos. Optimizations of the subresultant algorithm. J. of Pure Appl. Alg.,
145:149–163, 2000.
4. T. Färnqvist. Number theory meets cache locality: efficient implementation of
a small prime FFT for the GNU Multiple Precision arithmetic library. Master's
thesis, Stockholms Universitet, 2005.
5. A. Filatei, X. Li, M. Moreno Maza, and É. Schost. Implementation techniques for
fast polynomial arithmetic in a high-level programming environment. In ISSAC '06,
pages 93–100. ACM Press, 2006.
6. J. von zur Gathen and J. Gerhard. Modern Computer Algebra. Cambridge University Press, 1999.
7. G. Hanrot, M. Quercia, and P. Zimmermann. The middle product algorithm, I.
Appl. Algebra Engrg. Comm. Comput., 14(6):415–438, 2004.
8. J. van der Hoeven. The truncated Fourier transform and applications. In ISSAC '04,
pages 290–296. ACM Press, 2004.
9. V. Shoup. NTL: The Number Theory Library, 1996–2006.
10. R. D. Jenks and R. S. Sutor. AXIOM, The Scientific Computation System.
Springer-Verlag, 1992.
11. X. Li. Efficient management of symbolic computations with polynomials, 2005.
University of Western Ontario.
12. X. Li and M. Moreno Maza. Efficient implementation of polynomial arithmetic
in a multiple-level programming environment. In A. Iglesias and N. Takayama,
editors, ICMS 2006, pages 12–23. Springer, 2006.
13. P. L. Montgomery. Modular multiplication without trial division. Math. of Comp.,
44(170):519–521, 1985.
14. V. Shoup. A new polynomial factorization algorithm and its implementation. J.
Symb. Comp., 20(4):363–397, 1995.
15. S. M. Watt, P. A. Broadbery, S. S. Dooley, P. Iglio, S. C. Morrison, J. M. Steinbach,
and R. S. Sutor. A first report on the A# compiler. In ISSAC '94, pages 25–31.
ACM Press, 1994.

Semi-analytical Approach for Analyzing Vibro-Impact Systems

Algimantas Cepulkauskas¹, Regina Kulvietiene¹, Genadijus Kulvietis¹,
and Jurate Mikucioniene²

¹ Vilnius Gediminas Technical University, Sauletekio 11, Vilnius 2040, Lithuania
{algimantas cepulkauskas, regina kulvietiene, genadijus kulvietis}
² Kaunas University of Technology, Kestucio 27, Kaunas 44025, Lithuania

Abstract. A semi-analytical approach, combining the features of analytical and numerical computations, is proposed. Separating the linear
and nonlinear parts of the equations of motion, the harmonic balance method
and computer algebra have been synthesized to obtain analytical solutions of the nonlinear part, whereas the linear part was solved by
numerical methods. On the basis of this technique, a numerical investigation of the dynamics of an abrasive treatment process has been performed, and
the regimes ensuring the most effective treatment process have been determined.


Mechanical systems exhibiting impacts, so-called impact oscillators in the English literature, or vibro-impact systems in the Russian literature, are strongly nonlinear or piecewise linear, due to sudden changes in
the velocities of vibrating bodies at the instant of impact, or to friction forces when the
velocity of motion changes its polarity. Their practical significance is considerable, and the investigation of the motion of such systems began about fifty
years ago [7].
Several methods of theoretical analysis were developed and different models of impacts were presented [1,6,7]. The method of fitting, which uses the
Newton restitution coefficient, seems to be the most important. It is accurate and
applicable under the assumption that the duration of an impact is negligible. It
can solve certain assumed periodic impact motions and their stability, but this procedure can be realized in explicit form only for simple mechanical systems,
simple types of periodic motion, and for undamped impactless motion.
As usual, the solution method must correlate with the type of the motion equations and, concurrently, with the character of the initial mechanical system. The
harmonic balance method was chosen for the investigation of a vibratory abrasive
treatment process [2,8], as it is easily applied to systems where calculations are
made by computer algebra methods and, in this case, considerably less computer
memory is needed than with other methods [3]. As a result, we obtain a nonlinear algebraic equation system with many very short expressions, so it is possible
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 259–262, 2007.
© Springer-Verlag Berlin Heidelberg 2007


A. Cepulkauskas et al.

and expedient to process the results by numerical methods. The analytic calculation system VIBRAN is extremely effective in this case. Since an adequate
method for solving the motion equations of dynamically complicated systems has
to contain a large amount of analytic as well as numerical calculations, there
must be a strong program connection between them. The analytic calculation
system VIBRAN was selected in order to ensure this connection [3]. The system
is designed to generate subroutines in FORTRAN, and to select and reject excessive
operations while generating programs in accordance with analytic expressions.
Besides, this system stands out by the flexibility of its input-output and its unique
means for operations with sparse matrices.
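As a toy illustration of the harmonic balance method chosen above (this is not VIBRAN code; the Duffing oscillator, its parameter values, and the one-term ansatz are all assumptions of this sketch), take q̈ + q + εq³ = F cos(ωt), substitute q = A cos(ωt), and balance the cos(ωt) terms using cos³θ = (3 cos θ + cos 3θ)/4. This leaves a single nonlinear algebraic equation for the amplitude A, solved here by bisection:

```python
def harmonic_balance_amplitude(eps, F, omega, hi=10.0, tol=1e-12):
    """Solve (1 - omega^2)*A + (3/4)*eps*A^3 = F for A by bisection."""
    def residual(A):
        return (1.0 - omega**2) * A + 0.75 * eps * A**3 - F
    lo = 0.0
    # assumes residual(lo) <= 0 <= residual(hi), true for the values tested below
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With ε = 0 this reproduces the linear response A = F/(1 − ω²), which is a quick sanity check on the balance equation; the VIBRAN approach applies the same balancing symbolically to full multi-degree-of-freedom systems.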

Implementation of Computer Algebra

Consider a system of s degrees of freedom with the Lagrangian function [5]:

L = L(qi, q̇i, t)   (i = 1, 2, ..., s),

where L is the Lagrangian function; qi, q̇i, t are the generalized coordinates and
velocities of the system and time; and s is the number of degrees of freedom.
The equations of motion of such a system are [4]:

d/dt (∂L/∂q̇i) − ∂L/∂qi = Fqi.
These equations can be divided into linear and nonlinear parts by the formal replacements L = LL + LN and Fqi = FLi + FNi. The equations of motion may now be expressed in the form:

d/dt (∂LL/∂q̇i) − ∂LL/∂qi − FLi + d/dt (∂LN/∂q̇i) − ∂LN/∂qi − FNi = 0,

where Fqi, FNi, FLi are the generalized force and its nonlinear (polynomial with respect to the generalized coordinates and periodic, or given by a Fourier expansion, in time) and linear parts, and LN, LL are the nonlinear and linear parts of the Lagrangian function. The linear part can be formalized for numerical analysis without difficulty, and we used special VIBRAN programs to analyze the nonlinear part of the system. The proposed method provides shorter expressions for analytical computation and allows the analysis of systems of higher order. After some well-known transformations, the equations of motion can be rewritten in the matrix form:

[M]{q̈} + [B]{q̇} + [C]{q} = {H(q, q̇, t)} + {f(t)},


where f(t) is the periodic excitation and H(q, q̇, t) is the nonlinear part of the system, calculated by a special VIBRAN program.
The solution of the above system can be expressed, using the harmonic balance method, in the form [3,4]:

{q} = {A0} + {A1} cos ωt + {A2} sin ωt + …,
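To illustrate how the harmonic balance ansatz turns a differential equation into a nonlinear algebraic system in the unknown coefficient vectors, the following Python sketch applies a one-harmonic ansatz x = A1 cos ωt + A2 sin ωt to a single-degree-of-freedom Duffing-type oscillator and solves the Galerkin projections for A1 and A2 by Newton iteration. This is our own minimal stand-in, not VIBRAN; all names and parameter values are illustrative:

```python
import math

def residual(A, w, c, k, eps, F, N=400):
    """Galerkin residuals of x'' + c x' + k x + eps x^3 = F cos(w t)
    under the one-harmonic ansatz x = A1 cos(w t) + A2 sin(w t)."""
    A1, A2 = A
    r1 = r2 = 0.0
    for i in range(N):                      # average over one period
        th = 2.0 * math.pi * i / N          # th = w t
        x = A1 * math.cos(th) + A2 * math.sin(th)
        xd = w * (-A1 * math.sin(th) + A2 * math.cos(th))
        xdd = -w * w * x
        res = xdd + c * xd + k * x + eps * x**3 - F * math.cos(th)
        r1 += res * math.cos(th)
        r2 += res * math.sin(th)
    return [2.0 * r1 / N, 2.0 * r2 / N]     # cos- and sin-projections

def solve_hb(w, c, k, eps, F, guess=(0.1, 0.0), tol=1e-10):
    """Newton iteration with a forward-difference Jacobian."""
    A, h = list(guess), 1e-7
    for _ in range(100):
        r = residual(A, w, c, k, eps, F)
        if abs(r[0]) + abs(r[1]) < tol:
            break
        cols = []
        for j in range(2):                  # Jacobian column d r / d A_j
            Ah = A[:]
            Ah[j] += h
            rh = residual(Ah, w, c, k, eps, F)
            cols.append([(rh[0] - r[0]) / h, (rh[1] - r[1]) / h])
        a, b = cols[0][0], cols[1][0]       # 2x2 solve by Cramer's rule
        cc, d = cols[0][1], cols[1][1]
        det = a * d - b * cc
        A[0] -= (d * r[0] - b * r[1]) / det
        A[1] -= (a * r[1] - cc * r[0]) / det
    return A
```

For eps = 0 this reproduces the classical linear amplitudes A1 = F(k − ω²)/D and A2 = Fcω/D with D = (k − ω²)² + (cω)²; for eps ≠ 0 the same Newton loop solves the coupled cubic projections.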


Semi-analytical Approach for Analyzing Vibro-Impact Systems


where {Ai} are the unknown vectors that can be found by solving a nonlinear algebraic equation system. According to the harmonic balance method, these equations for the first three vector coefficients, in matrix form, are:

[U]{A} − {f} − {H(A)} = {0},

where fi are the coefficients of the Fourier expansion of the function f(t). Analogously, equations for other harmonics can be found by the VIBRAN program. The required expressions of Hi and their derivatives are obtained in closed form using computer algebra techniques by the FORTRAN code generation procedure. Special modifications were made to the terms with dry friction, and the integration procedure was developed by the Malkin method [7].

Application in the Investigation of Abrasive Treatment Process Dynamics

The new method for the treatment of miniature ring-shaped details was proposed in [8], in which the internal surface of the detail is treated as well as the external one. During the vibratory treatment process the working-medium particles constantly strike each other. As a result, slight scratches and crater-shaped crevasses occur on the surface of the treated details, forming the surface microrelief. In this way, abrasive friction and impacts on the treated details perform the treatment process.
The equations of motion describing the dynamics of a vibratory abrasive treatment process are [5]:

m1ẍ1 + bẋ1 + bk(ẋ1 − ẋa) + (c + c1)x1 = cξ(t) − Fm(x1) − F1 sign(ẋ1 − ẋa) + bξ̇(t),
maẍa + bẋa + bk(ẋa − ẋ1) + cxa = cξ(t) + F2 sign(ẋ1 − ẋa) + bξ̇(t),

where m1 is the mass of the components treated; ma is the mass of the abrasive particles; the load mechanical properties are evaluated by the elasticity c and the working-medium resistance b; F1, F2 are the forces of dry friction between a component and the abrasive; bk is the viscosity resistance; the elasticity of the magnetic field is evaluated by a stiffness coefficient c1; and the detail was additionally excited by generating a variable component of the magnetic field, Fm(x1), whose stiffness properties were obtained experimentally by the least-squares method.
The kinematic excitation of the loaded vessel is ξ(t) = A sin ωt.
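The two-degree-of-freedom model above can also be sanity-checked by direct numerical integration. The sketch below is not the paper's VIBRAN-based semi-analytical procedure: all parameter values and the linear stand-in for Fm(x1) are invented for illustration, and the dry-friction sign term is handled naively inside a classical Runge-Kutta step.

```python
import math

# Illustrative parameter values (assumed for this sketch, not from the paper)
M1, MA = 1.0, 0.5          # masses of treated components and abrasive
B, BK = 0.2, 0.1           # working-medium and viscosity resistance
C, C1 = 5.0, 1.0           # load elasticity and magnetic-field stiffness
F1, F2 = 0.05, 0.05        # dry-friction forces
AMP, W = 0.1, 2.0          # kinematic excitation xi(t) = AMP sin(W t)

def fm(x):
    """Linear stand-in for the experimentally fitted magnetic force Fm(x1)."""
    return 0.5 * x

def rhs(t, s):
    """State derivative for s = [x1, x1', xa, xa']."""
    x1, v1, xa, va = s
    xi = AMP * math.sin(W * t)
    xid = AMP * W * math.cos(W * t)
    sgn = math.copysign(1.0, v1 - va) if v1 != va else 0.0
    a1 = (C * xi - fm(x1) - F1 * sgn + B * xid
          - B * v1 - BK * (v1 - va) - (C + C1) * x1) / M1
    aa = (C * xi + F2 * sgn + B * xid
          - B * va - BK * (va - v1) - C * xa) / MA
    return [v1, a1, va, aa]

def simulate(T=20.0, dt=1e-3):
    """Integrate from rest with classical 4th-order Runge-Kutta; return x1(t)."""
    s, t, traj = [0.0, 0.0, 0.0, 0.0], 0.0, []
    while t < T:
        k1 = rhs(t, s)
        k2 = rhs(t + dt / 2, [s[i] + dt / 2 * k1[i] for i in range(4)])
        k3 = rhs(t + dt / 2, [s[i] + dt / 2 * k2[i] for i in range(4)])
        k4 = rhs(t + dt, [s[i] + dt * k3[i] for i in range(4)])
        s = [s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(4)]
        t += dt
        traj.append(s[0])
    return traj
```

With these damped, periodically forced parameters the component response x1 settles into a bounded oscillation about zero.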
The analytic expressions obtained by the VIBRAN program conclude the analytic part of the calculation of H(A) [3]. The corresponding derivatives are very simple and there are only 25 nonzero terms.
All the properties complying with the dynamic pattern of the process are investigated numerically, and the program itself is written in FORTRAN. For this reason, in order to calculate the factors of the analytic expressions and their partial derivatives, two FORTRAN subroutines have



been generated: one for compiling a dictionary and another for calculating the expressions themselves. Besides, the program created by applying the harmonic balance method to systems of differential equations presents, in addition to the equations from which the amplitudes and constant components are found, the derivatives of these equations with respect to the unknowns. In this case, one or several criteria for further numerical parameter optimization may be calculated.


Conclusions

On the basis of solving nonlinear differential equations by the harmonic balance method and the synthesis of the analytic calculation system VIBRAN, an investigation method for nonlinear systems with a dry friction effect has been created. This method combines the advantages of analytic calculation methods and computer algebra. It rests on the principle of parallel analytic-numerical calculation, where analytic rearrangements are applied only to the nonlinear part of the system, while the linear part of the system can concurrently be solved numerically. The proposed method provides shorter expressions for analytic computation and allows the analysis of systems of higher order.

References

1. Baron, J.M.: Abrasive and Magnetic Treatment of Details and Cutters. Mashinostrojenie, St. Petersburg (in Russian) (1986)
2. Blekhman, I.I.: Forming the Properties of Nonlinear Mechanical Systems by Means of Vibration. In: Proc. IUTAM/IFToMM Symposium on Synthesis of Nonlinear Dynamical Systems, Solid Mechanics and Its Applications, Vol. 73. Kluwer Academic Publishers, Dordrecht (2000) 1-13
3. Cepulkauskas, A., Kulvietiene, R., Kulvietis, G.: Computer Algebra for Analyzing the Vibrations of Nonlinear Structures. Lecture Notes in Computer Science, Vol. 2657. Springer-Verlag, Berlin Heidelberg New York (2003) 747-753
4. Klymov, D.O., Rudenko, V.O.: Metody kompiuternoj algebry v zadachah mechaniki. Nauka, Moscow (in Russian) (1989)
5. Kulvietiene, R., Kulvietis, G., Fedaravicius, A., Mikucioniene, J.: Numeric-Symbolic Analysis of Abrasive Treatment Process Dynamics. In: Proc. Tenth World Congress on the Theory of Machines and Mechanisms, Vol. 6, Oulu, Finland (1999) 2536-2541
6. Lewandowski, R.: Computational Formulation for Periodic Vibration of Geometrically Nonlinear Structures, Part 1: Theoretical Background. Int. J. Solids Structures 34(15) (1997) 1925-1947
7. Malkin, I.G.: Some Problems of Nonlinear Oscillation Theory. Gostisdat, Moscow (in Russian) (1956)
8. Mikucioniene, J.: Investigation of Vibratory Magnetic Abrasive Treatment Process Dynamics for Miniature Details. Ph.D. Thesis, KTU, Technologija, Kaunas (1997)

Formal Verification of Analog and Mixed Signal Designs in Mathematica

Mohamed H. Zaki, Ghiath Al-Sammane, and Sofiène Tahar
Dept. of Electrical & Computer Engineering, Concordia University
1455 de Maisonneuve W., Montreal, Quebec, H3G 1M8, Canada

Abstract. In this paper, we show how symbolic algebra in Mathematica can be used to formally verify analog and mixed signal designs. The verification methodology is based on combining induction and constraint solving to generate correctness proofs for the system with respect to given properties. The methodology has the advantage of avoiding the exhaustive simulation usually utilized in verification. We illustrate this methodology by proving the stability of a ΔΣ modulator.
Keywords: AMS Designs, Formal Verification, Mathematica.

1 Introduction
With the latest advancements in semiconductor technology, the integration of digital, analog and mixed-signal (AMS) designs into a single chip became possible and led to the development of System on Chip (SoC) designs. One of the main challenges of SoC designs is the verification of AMS components, which interface the digital and analog parts. Traditionally, analyzing the symbolically extracted equations is done through simulation [1]. Due to its exhaustive nature, simulation of all possible scenarios is impossible, and hence it cannot guarantee the correctness of the design. In contrast to simulation, formal verification techniques aim to prove that a circuit behaves correctly for all possible input signals and initial conditions, and that none of them drives the system into an undesired behavior. In fact, existing formal methods [2] are time bounded: verification is achieved only on a finite time interval. We overcome this limitation by basing our methodology on mathematical induction; hence any proof of correctness of the system is time independent. In this paper, we show how symbolic algebra in Mathematica can be used to formally verify the correctness of AMS designs. We illustrate our methodology by applying it to prove the stability of a ΔΣ modulator [3].
The proposed verification methodology is based on combining induction and constraint solving to generate a correctness proof for the system. This is achieved in two phases, modeling and verification, as shown in Figure 1. Starting with an AMS description (digital part and analog part) and a set of properties, we extract, using symbolic simulation, a System of Recurrence Equations (SRE) [4]. These are combined recurrence relations that describe each property in terms of the behavior of the system. The SRE is used in the verification phase along with an inductive proof with constraints defined inside Mathematica (details can be found in [4]). If a proof is obtained, then the
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 263–267, 2007.
© Springer-Verlag Berlin Heidelberg 2007


M.H. Zaki, G. Al-Sammane, and S. Tahar

property is verified. Otherwise, we provide counterexamples for the non-proved properties using reachability criteria. If the counterexample is realistic (strong instance), then we have identified a problem (bug) in the design; otherwise the counterexample is spurious (weak instance) and should be eliminated from the verification process.

Fig. 1. Overview of the methodology: the AMS description is symbolically simulated into a System of Recurrence Equations (SRE), which is fed to an inductive proof with constraints; a failed proof yields a counterexample (strong instance)

2 Implementation in Mathematica
An SRE is a system of the form Xi(n) = fi(Xj(n − γ)), (j, γ) ∈ εi, n ∈ Z, where fi(Xj(n − γ)) is a generalized If-formula (see [5] for a complete definition). The set εi is a finite non-empty subset of {1, …, k} × N. The integer γ is called the delay. A property is a relation of the form P = quanta(X, cond, expr), where quanta ∈ {∀, ∃}, X is a set of variables, cond is a logical proposition formula constructed over X, and expr is an If-formula that takes values in the Boolean domain B.
Proving Properties. Mathematical induction is then used to prove that a property P(n) holds for all nonnegative integers n ≥ n0, where n0 is the time point after which the property should be True:
– Prove that P(n0) is True.
– Prove that ∀n > n0, P(n) ⇒ P(n + 1).
The induction algorithm is implemented in Mathematica using functions like Reduce, Assuming and Refine. It tries to prove a property of the form quanta(X, cond, expr); otherwise it gives a counterexample using FindCounterExample:
If Prove(quanta(X, cond, expr)) ≠ True then
FindCounterExample(cond ∧ ¬expr, var)
Finding Counterexamples. The basic idea is to find particular variable values for which the property is not satisfied. This is implemented using the Mathematica function FindInstance[expr, vars, assum], which finds an instance of vars that makes expr True if one exists, and gives {} if it does not. The result is of the form {{v1 → instance1, v2 → instance2, …, vm → instancem}}, where vars = {v1, v2, …, vm}. FindInstance can find instances even if Reduce cannot give a complete reduction; for example, the Mathematica command FindInstance[x^2 - a y^2 == 1 && x > y, {a, x, y}] directly returns an instance of the form {{a → …, x → …, y → …}}.
We need to ensure that an instance is reachable by the SRE before considering it as a counterexample. For example, suppose we verify the recurrence equation Un = Un−1 + 1 against the property ∀n > 0. Pn = Un > 0. FindInstance transforms Pn into an algebraic problem and gives the instance Un−1 → −2. However, this instance will never be reached by Un for U0 = 0. Depending on reachability, we distinguish two types of SRE instances:
– Strong instance: it is given as a combination of the design input values. The counterexample is then always reachable.
– Weak instance: it is a combination of input values and recurrence-variable values. In this case, there is no guarantee that the counterexample is reachable.
If the recurrence equations are linear and the conditions of the If-formulas are monotone, then we can search directly for a reachable strong instance. We can solve these equations in Mathematica using the function RSolve[{Eq1, Eq2, …}, {…, Xi(n), …}, n], which returns an explicit solution of the SRE {Eqn} in terms of time relations where the time n is an explicit parameter. We use RSolve to identify a time point at which a desired behavior is reached.

3 First-Order ΔΣ Modulator

ΔΣ modulators [3] are used in designing data converters. A ΔΣ modulator is stable if the integrator output remains bounded under a bounded input signal. Figure 2 shows a first-order ΔΣ modulator of one bit with two quantization levels, +1V and −1V. The input signal of the quantizer, y(n), should be between −2V and +2V in order for the quantizer not to be overloaded. The SRE of the ΔΣ modulator is:

y(n) = y(n − 1) + u(n) − v(n − 1)
v(n − 1) = IF(y(n − 1) > 0, 1, −1)




Fig. 2. First-order ΔΣ modulator

The stability is expressed with the following properties:

Property 1. |u| ≤ 1 ∧ |y(0)| ≤ 1 ⇒ |y(n)| ≤ 2. This ensures that the modulator will always be stable starting from the initial conditions. In Mathematica, to prove the property at time n we write:

in[1]:= Reduce[
ForAll[{u,y[n-1]}, And[-1< u < 1, -2< y[n-1] < 2],
And[(-1+u+y[n-1] <= 2), (1+u+y[n-1] >= -2)]], {u,y[n-1]}, Reals]
out[1]:= True
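The same inductive argument can be sanity-checked numerically by iterating the SRE of the modulator under random bounded inputs: from any |y(0)| ≤ 1 and inputs |u| ≤ 1, the output should never leave [−2, 2]. A minimal Python sketch (ours, not part of the paper's Mathematica flow):

```python
import random

def step(y, u):
    """One step of the SRE: y(n) = y(n-1) + u(n) - v(n-1),
    with v(n-1) = IF(y(n-1) > 0, 1, -1)."""
    v = 1.0 if y > 0 else -1.0
    return y + u - v

def worst_case(trials=1000, horizon=500, seed=1):
    """Largest |y(n)| observed over random runs with |u| <= 1, |y(0)| <= 1."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        y = rng.uniform(-1.0, 1.0)
        for _ in range(horizon):
            y = step(y, rng.uniform(-1.0, 1.0))
            worst = max(worst, abs(y))
    return worst
```

The bound is tight: starting just below 0 with u close to 1 pushes y arbitrarily close to 2, but never beyond, which is exactly the inductive step Reduce discharges symbolically.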

Property 2. |u| > 1 ∧ |y(0)| ≤ 1 ⇒ |y(n)| > 2. If the input to the modulator does not conform to the stability requirement of Property 1, then the modulator will be unstable:

in[1]:= FindInstance[And[1 < u, 1 > y > 0, (-1+u+y > 2)], {u,y}]
out[1]:= {{u → 7/2, y → 1/2}}

As y = 1/2 is already a valid state for y[n], the instance is weak. We refine the instance by adding it to the constraints list and restart the proof:

in[1]:= Assuming[And[u == 7/2, 1 > y > 0], Refine[(-1+u+y > 2)]]
out[1]:= True

Thus, the instance u → 7/2 is a strong instance for any y[n].

Property 3. |u| ≤ 1 ∧ |y(0)| > 2 ⇒ ∃n0 > 0. ∀n > n0. |y(n)| < 2. If the input of the quantizer is distorted and causes the modulator to be temporarily unstable, the system will return to the stable region and stay stable afterwards; that is, there exists an n0 for which the modulator is stable for all n > n0. RSolve is used along with FindInstance to search for this n0. We have two cases: y[n − 1] > 0 and y[n − 1] < 0. In Mathematica, for the case y[n − 1] < 0, we write:

in[1]:= Eq = y[n+1] == (1+u+y[n]);
        RSolve[Eq && y[0] == a, y[n], n]
out[1]:= y[n] → a+n+n u
in[2]:= Reduce[a+n+n u > -2 && u > -1 && a ∈ Reals, n]
out[2]:= a ∈ Reals && u > -1 && n > -(2+a)/(1+u)
in[3]:= FindInstance[a < -2 && n > 2 && 1 > u > 0.5 &&
        n > -(2+a)/(1+u), {a, u, n}]
out[3]:= {{a → -5.5, u → 0.75, n → 4}}

Thus, we have found a time value which provides a proof for the property: n > -(2+a)/(1+u). As the property is formalized using the existential quantifier, it is enough to find one instance: n0 → 4.
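The RSolve step can be mirrored outside Mathematica: for the y[n−1] < 0 branch, iterating the recurrence reproduces the closed form y[n] = a + n(1 + u), and the bound n > −(2 + a)/(1 + u) then gives the first stable step directly. A small Python sketch (function names are ours):

```python
import math

def closed_form(a, u, n):
    """RSolve-style solution of y[k+1] == y[k] + 1 + u with y[0] == a
    (valid while the trajectory stays in the y < 0 branch of the SRE)."""
    return a + n * (1.0 + u)

def first_stable_step(a, u):
    """Smallest integer n with a + n(1+u) > -2, i.e. n > -(2+a)/(1+u)."""
    return math.floor(-(2.0 + a) / (1.0 + u)) + 1

a, u = -5.5, 0.75
ys = [a]
for _ in range(4):          # iterate the recurrence itself
    ys.append(ys[-1] + 1.0 + u)
```

With a = −5.5 and u = 0.75 the bound is n > 2, so n = 3 already restores y > −2; the FindInstance call above simply exhibits one such witness (n → 4).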

4 Conclusions
We have presented how Mathematica can be used efficiently to implement a formal verification methodology for AMS designs. We used the notion of SRE as a mathematical model that can represent both the digital and analog parts of a design. The induction-based technique traverses the structure of the normalized properties and provides a correctness proof, or a counterexample otherwise. Our methodology overcomes the time-bound limitations of conventional exhaustive methods. Additional work is needed in



order to integrate the methodology into the design process, such as the automatic generation of the SRE model from design descriptions given in HDL-AMS languages.

References

1. Gielen, G.G.E., Rutenbar, R.A.: Computer-Aided Design of Analog and Mixed-Signal Integrated Circuits. In: Proceedings of the IEEE, Volume 88 (2000) 1825-1852
2. Zaki, M.H., Tahar, S., Bois, G.: Formal Verification of Analog and Mixed Signal Designs: Survey and Comparison. In: NEWCAS'06, Gatineau, Canada, IEEE (2006)
3. Schreier, R., Temes, G.C.: Understanding Delta-Sigma Data Converters. IEEE Press-Wiley (2005)
4. Al-Sammane, G., Zaki, M.H., Tahar, S.: A Symbolic Methodology for the Verification of Analog and Mixed Signal Designs. In: DATE'07, Nice, France, IEEE/ACM (2007)
5. Al-Sammane, G.: Simulation Symbolique des Circuits Décrits au Niveau Algorithmique. PhD thesis, Université Joseph Fourier, Grenoble, France (2005)

Efficient Computations of Irredundant Triangular Decompositions with the RegularChains Library

Changbo Chen1, François Lemaire2, Marc Moreno Maza1, Wei Pan1, and Yuzhen Xie1

1 University of Western Ontario, London N6A 1M8, Canada
2 Université de Lille 1, 59655 Villeneuve d'Ascq Cedex, France

Abstract. We present new functionalities that we have added to the RegularChains library in Maple to efficiently compute irredundant triangular decompositions. We report on the implementation of different strategies. Our experiments show that, for difficult input systems, the computing time for removing redundant components can be reduced to a small portion of the total time needed for solving these systems.

Keywords: RegularChains, quasi-component, inclusion test, irredundant triangular decomposition.


1 Introduction

Efficient symbolic solving of parametric polynomial systems is an increasing need in robotics, geometric modeling, stability analysis of dynamical systems and other areas. Triangular decomposition provides a powerful tool for these systems. However, for parametric systems, and more generally for systems in positive dimension, these decompositions have to face the problem of removing redundant components. This problem is not limited to triangular decompositions and is also an important issue in other symbolic decomposition algorithms, such as those of [9,10], and in numerical approaches [7].
We study and compare different criteria and algorithms for deciding whether a quasi-component is contained in another. Then, based on these tools, we obtain several algorithms for removing redundant components in a triangular decomposition. We report on the implementation of these different solutions within the RegularChains library [5].
We have performed extensive comparisons of these approaches using well-known problems in positive dimension [8]. Our experiments show that the removal of redundant components is never a bottleneck. Moreover, we have developed a heuristic inclusion test which provides very good running-time performance and which fails very rarely in detecting an inclusion. We believe that we have obtained an efficient solution for computing irredundant triangular decompositions.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 268–271, 2007.
© Springer-Verlag Berlin Heidelberg 2007



2 Inclusion Test of Quasi-components

In this section we describe our strategies for the inclusion test of quasi-components based on the RegularChains library. We refer to [1,6,5] for the notion of a regular chain and its related concepts, such as initials, saturated ideals, quasi-components and the related operations.
Let T, U ⊂ K[X] be two regular chains. Let hT and hU be the respective products of their initials. We denote by sat(T) the saturated ideal of T. We discuss how to decide whether the quasi-component W(T) is contained in W(U) or not. An unproved algorithm for this inclusion test is stated in [4]; it appeared not to be satisfactory in practice, since it relies on normalized regular chains, which tend to have much larger coefficients than non-normalized regular chains, as verified experimentally in [2] and formally proved in [3].
Proposition 1. The inclusion W(T) ⊆ W(U) holds if and only if both of the following statements hold:

(C1) for all p ∈ U we have p ∈ √sat(T),
(C2) we have W(T) ∩ V(hU) = ∅.

If sat(T) is radical, then condition (C1) can be replaced by:
(C1′) for all p ∈ U we have p ∈ sat(T),
which is easier to check. Checking (C2) can be approached in different ways, depending on the computational cost that one is willing to pay. The RegularChains library provides an operation Intersect(p, T) returning regular chains T1, …, Te such that we have
V(p) ∩ W(T) ⊆ W(T1) ∪ · · · ∪ W(Te) ⊆ V(p) ∩ W̄(T),
where W̄(T) denotes the Zariski closure of W(T).
A call to Intersect can be seen as relatively cheap, since Intersect(p, T) exploits the fact that T is a regular chain. Checking
(Ch) Intersect(hU, T) = ∅
is a good criterion for (C2). However, when Intersect(hU, T) does not return the empty list, we cannot conclude. To overcome this limitation, we rely on Proposition 2 and the operation Triangularize of the RegularChains library. For a polynomial system F, Triangularize(F) returns regular chains T1, …, Te such that V(F) = W̄(T1) ∪ · · · ∪ W̄(Te).

Proposition 2. The inclusion W(T) ⊆ W(U) holds if and only if both of the following statements hold:

(C1) for all p ∈ U we have p ∈ √sat(T),
(C2′) for all S ∈ Triangularize(T ∪ {hU}) we have hT ∈ √sat(S).

This provides an effective algorithm for testing the inclusion W(T) ⊆ W(U). However, the cost of computing Triangularize(T ∪ {hU}) is clearly higher than that of Intersect(hU, T), since the former operation cannot take advantage of the fact that T is a regular chain.


C. Chen et al.

3 Removing Redundant Components

Let F ⊂ K[X] and let T = T1, …, Te be a triangular decomposition of V(F), that is, a set of regular chains such that we have V(F) = W̄(T1) ∪ · · · ∪ W̄(Te). We aim at removing every Ti such that there exists Tj, with i ≠ j, and W(Ti) ⊆ W̄(Tj). Based on the results of Section 2, we have developed the following strategies for testing the inclusion W(T) ⊆ W(U):
– heuristic-no-split: it checks whether (C1′) and (Ch) hold. If both hold, W(T) ⊆ W(U) has been established; otherwise no conclusion can be made.
– heuristic-with-split: it tests the conditions (C1′) and (Ch). Checking (C1′) is achieved by means of the operation Regularize [5,6]: for a polynomial p and a regular chain T, Regularize(p, T) returns regular chains T1, …, Te such that we have
W(T) ⊆ W(T1) ∪ · · · ∪ W(Te) ⊆ W̄(T),
and for each 1 ≤ i ≤ e the polynomial p is either zero or regular modulo sat(Ti). Therefore, condition (C1′) holds iff for all Ti returned by Regularize(p, T) we have p ≡ 0 mod sat(Ti).
– certified: it checks conditions (C1) and (C2′). If both hold, then W(T) ⊆ W(U) has been established. If at least one of the conditions (C1) or (C2′) does not hold, then the inclusion W(T) ⊆ W(U) does not hold either.
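Whatever inclusion test is chosen, the removal itself reduces to pairwise filtering with an inclusion oracle. A generic Python sketch (ours; `included(t, s)` stands in for the heuristic or certified tests above, and ties between mutually included, i.e. equal, components are broken by index):

```python
def remove_redundant(components, included):
    """Keep a minimal sub-list of `components`: drop T_i when some other
    T_j satisfies included(T_i, T_j), i.e. W(T_i) is contained in W(T_j).
    `included` is the inclusion oracle (heuristic or certified)."""
    kept = []
    for i, t in enumerate(components):
        redundant = False
        for j, s in enumerate(components):
            if i == j:
                continue
            # for mutually included (equal) components, keep the lower index
            if included(t, s) and not (included(s, t) and j > i):
                redundant = True
                break
        if not redundant:
            kept.append(t)
    return kept
```

Using plain sets with subset inclusion as the oracle, `remove_redundant([{1, 2}, {1}, {3}, {1, 2}], lambda a, b: a <= b)` keeps only `[{1, 2}, {3}]`; the quadratic number of oracle calls is why cheap heuristic tests matter before a certified pass.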
The following polynomial systems are well-known systems which can be found at [8]. For each of them, the zero set has dimension at least one. Table 1 and Table 2 report the number of components and the running times of the different approaches for these input systems, based on which we make the following observations:
1. The heuristic removal without split performs very well. First, for all examples except sys 8, it discovers all redundant components. Second, for all examples except sys 8, its running time is a relatively small portion of the solving time (third column of Table 1).
2. Theoretically, the heuristic removal with split can eliminate more redundancies than the other strategies. Indeed, it can discover that a quasi-component
Table 1. Triangularize without removal, and certified removal (Proposition 2)

Sys  Name             No removal          Proposition 2
                      #RC   time(s)       #RC   time(s)
 1   genLinSyst-3-2    20     —            17     1.182
 2   Butcher           15     —             7     0.267
 3   MacLane          161   12.733         27     7.144
 4   neural            10   14.349          4     8.948
 5   —                  6   27.870          5    58.396
 6   Liu-Lorenz        23   29.044         16   121.793
 7   —                  7   71.364          5     7.727
 8   Pappus           393   37.122        120   141.702
 9   Liu-Lorenz-Li     22 1796.622          9    96.364
10   KdV572c11s21      41 8898.024          7     6.980

Ecient Computations of Irredundant Triangular Decompositions


Table 2. Heuristic removal, without and with split, followed by certification

(C1′) and (Ch), without split — #RC / time(s): 16 / 123.052; 120 / 135.780; 9 / 101.668
(C1′) and (Ch), with split — #RC / time(s): 8 / 54.455; 18 / 96.492; 124 / 48.756; 10 / 105.598
With split, followed by certification — #RC / time(s): 8 / 109.928; 18 / 203.937; 120 / 148.341; 10 / 217.837

is contained in the union of two others, while these three components are pairwise non-inclusive.
3. In practice, the heuristic removal with split does not discover more redundant components than the heuristic removal without split, except for systems 5 and 6. However, its running-time overhead is large.
4. The direct deterministic removal is also quite expensive on several systems (5, 6, 8). Unfortunately, the heuristic removal without split, used as a pre-cleaning process, does not really reduce the cost of a certified removal.

References

1. P. Aubry, D. Lazard, and M. Moreno Maza. On the theories of triangular sets. J. Symb. Comp., 28(1-2):105-124, 1999.
2. P. Aubry and M. Moreno Maza. Triangular sets for solving polynomial systems: A comparative implementation of four methods. J. Symb. Comp., 28(1-2):125-154, 1999.
3. X. Dahan and É. Schost. Sharp estimates for triangular sets. In ISSAC'04, pages 103-110. ACM, 2004.
4. D. Lazard. A new method for solving algebraic systems of positive dimension. Discr. App. Math., 33:147-160, 1991.
5. F. Lemaire, M. Moreno Maza, and Y. Xie. The RegularChains library. In Ilias S. Kotsireas, editor, Maple Conference 2005, pages 355-368, 2005.
6. M. Moreno Maza. On triangular decompositions of algebraic varieties. Technical Report TR 4/99, NAG Ltd, Oxford, UK, 1999.
7. A.J. Sommese, J. Verschelde, and C.W. Wampler. Numerical decomposition of the solution sets of polynomial systems into irreducible components. SIAM J. Numer. Anal., 38(6):2022-2046, 2001.
8. The SymbolicData Project, 2000-2006.
9. D. Wang. Elimination Methods. Springer, 2001.
10. G. Lecerf. Computing the equidimensional decomposition of an algebraic closed set by means of lifting fibers. J. Complexity, 19(4):564-596, 2003.

Characterisation of the Surfactant Shell Stabilising Calcium Carbonate Dispersions in Overbased Detergent Additives: Molecular Modelling and Spin-Probe-ESR

Francesco Frigerio and Luciano Montanari

ENI R&M, SDM-CHIF, via Maritano 26, 20097 San Donato Milanese, Italy
{francesco.frigerio, luciano.montanari}

Abstract. The surfactant shell stabilising the calcium carbonate core in overbased detergent additives of lubricant base oils was characterised by computational and experimental methods, comprising classical force-field based molecular simulations and spin-probe Electron Spin Resonance spectroscopy. An atomistic model is proposed for the detergent micelle structure. The dynamical behaviour observed during diffusion simulations of three nitroxide spin-probe molecules into micelle models could be correlated to their mobility as determined from ESR spectra analysis. The molecular mobility was found to be dependent on the chemical nature of the surfactants in the micelle external shell.

1 Introduction
The lubrication of modern internal combustion engines requires the addition of specific additives to the base oils to improve the overall performance (minimization of
corrosion, deposits and varnish formation in the engine hot areas) [1]. Calcium sulphonates are the most widely used metallic detergent additives. They are produced by
sulfonation of synthetic alkylbenzenes. The simplest member would be a neutral alkylbenzene sulphonate with an alkyl solubilizing group approximately C18 to C20 or
higher to provide adequate oil solubility. In addition to metallic detergents such as the
neutral sulphonate, modern oil formulations contain basic compounds which provide
some detergency. Their main function, however, is to neutralize acid and to prevent
corrosion from acid attack. It is economically advantageous to incorporate as much
neutralizing power in the sulphonate molecule as possible: excess base in the form of
calcium carbonate can be dispersed in micelles [2, 3] to produce the so-called overbased sulphonates. Dispersions of calcium carbonate stabilized by calcium sulphonates have been characterized [4] using different techniques: electron microscopy [5],
ultracentrifugation [6], and neutron scattering [7]. SAXS results show that overbased
calcium sulphonates appear as polydisperse micelles having an average calcium carbonate core radius of 2.0 nm with a standard deviation of 0.4 nm [8]. The overbased
calcium sulphonates form reverse micelles in oil, consisting of amorphous calcium
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 272–279, 2007.
© Springer-Verlag Berlin Heidelberg 2007



carbonate nanoparticles surrounded by sulphonate surfactants. The polar (sulphonate) heads are attached to the metal core, while the hydrocarbon tails, of hydrophobic nature, stabilize the colloidal particle in the non-polar oil medium. By coupling three surface analysis techniques (XPS, XANES and ToF-SIMS), it was observed that some residual calcium hydroxide is present in the micellar core, located prevalently at its periphery [9]. ToF-SIMS shows that the molecular structures of the detergent molecules are in good agreement with micelle synthesis data; little is still known about the physical nature of the surfactant shell.
The compactness of the surfactant shell could play an important role in preventing negative consequences due to the interaction of the carbonate core with other additives used in the oil formulation or with water molecules. Such interactions cause calcium carbonate separation by precipitation. In this study the molecular dynamics within the surfactant shell was probed, in a combined computational and experimental approach, by a small nitroxide, TEMPO (2,2,6,6-tetramethylpiperidine-N-oxyl), and by two nitroxide-labelled fatty acids (5- and 16-doxyl-stearic acids). In fact, nitroxides are known to exhibit ESR spectra that depend on the mobility of the spin-probe and on the micro-viscosity of the probe environment. They have been used to evaluate the microstructure of the adsorbed layer of surfactants and polymers at the solid-liquid interface [10-14] and also inside composite polymers [15, 16].
Detailed three-dimensional models were previously proposed for small overbased micelles containing various classes of surfactants [17-21]. Experimental measurements collected in our laboratory (data not shown) pointed to two important micelle features: a flat disk shape for the inner core and a tightly packed outer shell. The core is mainly composed of amorphous calcium carbonate and is surrounded by a distribution of surfactant molecules, arranged as a single layer with the polar groups contacting the core surface. The diffusion of TEMPO and the labelled fatty acids through the overbased micelle surfactant shell was simulated by a classical molecular mechanics methodology. The force-field based simulation protocols were applied to detailed atomistic models, which were built limited to the central portion of the micelle structure and contain all its essential features. The slow molecular motions of the stable surfactant layer around the rigid inorganic core could be reproduced by performing molecular dynamics calculations. Furthermore, the movements of the nitroxide spin-probes were studied by an approach combining forced diffusion and constraint-free molecular dynamics. Stable locations of such small molecules could be defined for each micelle model under investigation.
It is assumed that the polar head groups of the probe molecules (nitroxide for TEMPO and carboxylic for the fatty acids) tend to be placed on the surface of the calcium carbonate core. ESR spectra give information about the viscosity of the local environment at different distances from the carbonate surface (at the boundary for TEMPO, and at the 5- and 16-carbon positions for the two spin-labelled fatty acids). In our laboratory, different surfactant molecules were used in the synthesis of overbased detergents with a high calcium carbonate content, expressed as Total Base Number (the amount of soluble colloidal carbonate and of metal oxide and hydroxide, measured as equivalent milligrams of KOH per gram of sample [22]). Three overbased detergents with TBN=300 and a mixture of mono- and di-alkyl-benzene-sulphonate were


F. Frigerio and L. Montanari

2 Experimental Methods
An approach combining computer graphics and atomistic simulations [17, 19]
produced a detailed model for the central portion of the overbased micelle structure.
The essential model features (thickness and internal structure of the core, concentration and location of excess hydroxide ions, density and molecular arrangement of the
shell) were inferred from analytical determinations performed on the detergent micelles that were produced in our laboratories (data not shown). Different relative concentrations of the surfactants were used to build the external shell of three micelle
models, referred to as model a, b, c throughout this paper. The starting molecular and
ionic building blocks were selected and manually manipulated within the InsightII
[23] graphical interface in order to set up initial atomic distributions. A partial micelle
model was built as an inorganic slab surrounded by two surfactant layers, one on top
of each of its largest surfaces. Three-dimensional periodic boundary conditions were
constantly applied during simulations in order to avoid model truncation effects.
Atomic parameters were assigned from the pcff force field [24]. Afterwards an amorphous calcium carbonate core (with a small concentration of hydroxide ions) and a
tight surfactant shell were generated by applying stepwise Monte Carlo docking and
molecular dynamics [25] to limited portions of the micelle models. These simulations
were performed by using InsightII and Discover [23], respectively. Along this part of
the model building process, the uninvolved atoms were kept frozen. Nitroxide spin-probes were then added to the system assembly, after subjecting the obtained micelle
models to extensive energy relaxation. The starting configurations contain a small
cluster of spin-probe molecules, packed in a single layer contacting the micelle surfactant shell. Respectively, 14 molecules of TEMPO, 6 of 5-doxyl-stearic acid and 6
of 16-doxyl-stearic acid were used. The forced diffusion procedure available within
InsightII [23] was carefully tailored to suit the overbased micelle model features. Its
application followed the thorough energy minimisation of each one of the starting
system configurations. Since the molecular motions within tightly packed assemblies
are very slow, they were accelerated by adding a directional force for a very short
time period at the beginning of the simulations, with the effect of gently pushing the
nitroxide spin-probes toward the micelle core. In this way the extremely long process
of generating spontaneous but infrequent diffusion pathways could be avoided and
the simulations were concentrated on the more interesting task of studying the small
molecule motions throughout the micelle surfactant shell. The potential energy of the
system and the relative distances between the nitroxide groups of the spin-probes and
the micelle core center were derived by analysing the trajectories collected during the
following free molecular dynamics simulations. These values were finally compared
to the results from equivalent simulations, performed without the previous application
of the forced diffusion protocol and used as a reference (non-diffusive) state.
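The trajectory analysis described above reduces, in essence, to tabulating the distance between a spin-probe nitroxide group and the micelle core center against the potential energy, frame by frame. A minimal sketch of that bookkeeping follows; the frame format and all numerical values are invented for illustration, not taken from the simulations:

```python
import math

def nitroxide_core_distance(no_position, core_center):
    # Euclidean distance between the nitroxide group and the core center
    return math.dist(no_position, core_center)

def analyse(frames, core_center=(0.0, 0.0, 0.0)):
    """Return (distance, potential energy) pairs for an Epot-vs-distance plot."""
    return [
        (nitroxide_core_distance(f["no_position"], core_center), f["epot"])
        for f in frames
    ]

# Toy frames standing in for whatever the MD package actually writes out
frames = [
    {"no_position": (0.0, 0.0, 18.0), "epot": -1520.0},  # invented values
    {"no_position": (0.0, 3.0, 12.0), "epot": -1490.0},
]
for dist, epot in analyse(frames):
    print(round(dist, 2), epot)
```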
All spin-probe molecules (Aldrich Chemie) were diluted at 0.3 mM concentration
into a mixture of SN150/SN500 (produced by ENI according to ASTM standard
specification) lubricant bases (2/1 by weight). The overbased detergents were dissolved (at 30% by weight) into the spin-probe/lubricant solutions. The ESR spectra
were collected with a Bruker ESP 300E spectrometer conditioned at a temperature
of 50 °C.

Characterisation of the Surfactant Shell Stabilising Calcium Carbonate Dispersions


3 Results
The diffusion pathways of TEMPO and of 5- and 16- spin-probe labelled stearic acids
were followed and compared in the three model-built partial micelles (identified as a,
b, c) which share an identical inorganic core but differ by the molecular distribution of
surfactants in the external shell.




Fig. 1. Plot (left): distance (Å) vs. Epotential (kcal mol-1) from the simulation of TEMPO into micelle models a (grey), b (blue), c (magenta). Pictures (right): simulation boxes with 14 molecules (orange, Van der Waals) diffused into models a (green), b (yellow), c (cyan), composed of inorganic core (Van der Waals) and surfactant layers (ball and stick).

The force constant application produced diffusive pathways during dynamics trajectories (Fig. 1) for the micelle models containing TEMPO. The average penetration
depth of the TEMPO nitroxide group into the micelle model is shortest and most energetically unfavourable (Fig. 1, left) within model a, while the results are slightly
better with model b and at their best within model c. The potential energy cost paid
for the production of TEMPO diffusive pathways appears high and generally increasing with the force constant value. For fast comparison among the micelle models only
very short trajectories were analysed. Further, longer molecular dynamics simulations
(data not shown) completely release all strain accumulated during the first part of the



Fig. 2. Plot (left): distance (Å) vs. Epotential (kcal mol-1) from the simulation of 5-doxyl-stearic acid into micelle models a (grey), b (blue), c (magenta). Pictures (right): simulation boxes with 6 molecules (orange, Van der Waals) diffused into models a (green), b (yellow), c (cyan), composed of inorganic core (Van der Waals) and surfactant layers (ball and stick).



diffusive process, while the final configurations do not differ significantly. One of the
final, energetically relaxed TEMPO diffusion configurations is depicted for each of
the three models a, b, c (Fig. 1, right). While most spin-probe molecules are located
into the surfactant shell, only a few of them can get in contact with the carbonate core.
These results can be compared to the dynamic behaviour of 5-doxyl-stearic acid
(Fig. 2). The average equilibrium penetration depth towards the micelle core (Fig. 2,
left) is similar to what was observed with TEMPO, but the generally lower potential energy cost reveals an easier diffusion through the surfactant shell. However, the structures of the two spin-probes differ: the stearic acid bears a nitroxide group laterally grafted to its long tail, so the reported distances from the micelle core
(Fig. 2) apply to a longer molecule than in the case of TEMPO (Fig. 1). However, the
order of spin-probe diffusion efficiencies through the three micelle models is again
found as: a < b < c. Three of the resulting relaxed configurations are reported (Fig. 2,
right). Differently from TEMPO, only a small fraction of the labelled stearic acid
molecules is able to reach a deep location into the surfactant shell.



Fig. 3. Plot (left): distance (Å) vs. Epotential (kcal mol-1) from the simulation of 16-doxyl-stearic acid into micelle models a (grey), b (blue), c (magenta). Pictures (right): simulation boxes with 6 molecules (orange, Van der Waals) diffused into models a (green), b (yellow), c (cyan), composed of inorganic core (Van der Waals) and surfactant layers (ball and stick).

Comparable results were obtained from the analysis of the 16-doxyl-stearic acid
dynamics trajectories (Fig. 3). The average equilibrium penetration depth (Fig. 3, left)
plotted against the potential energy cost still reveals some differences: model c is slightly favoured over b, and the latter over a. Limited penetration into the surfactant shell is generally observed, compared to 5-doxyl-stearic acid. This can be attributed to the location of the nitroxide group further away from the spin-probe polar head in the
16-doxyl-stearic acid. Three energetically relaxed configurations, resulting from the
interaction of a cluster of 16-doxyl-stearic acid with models a, b, c, are reported
(Fig. 3, right). As previously evidenced, the penetration of the surfactant shell by
these spin-probe molecules is limited and they do not get in close contact with the
core surface, in contrast to what happens with TEMPO.
The ESR spectra recorded for spin-probe molecules in solutions containing overbased micelles a, b and c are presented in Figs. 4, 5 and 6. Frequently such spectra
show the superimposition of two components, an isotropic triplet produced by freely
moving molecules and an anisotropic feature typical of a species located in a rigid



environment. The characteristic shape of that anisotropic signal originates from the
rotation of cylindrical molecules. Under the adopted conditions the 2A// hyperfine parallel coupling could be measured on most recorded spectra; its values are reported in Table 1.
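The 2A// splitting is conventionally read off a slow-motion first-derivative nitroxide spectrum as the field separation between the outer extrema (low-field maximum, high-field minimum). A toy sketch of that peak analysis, with an invented field axis and signal rather than measured data:

```python
def hyperfine_2A_parallel(field_gauss, intensity):
    # Field positions of the global maximum (low-field peak) and
    # global minimum (high-field dip) of the first-derivative spectrum
    low_field_peak = field_gauss[intensity.index(max(intensity))]
    high_field_dip = field_gauss[intensity.index(min(intensity))]
    return high_field_dip - low_field_peak

field = [3300 + i for i in range(101)]   # Gauss, toy field axis
signal = [0.0] * 101
signal[15], signal[80] = 1.0, -1.0       # outer extrema 65 G apart (invented)
print(hyperfine_2A_parallel(field, signal))  # 65
```

Real spectra would of course need smoothing and baseline handling before the extrema are located.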

Fig. 4. ESR spectra of a solution of TEMPO in base lubricant oil containing, respectively,
overbased micelles a (top), b (middle), c (bottom)

The results obtained with TEMPO (Fig. 4) clearly show the two components described above. The 2A// values (Table 1) for the three overbased micelles are quite similar to each other, though a somewhat lower rigidity is suggested for micelle a (Fig. 4, top). The ESR spectrum for micelle b (Fig. 4, middle) is dominated by the isotropic signal, revealing a lower population of the anisotropic species, compared to the
other two overbased micelle solutions.

Fig. 5. ESR spectra of a solution of 5-doxyl-stearic acid in base lubricant oil containing, respectively, overbased micelles a (top), b (middle), c (bottom)

The 2A// values (Table 1) measured for 5-doxyl-stearic acid (Fig. 5) reveal a less
rigid environment around the nitroxide group, as compared to TEMPO. This is
slightly more evident for models b and c (Fig. 5, middle and bottom).

Fig. 6. ESR spectra of a solution of 16-doxyl-stearic acid in base lubricant oil containing, respectively, overbased micelles a (top), b (middle), c (bottom)



With the 16-labelled stearic acid spin-probe (Fig. 6) a 2A// value could be measured (Table 1) only in the spectrum recorded for micelle a (Fig. 6, top), whereas in the
other cases no hyperfine coupling could be detected by the peak analysis.
Table 1. Hyperfine parallel coupling 2A// values measured from the ESR spectra of overbased micelle solutions with three different spin-probes

           TEMPO    5-doxyl-stearic acid    16-doxyl-stearic acid
micelle a
micelle b
micelle c

The differences in mobility observed by comparing the ESR spin-probe spectra mainly ensue from the different distances of the spin-labels from the polar surface of the micelle core. The carboxyl group of both labelled fatty acids was strongly attracted to that surface but could not reach it through the surfactant shell. By contrast, TEMPO was able to penetrate deeply and produced the highest 2A// values. The larger coupling value for the 5-doxyl spin-probe in micelle a is due to the peculiar alkyl-chain features of that surfactant shell. This effect is observed only with the labelled fatty acids and becomes more evident when the nitroxide sits far from the micelle core, as in 16-doxyl-stearic acid: its 2A// value is higher than that of 5-doxyl-stearic acid in micelle a, while in micelles b and c its mobility is comparable to that of a free molecule in solution.

4 Discussion
The development of new generations of surfactants for lubricant oil additives requires
an accurate characterisation of the reverse micelle structure of the overbased detergents [26]. In this study a combination of experimental and computational results
helped define a correlation between the chemical nature of the stabilising surfactant
shell and the environmental rigidity imposed upon diffusing small molecules. The
surfactant shell compactness, responsible for the remarkable stabilisation of the
strongly basic micelle core in a non-polar environment, can be reasonably distinguished from shell viscosity. The first property is commonly attributed to a tight
packing of surfactant aromatic moieties [19, 21], while the second is mainly influenced by the molecular features of their alkyl chains. The ESR spectra analysis contributed a quantitative measurement of the micelle shell viscosity, revealing a subtle
modulation of the mobility experienced by spin-probe molecules in different locations
throughout the surfactant shell. The diffusion of the spin-probes was found to depend on both their molecular shape and the location of the polar groups along their structure. The molecular dynamics simulations of this process provided a pictorial description of the surfactant shell viscosity effects on diffusion. Moreover, its dependence on the distance from the micelle core was quantitatively confirmed. Small molecules like TEMPO were
able to penetrate into a highly rigid environment next to the core surface, while labelled fatty acids were shown to fill the available room among surfactant alkyl chains,
further away from the core. Compared with TEMPO, the nitroxide grafted next to the
fatty acid polar group (5-doxyl-stearic acid) experienced higher mobility. When



attached to the apolar end (16-doxyl-stearic acid), molecular freedom was found to be as high as in solution, and an environmental rigidity effect was revealed only by the peculiar surfactant shell structure of micelle a. In conclusion, the previously defined structural features of overbased reverse micelles [2, 17-21, 26] have been further detailed by this study, with a view to developing improved performance as detergent additives for lubricant base oils.

References

1. Liston, T.V., Lubr. Eng. 48 (1992) 389-397
2. Roman, J.P., Hoornaert, P., Faure, D., Biver, C., Jacquet, F., Martin, J.M., J. Coll. Interface Sci. 144 (1991) 324-339
3. Bandyopadhyaya, R., Kumar, R., Gandhi, K.S., Langmuir 17 (2001) 1015-1029
4. Hudson, L.K., Eastoe, J., Dowding, P.J., Adv. Coll. Interface Sci. 123-126 (2006) 425-431
5. Mansot, J.L. and Martin, J.B., J. Microsc. Spectrosc. Electron., 14 (1989) 78
6. Tricaud, C., Hipeaux, J.C., Lemerle, J., Lubr. Sci. Technol. 1 (1989) 207
7. Markovic, I., Ottewill, R.H., Coll. Polymer. Sci. 264 (1986) 454
8. Giasson, S., Espinat, D., Palermo, T., Ober, R., Pessah, M., Morizur, M.F., J. Coll. Interface Sci. 153 (1992) 355
9. Cizaire, L.; Martin, J.M.; Le Mogne, Th., Gresser, E., Coll. Surf. A 238 (2004) 151
10. Berliner, L.J.: Spin Labeling, Theory and Applications, Academic Press, New York (1979)
11. Dzikovski, B.G., Livshits, V.A., Phys. Chem. Chem. Phys. 5 (2003) 5271
12. Wines, T.H., Somasundaran, P., Turro, N.J., Jockusch, S., Ottaviani, M.F., J. Coll.
Interface Sci. 285 (2005) 318
13. Kramer, G., Somasundaran, P., J. Coll. Interface Sci. 273 (2004) 115
14. Tedeschi, A.M., Franco, L., Ruzzi, M., Padano, L., Corvaja, C., D'Errico, G., Phys. Chem.
Chem. Phys. 5 (2003) 4204
15. Maddinelli, G., Montanari, L., Ferrando, A., Maestrini, C., J. Appl. Polym. Sci. 102 (2006)
16. Ranby, B., Rabek, J.F.: ESR Spectroscopy in Polymer Research, Springer-Verlag, New
York (1977)
17. Tobias, D.J., Klein, M.L., J. Phys. Chem. 100 (1996) 6637-6648
18. Griffiths, J.A., Bolton, R., Heyes, D.M., Clint, J.H., Taylor, S.E., J. Chem. Soc. Faraday
Trans. 91 (1995) 687-696
19. Griffiths, J.A., Heyes, D.M., Langmuir, 12 (1996) 2418-2424
20. Bearchell, C.A., Danks, T.N., Heyes, D.M., Moreton, D.J., Taylor, S.E., Phys. Chem.
Chem. Phys. 2 (2000) 5197-5207
21. Bearchell, C.A., Heyes, D.M., Moreton, D.J., Taylor, S.E., Phys. Chem. Chem. Phys. 3
(2001) 4774-4783
22. Arndt, E.R., Kreutz, K.L., J. Coll. Interface Sci. 123 (1988) 230
23. Accelrys, Inc., San Diego
24. Hill, J.-R., Sauer, J., J. Phys. Chem. 98 (1994) 1238-1244
25. Allen, M.P., Tildesley, D.J.: Computer simulations of liquids, Clarendon, Oxford (1987)
26. Hudson, L.K., Eastoe, J., Dowding, P.J., Adv. Coll. Interface Sci. 123-126 (2006) 425-431

Hydrogen Adsorption and Penetration of Cx (x=58-62)

Fullerenes with Defects
Xin Yue1, Jijun Zhao2,*, and Jieshan Qiu1,*

1 State Key Laboratory of Fine Chemicals, Carbon Research Laboratory, School of Chemical Engineering, Center for Nano-Materials and Science, Dalian University of Technology, Dalian, 116024, China
2 State Key Laboratory of Materials Modification by Laser, Electron, and Ion Beams, School of Physics and Optoelectronic Technology and College of Advanced Science and Technology, Dalian University of Technology, Dalian, 116024, China

Abstract. Density functional theory calculations were performed to investigate the endohedral and exohedral adsorption of a H2 molecule on the classical and nonclassical fullerenes Cx (x=58, 59, 60, 62) with seven-, eight-, and nine-membered rings. The magnitudes of the adsorption energies are within 0.03 eV, and the molecule-fullerene interaction is of the van der Waals type. The penetration of a H2 molecule through the different fullerene cages is discussed and the corresponding energy barriers were obtained. We find that the existence of large holes reduces the penetration barrier from 12.6 eV for the six-membered ring on the perfect C60 cage to about 8 eV for seven-membered rings and to about 5 eV for eight-membered rings.

1 Introduction
Soon after the discovery of carbon fullerenes, it was found that a variety of atoms
and molecules can be incorporated into the hollow carbon cages to form endohedral
complex structures, which lead to new nanoscale materials with novel physical and
chemical properties [1-3]. Endohedral fullerenes are not only of scientific interest
but are of technological importance for their potential usage in various fields such
as molecular electronics [4], magnetic resonance imaging [5], quantum computer
[6-9], and nuclear magnetic resonance (NMR) analysis [10, 11]. On the other hand,
tremendous efforts have been devoted to the hydrogen storage in carbon
nanostructures like nanotubes [12]. Thus, the study of endohedral fullerene
complexes with the encapsulation of a H2 molecule is a focus of interest in different fields.
In order to achieve endohedral fullerene complex with hydrogen molecule
encapsulated inside, the surface of the fullerene cages must be opened to have a
sufficiently large orifice to let the H2 molecule penetrate. Murata et al. investigated

* Corresponding authors.

Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 280-287, 2007.
Springer-Verlag Berlin Heidelberg 2007



the synthesis, structure, and properties of novel open-cage fullerenes with heteroatoms on the rim of the orifice [13], as well as the feasibility of inserting small atoms or
molecules through the orifice of an open-cage C60 derivative. Hatzimarinaki et al.
reported a novel methodology for the preparation of five-, seven-, and nine-membered
fused rings on C60 fullerene [14].
Recently, molecular hydrogen was successfully placed inside open-cage
fullerenes [13, 15-21]. Murata et al. [16] reported the first syntheses and X-ray
structures of organic and organometallic derivatives of C60 and the usage of the
encapsulated molecular hydrogen as a magnetic shielding probe. After the
encapsulation of H2, the endohedral cages were then closed through a
molecular-surgery method on a gram scale with up to 100% H2 incorporation
[20]. Stimulated by these experimental progresses, ab initio computational studies
have been reported for endohedral H2@C60 complex. Slanina et al. performed
theoretical calculations of the encapsulation energy using modified Perdew-Wang
and Becke functionals (MPWB1K) [22]. Shigeta et al. studied the dynamic charge
fluctuation of endohedral fullerene with H2 [23].
In addition to the opening and closing of fullerene cages via chemical
approaches, it is possible to have the as-prepared defect fullerene cages with large
holes [24-26]. For example, Qian et al. detected a pronounced peak of C62- in the LD-FTMS mass spectrum and performed a DFT calculation of the C62 cage with one
4MR [24]. Deng et al. observed the odd-numbered clusters C59 in laser desorption
ionization of C60 oxides [26]. Accordingly, ab initio calculations have been carried
out for the geometries, energies, and stabilities of these defective fullerene C60
cages [27-29]. Hu et al. computed fullerene cages with large holes [27, 28].
Lee studied the structure and stability of the defective fullerenes of C59, C58 and
C57 [29].
Despite the existing theoretical efforts, to the best of our knowledge, there is no ab initio calculation on the hydrogen adsorption and encapsulation in the defect fullerenes. These nonclassical fullerenes with seven-membered rings (7MR), eight-membered rings (8MR), and so on, may serve well as model systems for the open-cage
fullerenes obtained from other methods. Thus, it would be interesting to study the
relationship between the size of the orifice ring and the barrier for H2 molecule
penetrating from outside to inside of fullerene. In this paper, we address these issues
by conducting DFT calculations on the adsorption and penetration of H2 molecule on
C60 and nonclassical fullerenes with 7MR, 8MR, and 9MR.

2 Computational Methods
All-electron DFT calculations were carried out employing the generalized gradient
approximation (GGA) with the PW91 functional [30] and the double numerical
plus polarization (DNP) basis set that are implemented in the DMol program [31].
Self-consistent field (SCF) calculations were carried out with a convergence criterion
of 10-6 a.u. on the total energy. To ensure high quality results, the real-space global
orbital cutoff radius was chosen to be as high as 5.0 Å. It is known that the DFT method



within the GGA approximation is usually insufficient for describing the weak van der Waals (vdW) interaction. A recent DFT calculation of the hydrogen adsorption on carbon and boron nitride nanotubes [32] demonstrated that the PW91 functional can roughly reproduce the strength of the vdW interaction between a H2 molecule and a C6H6 benzene molecule obtained from highly accurate HF-MP2 calculations.

3 Results and Discussion

In this work, we considered eight fullerene cages, including perfect C60 and several defect fullerenes. The configurations of the defect fullerene cages were taken from
Ref. [29] for C58 and C59 with 7MR, 8MR, and 9MR, and from Ref. [24] for C62 with
4MR. On the one hand, cages with a vacancy defect (an unsaturated atom) were created by removing one atom from C60, such as C59_4-9 (with one 4MR and one 9MR) and
C59_5-8 (with one 5MR and one 8MR). On the other hand, topological defects
including larger rings (7MR and 8MR) or smaller 4MR were created on the fullerene
cages of C58, C60, and C62. For C60, we considered perfect C60 (Ih) as well as a C60 cage
with two 7MR (along with one 4MR), which is denoted as C60_4-7-7. For C59, the
cage with one 4MR and one 9MR is denoted as H2@C59_4-9, and the cage with one
5MR and one 8MR as H2@C59_5-8. For C58, the cage with two 5MR and one 7MR is
denoted as H2@C58_5-5-7, the cage with two 4MR, one 8MR, and one 5MR as
H2@C58_4-4-8(5), and the cage with two 4MR, one 8MR, and one 6MR as H2@C58_4-4-8(6).
At the beginning, the eight fullerene cages were optimized at the PW91/DNP level. A hydrogen molecule was then placed at the center of each cage as the initial configuration of the endohedral complexes. These endohedral H2@Cx complexes were fully optimized; the optimized structures are shown in Figure 1. Moreover,
the exohedral adsorption of a H2 molecule on these eight cages was also considered. The adsorption energy of the hydrogen molecule is defined as the difference between the total energy of the H2-cage complex species and the sum of the total energies of the individual H2 molecule (EH2) and the fullerene cage (Ecage). Hence, the adsorption energies for both endohedral (Eendo) and exohedral (Eexo) adsorption are computed as

Eendo = E(H2@Cx) - EH2 - Ecage,    Eexo = E(H2-Cx) - EH2 - Ecage.
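This definition amounts to simple bookkeeping on total energies. A small sketch, using the paper's DFT value for the isolated H2 molecule (-1.1705707 Hartree, quoted in the Table 1 caption) but placeholder cage and complex energies, which are not results from the paper:

```python
# E_ads = E(H2-cage complex) - E(H2) - E(cage), converted to meV;
# negative values mean exothermic (favourable) adsorption.
HARTREE_TO_MEV = 27.211386 * 1000.0  # 1 Hartree in meV

E_H2 = -1.1705707  # Hartree, from the Table 1 caption

def adsorption_energy_mev(e_complex, e_cage, e_h2=E_H2):
    return (e_complex - e_h2 - e_cage) * HARTREE_TO_MEV

# Hypothetical complex lying 1e-6 Hartree below the separated cage + H2:
e_cage = -2284.0  # placeholder total energy (Hartree)
e_complex = e_cage + E_H2 - 1.0e-6
print(round(adsorption_energy_mev(e_complex, e_cage), 3))  # -0.027 (exothermic)
```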
To study the penetration behavior of a H2 molecule from the endohedral site to the
exohedral site, we first adjusted the orientation of the central H2 molecule to be perpendicular to the largest hole on the surface of the fullerene cage. Then, single-point energies of the H2-cage complexes (H2@C60, H2@C60_4-7-7, H2@C59_4-9,
H2@C59_5-8, H2@C58_5-5-7, H2@C58_4-4-8(5), and H2@C58_4-4-8(6)) were computed
along the penetration path by gradually moving the H2 molecule from the cage center
to the outside of the fullerene cage through the largest hole by a step of 0.3 Å up to



the longest distance of 9 Å from the cage center. The main theoretical results are
summarized in Table 1 and Figure 2.
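The rigid scan described above is easy to reproduce schematically: step the H2 center of mass from the cage center toward the center of the largest ring in 0.3 Å increments out to 9 Å, running one single-point calculation per step. The ring-center coordinates below are invented; in practice they come from the optimized cage geometry:

```python
import math

def scan_points(ring_center, step=0.3, r_max=9.0):
    """H2 center-of-mass positions (in Angstrom) from the cage center (origin)
    out to r_max along the direction through the ring center."""
    norm = math.sqrt(sum(c * c for c in ring_center))
    unit = [c / norm for c in ring_center]
    n_steps = int(round(r_max / step))
    return [[i * step * u for u in unit] for i in range(n_steps + 1)]

points = scan_points(ring_center=[1.2, 2.9, 1.7])  # placeholder coordinates
print(len(points))  # 31 single-point calculations per penetration path
```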









Fig. 1. Optimized configurations of H2@C62, H2@C60, H2@C60_4-7-7, H2@C59_4-9, H2@C59_5-8, H2@C58_5-5-7, H2@C58_4-4-8(5)
Table 1. Total energies of the eight optimized cages of perfect and defect fullerenes CX (X=58, 59, 60, 62) and of the corresponding endohedral complexes H2@CX. The total energy of a H2 molecule from our DFT calculation at the same level is -1.1705707 Hartree. Endohedral adsorption energies (Eendo) and exohedral adsorption energies (Eexo) for H2 on the CX (X=58, 59, 60, 62) cages, as well as the energy barriers for the penetration of H2 through the largest hole on each cage.

cage    E(CX) (Hartree)    E(H2@CX) (Hartree)    Eendo (meV)    Eexo (meV)    barrier (eV)

The total-energy difference between perfect C60 and defect C60_4-7-7 is 3.47 eV. In other words, the formation of two 7MR and one 4MR on perfect C60 requires 3.47 eV, while
previous calculation found that formation of two 7MR on a (6,6) carbon nanotube is
2.74 eV [33]. The total energy of C59_5-8 is lower than that of C59_4-9 by 0.91 eV, close to the theoretical value of 0.89 eV obtained by Lee et al. at the B3LYP/6-31G* level [29]. For C58,
C58_5-5-7 is more stable than C58_4-4-8(5) by 1.40 eV and than C58_4-4-8(6) by 4.67
eV, rather close to previous results of 1.34 eV and 4.77 eV by Lee [29].



As shown in Table 1, for all the cases studied, the exohedral adsorption of the H2
molecule on the surface of fullerene cage is exothermic, with Eexo ranging from -16.6
to -26.3 meV. The exohedral adsorption energy of H2 molecule is insensitive to the
atomic configuration of the fullerene cages. Experimentally, the adsorption energy of a H2 molecule on the graphite surface is -42 meV. It is known that GGA usually underestimates the surface adsorption energy of the vdW type [34]. Therefore, the present GGA calculation might somewhat underestimate the adsorption energy of H2.
In contrast, the endohedral adsorption is either exothermic or endothermic,
with Eendo ranging from -12.6 to 31.8 meV. The incorporation of a H2 molecule in C60
(perfect or defect) and C62 cages is exothermic, while encapsulation of a H2 molecule
in C58 and C59 cages is endothermic. This finding can be roughly understood by the
difference in the interior space of the fullerene cages. In other words, C60 and C62
cages are larger and have more space for the encapsulation of H2 molecule. In a
previous study [22], the best estimate for the encapsulation energy for H2@C60 was at
least 173 meV.
The energy barrier for the penetration of H2 molecule through the largest hole of
the eight different fullerene cages are presented in Table 1 and the corresponding
single-point energies for the penetration paths are shown in Figure 2. First of all, in
Figure 2 we find that all the energy paths for the H2 penetration are smooth and have a clear highest peak, which corresponds to the energy barrier given in Table 1. Among them, the energy barrier for penetrating the six-membered ring on the C60 cage is highest, i.e., 12.6 eV, and the energy barrier for penetrating the eight-membered ring on the C58_4-4-8(5) cage is lowest, i.e., 4.6 eV. The energy barriers for other cages with 8MR, such as C59_5-8 and C58_4-4-8(6), are close, both at 5.2 eV.
For those defect fullerene cages with 7MR, such as C60_4-7-7 and C58_5-5-7, the
energy barriers are around 8 eV. In other words, the penetration barrier reduces
from 12.6 eV for the 6MR on the perfect C60 cage to about 8 eV for 7MR and to about 5 eV
for 8MR. However, it is interesting to find that the penetration barrier through the
largest 9MR on the C59_4-9 cage is relatively high, i.e., 9.1 eV.
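The barrier extraction itself is straightforward once the single-point energies along a path are available: the barrier is the highest energy along the scan relative to the H2-at-cage-center reference. A sketch with an invented energy profile, not data from the paper:

```python
def penetration_barrier(path_energies_ev):
    """path_energies_ev[0] is the H2-at-cage-center reference energy (eV)."""
    reference = path_energies_ev[0]
    # Barrier = highest point on the path relative to the starting energy
    return max(path_energies_ev) - reference

profile = [0.0, 0.4, 1.8, 5.2, 3.1, 0.9, 0.1]  # eV along the scan, invented
print(penetration_barrier(profile))  # 5.2
```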
To summarize, the encapsulation and penetration of H2 molecule on the perfect and
defect C60 cages were investigated using density functional theory at level of
PW91/DNP. Fullerene cages of Cx with x=58, 59, 60, 62 containing 7MR, 8MR, and
9MR were considered. The interaction for H2 molecule adsorption on fullerene cages
is relatively weak and of vdW type. The exohedral adsorption for H2 molecule on the
surface of fullerene cage is exothermic, while the endohedral adsorption is exothermic
for C60 and C62 or endothermic for C58 and C59 cages. The penetration barrier from the endohedral to the exohedral site is significantly reduced, from 12.6 eV for the 6MR on the perfect cage to about 8 eV for 7MR and to about 5 eV for 8MR on defect cages. However, these
reduced energy barriers for 7MR and 8MR are still too high for a H2 molecule to
penetrate under ambient conditions. Finally, it is worth pointing out that the present
calculations focus on the physisorption and penetration of H2 molecule while the
possible chemisorption of the H2 molecule and corresponding transition states were
not considered.








Fig. 2. Relative energy of the H2@CX complexes as a function of the distance of H2 from the cage center along the path towards the center of the largest ring. The zero of energy is set to the total energy with H2 at the cage center, for H2@C62, H2@C60, H2@C60_4-7-7, H2@C59_4-9, H2@C59_5-8, H2@C58_5-5-7, H2@C58_4-4-8(5).

Acknowledgements. This work was supported by the National Natural Science Foundation of China (No. 29976006), the Natural Science Foundation of Liaoning Province of China (No. 9810300701), the Program for New Century Excellent Talents in University of China, and the Ministry of Education of China.

References

1. Funasaka, H., Sugiyama, K., Yamamoto, K., Takahashi, T.: Magnetic Properties of Rare-Earth Metallofullerenes, J. Phys. Chem. 99 (1995) 1826-1830
2. Diener, M.D., Alford, J.M.: Isolation and Properties of Small-Bandgap Fullerenes, Nature 393 (1998) 668-671
3. Boltalina, O. V., Ioffe, I. N., Sorokin, I. D., Sidorov, L. N.: Electron Affinity of Some
Endohedral Lanthanide Fullerenes, J. Phys. Chem. A 101 (1997) 9561-9563
4. Kobayashi, S., Mori, S., Iida, S., Ando, H., Takenobu, T., Taguchi, Y., Fujiwara, A., Taninaka, A., Shinohara, H., Iwasa, Y.: Conductivity and Field Effect Transistor of La2@C80 Metallofullerene, J. Am. Chem. Soc. 125 (2003) 8116-8117
5. Kato, H., Kanazawa, Y., Okumura, M., Taninaka, A., Yokawa, T., Shinohara, H.:
Lanthanoid Endohedral Metallofullerenols for MRI Contrast Agents, J. Am. Chem. Soc.
125 (2003) 4391-4397
6. Harneit, W.: Fullerene-Based Electron-Spin Quantum Computer, Phys. Rev. A 65 (2002)
7. Suter, D., Lim, K.: Scalable Architecture for Spin-based Quantum Computers with a
Single Type of Gate, Phys. Rev. A 65 (2002) 052309-052313
8. Twamley, J.: Quantum-Cellular-Automata Quantum Computing with Endohedral
Fullerenes, Phys. Rev. A 67 (2003) 052318-052329
9. Morton, J.J.L., Tyryshkin, A.M., Ardavan, A., Porfyrakis, K., Lyon, S.A., Briggs, G.A.D.: High Fidelity Single Qubit Operations Using Pulsed Electron Paramagnetic Resonance, Phys. Rev. Lett. 95 (2005) 200501
10. Saunders, M., Cross, R.J., Jiménez-Vázquez, H.A., Shimshi, R., Khong, A.: Noble Gas Atoms Inside Fullerenes, Science 271 (1996) 1693-1697
11. Martin, S, J. Hugo, V. A., James, C. R., Stanley, M., Daro, F., Frank. I.: Probing the
Interior of Fullerenes by 3He NMR Spectroscopy of Endohedral 3He@C60 and 3He@C70,
Nature, 367 (1994) 256-258
12. Ding, R. G., Lu, G. Q., Yan, Z. F., Wilson, M. A.: Recent Advances in the Preparation and
Utilization of Carbon Nanotubes for Hydrogen Storage, J. Nanosci. Nanotech. 1 (2003)
13. Murata, Y., Murata, M., Komatsu, K..: Synthesis, Structure, and Properties of Novel
Open-Cage Fullerenes Having Heteroatom(s) on the Rim of the Orifice, Chem. Eur. J. 9
(2003) 1600-1609
14. Maria H., Michael O.:Novel Methodology for the Preparation of Five-, Seven-, and NineMembered Fused Rings on C60, Org. Lett. 8 (2006) 1775-1778
15. Murata, M.., Murata, Y., Komatsu, K.: Synthesis and Properties of Endohedral C60
Encapsulating Molecular Hydrogen, J. Am. Chem. Soc. 128 ( 2006) 8024-8033
16. Murata, Y., Murata, M., Komatsu, K.: 100% Encapsulation of a Hydrogen Molecule into
an Open-Cage Fullerene Derivative and Gas-Phase Generation of H2@C60, J. Am. Chem.
Soc. 125 (2003) 7152-7153
17. Carravetta, M., Murata, Y., Murata, M., Heinmaa, I., Stern, R., Tontcheva, A., Samoson,
A., Rubin, Y., Komatsu, K.., Levitt, M. H.: Solid-State NMR Spectroscopy of Molecular
Hydrogen Trapped Inside an Open-Cage Fullerene, J. Am. Chem. Soc. 126 (2004) 40924093

Hydrogen Adsorption and Penetration of Cx (x=58-62) Fullerenes with Defects


18. Iwamatsu, S. I., Murata, S., Andoh, Y., Minoura, M., Kobayashi, K., Mizorogi, N., Nagase, S.: Open-Cage Fullerene Derivatives Suitable for the Encapsulation of a Hydrogen Molecule, J. Org. Chem. 70 (2005) 4820-4825
19. Chuang, S. C., Clemente, F. R., Khan, S. I., Houk, K. N., Rubin, Y.: Approaches to Open Fullerenes: A 1,2,3,4,5,6-Hexaadduct of C60, Org. Lett. 8 (2006) 4525-4528
20. Komatsu, K., Murata, M., Murata, Y.: Encapsulation of Molecular Hydrogen in Fullerene C60 by Organic Synthesis, Science 307 (2005) 238-240
21. Komatsu, K., Murata, Y.: A New Route to an Endohedral Fullerene by Way of σ-Framework Transformations, Chem. Lett. 34 (2005) 886-891
22. Slanina, Z., Pulay, P., Nagase, S.: H2, Ne, and N2 Energies of Encapsulation into C60 Evaluated with the MPWB1K Functional, J. Chem. Theory Comput. 2 (2006) 782-785
23. Shigeta, Y., Takatsuka, K.: Dynamic Charge Fluctuation of Endohedral Fullerene with Coencapsulated Be Atom and H2, J. Chem. Phys. 123 (2005) 131101-131104
24. Qian, W., Bartberger, M. D., Pastor, S. J., Houk, K. N., Wilkins, C. L., Rubin, Y.: C62, a Non-Classical Fullerene Incorporating a Four-Membered Ring, J. Am. Chem. Soc. 122 (2000) 8333-8334
25. O'Brien, S. C., Heath, J. R., Curl, R. F., Smalley, R. E.: Photophysics of Buckminsterfullerene and Other Carbon Cluster Ions, J. Chem. Phys. 88 (1988) 220-230
26. Deng, J. P., Ju, D. D., Her, G. R., Mou, C. Y., Chen, C. J., Han, C. C.: Odd-Numbered Fullerene Fragment Ions from C60 Oxides, J. Phys. Chem. 97 (1993) 11575-11577
27. Hu, Y. H., Ruckenstein, E.: Ab Initio Quantum Chemical Calculations for Fullerene Cages with Large Holes, J. Chem. Phys. 119 (2003) 10073-10080
28. Hu, Y. H., Ruckenstein, E.: Quantum Chemical Density-Functional Theory Calculations of the Structures of Defect C60 with Four Vacancies, J. Chem. Phys. 120 (2004) 7971-7975
29. Lee, S. U., Han, Y. K.: Structure and Stability of the Defect Fullerene Clusters of C60: C59, C58, and C57, J. Chem. Phys. 121 (2004) 3941-3492
30. Perdew, J. P., Wang, Y.: Accurate and Simple Analytic Representation of the Electron-Gas Correlation Energy, Phys. Rev. B 45 (1992) 13244-13249
31. Delley, B.: An All-Electron Numerical Method for Solving the Local Density Functional for Polyatomic Molecules, J. Chem. Phys. 92 (1990) 508-517
32. Zhou, Z., Zhao, J. J., Chen, Z. F., Gao, X. P., Yan, T. Y., Wen, B. P., Schleyer, P. v. R.: Comparative Study of Hydrogen Adsorption on Carbon and BN Nanotubes, J. Phys. Chem. B 110 (2006) 13363-13369
33. Zhao, J. J., Wen, B., Zhou, Z., Chen, Z. F., Schleyer, P. v. R.: Reduced Li Diffusion Barriers in Composite BC3 Nanotubes, Chem. Phys. Lett. 415 (2005) 323-326
34. Zhao, J. J., Buldum, A., Han, J., Lu, J. P.: Gas Molecule Adsorption in Carbon Nanotubes and Nanotube Bundles, Nanotechnology 13 (2002) 195-200

Ab Initio and DFT Investigations of the Mechanistic

Pathway of Singlet Bromocarbenes Insertion into
C-H Bonds of Methane and Ethane
M. Ramalingam1, K. Ramasami2, P. Venuvanalingam3, and J. Swaminathan4

1 Rajah Serfoji Government College, Thanjavur-613005, India
2 Nehru Memorial College, Puthanampatti-621007, India
3 Bharathidasan University, Tiruchirapalli-620024, India
4 Periyar Maniammai College of Technology for Women, Vallam-613403, India

Abstract. The mechanistic pathway of singlet bromocarbene (1CHBr and 1CBr2) insertions into the C-H bonds of methane and ethane has been analysed at the ab initio (HF and MP2) and DFT (B3LYP) levels of theory using the 6-31g(d, p) basis set. The QCISD//MP2/6-31g(d, p) level predicts higher activation barriers. NPA, Mulliken and ESP charge analyses have been carried out along the minimal reaction path by the IRC method at the B3LYP and MP2 levels for these reactions. The occurrence of the TSs in either the electrophilic or the nucleophilic phase, and the net charge flow from alkane to carbene in the TS, have been identified through NBO analyses.

Keywords: bromocarbenes; ab initio; DFT; insertions; IRC.

1 Introduction
The carbenes and halocarbenes are known as reactive intermediates with intriguing insertion, addition, and rearrangement reactions. How structural factors (bond angle, electron-withdrawal by induction and electron-donation by resonance) influence the relative stabilities of these states is still under scrutiny [1]. Synthetic organic chemistry [2], organometallic chemistry [3] and other areas, principally the Arndt-Eistert chain homologation procedure, the Reimer-Tiemann reaction (formylation of phenols), cyclopropanation of alkenes [4] and subsequent rearrangements [5], ketene and allene [6] preparation, synthesis of strained ring systems, ylide generation and subsequent rearrangements, cycloaddition reactions [7] and photoaffinity labeling [8], are some of the vital fields of wide application of the carbenes and halocarbenes. Among the different types of reactions of singlet carbenes, the highly characteristic concerted insertion reactions into Y-H bonds (Y = C, Si, O, etc.), involving a three-center cyclic transition state [9], seem to be important in synthetic organic chemistry [2]. In the halocarbenes, the halogens interact with the carbenic carbon through the oppositely operating electronic effects [mesomeric (+M) donor and inductive (-I) acceptor]. Based on this, the electrophilicity of carbenes has been reported to
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 288-295, 2007.
© Springer-Verlag Berlin Heidelberg 2007

decrease with increased bromination, resulting in a substantially higher activation barrier [10]. Interestingly, both electrophilic and nucleophilic character of carbenes have been encountered in the insertion reactions [11]. Hence the focal theme of this investigation is the characterization of these two features in terms of the quantum of charge transfer among the reactants during the course of the reaction, and the determination of the energetics, reaction enthalpies and activation barriers for the singlet bromocarbene insertion reactions into the C-H bonds of methane and ethane. If we monitor the total charge on the carbene moiety as the reaction progresses (by following the Intrinsic Reaction Coordinate, IRC [12]), we should be able to detect a turning point, signifying the end of the first, electrophilic phase and the onset of the second, nucleophilic phase. In order to properly confirm the two-phase mechanism, we carry out the charge versus reaction path probe for the insertion reactions into the C-H bonds of the said alkanes. In this study we investigate the reactions CBrX + HY with X = H, Br and Y = CH3, C2H5. The rapidity of carbene reactions has challenged experimental techniques, and hence this theoretical ab initio quantum mechanical study has been undertaken.

2 Computational Details
Geometries of the reactants, the transition states and the products have been optimized first at the HF/6-31g(d, p) level using the Gaussian 03W suite of programs [13]. The resultant HF geometries were then optimized at the MP2 and B3LYP [14-18] levels. The standard 6-31g(d, p) [19, 20] basis set has been adopted in all the calculations for better treatment of the 1,2-hydrogen shift during the insertion process. Further single point energy calculations have been done at the QCISD level on the MP2-optimized geometries of the species on the lowest energy reaction pathway [21]. All the stationary points found, except those obtained at the QCISD level, were characterized as either minima or transition states (TSs) by computing the harmonic vibrational frequencies; TSs have a Hessian index of one, while minima have a Hessian index of zero. All TSs were further characterized by animating the imaginary frequency in MOLDEN [22] and by checking with intrinsic reaction coordinate (IRC) analyses. The calculated vibrational frequencies have been used to compute thermodynamic parameters such as the enthalpies of reaction. The intrinsic reaction coordinate analyses have been done for the transition structures obtained at the MP2 level [23]. The Mulliken [24], NPA [25] and electrostatic-potential-derived charge [26] methods have been used for the atomic charge computations along the reaction path.
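The minimum/TS classification described above reduces to counting the negative eigenvalues of the (mass-weighted) Hessian at a stationary point. A minimal numerical sketch (NumPy assumed; the 2×2 matrices are toy examples, not data from this work):

```python
import numpy as np

def hessian_index(hessian: np.ndarray) -> int:
    """Count negative eigenvalues of a symmetric (mass-weighted) Hessian.

    A minimum has index 0; a transition state has index 1
    (one imaginary vibrational frequency).
    """
    eigenvalues = np.linalg.eigvalsh(hessian)
    return int(np.sum(eigenvalues < 0.0))

# Toy 2x2 Hessians (arbitrary units) for illustration only:
minimum = np.array([[2.0, 0.3], [0.3, 1.0]])   # both eigenvalues positive
saddle = np.array([[2.0, 0.0], [0.0, -0.5]])   # one negative eigenvalue

assert hessian_index(minimum) == 0   # a minimum
assert hessian_index(saddle) == 1    # a first-order saddle point (TS)
```

In practice the eigenvalues come from the quantum-chemistry package's frequency job; the point here is only the counting criterion.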

3 Results and Discussion

The C-H bonds of methane and ethane undergo insertion reactions with 1CHBr/1CBr2, forming mono-/dibromoalkanes. The reactants first form a pre-reactive complex, which proceeds through a concerted transition state to the product. The energy profile diagram for the insertion reactions of 1CHBr and 1CBr2 into methane is shown in Fig. 1, in which the energies of the complex, the transition state and the product are shown with reference to the reactants.

Fig. 1. Energy profiles (relative energy in kcal/mol vs. reaction coordinate) for 1CHBr + CH4 → CH3CH2Br (dashed) and 1CBr2 + CH4 → CH3CHBr2 at MP2/6-31g**

The optimized geometries of the TSs located on the reaction pathway for the 1CHBr and 1CBr2 insertion reactions are presented in Fig. 2.

Fig. 2. Geometrical parameters (distances in Å) and barriers of the transition states for 1CHBr and 1CBr2 insertion into the C-H bonds of methane and ethane at the B3LYP and MP2 levels (MP2 values in parentheses)

3.1 Singlet Bromocarbenes Insertion into Methane and Ethane

The B3LYP, MP2 and QCISD results alone have been taken for discussion in this investigation, since HF overestimates the activation barriers [27, 28]. The B3LYP/6-31g(d, p) activation energies for the insertions of 1CHBr and 1CBr2 into the C-H bond of methane are 4.28 (TS-1) and 20.42 (TS-2) kcal/mol respectively. The MP2 value for 1CHBr
insertion is ca. 1 kcal/mol higher and that for 1CBr2 insertion is ca. 3 kcal/mol lower than the corresponding B3LYP values. Replacement of hydrogen by bromine in 1CHBr decreases its electrophilicity [29] and deactivates, to a certain extent, the electrophilic attack by the carbene moiety in the first phase of insertion. The barrier heights therefore increase dramatically for methane, from 4.28 to 20.42 kcal/mol at B3LYP and from 5.36 to 17.36 kcal/mol at MP2. The barriers computed at the QCISD/6-31g(d, p)//MP2/6-31g(d, p) level are 9.68 kcal/mol and 23.93 kcal/mol for 1CHBr and 1CBr2 insertion into methane. The TSs are first-order saddle points, as determined by numerical vibrational frequency analysis.
In the case of ethane, the barrier heights for 1CHBr insertion are 1.47 and 3.27 kcal/mol at the B3LYP and MP2 levels respectively (TS-3). These values are raised to 15.49 and 12.41 kcal/mol (TS-4) for the 1CBr2 insertion. The relevant geometrical parameters of the transition states for the 1CHBr and 1CBr2 insertions into methane and ethane are shown in Fig. 2 and Tables 1 and 2. The TS for 1CBr2 insertion into methane (TS-2) comes much later along the reaction coordinate than that for 1CHBr insertion (TS-1), as reflected in the C2-H3 bond distances of 1.430 (1.345) Å and 1.274 (1.202) Å and the charges on H3 of 0.279 (0.278) and 0.255 (0.216) respectively. A similar trend has been observed for the singlet bromocarbene insertions into ethane.
Table 1. Geometrical parameters (distances in Å), barriers and heats of reaction (ΔHr) in kcal/mol at the TSs of 1CHBr with alkanes at B3LYP (MP2)/6-31g(d, p)

qct: quantum of charge transfer from alkane to carbene at the TSs

Table 2. Geometrical parameters (distances in Å), barriers and heats of reaction (ΔHr) in kcal/mol at the TSs of 1CBr2 with alkanes at B3LYP (MP2)/6-31g(d, p)

qct: quantum of charge transfer from alkane to carbene at the TSs



3.2 Energetics
In general the activation barrier depends upon the polarity of the C-H bond of the alkane and the type of bromocarbene (1CHBr or 1CBr2) to be inserted. This statement draws support from the fact that the pair of electrons on the carbene carbon involved in bonding with the C-H bond of the alkane is increasingly stabilized with the degree of bromination, which inhibits bond formation owing to the reduced availability of the electron pair on the carbene carbon. The NBO [30] analysis quantifies this in terms of the energies of the electron pairs on 1CHBr and 1CBr2, which are -0.4057 (-0.5595) and -0.4535 (-0.6019) au respectively at the B3LYP (MP2) level with the 6-31g(d, p) basis set. The enthalpies of the insertion reactions of 1CHBr and 1CBr2 into methane are -87.41 (-96.40) and -68.95 (-80.85) kcal/mol, and those for ethane are -89.31 (-99.31) and -71.27 (-84.86) kcal/mol, at the B3LYP (MP2) levels respectively. The reaction enthalpies (Tables 1 and 2) show the exothermicity of the insertion reactions, indicating that the transition states analyzed resemble the reactants rather than the products [31]. The proximity of the transition states to the reactants decreases with the degree of bromination of the methylene. Irrespective of the level of theory (B3LYP or MP2), the insertions of 1CHBr form the transition states earlier than those of 1CBr2, as revealed by the exothermicity.
3.3 Transition State Geometries
A scrutiny of the bond-breaking and bond-forming steps, corresponding to C2-H3 and C6-H3 respectively, during the insertion process reveals that it is a concerted reaction. In the TS, the C6-H3 bond forms earlier than the C2-C6 bond, as judged from the bond distances (Tables 1 and 2). The C6-H3, C2-H3 and C2-C6 bond distances in the TSs of the 1CBr2 insertion reactions confirm the late transition state in comparison with the corresponding values in the 1CHBr insertion reactions. In order to improve the accuracy of the energetic parameters, single point computations at the QCISD level have also been performed, and the values are listed in Tables 1 and 2. The barrier heights predicted at the QCISD level are higher than the MP2 values for both methane and ethane.
3.4 NBO Analyses
NBO analyses of the charge distribution in the transition states give some insight into the insertion reactivity. For all the transition states, second-order perturbative analyses were carried out for all possible interactions between filled Lewis-type NBOs and empty non-Lewis NBOs. These analyses show that the interaction between the C2-H3 bond of the alkane and the empty p orbital of the carbenic carbon (σCH → pC) and the charge transfer from the lone pair of the carbenic carbon to the antibonding orbital of C2-H1 (nC → σ*CH) give the strongest stabilization. Finally, we observed a net charge flow from the alkane moiety to the inserting carbene moiety. The quantum of charge transfer from alkane to carbene, supporting the donor-acceptor interaction in the transition states, has been collected in Tables 1 and 2 for all the insertion reactions at both the B3LYP and MP2 levels. The

inverse relationship between the quantum of charge transfer and the activation barriers reveals that, for favorable insertion, the nucleophilicity of the alkane should be enhanced either sterically or electronically. This correlation holds good for the reactions analysed in this investigation.
3.5 IRC - Charge Analyses
The total charge on the carbene moiety along the IRC for the insertion reactions of methane and ethane, as calculated by the Mulliken [24], NPA [25] and ESP [26] methods, is shown in Fig. 3. We show the density functional (B3LYP) plot of the charge on the carbene moiety in addition to the MP2 plot, which serves as our ab initio standard.

Fig. 3. NPA, Mulliken and ESP charge analyses: charge on the carbene moiety (a.u.) along the IRC. Markers indicate the transition states and the turning points; the electrophilic-phase region lies to the right of the turning point and the nucleophilic-phase region to its left.

We discuss first the insertion reactions with methane, 1CBrX (X = H, Br) + CH4. The charge/IRC curves of these reactions are shown in Fig. 3. These two reactions provide clear evidence for the two-phase mechanism in that there is a distinct turning point (minimum) in all the charge/IRC curves for the two Hamiltonians (MP2 and B3LYP), regardless of the model used to compute the atomic charges. For the 1CHBr insertion (Fig. 3a), the charge minimum occurs after the transition state (TS), whereas with 1CBr2 (Fig. 3b) the minimum occurs just before the TS. Thus for the 1CHBr insertion the TS lies within the first, i.e., electrophilic, phase, whereas for 1CBr2 the TS is reached at the starting point of the nucleophilic phase. This indicates that the TS for insertion of 1CHBr into the C-H bond of methane occurs much earlier along the reaction coordinate than does the TS for the corresponding 1CBr2 insertion. This indication is fully supported both by the TS geometries, for example the C-H bond undergoing the insertion is much shorter in the 1CHBr TS (1.202 Å) than in the TS for 1CBr2 insertion (1.345 Å) (Fig. 2), and by the heat of reaction and barrier height
(Tables 1 and 2), which are more negative and much smaller, respectively, for 1CHBr (-96.40 and 5.36 kcal/mol) than for 1CBr2 (-80.85 and 17.36 kcal/mol). This is in agreement with the Hammond postulate [32]. From the viewpoint of reactivity, it may be said that the vacant p orbital on 1CHBr is more available than that on 1CBr2, thus facilitating the initial electrophilic phase of the reaction. In other words, reactivity increases in the order 1CBr2 < 1CHBr. The overall shape and depth of the curves agree between the MP2 and B3LYP plots; however, the turning points (minima) in the B3LYP plots are less pronounced. The NPA and ESP curves are identical in shape at the MP2 level.
In the case of ethane, 1CBrX (X = H, Br) + C2H6, the positions of the turning points and the charge/IRC curves for these insertions at the MP2 level are shown in Fig. 3. Unlike for the 1CHBr insertion into methane, TS-3 occurs at the turning point (Fig. 3c), which is in neither the electrophilic nor the nucleophilic phase. For 1CBr2 insertion into ethane at MP2 (Fig. 3d), however, the TS is observed at the starting point of the nucleophilic phase, consistent with the later TS formation in comparison with the TS for insertion of 1CHBr (Fig. 3c). In general, the nucleophilic phase dominates for 1CBr2 insertions, whereas the electrophilic phase dominates for 1CHBr insertions.
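Operationally, the turning-point analysis amounts to locating the minimum of the carbene charge along the IRC and comparing its position with that of the TS (at arc length s = 0). A minimal sketch with a hypothetical charge curve (the numbers are illustrative, not values from this work):

```python
import numpy as np

# Hypothetical IRC arc lengths (s < 0: reactant side) and carbene charges (a.u.)
s = np.linspace(-3.0, 3.0, 13)
charge = 0.02 * (s - 0.8) ** 2 - 0.05   # toy curve with its minimum near s = 0.8

# Turning point: end of the electrophilic phase, where the charge is minimal
turn = s[np.argmin(charge)]

# The TS sits at s = 0; a turning point after the TS (turn > 0) means the TS
# lies within the electrophilic phase, as found here for 1CHBr + CH4.
print(turn > 0)
```

On the coarse grid above the minimum is picked up at the grid point nearest s = 0.8; with real IRC data one would simply apply the same argmin to the computed charges.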

4 Summary
The singlet bromocarbene insertions into the C-H bonds of methane and ethane have been analyzed, and the influence of bromine on the transition states, energetics, geometrical parameters, etc., has been investigated at both the B3LYP and MP2 levels of theory using the 6-31g(d, p) basis set. For the bromocarbenes, the B3LYP, MP2 and QCISD levels of theory predict activation barriers of different heights, varying both with the extent of bromination and with the type of alkane. The NBO analyses have been done with a view to analyzing the charge transfer processes during the insertion reactions. The charge/IRC plots provide clear evidence for the two-phase mechanism, namely an electrophilic phase and a nucleophilic phase, for the insertions of both 1CHBr and 1CBr2 into the C-H bonds of methane and ethane. The B3LYP functional used in this work gives the same picture of the investigated insertion reactions as the more traditional MP2 method, for both geometries and heats of reaction.


References
1. Irikura, K. K., Goddard, W. A., Beauchamp, J. L.: J. Am. Chem. Soc. 114 (1992) 48
2. Kirmse, W.: Carbene Chemistry. 2nd Edn. Academic Press, New York (1971)
3. Fischer, E. O., Maasbol, A.: Angew. Chem., Int. Ed. Engl. 3 (1964) 580
4. Salaun, J.: Chem. Rev. 89 (1989) 1247
5. Brookhart, M., Studabaker, W. B.: Chem. Rev. 87 (1987) 411
6. Walbrick, J. M., Wilson, J. W., Jones, W. M.: J. Am. Chem. Soc. 90 (1968) 2895
7. Padwa, A., Hornbuckle, S. F.: Chem. Rev. 91 (1991) 263
8. Baldwin, J. E., Jesudason, C. D., Moloney, M., Morgan, D. R., Pratt, A. J.: Tetrahedron 47 (1991) 5603
9. Doering, W. von E., Prinzbach, H.: Tetrahedron 6 (1959) 24
10. Russo, N., Sicilia, E., Toscano, M.: J. Chem. Phys. 97 (1992) 5031
11. Dobson, R. C., Hayes, D. M., Hoffmann, R.: J. Am. Chem. Soc. 93 (1971) 6188
12. Fukui, K.: J. Phys. Chem. 74 (1970) 4161
13. Gaussian 03, Revision C.02, Gaussian, Inc., Wallingford CT (2004)
14. Lee, C., Yang, W., Parr, R. G.: Phys. Rev. B 37 (1988) 785
15. Becke, A. D.: Phys. Rev. A 38 (1988) 3098
16. Miehlich, B., Savin, A., Stoll, H., Preuss, H.: Chem. Phys. Lett. 157 (1989) 200
17. Becke, A. D.: J. Chem. Phys. 98 (1993) 5648
18. Becke, A. D.: J. Chem. Phys. 104 (1996) 1040
19. Francl, M. M., Pietro, W. J., Hehre, W. J., Binkley, J. S., Gordon, M. S., DeFrees, D. J., Pople, J. A.: J. Chem. Phys. 77 (1982) 3654
20. Hariharan, P. C., Pople, J. A.: Chem. Phys. Lett. 16 (1972) 217
21. Pople, J. A., Head-Gordon, M., Raghavachari, K.: J. Chem. Phys. 87 (1987) 5968
22. Schaftenaar, G., Noordik, J. H.: J. Comput.-Aided Mol. Design 14 (2000) 123
23. Gonzalez, C., Schlegel, H. B.: J. Chem. Phys. 94 (1990) 5523
24. Mulliken, R. S.: J. Chem. Phys. 23 (1955) 1833
25. Reed, A. E., Carpenter, J. E., Weinhold, F., Curtiss, L. A.: Chem. Rev. 88 (1988) 899
26. Breneman, C. M., Wiberg, K. B.: J. Comput. Chem. 11 (1990) 361
27. Ramalingam, M., Ramasami, K., Venuvanalingam, P., Sethuraman, V.: J. Mol. Struct. (Theochem) 755 (2005) 169
28. Bach, R. D., Andres, J. L., Su, M. D., McDouall, J. J. W.: J. Am. Chem. Soc. 115 (1993)
29. Gilles, M. K., Lineberger, W. C., Ervin, K. M.: J. Am. Chem. Soc. 115 (1993) 1031
30. Glendening, E. D., Reed, A. E., Carpenter, J. E., Weinhold, F., Curtiss, L.: Chem. Rev. 88 (1988) 899; NBO Version 3.1
31. Carpenter, J. E.: Ph.D. Thesis, University of Wisconsin, Madison, WI (1987)
32. Hammond, G. S.: J. Am. Chem. Soc. 77 (1955) 334

Theoretical Gas Phase Study of the Gauche and Trans Conformers of 1-Bromo-2-Chloroethane and Solvent Effects

Ponnadurai Ramasami

Faculty of Science, Department of Chemistry, University of Mauritius, Réduit, Republic of Mauritius

Abstract. This is a systematic gas phase study of the gauche and trans conformers of 1-bromo-2-chloroethane. The methods used are second-order Møller-Plesset perturbation theory (MP2) and density functional theory (DFT). The basis set used is 6-311++G(d,p) for all atoms. The functional used for the DFT method is B3LYP. G2/MP2 and CCSD(T) calculations have also been carried out using the MP2-optimised structure. The results indicate a clear preference for the trans conformer. The energy difference between the trans and gauche conformers (ΔEtg) and the related rotational thermodynamics are reported. The MP2/6-311++G(d,p) energy difference (ΔEtg) for 1-bromo-2-chloroethane is 7.08 kJ/mol. The conformers of 1-bromo-2-chloroethane have also been subjected to vibrational analysis. The study has also been extended to investigate solvent effects using Self-Consistent Reaction Field methods. The structures of the conformers are not much affected by the solvents, but the energy difference (ΔEtg) decreases with increasing polarity of the solvent. The results from the different theoretical methods are in good agreement.

1 Introduction
Over 100 years ago, Bischoff found that rotation about the C-C single bond in ethane is not completely free [1]. Due to this hindered internal rotation, 1,2-disubstituted ethanes are the simplest molecules showing conformational isomerism, leading to the gauche and trans conformers. It is generally found that the trans conformer is more stable than the gauche form, owing to steric hindrance in the gauche conformation [2]. Theoretical calculations of the energy difference between the trans and gauche conformers (ΔEtg) have been actively pursued for over 40 years, as this is an important parameter in the conformational analysis of molecules [3].
In previous communications, energy differences (ΔEtg) have been calculated for 1,2-dihaloethanes (XCH2CH2X, X = F, Cl, Br and I) [4] and for 1-fluoro-2-haloethanes (FCH2CH2X, X = Cl, Br and I) [5] in the gas phase. These studies indicate that, except for 1,2-difluoroethane, the trans conformer is more stable than the gauche conformer. The energy difference (ΔEtg) increases with the size of the halogen. The atypical
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 296-303, 2007.
© Springer-Verlag Berlin Heidelberg 2007


behaviour of 1,2-difluoroethane has been associated with the gauche effect [6-10], but the latter is not observed for the 1-fluoro-2-haloethanes. Solvent effects have also been explored for 1,2-dichloroethane and 1,2-dibromoethane [11]. That study indicates that an increase in solvent polarity decreases the energy difference (ΔEtg). It is worth pointing out at this stage that the literature on theoretical studies of solvent effects is limited [11, 12], although the polarity of the solvent is known to affect conformational equilibria [13].
As part of a continuing series of studies on internal rotation [4-5, 11], 1-bromo-2-chloroethane is the target of this work. 1-Bromo-2-chloroethane, being a 1,2-disubstituted ethane, can exist as the gauche (C1 symmetry) and trans (Cs symmetry) conformers, as illustrated in Figure 1.





Fig. 1. Gauche (C1 symmetry) and trans (Cs symmetry) conformers of 1-bromo-2-chloroethane

These gauche and trans conformers of 1-bromo-2-chloroethane have been studied with a view to obtaining (i) the optimised structural parameters, (ii) the energy difference (ΔEtg) and (iii) the related thermodynamic properties for torsional rotation. Apart from the energy calculations, the conformers of 1-bromo-2-chloroethane have also been subjected to vibrational analysis. Solvent effects, using Self-Consistent Reaction Field (SCRF) methods [14], have also been explored, with the dielectric constant of the solvent varying from 5 to 40. The results of the present investigation are reported herein; to the best of our knowledge, such a study has not been reported previously.

2 Calculations
All the calculations have been carried out using the Gaussian 03W [15] program suite, and GaussView 3.0 [16] has been used for visualising the molecules. The calculations have been carried out using second-order Møller-Plesset perturbation theory (MP2) and density functional theory (DFT). The basis set used is 6-311++G(d,p) for all atoms. The functional used for the DFT method is B3LYP. A conformer has first been optimised, and the optimised structure has then been used for a frequency calculation with the same method and basis set used for the optimisation. G2/MP2 and CCSD(T) calculations have also been carried out using the MP2/6-311++G(d,p) optimised structure. For all the converged structures, frequency calculations have also been carried out in order to ensure that the conformation corresponds to a minimum. The SCRF methods used are the Isodensity Model (SCRF=IPCM) [14] and the Self-Consistent Isodensity Model (SCRF=SCIPCM) [14]. MP2/6-311++G(d,p) gas phase optimised structures have been used for the single point calculations with the Isodensity Model, and B3LYP/6-311++G(d,p) full geometry optimisation calculations have been carried out for the Self-Consistent Isodensity Model.

3 Results and Discussion

The optimised structural parameters of interest for the two conformers of 1-bromo-2-chloroethane are summarised in Table 1. Analysis of Table 1 allows some conclusions to be made. Firstly, for both conformers the predicted C-Cl and C-Br bond lengths are longer from the B3LYP calculation, although the C-C and C-H bond lengths are nearly the same. Secondly, the bond angles CCCl and CCBr are larger for the gauche conformer than for the trans conformer; this can be explained in terms of the greater steric repulsion between the halogen atoms in the gauche conformation. Further, these bond angles are larger from the B3LYP calculation, although they are nearly the same for the trans conformer with both methods. Thirdly, the torsional angle ClCCBr for the gauche conformer is larger from the B3LYP calculation. Lastly, the moments of inertia are generally greater from the MP2 calculation.
Table 1. Optimised structural parameters for the gauche and trans conformers of 1-bromo-2-chloroethane using 6-311++G(d,p) as the basis set: r(C-Cl)/Å, r(C-C)/Å, r(C-Br)/Å, r(C-H)/Å, bond angles, torsional angle τ(ClCCBr)/°, moments of inertia and dipole moment/D



The energies of the gauche and trans conformers of 1-bromo-2-chloroethane are given in Table 2. These energies have been obtained after full geometry optimisation, verified by frequency calculations. G2/MP2 and CCSD(T) energies are also given in Table 2. As part of the G2/MP2 and CCSD(T) calculations, MP2/6-311+G(3df,2p) and MP3/6-311++G(d,p) energies are included in Table 2. The energy difference (ΔEtg) and the related rotational thermodynamic parameters are also summarised in Table 2. A glance at Table 2 clearly shows that the trans conformer is more stable. The energy difference (ΔEtg) predicted using the B3LYP method is greater than that from the MP2 method for the same basis set. The free energy difference (ΔGtg) can be used to estimate the relative percentages of the trans and gauche conformers; it is found that at 298 K the percentage of the trans conformer is generally greater than 90%. At this stage, it is interesting to compare the energy difference (ΔEtg) of the unsymmetrical 1-bromo-2-chloroethane with those of the symmetrical 1,2-dichloroethane and 1,2-dibromoethane: the MP2/6-311++G(d,p) values for these compounds are 6.08 and 8.79 kJ/mol respectively [4].
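The conformer percentages follow from Boltzmann statistics applied to the free energy difference. A sketch of the estimate, using the ΔEtg value of 7.08 kJ/mol as a stand-in for ΔGtg and assuming a twofold-degenerate gauche form (both are simplifying assumptions, not values taken from Table 2):

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol K)
T = 298.0            # temperature, K
dG_tg = 7.08         # kJ/mol; MP2 dE_tg used here as a stand-in for dG_tg

# Two equivalent gauche minima vs. one trans minimum
K = 2.0 * math.exp(-dG_tg / (R * T))   # gauche/trans population ratio
p_trans = 1.0 / (1.0 + K)

print(f"trans fraction at 298 K: {p_trans:.1%}")   # ~90%
```

With these assumptions the estimate lands close to the ~90% trans population quoted above; the exact figure depends on which ΔGtg from Table 2 is used.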
Table 2. Calculated energies (0 K and 298 K, including MP2/6-311+G(3df,2p), G2/MP2 and CCSD(T) values) and rotational thermodynamic parameters for the conformers of 1-bromo-2-chloroethane

The gauche and trans conformers of 1-bromo-2-chloroethane have also been
subjected to vibrational analysis. The calculated frequencies are reported in Table 3
and the simulated spectra are illustrated in Figure 2. The 18 modes of vibration
span the irreducible representations Γvib = 18A of the C1 point group for the
gauche conformer and Γvib = 11A′ + 7A″ of the Cs point group for the trans conformer.
All 18 fundamentals of the gauche and trans conformers have been assigned
appropriately. The values indicate that predictions at the MP2 level of theory are
systematically larger than those at the B3LYP level. Since steric interaction between
the atoms is greater in the gauche conformer than in the trans, the CCCl and CCBr bending
modes have higher frequencies in the gauche conformation than in the trans conformation. The bending vibrational modes of the CH2 group are in the order scissoring >
wagging > twisting > rocking. The bending modes of the CH2 group attached to the bromine atom are at lower frequencies than those of the CH2 group attached to the chlorine atom.
This can be explained on the basis of the larger reduced mass of the CH2 group when attached
to the bromine atom. However, the stretching vibrational modes of the CH2 groups bonded
to the halogen atoms are reversed in terms of frequency. The calculated frequencies for 1-bromo-2-chloroethane are in agreement with literature values obtained
experimentally [17].
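The 18A and 11A′ + 7A″ decompositions can be checked by counting degrees of freedom. A short bookkeeping sketch, assuming the trans heavy-atom skeleton (2 C, Cl, Br) defines the Cs mirror plane with the four H atoms in two symmetric pairs:

```python
# 1-bromo-2-chloroethane: C2H4BrCl, N = 8 atoms, non-linear molecule
N = 8
n_vib = 3 * N - 6            # 18 vibrational modes; all 18 are A in C1 (gauche)

# Trans conformer, Cs point group: 4 atoms lie in the mirror plane
# (2 C, Cl, Br); the 4 H atoms form 2 pairs reflected through it.
in_plane_atoms, h_pairs = 4, 2
a1 = 2 * in_plane_atoms + 3 * h_pairs - 3   # A' vibrations (removing Tx, Ty, Rz)
a2 = 1 * in_plane_atoms + 3 * h_pairs - 3   # A'' vibrations (removing Tz, Rx, Ry)
print(n_vib, a1, a2)  # -> 18 11 7
```

Each in-plane atom contributes two A′ coordinates and one A″, each symmetric H pair contributes three of each, and removing the three translations and three rotations (2A′ + 1A″ and 1A′ + 2A″, respectively) leaves 11A′ + 7A″.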


P. Ramasami

Table 3. Calculated frequencies (cm-1) of the conformers of 1-bromo-2-chloroethane and their assignments
[Frequency values and infrared intensities lost in extraction. For each conformer the assigned modes are: CH2 antisymmetric stretch (×2), CH2 symmetric stretch (×2), CH2 scissoring (×2), CH2 wagging (×2), CH2 twisting (×2), CC stretch, CH2 rocking (×2), CCl stretch, CBr stretch, CCCl deformation and CCBr deformation, alongside the literature assignments.]
- Values in brackets are infrared intensities (km/mol)
- For the trans conformer, the first 11 frequencies are of A′ symmetry and the last 7 are of A″ symmetry
- For the gauche conformer, all 18 frequencies are of A symmetry

This study has also been extended to solvent effects. The structures of the
conformers are not much affected by the polarity of the solvents. The
effects of solvent on the energies of the gauche and trans conformers and on the energy difference (ΔEtg) are summarised in Table 4 and illustrated in Figure 3. It can be seen that
solvent effects are small but can be calculated, and that an increase in the polarity of
the solvent decreases the energy difference (ΔEtg): in the polar solvents, the
decrease in the energy of the more polar gauche conformer is larger than that of the trans conformer.

Fig. 2. Simulated spectra (arbitrary units) of the gauche and trans conformers of 1-bromo-2-chloroethane

Fig. 3. Energy difference (ΔEtg, kJ/mol) for 1-bromo-2-chloroethane in solvents with different dielectric constants


Table 4. Energy and energy difference (ΔEtg) for the conformers of 1-bromo-2-chloroethane in
solvents with different dielectric constants (ε)
[Table values lost in extraction.]

4 Conclusions
This theoretical study has led to the determination of the optimised structural
parameters, the energy difference (ΔEtg) and the related thermodynamic parameters for
1-bromo-2-chloroethane. The results indicate a preference for the trans
conformer in both the gaseous and solution phases. The calculated frequencies of the
conformers are in agreement with literature values. The energy difference (ΔEtg) decreases as the solvent becomes more polar. The results of this study may be used as
reference data for the conformers of 1-bromo-2-chloroethane.

Acknowledgements. The author is grateful to the anonymous reviewers for their comments, which helped improve the manuscript. The author acknowledges the facilities of the University of Mauritius.

References

1. Orville-Thomas W.J.: Internal Rotation in Molecules. Wiley, New York (1974)
2. Dixon D.A., Matsuzawa N., Walker S.C.: Conformational Analysis of 1,2-Dihaloethanes:
A Comparison of Theoretical Methods. J. Phys. Chem. 96 (1992) 10740-10746
3. Radom L., Baker J., Gill P.M.W., Nobes R.H., Riggs N.V.: A Theoretical Approach to
Molecular Conformational Analysis. J. Mol. Struct. 126 (1985) 271-290
4. Ramasami P.: Gauche and Trans Conformers of 1,2-Dihaloethanes: A Study by Ab Initio
and Density Functional Theory Methods. Lecture Series on Computer and Computational
Sciences. Vol. 1, Brill Academic Publishers, The Netherlands (2005) 732-734
5. Ramasami P.: Gas Phase Study of the Gauche and Trans Conformers of 1-Fluoro-2-haloethanes CH2F-CH2X (X=Cl, Br, I) by Ab Initio and Density Functional Methods: Absence of Gauche Effect. Lecture Notes in Computer Science. Vol. 3993, Springer (2006)



6. Tavasli M., O'Hagan D., Pearson C., Petty M.C.: The Fluorine Gauche Effect. Langmuir
Isotherms Report the Relative Conformational Stability of (±)-Erythro- and (±)-Threo-9,10-Difluorostearic Acids. Chem. Commun. 7 (2002) 1226-1227
7. Briggs C.R., Allen M.J., O'Hagan D., Tozer D.J., Slawin A.M., Goeta A.E., Howard J.A.:
The Observation of a Large Gauche Preference when 2-Fluoroethylamine and 2-Fluoroethanol Become Protonated. Org. Biomol. Chem. 2 (2004) 732-740
8. Banks J.W., Batsanov A.S., Howard J.A.K., O'Hagan D., Rzepa H.S., Martin-Santamaria
S.: The Preferred Conformation of α-Fluoroamides. J. Chem. Soc., Perkin Trans. 2. 8
(1999) 2409-2411
9. Wiberg K.B., Murcko M. A., Laidig E.K., MacDougall P. J.: Origin of the Gauche Effect in Substituted Ethanes and Ethenes. The Gauche Effect. J. Phys. Chem. 96 (1992)
6956-6959 and references therein
10. Harris W.C., Holtzclaw J.R., Kalasinsky V.F.: Vibrational Spectra and Structure of 1,2-Difluoroethane: Gauche-Trans Conformers. J. Chem. Phys. 67 (1977) 3330-3338
11. Sreeruttun R.K., Ramasami P.: Conformational Behaviour of 1,2-Dichloroethane and 1,2-Dibromoethane: 1H-NMR, IR, Refractive Index and Theoretical Studies. Physics and
Chemistry of Liquids. 44 (2006) 315-328
12. Wiberg K.B., Keith T.A., Frisch M.J., Murcko M.: Solvent Effects on 1,2-Dihaloethane
Gauche/Trans Ratios. J. Phys. Chem. 99 (1995) 9072-9079
13. McClain B.L., Ben-Amotz D.: Global Quantitation of Solvent Effects on the Isomerization
Thermodynamics of 1,2-Dichloroethane and trans-1,2-Dichlorocyclohexane. J. Phys.
Chem. B 106 (2002) 7882-7888
14. Foresman J.B., Keith T.A., Wiberg K.B., Snoonian J., Frisch M.J.: J. Phys. Chem. 100,
(1996) 16098-16104 and references therein
15. Gaussian 03, Revision C.02, Frisch M.J., Trucks G.W., Schlegel H.B., Scuseria G.E.,
Robb M.A., Cheeseman J.R., Montgomery J.A., Jr., Vreven T., Kudin K.N., Burant J.C.,
Millam J.M., Iyengar S.S., Tomasi J., Barone V., Mennucci B., Cossi M., Scalmani G.,
Rega N., Petersson G.A., Nakatsuji H., Hada M., Ehara M., Toyota K., Fukuda R., Hasegawa J., Ishida M., Nakajima T., Honda Y., Kitao O., Nakai H., Klene M., Li X., Knox
J.E., Hratchian H.P., Cross J.B., Bakken V., Adamo C., Jaramillo J., Gomperts R., Stratmann R.E., Yazyev O., Austin A.J., Cammi R., Pomelli C., Ochterski J.W., Ayala P.Y.,
Morokuma K., Voth G.A., Salvador P., Dannenberg J.J., Zakrzewski V.G., Dapprich S.,
Daniels A.D., Strain M.C., Farkas O., Malick D.K., Rabuck A.D., Raghavachari K.,
Foresman J.B., Ortiz J.V., Cui Q., Baboul A.G., Clifford S., Cioslowski J., Stefanov B.B.,
Liu G., Liashenko A., Piskorz P., Komaromi I., Martin R.L., Fox D.J., Keith T., Al-Laham
M.A., Peng C.Y., Nanayakkara A., Challacombe M., Gill P.M.W., Johnson B., Chen W.,
Wong M.W., Gonzalez C., and Pople J.A., Gaussian, Inc., Wallingford CT, 2004.
16. GaussView, Version 3.09, R. Dennington II, T. Keith, J. Millam, K. Eppinnett, W. L.
Hovell, R. Gilliland, Semichem, Inc., Shawnee Mission, KS, 2003.
17. Shimanouchi T.: Tables of Molecular Vibrational Frequencies Consolidated Volume I.
National Bureau of Standards (1972) 1-160

Interchain Interaction Effects on Polaron Dynamics Simulation of Conducting Polymer

Jose Rildo de Oliveira Queiroz and Geraldo Magela e Silva
Institute of Physics
University of Brasília, 70.917-970,
Brasília, Distrito Federal, Brazil

Abstract. Effects of interchain interaction on the polaron-bipolaron
transition in conjugated polymers are investigated. We use the Su-Schrieffer-Heeger model combined with the Pariser-Parr-Pople model, modified to include interchain interaction and an external electric field. We
study the dynamics within the Time-Dependent Unrestricted Hartree-Fock approximation. We find that removing an electron from interacting
conducting polymer chains bearing a single positively charged polaron
leads to the direct transition of the polaron to a bipolaron state. The transition produced is a single-polaron to bipolaron transition whose
excitation spectrum explains the experimental data. We also find that,
depending on how fast the electron is removed, a structure that contains
a bipolaron coupled to a breather is created.

Keywords: Polaron, Dynamics, Interchain-Interaction, Transition.


1 Introduction

Properties of organic light-emitting diodes, transistors and lasers are due to
conjugated polymers.[1,2] Their semiconductor properties are related to the nonlinear electronic response of the coupled electron-lattice system.[3] These non-degenerate ground state π-electron materials are able to form, by the electron-lattice interaction, self-localized electron states called polarons and bipolarons.
Bipolarons and polarons are thought to play the leading role in determining
the charge injection, optical and transport properties of conducting polymers.[4]
Bipolarons and polarons are self-localized particle-like defects associated with
characteristic distortions of the polymer backbone and with quantum states
deep in the energy gap due to strong electron-lattice coupling. A polaron has
spin 1/2 and an electric charge ±e, whereas a bipolaron is spinless with a
charge ±2e.
A critical problem in the understanding of these materials is the consistent
description of the dynamics of the mechanisms of creation, stability and transition of
polarons to bipolarons.
Y. Shi et al. (Eds.): ICCS 2007, Part II, LNCS 4488, pp. 304-311, 2007.
© Springer-Verlag Berlin Heidelberg 2007



UV-Vis-NIR spectroscopy studies on poly(p-phenylene vinylene), combined
with the follow-up of the kinetics of doping with iodine vapor, were reported and
interpreted as direct observations of the formation of polaronic charge carriers.[1]
However, by following different doping levels with I2 doping, bipolaron formation
is identified as well, showing that polarons and bipolarons coexist in the oxidized
polymer. These results corroborate the findings of Steinmüller et al,[5] where the
evolution of the gap states of bithiophene, as a model system for polythiophene, at
different n-doping levels was followed by ultraviolet photoemission spectroscopy
(UPS) and electron-energy-loss spectroscopy (EELS).
The polaron-bipolaron transition problem was explicitly addressed by Cik et
al in poly(3-dodecyl thiophene) in connection with temperature changes.[6] They
found that when the sample was heated and subsequently cooled, there was an
amplification of the diamagnetic inter- and intra-chain bipolarons. The study of
polypyrrole by Kaufman et al[7] using optical-absorption spectroscopy and ESR also pointed out
that the metastable states possess spin, while the stable states do not.
Many efforts have been devoted to describing the polaron-bipolaron conundrum
theoretically. Electronic structure calculations,[8] extensions of the Su-Schrieffer-Heeger model,[9,10] the Pariser-Parr-Pople model,[11] as well as combinations of
them[12] have been used to determine the relative prevalence of each excited state
in various regimes. Several different approaches[9,12,13,14] point to the bipolaron
system being more stable than the polaron system when dopants are taken into
account.
Two mechanisms have been put forward to explain the transition from polaron to bipolaron states: recombination of polarons into a bipolaron,[6,7,15] where
the bipolaron is generated when polarons with the same electric charge meet
each other; and the single-polaron to bipolaron transition,[1,13,16] where the polaron structure is transformed by the addition of one extra charge.
Here, we report the results of dynamical calculations on the polaron-bipolaron
transition mechanism with interacting chains. We use the Su-Schrieffer-Heeger
model[17] modified to include the Coulomb interaction via an extended Hubbard
model, Brazovskii-Kirova (BK) symmetry-breaking terms, the action of an external electric field, and interchain interactions.[12] The time-dependent equations
of motion for the lattice sites and the π-electrons are numerically integrated
within the Time-Dependent Hartree-Fock approximation.
Stafström et al have used a similar approach to treat polaron migration between chains (ref. [20]). Nevertheless, they did not consider the electron Coulomb
interaction and symmetry-breaking terms. Furthermore, open-end boundary conditions were used.
In agreement with UV-Vis-NIR spectroscopy[1] and UPS and EELS measurements,[5] our theoretical studies of the transition indicate that the single-polaron
to bipolaron transition is the preferred mechanism of polaron-bipolaron transition in conjugated polymers.
We find that a breather mode of oscillation is created in the lattice, around the bipolaron, in connection with the transition. The breather amplitude is associated with how fast the extra charge is added to the system. Moreover, the created bipolaron is trapped by the breather.

J.R. de Oliveira Queiroz and G.M. e Silva


2 Model

A SSH-Extended Hubbard type Hamiltonian, modified to include an external
electric field and interchain interaction, is considered. The Hamiltonian is given by

H = H₁ + H₂ + H_int,

H_j = −Σ_{i,s} (t_{j,i,i+1} C†_{j,i+1,s} C_{j,i,s} + H.c.) + U Σ_i n_{j,i,↑} n_{j,i,↓} + V Σ_i (n_{j,i} − 1)(n_{j,i+1} − 1) + (K/2) Σ_i y²_{j,i} + (M/2) Σ_i u̇²_{j,i},   j = 1, 2,

H_int = −t⊥ Σ_{i=p}^{q} Σ_s (C†_{1,i,s} C_{2,i,s} + C†_{2,i,s} C_{1,i,s}) + V_p Σ_s (C†_{1,m,s} C_{1,m,s} + C†_{1,m+1,s} C_{1,m+1,s}).

C†_{i,s} (C_{i,s}) is the creation (annihilation) operator of a π-electron with spin s at
the i-th lattice site, n_{i,s} ≡ C†_{i,s} C_{i,s} is the number operator, and n_i = Σ_s n_{i,s}.
y_n ≡ u_{n+1} − u_n, where u_n is the displacement of the n-th CH group from its equilibrium
position in the undimerized phase. The hopping term is t_{j,n,n+1} = exp(−iγA)[(1 + (−1)ⁿ δ₀) t₀ − α y_{j,n}], where
t₀ is the transfer integral between nearest-neighbor sites in the undimerized
chains, t⊥ is the hopping integral between sites with the same index on different
chains, acting from site p to site q, α is the electron-phonon coupling, and δ₀ is the BK
symmetry-breaking parameter. M is the mass of a CH group, K is the spring
constant of a σ-bond, and U and V are the on-site and nearest-neighbor Coulomb repulsion strengths, respectively. γ ≡ ea/(ħc), where e is the absolute value of the electronic
charge, a the lattice constant, and c the light velocity. The relation between the
time-dependent vector potential A and the uniform electric field E is given by
E = −(1/c)Ȧ. We use as parameters the commonly accepted values for conjugated
polymers: t₀ = 2.5 eV, t⊥ = 0.075 eV, K = 21 eV/Å², α = 4.1 eV/Å, U = 0 to
1.8 t₀, V = U/2, δ₀ = 0.05, V_p = 0.2 eV, a = 1.22 Å, and a bare optical phonon
energy ħω_Q = ħ√(4K/M) = 0.16 eV.[19]
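A numerically concrete way to see the role of the electron-phonon coupling α in the hopping term is to diagonalize the bare single-chain tight-binding part (U = V = 0, A = 0) for a uniformly dimerized ring: an alternating bond variable y_n opens a Peierls gap of 4αy₀. This sketch uses the t₀ and α quoted above, but the dimerization amplitude y₀ is an arbitrary illustrative value, not a self-consistent one:

```python
import numpy as np

t0, alpha = 2.5, 4.1        # eV, eV/Angstrom (values quoted in the text)
N, y0 = 60, 0.05            # ring of 60 sites, illustrative dimerization (Angstrom)

# Alternating hopping t_n = t0 - alpha * y_n with y_n = (-1)^n * y0
t = np.array([t0 - alpha * (-1) ** n * y0 for n in range(N)])

H = np.zeros((N, N))
for n in range(N):
    H[n, (n + 1) % N] = -t[n]   # periodic boundary conditions
    H[(n + 1) % N, n] = -t[n]

E = np.linalg.eigvalsh(H)
gap = E[N // 2] - E[N // 2 - 1]     # gap between valence and conduction bands
print(round(gap, 3), round(4 * alpha * y0, 3))
```

Polarons and bipolarons correspond to local deviations of y_n from this uniform pattern, which pull levels from the band edges into the gap, as in Fig. 1.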



The dynamics of the lattice part is carried out with the Euler-Lagrange equations,
and the Schrödinger equation of motion for the π-electrons is solved within the unrestricted time-dependent Hartree-Fock approximation. It should be pointed out
that both equations depend explicitly on the occupation numbers of the one-particle electronic states.[12]
In order to perform the dynamics, an initial self-consistent state is prepared
by solving the equations of motion for the lattice and the π-electrons simultaneously.[20]
Periodic boundary conditions are considered. The initial state is taken in equilibrium (E = 0); therefore, we have u̇_n = 0 for all n in the initial state.
The equations of motion are solved by discretizing the time variable with a
step Δt. The time step Δt is chosen so that the changes of u_i(t) and A(t) during
this interval are always very small on the electronic scale.[12]
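The scheme just described can be sketched generically: at each discrete step the occupied one-electron states are propagated by the exponential of the instantaneous electronic Hamiltonian while the lattice is advanced by a classical step. The toy version below (single chain, no Coulomb or field terms, spring force only, schematic units) illustrates only the bookkeeping, not the paper's actual coupled equations:

```python
import numpy as np

def step(u, v, psi, dt, t0=2.5, alpha=4.1, K=21.0, M=1.0):
    """One generic step: propagate the electronic states with exp(-i H dt)
    for the frozen lattice, then update the lattice classically.
    The force law is schematic (electronic feedback force omitted)."""
    N = len(u)
    y = np.roll(u, -1) - u                     # bond variables y_n = u_{n+1} - u_n
    t = t0 - alpha * y                         # SSH hopping
    H = np.zeros((N, N))
    for n in range(N):
        H[n, (n + 1) % N] = H[(n + 1) % N, n] = -t[n]
    w, U = np.linalg.eigh(H)
    # Unitary propagation of the occupied states psi (columns)
    psi = U @ (np.exp(-1j * w * dt)[:, None] * (U.conj().T @ psi))
    # Schematic lattice force: sigma-bond spring term only
    f = -K * (2 * u - np.roll(u, 1) - np.roll(u, -1))
    v = v + dt * f / M
    u = u + dt * v
    return u, v, psi
```

Because the electronic part is advanced with a unitary operator, the orthonormality of the occupied states is preserved at every step, which is what makes a very small Δt on the electronic scale sufficient for stability.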

3 Simulation Results

One more hole is injected into polymer chains already bearing positively charged
polarons. Since charged excitation defects can be created by quite different
means (photoexcitation, chemical doping, or direct charge injection via an electronic
device), we performed simulations where the extra electron is taken from the
system during different time intervals (T). We varied T from 0 to 100 fs. The
shorter time intervals simulate photoexcitation and direct charge injection.
The longer time intervals account for the different impurity-addition procedures
associated with chemical doping. The electron is taken from the highest occupied
level using the following expression:

OF(t) = (1/2)[1 + cos(π(t − t_i)/T)]

for t between t_i and t_i + T. Here, t_i is the time when the hole injection begins
and OF(t) is the occupation number of the Fermi level.
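A sketch of this occupation ramp, written as a half-cosine window that takes the Fermi-level occupation smoothly from 1 to 0 over the removal interval, consistent with the boundary conditions stated in the text (t_i and T are free parameters):

```python
import math

def fermi_occupation(t, t_i, T):
    """Occupation O_F(t) of the highest occupied level: 1 before the
    removal starts, a half-cosine ramp on [t_i, t_i + T], 0 afterwards."""
    if t <= t_i:
        return 1.0
    if t >= t_i + T:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * (t - t_i) / T))

# e.g. removal beginning at t_i = 50 fs over T = 80 fs (an adiabatic case)
print(fermi_occupation(50, 50, 80),
      fermi_occupation(90, 50, 80),
      fermi_occupation(130, 50, 80))
```

The window and its first derivative are continuous at both endpoints, which is what allows T to interpolate between sudden (photoexcitation-like) and adiabatic (doping-like) removal.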
We have considered two interacting polymer chains with N = 60 sites each,
initially containing two positively charged polarons, in all simulations. We use a
mean charge density ρ̄_i(t), derived from the charge density ρ_i(t), and the order
parameter y_i(t) [y_i(t) = u_{i+1}(t) − u_i(t)] to analyze the simulations.[19] The
dynamics of the system is followed during 100,000 time steps spanning 400 fs.
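The two diagnostics just defined are easy to state concretely. Below, the order parameter is the bond-length alternation y_i = u_{i+1} − u_i; the exact smoothing kernel behind the mean charge density is not specified in this excerpt, so the three-point (1, 2, 1)/4 weights are an assumption. The time step implied by the text (400 fs over 100,000 steps) is also computed:

```python
import numpy as np

def order_parameter(u):
    """Bond-length order parameter y_i = u_{i+1} - u_i (periodic chain)."""
    return np.roll(u, -1) - u

def mean_charge_density(rho):
    """Assumed (1, 2, 1)/4 smoothing of the site charge density rho_i;
    the weights sum to 1, so total charge is conserved."""
    return (np.roll(rho, 1) + 2.0 * rho + np.roll(rho, -1)) / 4.0

dt_fs = 400.0 / 100_000      # time step implied by the text, in fs
print(dt_fs)
```

The smoothing removes the site-to-site dimerization oscillation from ρ_i(t), so that polaron and bipolaron charge profiles show up as single smooth humps.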
A smooth transition of one of the polarons to a bipolaron, in its respective
chain, is obtained after the adiabatic removal (T > 80 fs) of the third electron.
Figure 1 shows the time evolution of the energy levels neighboring and inside
the energy gap. It can be seen that the energy levels associated with the polaron
move toward the middle of the gap, assuming a bipolaron conformation. The small
oscillations of the levels are due to lattice oscillations induced by the hole injection.
Figure 2 presents the bond-length order parameter of chains 1 and 2. It should be
noted that we use periodic boundary conditions; therefore, the order parameter
of chain 1 (Fig. 2(a)) represents a polaron around site 1 (it begins at site 45,
goes until site 60 and continues from site 1 to site 15). Positively charged


Fig. 1. Time evolution of energy levels (eV) inside and around the gap, as a function of time (fs), in an adiabatic transition. The spin-up levels are shown. The system changes from a polaron level configuration (t < 80 fs) to a bipolaron level configuration (t > 100 fs).

polarons repel each other; they stay as far apart from each other as possible.
The polaron-bipolaron transition occurs in chain 2. This clear transition happens in chain 2 as an apparent spontaneous symmetry breaking. Nevertheless,
the presence of an impurity on chain 2 leads to a symmetry breaking and the
association of one polaron with it. It is found that the polaron associated with
the impurity makes the transition to a bipolaron.
Effects of interchain interaction were addressed by varying the extent of the
interacting region (p and q in the Hamiltonian). For transitions where the two
chains interact only on half of their length (p = 31 and q = 60), one polaron stays
in the interacting region and the other stays in the non-interacting region, due
again to Coulomb repulsion. It is found that the polaron-bipolaron transition
happens with the polaron in the interacting region. Therefore, the interchain
interaction is also effective in promoting the transition.
Figure 3 presents a very special case where two polarons merge to create a
bipolaron. This case is quite close to the originally suggested process for the polaron-bipolaron transition.[21] Here, after the hole injection, an exciton appears, lasting for about 200 fs, and then the bipolaron takes its place in the lattice.
Nevertheless, it should be noted that this happens when one chain has a high
density of polarons and the other initially has none of them. It can be
clearly seen that two polarons in chain 1 merge into a single bipolaron and another polaron appears in chain 2 du