
Claus Lenz

Research Associate

E-Mail: lenz@in.tum.de
Room: 0255
Phone: +49.89.289.25761
Fax: +49.89.289.18107

Address:
Institut für Informatik VI
Technische Universität München
Boltzmannstraße 3
85748 Garching bei München
Germany

Homepage


Open Student Projects


Many more on request. Just write me an email!

Curriculum Vitae

1982: born in Burghausen, Germany.

2005: B.Sc.: Bachelor of Science in Electrical Engineering at Technische Universität München

2007: Dipl.-Ing.: Diploma in Electrical Engineering at Technische Universität München

2007: Research Assistant at the Robotics and Embedded Systems Lab, Technische Universität München

2011: Dr. rer. nat.: Informatics, Technische Universität München

2011: Co-founder of Cognition Factory GmbH

Research Interests

Computer Vision

Visual Tracking

Image Processing

Cognitive Robotics

CoTeSys Projects

JAHIR: Joint Action for Humans and Industrial Robots

BAJA - Basic Aspects of Joint Action

ITrackU: Image-based Tracking and Understanding

Affiliations

Member of the Tracking Group

Member of the Task Force: Cognitive Architectures

Videos
Task-based robot controller
Direct physical human-robot interaction has become a central topic in robotics research today.
To exploit the potential of humans and robots working together as a team in industrial settings,
the most important issues are safety for the human and an easy way to describe tasks for the
robot. In the next video, we present a hierarchically structured control approach for industrial
robots in joint-action scenarios. Multiple atomic tasks, including dynamic collision avoidance,
operational position, and posture, can be combined in an arbitrary order while respecting the
constraints of higher-priority tasks. The control flow is based on the theory of orthogonal
projection using nullspaces and constrained least-squares optimization.
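At its core, the prioritization described above is nullspace projection: a lower-priority task may only act in directions that do not disturb the higher-priority task. The following is a minimal Python sketch of that idea only, not the JAHIR controller itself; the task Jacobians J1 and J2 and the dimensions are stand-ins chosen for illustration.

import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Joint velocities realizing task 1 (least squares) and task 2 only in the nullspace of task 1."""
    J1_pinv = np.linalg.pinv(J1)
    dq = J1_pinv @ dx1                          # primary task, e.g. collision avoidance
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1     # orthogonal projector onto the nullspace of J1
    # Secondary task, e.g. operational position, restricted to the nullspace of the primary task.
    dq += N1 @ np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)
    return dq

# Toy example with a 7-DOF arm: random matrices stand in for the real task Jacobians.
rng = np.random.default_rng(0)
J1, J2 = rng.standard_normal((3, 7)), rng.standard_normal((3, 7))
dq = prioritized_velocities(J1, rng.standard_normal(3), J2, rng.standard_normal(3))

Further priority levels can be chained the same way, each projected into the combined nullspace of all tasks above it.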
Internal 3D representation
In this video one can see a visualization of the internal 3D representation used in the robot controller to
measure distances for the collision-avoidance task. The 3D representation includes static and dynamic
objects and is updated according to sensor data. Every sensor module can broadcast its information about
the current status of uniquely identified objects in the workspace.
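As a rough illustration of such a representation (class and message names are assumptions, not the actual implementation), a shared world model can keep the latest broadcast state of every uniquely identified object and answer minimum-distance queries for the collision-avoidance task:

import numpy as np

class WorkspaceModel:
    """Toy internal 3D representation: objects keyed by ID, updated from sensor broadcasts."""

    def __init__(self):
        self.objects = {}  # object id -> (N, 3) array of surface points

    def on_sensor_broadcast(self, object_id, points):
        # Each sensor module overwrites the current state of the objects it observes.
        self.objects[object_id] = np.asarray(points, dtype=float)

    def min_distance(self, robot_points):
        # Smallest distance between the robot geometry and any known static or dynamic object.
        robot = np.asarray(robot_points, dtype=float)
        best = np.inf
        for points in self.objects.values():
            d = np.linalg.norm(robot[:, None, :] - points[None, :, :], axis=-1)
            best = min(best, float(d.min()))
        return best

model = WorkspaceModel()
model.on_sensor_broadcast("human_hand", [[0.4, 0.1, 0.9]])
print(model.min_distance([[0.0, 0.0, 1.0]]))  # value fed to the collision-avoidance task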
Learning a new building plan and Execution of previously learned building plans
In these videos, we show how our industrial robot is instructed via speech input and how the JAHIR robot
follows and executes a collaborative plan that was previously taught in using multiple input modalities.
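One plausible, purely illustrative way to store such a taught-in plan is an ordered list of assembly steps collected during instruction and replayed during execution; the real JAHIR plan representation is not spelled out here, and all names below are made up.

from dataclasses import dataclass

@dataclass
class AssemblyStep:
    part: str       # part name, e.g. taken from the speech input
    target: tuple   # (x, y, z) place position in the workspace frame

def teach(step_stream):
    """Collect steps gathered from the different input modalities into an ordered plan."""
    return [AssemblyStep(part, target) for part, target in step_stream]

def execute(plan, place):
    for step in plan:
        place(step.part, step.target)  # hand each step to the robot controller

plan = teach([("base_plate", (0.50, 0.00, 0.02)), ("red_brick", (0.50, 0.00, 0.05))])
execute(plan, lambda part, target: print(f"place {part} at {target}"))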
Kinect enabled robot workspace surveillance
A human co-worker is tracked using a Microsoft Kinect in a human-robot collaborative scenario.

Interactive human-robot collaboration with Microsoft Kinect


In this video, a human is interacting with the robotic assistive system JAHIR to jointly perform assembly
tasks. The human is tracked using a Microsoft Kinect. Virtual buttons in the menu-guided interaction allow a
dynamic and adaptive way of controlling and interacting with the robotic system. The overlay video in the
upper right shows a close-up of the working desk with the on-table projection for the buttons, the virtual
environment representation, and the menu.
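The virtual-button interaction reduces to a simple hit test on the tracked hand: if the hand is close enough to the table plane and inside a projected button area, that button fires. The sketch below shows this idea only; the button layout, names, and thresholds are assumed values, not those of the actual system.

import numpy as np

BUTTONS = {  # button name -> (center on the table plane in metres, radius in metres)
    "confirm": (np.array([0.30, 0.10]), 0.05),
    "next":    (np.array([0.30, 0.25]), 0.05),
}

def pressed_button(hand_xyz, table_height=0.75, touch_tolerance=0.03):
    """Return the button under the Kinect-tracked hand, or None if no button is pressed."""
    hand = np.asarray(hand_xyz, dtype=float)
    if abs(hand[2] - table_height) > touch_tolerance:
        return None  # hand is hovering above the table, not pressing
    for name, (center, radius) in BUTTONS.items():
        if np.linalg.norm(hand[:2] - center) <= radius:
            return name
    return None

print(pressed_button([0.31, 0.11, 0.76]))  # -> "confirm"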
Fusing multiple Kinects to survey shared Human-Robot-Workspaces
In today's industrial applications, humans and robots are often strictly separated in space or time. Without
surveillance of the joint workspace, a robot is unaware of unforeseen changes in its environment and cannot
react properly. Since robots are heavy and bulky machines, collisions may have severe consequences for the
human worker. The work area must therefore be monitored to recognize unknown obstacles in it. With this
knowledge of its surroundings, the robot can be controlled to avoid collisions with any detected object,
enabling direct collaboration between humans and industrial robots. To this end, the environment is perceived
using multiple distributed range sensors (Microsoft Kinect). The sensor data sets are pre-processed
decentrally and broadcast via the network. They are then processed by additional components that segment
and cluster unknown objects and publish the gained information, allowing the system to react to unexpected
events in the environment.
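A heavily simplified, single-process sketch of this pipeline (the real system runs decentrally over the network) could fuse the registered clouds into one voxel grid, remove voxels explained by the known static scene, and cluster what remains into unknown objects. Voxel size, frames, and data below are assumed for illustration only.

import numpy as np
from scipy import ndimage

VOXEL = 0.05  # 5 cm voxel edge length

def voxelize(points):
    """Set of occupied voxel indices for a point cloud given in the common world frame."""
    return set(map(tuple, np.floor(np.asarray(points, dtype=float) / VOXEL).astype(int)))

def unknown_object_clusters(clouds_in_world, known_static_points):
    occupied = set().union(*(voxelize(c) for c in clouds_in_world))
    unknown = occupied - voxelize(known_static_points)   # drop table, robot, fixtures, ...
    if not unknown:
        return 0
    idx = np.array(sorted(unknown))
    offset = idx.min(axis=0)
    grid = np.zeros(idx.max(axis=0) - offset + 1, dtype=bool)
    grid[tuple((idx - offset).T)] = True
    _, n_clusters = ndimage.label(grid)                  # connected components = object candidates
    return n_clusters

# Two already-registered Kinect clouds and a known table surface as a toy example.
cloud_a = np.array([[0.50, 0.50, 0.80], [0.52, 0.50, 0.80]])
cloud_b = np.array([[1.20, 0.40, 0.90]])
table   = np.array([[0.50, 0.50, 0.75]])
print(unknown_object_clusters([cloud_a, cloud_b], table))  # -> 2 unknown objects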

Publications
[1] Claus Lenz and Alois Knoll. Mechanisms and capabilities for human robot collaboration. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, pages 666-671, Edinburgh, Scotland, UK, 2014. [ DOI | .bib | .pdf ]

[2] Markus Huber, Aleksandra Kupferberg, Claus Lenz, Alois Knoll, Thomas Brandt, and Stefan Glasauer. Spatiotemporal movement planning and rapid adaptation for manual interaction. PLoS ONE, 8(5):e64982, 2013. [ DOI | .bib | .pdf ]

[3] Markus Huber, Claus Lenz, Cornelia Wendt, Berthold Färber, Alois Knoll, and Stefan Glasauer. Predictive mechanisms increase efficiency in robot-supported assemblies: An experimental evaluation. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, 2013. [ .bib | .pdf ]

[4] Jürgen Blume, Alexander Bannat, Gerhard Rigoll, Martijn Niels Rooker, Alfred Angerer, and Claus Lenz. Programming concept for an industrial HRI packaging cell. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, 2013. [ .bib | .pdf ]

[5] Aleksandra Kupferberg, Markus Huber, Bartosz Helfer, Claus Lenz, Alois Knoll, and Stefan Glasauer. Moving just like you: Motor interference depends on similar motility of agent and observer. PLoS ONE, 7(6):e39637, 2012. [ DOI | .bib | .pdf ]

[6] Claus Lenz, Markus Grimm, Thorsten Röder, and Alois Knoll. Fusing multiple Kinects to survey shared human-robot-workspaces. Technical Report TUM-I1214, Technische Universität München, Munich, Germany, 2012. [ .bib | .pdf ]

[7] Claus Lenz, Thorsten Röder, Markus Rickert, and Alois Knoll. Distance-weighted Kalman fusion for precise docking problems. In Proceedings of the International Conference on Mobile Robots and Competitions, Lisbon, Portugal, 2011. [ .bib | .pdf ]

[8] Claus Lenz, Alice Sotzek, Thorsten Röder, Helmuth Radrich, Alois Knoll, Markus Huber, and Stefan Glasauer. Human workflow analysis using 3D occupancy grid hand tracking in a human-robot collaboration scenario. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, USA, 2011. [ DOI | .bib | .pdf ]

[9] Claus Lenz. Context-aware human-robot collaboration as a basis for future cognitive factories. Dissertation, Technische Universität München, München, 2011. [ .bib | .pdf ]

[10] Alexander Bannat, Thibault Bautze, Michael Beetz, Jürgen Blume, Klaus Diepold, Christoph Ertelt, Florian Geiger, Thomas Gmeiner, Tobias Gyger, Alois Knoll, Christian Lau, Claus Lenz, Martin Ostgathe, Gunther Reinhart, Wolfgang Rösel, Thomas Rühr, Anna Schuboe, Kristina Shea, Ingo Stork genannt Wersborg, Sonja Stork, William Tekouo, Frank Wallhoff, Mathey Wiesbeck, and Michael F. Zäh. Artificial cognition in production systems. IEEE Transactions on Automation Science and Engineering, PP(99):1-27, 2010. [ .bib | .pdf ]

[11] Manuel Giuliani, Claus Lenz, Thomas Müller, Markus Rickert, and Alois Knoll. Design principles for safety in human-robot interaction. International Journal of Social Robotics, 2(3):253-274, 2010. [ DOI | .bib | .pdf ]

[12] Claus Lenz, Thorsten Röder, Martin Eggers, Sikandar Amin, Thomas Kisler, Bernd Radig, Giorgio Panin, and Alois Knoll. A distributed many-camera system for multi-person tracking. In Proceedings of the 1st International Joint Conference on Ambient Intelligence, Malaga, Spain, 2010. [ DOI | .bib | .pdf ]

[13] Christoph Staub, Claus Lenz, Giorgio Panin, Alois Knoll, and Robert Bauernschmitt. Contour-based surgical instrument tracking supported by kinematic prediction. In Proceedings of the IEEE/RAS International Conference on Biomedical Robotics and Biomechatronics, pages 746-752, Tokyo, Japan, 2010. [ DOI | .bib | .pdf ]

[14] Frank Wallhoff, Jürgen Blume, Alexander Bannat, Wolfgang Rösel, Claus Lenz, and Alois Knoll. A skill-based approach towards hybrid assembly. Advanced Engineering Informatics, 24(3):329-339, 2010. The Cognitive Factory. [ DOI | .bib | .pdf ]

[15] Martin Wojtczyk, Giorgio Panin, Thorsten Röder, Claus Lenz, Suraj Nair, Rüdiger Heidemann, Chetan Goudar, and Alois Knoll. Teaching and implementing autonomous robotic lab walkthroughs in a biotech laboratory through model-based visual tracking. In IS&T / SPIE Electronic Imaging, Intelligent Robots and Computer Vision XXVII: Algorithms and Techniques: Autonomous Robotic Systems and Applications, San Jose, CA, USA, 2010. [ DOI | .bib | .pdf ]

[16] Claus Lenz, Giorgio Panin, Thorsten Röder, Martin Wojtczyk, and Alois Knoll. Hardware-assisted multiple object tracking for human-robot-interaction. In François Michaud, Matthias Scheutz, Pamela Hinds, and Brian Scassellati, editors, HRI '09: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, pages 283-284, La Jolla, CA, USA, 2009. ACM. [ DOI | .bib | .pdf ]

[17] Claus Lenz, Markus Rickert, Giorgio Panin, and Alois Knoll. Constraint task-based control in industrial settings. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3058-3063, St. Louis, MO, USA, 2009. [ DOI | .bib | .pdf ]

[18] Martin Wojtczyk, Giorgio Panin, Claus Lenz, Thorsten Röder, Suraj Nair, Erwin Roth, Rüdiger Heidemann, Klaus Joeris, Chun Zhang, Mark Burnett, Tom Monica, and Alois Knoll. A vision based human robot interface for robotic walkthroughs in a biotech laboratory. In François Michaud, Matthias Scheutz, Pamela Hinds, and Brian Scassellati, editors, HRI '09: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, pages 309-310, La Jolla, CA, USA, 2009. ACM. [ DOI | .bib | .pdf ]

[19] Markus Huber, Claus Lenz, Markus Rickert, Alois Knoll, Thomas Brandt, and Stefan Glasauer. Human preferences in industrial human-robot interactions. In Proceedings of the International Workshop on Cognition for Technical Systems, Munich, Germany, 2008. [ .bib | .pdf ]

[20] Claus Lenz, Suraj Nair, Markus Rickert, Alois Knoll, Wolfgang Rösel, Jürgen Gast, and Frank Wallhoff. Joint-action for humans and industrial robots for assembly tasks. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, pages 130-135, Munich, Germany, 2008. [ .bib | .pdf ]

[21] Claus Lenz, Giorgio Panin, and Alois Knoll. A GPU-accelerated particle filter with pixel-level likelihood. In International Workshop on Vision, Modeling and Visualization (VMV), Konstanz, Germany, 2008. [ .bib | .pdf ]

[22] Thomas Müller, Claus Lenz, Simon Barner, and Alois Knoll. Accelerating integral histograms using an adaptive approach. In Proceedings of the 3rd International Conference on Image and Signal Processing, Lecture Notes in Computer Science (LNCS), pages 209-217, Cherbourg-Octeville, France, 2008. Springer. [ DOI | .bib | .pdf ]

[23] Giorgio Panin, Claus Lenz, Suraj Nair, Erwin Roth, Martin Wojtczyk, Thomas Friedlhuber, and Alois Knoll. A unifying software architecture for model-based visual tracking. In IS&T/SPIE 20th Annual Symposium of Electronic Imaging, San Jose, CA, 2008. [ .bib | .pdf ]

[24] Giorgio Panin, Erwin Roth, Thorsten Röder, Suraj Nair, Claus Lenz, Martin Wojtczyk, Thomas Friedlhuber, and Alois Knoll. ITrackU: An integrated framework for image-based tracking and understanding. In Proceedings of the International Workshop on Cognition for Technical Systems, Munich, Germany, 2008. [ .bib | .pdf ]

[25] Gunther Reinhart, Wolfgang Vogel, Wolfgang Rösel, Frank Wallhoff, and Claus Lenz. JAHIR - Joint action for humans and industrial robots. In Fachforum: Intelligente Sensorik - Robotik und Automation. Bayern Innovativ - Gesellschaft für Innovation und Wissenstransfer mbH, 2007. [ DOI | .bib | .pdf ]

Soft Biophysics group


Martin Lenz
LPTMS of CNRS and Université Paris-Sud

Our group uses tools from theoretical Soft Matter Physics to investigate the mechanics of living
systems, from elementary cellular components to whole organisms. In particular, we seek to
characterize the ways in which these active, self-driven systems differ from usual materials.
These concrete examples allow us to understand how the cell takes advantage of its internal
activity to adapt to its environment.

Martin Lenz
LPTMS, Univ. Paris-Sud - Building 100, office 232
15 rue Georges Clémenceau
91405 Orsay CEDEX, France
Phone: (+33)1 69 15 32 62
E-mail: martin.lenz@u-psud.fr
Click here for directions

Ian Lenz
PhD Student

Department of Computer Science


Cornell University
Advisors: Ashutosh Saxena, Ross Knepper
Quick links: Publications, Videos

Email:
ianlenz at cs dot cornell dot edu


About
I'm a final year PhD student, graduating May 2016. I work with the Personal Robotics
Lab and the Robotic Personal Assistants Lab here at Cornell. Previously, I also led
our Miniature Aerial Vehicles project.

Interests
I'm interested in machine learning applied to robotic tasks. In my PhD work, I applied deep
learning algorithms to robotic manipulation problems, including grasping and cutting
variable food materials.
In the future, I'm interested in finding new applications for these algorithms and adapting
them to the new challenges presented. I'm also interested in incorporating new modalities,
such as vision, into real-time control, investigating hierarchical approaches integrating
model-based and model-free control, and incorporating active learning methods to improve
data collection.

Education
Cornell University
2010-2016
PhD student in Computer Science
Advised by Ashutosh Saxena and Ross Knepper
Thesis: Deep Learning for Robotics - I investigate the unique strengths of deep learning
algorithms for robotic applications and address the challenges in adapting these algorithms
to robotics problems through several successful robotic systems.

Carnegie Mellon University


2006-2010
B.S. in Electrical and Computer Engineering (focus in control systems), Minor in Robotics
Research experience:
Center for the Neural Basis of Cognition fellowship with Tai-Sing Lee
CMU Google Lunar X-Prize Team with Red Whittaker

Publications

Deep Multimodal Embedding: Manipulating Novel Objects with Point-clouds, Language, and Trajectories.
Jaeyong Sung, Ian Lenz, Ashutosh Saxena. Cornell Tech Report 2015
[PDF, more] bibtex

DeepMPC: Learning Deep Latent Features for Model Predictive Control.
Ian Lenz, Ross Knepper, Ashutosh Saxena. Robotics: Science and Systems (RSS) 2015
[PDF, extended version] bibtex
Given long oral at RSS (1 of 8), invited talk at AAAI 2016

Deep Learning for Detecting Robotic Grasps.
Ian Lenz, Honglak Lee, Ashutosh Saxena. International Journal of Robotics Research (IJRR) Special Issue on Robot Vision 2015
[PDF, more] bibtex
(Previous version presented at RSS 2013)

Hierarchical Semantic Labeling for Task-Relevant RGB-D Perception.
Chenxia Wu, Ian Lenz, Ashutosh Saxena. In: Robotics: Science and Systems (RSS) 2014
[PDF] bibtex

Low-Power Parallel Algorithms for Single Image based Obstacle Avoidance in Aerial Robots.
Ian Lenz, Mevlana Gemici, Ashutosh Saxena. In: International Conference on Intelligent Robots and Systems (IROS) 2012
[PDF] bibtex

Videos
