
Interactive Floor

Design Project Report
Interaction Design, IT-University / Chalmers University of Technology

Mohammad Ardavan, Eelke Boezeman, Amir Chamsaz, Keyvan Minoukadeh, Alejandro Valenzuela

1 Introduction
1.1 Background
1.2 Design Goals
2 Architecture
2.1 Hardware
2.2 Software
2.2.1 Tracking module
2.2.2 Interface module
3 Operation
3.1 User types
3.2 Scenarios
3.2.1 Prisoner's Dilemma
3.2.2 Personzilla
3.2.3 Meditation game
3.2.4 Pong game
3.3 Installation
3.4 Licensing
4 Realisation
5 Evaluation
5.1 Costs and Free Software
5.2 Sensitivity to Light
5.3 Speed
5.4 Foot as Pointing Device
5.5 Shadows
5.6 Sound, ambient sound and music
6 Related Work
6.1 Living Surface by Vertigo Systems
6.2 The Famous Grouse Experience by ART+COM
6.3 Audience by Chris O'Shea
7 References
8 Appendices
8.1 Project plan
8.2 Design document

1 Introduction
The Interactive Floor is a video projection displayed on the floor that people can interact with. As soon as someone walks into the projected area he becomes part of the digital world that is projected around him. The Interactive Floor offers low-barrier interaction: people do not need any prior knowledge or skills to interact; the only requirement is that they enter and move around in the projection area. As soon as someone enters the area he is no longer a bystander, but a user. The system keeps track of where the user is and uses this information to change the interface that it projects around him. The user sees the interface projected around him and can move around to explore. He learns quickly that by moving around, he changes the interface; his actions change the state of the device. In this way the Interactive Floor offers a direct feedback loop much like that of a computer: the projected area is the monitor and the user is the mouse. Yet our aim is not to provide another input method for ordinary computer programs but to present users with an innovative interactive experience, approaching what is known as "enhanced reality".

1.1 Background
The Interactive Floor is a Master's project developed by five students from the Interaction Design program at Chalmers University of Technology, Gothenburg, Sweden. The project is part of the Design Project course, in which students are required to spend three months developing a hardware and/or software project. The theme of the 2009 Design Project course was Interacting Interactive Personalities. All student projects were presented during the Expo 09 exhibition at Lindholmen Science Park, Gothenburg, Sweden.

1.2 Design Goals


The aim of the project is twofold. First, the goal is to develop a device that projects a display on the floor while capturing user input with a webcam. The device should function under different lighting conditions and environments, with different kinds of users, and should be able to capture multiple users simultaneously. The device is required to work on basic hardware, as the project budget is limited. The second goal of the project is to research what kinds of scenarios and games are suitable for this kind of interface. By developing different applications that each have a different perspective, task and type of user input, the goal is to get a rough idea of what works - and what doesn't. In light of the theme of the Design Project course, the applications developed should contain agents with personalities that can interact with each other.

2 Architecture
The architecture comprises everything that was used to project the interface and to capture and process user movements. It involves both the hardware and the software that runs on it.

2.1 Hardware
Due to budget constraints it was necessary to use standard, basic hardware components. The setup consisted of several components that were all integrated into a single working device:

Projector: The system uses a projector to display images on the floor; therefore a projector that projects as brightly and as widely as possible was required. Most commonly available projectors will work as long as light conditions can be controlled, i.e. kept to a minimum by blocking external light. For this project the Sharp Notevision PG-C30XE projector was used, because of its wide-angle lens (33 mm) and high brightness (1700 lm).

Webcam: A fully UVC-compliant webcam is required; for adequate responsiveness the webcam should deliver 15 to 30 frames per second in low-light conditions. The webcam must be modified by removing the infrared-blocking filter and adding an infrared-only-pass filter; this ensures that the system does not pick up the projected images as users. The recommended webcam model is the Creative Live!Cam IM Ultra. There are various instructions on the web for modifying webcams for IR use.

Infrared light beam: Under normal lighting conditions humans do not emit enough infrared light to be detected properly. A self-made infrared LED circuit, containing 64 OSRAM SFH 285-2 infrared LEDs, was used to produce the infrared light beam.

Circuit diagram of the infra-red LED array.

Mount structure and mirror: Mounting the projector directly above the displayed area often requires an extensive and expensive hardware setup, which must obviously be safe as well as functional. Projecting from a high altitude is therefore much easier if the projector is placed on a balcony or a high plateau; a mirror is then used to reflect the projection downwards onto the floor.

In-house testing of the mounting structure in the Design Lab at the IT-University. The mirror reflects the image downwards. The mounting structure also holds the webcam (left side) and the infra-red light beamer (right side).

The support structure for the mirror was also used to mount the webcam and the infrared beam. Both are ideally pointed towards the center of the displayed area, so the mirror mount was the natural place for them. It also allowed for a relatively compact hardware setup, which is helpful during transportation.

Computer(s): The software was made in such a way that it could run on either one computer or on two (one for image tracking and one for interface projection). In the two-computer setup a TCP/IP connection is necessary so the computers can communicate.

The minimum system requirements are:
- 2 GHz processor
- 1 GB RAM
- 1 USB port for the webcam
- Well-supported video card
- VGA port for the projector

The recommended system requirements are:
- Dual-core 2 GHz processor
- 2 GB RAM
- 1 USB port for the webcam
- Graphics accelerator video card
- VGA port for the projector

GNU/Linux or MacOSX hardware support is required.

2.2 Software
The development of the software framework was divided into two modules: tracking and interface. The tracking module takes care of the input into the system, which consists of user positions. The interface module is responsible for processing this user position data and changing the display accordingly.

The software framework was divided into two parts for flexibility as well as development reasons:

Parallel development: Many different third-party software frameworks and libraries needed to be tested to find the one that best suited our requirements. This work could be done in parallel with the development of the interface module, without the two interfering with each other.

Different requirements: While user tracking has efficiency as its most important requirement, the user interface had expressiveness and ease of programming as its primary goals. The obvious programming language for user tracking was therefore C++. Because of the two-module division, the interface module could be implemented in any other programming language; because of its simplicity and object orientation, all interfaces were developed in Processing, a framework that sits on top of Java, while the tracking module was written in C++.

Different team members: Each team member had a different set of skills and experience with programming languages and frameworks. The division allowed team members to work in a programming language that suited their needs and experience.

Dividing the software into two modules to allow for parallel development created the need for a protocol that defined the communication between them: what would be communicated, and in what format. Both modules were obligated to follow the predefined protocol. This created the advantage of seamless switching between modules. It also created the possibility of simulating tracking data and using it as input for the interface module. A more monolithic approach to parallel development was considered as well, but the advantages of working on a single code base did not outweigh the downsides of development with multiple team members, each with different needs, skills and knowledge. After trying out many different software platforms, OpenFrameworks was chosen because its implementation of the Optic Flow algorithm was found to be the best fit for our requirements.
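Since both modules speak the same protocol, the tracking module can be replaced by a simulator when testing interfaces. Below is a minimal sketch of such a simulator in Java; the pipe-separated field layout follows the protocol described in the appended design document, but the class name, port number and movement pattern are our own assumptions:

```java
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Locale;

// Hypothetical stand-in for the tracking module: emits protocol-style
// user events so an interface module can be tested without a webcam.
public class TrackingSimulator {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000);   // port is an assumption
             Socket client = server.accept();
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println("0|1|0.50|0.50|0.00|0.00");          // user 1 joined at centre
            for (int i = 0; i < 50; i++) {
                double x = 0.5 + 0.3 * Math.sin(i / 10.0);   // walk in a small arc
                double y = 0.5 + 0.3 * Math.cos(i / 10.0);
                out.println(String.format(Locale.US, "1|1|%.2f|%.2f|0.00|0.00", x, y));
                Thread.sleep(66);                            // ~15 events/s, like the webcam
            }
            out.println("2|1|0.00|0.00|0.00|0.00");          // user 1 exited
        }
    }
}
```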

2.2.1 Tracking module


The Optic Flow algorithm was used to keep track of objects that change their position and shape gradually from one frame to the next. Since the webcam sensed a much wider area than the projected image, the tracking module also scaled users' positions and ignored those outside the projected image's area. During testing we found that on-the-fly adjustments would be needed to compensate as much as possible for changing light conditions; the tracking module therefore also allowed adjusting the minimum and maximum blob sizes for different user groups, as well as image filtering to eliminate mild noise. To compensate for the webcam's perspective, an option to offset users' positions was also added to the tracking system.
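As a rough illustration of the scaling and filtering just described, the sketch below maps a blob centroid from camera pixels into the projected image's normalised coordinate space and drops blobs that are out of bounds or outside the configured size range. The calibration rectangle, thresholds and names are invented for illustration; the real module lived in the C++ tracking code:

```java
// Sketch of the scaling/filtering logic; the calibration rectangle
// (projected area as seen by the camera) is a made-up example.
public class BlobFilter {
    // Projected area within the camera image, in pixels (assumed values).
    static final int AREA_X = 80, AREA_Y = 60, AREA_W = 480, AREA_H = 360;
    static final int MIN_BLOB = 200, MAX_BLOB = 20000; // adjustable per user group

    /** Returns {x, y} in [0,1] relative to the projection, or null if ignored. */
    static double[] toProjectionCoords(int cx, int cy, int blobArea) {
        if (blobArea < MIN_BLOB || blobArea > MAX_BLOB) return null; // noise or too big
        if (cx < AREA_X || cx >= AREA_X + AREA_W ||
            cy < AREA_Y || cy >= AREA_Y + AREA_H) return null;       // outside projection
        return new double[] {
            (cx - AREA_X) / (double) AREA_W,   // scale to [0,1] width
            (cy - AREA_Y) / (double) AREA_H    // scale to [0,1] height
        };
    }

    public static void main(String[] args) {
        double[] p = toProjectionCoords(320, 240, 1500);
        System.out.println(p == null ? "ignored" : p[0] + ", " + p[1]);
    }
}
```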

Interface used to change tracking settings and reset the tracking background image.

Detected user positions were relayed to applications in two ways:
1. Through the standard output stream (stdout)
2. Through TCP/IP sockets

Initially we relied on the standard output stream but later switched to sockets. The main reason for the switch was to allow us to separate the tracking module and the interface module by giving each its own machine (and therefore more processing power).
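On the receiving side, a scenario only has to read the event stream line by line. A minimal Java consumer might look like this (host and port are assumptions):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

// Minimal consumer of the tracking module's socket output.
public class TrackingClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9000);  // assumed address
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println("event: " + line);  // hand off to the scenario here
            }
        }
    }
}
```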

2.2.2 Interface module


The interface module could be anything that used the communication protocol. In section 3.2 four different scenarios are described, each an implementation of the interface module. The relationship between the tracking module and the interface module is one-to-many.

Overview

The following diagram shows how the two modules integrate with each other. The process begins at the tracking module, where image data from the webcam is processed into user position data. The interface module uses this data to update the scenario and send the graphical data to the projector.

Diagram showing how the two modules integrate with each other.

Diagram showing how the two modules make use of different software frameworks.
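Stripped of rendering, the interface module's core loop is small: consume position events, update the scenario state, redraw. The following plain-Java skeleton sketches that loop; all names are invented, and a real scenario would draw with Processing instead of printing:

```java
import java.util.HashMap;
import java.util.Map;

// Skeleton of an interface module: tracks users and updates a scenario.
public class InterfaceLoop {
    final Map<Integer, double[]> users = new HashMap<>(); // id -> {x, y}

    // Called for every event line received from the tracking module.
    void onEvent(int type, int id, double x, double y) {
        switch (type) {
            case 0: users.put(id, new double[] {x, y}); break; // user joined
            case 1: users.put(id, new double[] {x, y}); break; // user moved
            case 2: users.remove(id); break;                   // user exited
        }
        updateScenario();
    }

    void updateScenario() {
        // A real scenario (Personzilla, Pong, ...) would update agents and
        // redraw the projected image here; we just report the user count.
        System.out.println(users.size() + " user(s) on the floor");
    }

    public static void main(String[] args) {
        InterfaceLoop loop = new InterfaceLoop();
        loop.onEvent(0, 1, 0.5, 0.5);
        loop.onEvent(1, 1, 0.6, 0.5);
        loop.onEvent(2, 1, 0.0, 0.0);
    }
}
```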

3 Operation
3.1 User types
A participant is someone standing on the projected field. Participants are able to interact with the visible agents, as well as among themselves, in the different schemes available (if more than one participant is on the field, the system allows them to interact simultaneously). Participants can enter and exit the projected field at any time, and we expect their interaction with it to last between 3 and 15 minutes, depending on how interested they are. We expect participants to have a very diverse age and occupation range, as the interactive floor is pitched as a public exhibition.

3.2 Scenarios
The system can be used in a number of different ways depending on the application running. The tracking module always provides the same input (position of participants, amount of movement) regardless of the application running. Each application can use this input and interpret it in whatever way is relevant. For example, an application which only responds to the amount of movement on the floor can ignore the position values of participants provided by the input module. We developed four different games (scenarios) for the interactive floor, each with its own style of interaction; they are outlined below.

3.2.1 Prisoner's Dilemma


Prisoner's Dilemma is a strategy game based on game theory, in which two players choose whether to cooperate or defect, without knowing the other player's choice. After each choice, every player receives a score depending on the combined result of both choices: if both choose to cooperate, each player gets 3 points; if one cooperates and the other defects, the defector gains 5 points while the cooperator gets nothing; if both defect, each player receives 1 point. The goal for each player is to outscore the other player while the number of game iterations is unknown.
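The payoff rules above translate directly into a small scoring function. A sketch (names are our own):

```java
// Prisoner's Dilemma payoff for one iteration, as described above.
public class Payoff {
    /** Returns {scoreA, scoreB}; true = cooperate, false = defect. */
    static int[] score(boolean aCooperates, boolean bCooperates) {
        if (aCooperates && bCooperates) return new int[] {3, 3};   // mutual cooperation
        if (!aCooperates && !bCooperates) return new int[] {1, 1}; // mutual defection
        return aCooperates ? new int[] {0, 5} : new int[] {5, 0};  // defector gets 5
    }

    public static void main(String[] args) {
        int[] s = score(true, false);
        System.out.println("cooperator: " + s[0] + ", defector: " + s[1]); // 0, 5
    }
}
```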

Still from video of the Prisoner's Dilemma. See the whole demo: http://vimeo.com/4184793

There are different strategies for playing the game, and each person has their own style of acting in different situations. Our computer-generated agents, too, incorporate these different behaviors in their personality. Agents projected on the floor interact with a user as soon as the user steps into the field. Each agent possesses a specific personality, which could for instance be aggressive, retaliating, forgiving, etc. These personalities are in fact strategies through which the agents interact. As soon as a player walks into the playing field, the system recognizes a moving object and tracks him/her as they move. When the player encounters an agent they start a game and score based on their moves. In its standard form, the number of iterations in each game is random and thus unknown, in order to reduce the chances of always-aggressive play. During the exhibition, however, three iterations were assigned to each round of the game and players were able to play as long as they stayed in the field. We realized that the game concept was hard for the audience to grasp: for those not acquainted with the Prisoner's Dilemma, the idea behind the game seemed fuzzy. Although the system was capable of tracking multiple users, we realized that the projected images would interfere with each other if multiple users played simultaneously, and thus the game was designed for a single player. One goal in the development of this game was to enable agents to interact amongst each other; this goal was not realized, due to the limitations of the projected space as well as time constraints. The implementation of the Prisoner's Dilemma game was an attempt to explore interactive personalities in the context of user tracking, and although it did not attract much user attention, it was technically a successful implementation.

3.2.2 Personzilla
Personzilla is a game where participants become godzilla-like giants in an attempt to protect their egg from evil tanks who will stop at nothing to crack it open until they've been stomped on! The aim of the game is to destroy as many tanks as possible before they can shoot the egg three times.

Left: Still from video. Children playing Personzilla and setting another high score. See the whole video here: http://vimeo.com/4738059. Right: Screenshot of the displayed area. White dots are users, green tanks are ready to attack the egg and red tanks have been destroyed.

Designed in an attempt to explore multiple-user tracking, Personzilla was the most successful scenario in attracting an audience. As multiple players walked through the field, they could interact with the projected tanks and destroy them by stepping onto them. The simplicity of the game and its active playing style meant that even the small children trying the game could easily understand it and have fun. Among the different user groups, children were the most attracted to Personzilla. The game style allowed for very flexible handling of user tracking, to minimise as much as possible the interference caused by changing light conditions and other unexpected factors (children hiding behind the installation's curtains, for instance). Personzilla also employed sound and music for better feedback and a more immersive experience: tanks explode when stepped on, and the happy soundtrack we chose on days 2 and 3 of the exhibition set the mood for enjoyment (on day 1 we had chosen a soundtrack composed of war sounds that, while excellent, was deemed not entirely appropriate for children).
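Because the tracking module only reports user positions, "stomping" on a tank reduces to a distance check between the reported position and the tank, with a tolerance radius generous enough to absorb tracking noise. A hedged sketch; the radius and names are invented:

```java
// Simplified "stomp" detection: a tank is destroyed when a tracked user
// position comes within a tolerance radius of it. All values normalised [0,1].
public class StompCheck {
    static final double STOMP_RADIUS = 0.05; // assumed tolerance for tracking noise

    static boolean stomped(double userX, double userY, double tankX, double tankY) {
        double dx = userX - tankX, dy = userY - tankY;
        return dx * dx + dy * dy <= STOMP_RADIUS * STOMP_RADIUS;
    }

    public static void main(String[] args) {
        System.out.println(stomped(0.50, 0.50, 0.52, 0.51)); // true: close enough
        System.out.println(stomped(0.10, 0.10, 0.80, 0.80)); // false
    }
}
```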

3.2.3 Meditation game


The idea behind this game was to create something a little less conventional: a game where people's natural movements and tendency to fidget could be recorded as user scores. In this game the aim is to simply sit on the floor for one minute without moving or fidgeting. The application monitors players as they sit still and records any movement. The person recording the least movement at the end of the 'meditation' session is the winner. This is one idea we had when thinking about scenarios which our interactive floor could enable but which would not make sense as a conventional computer game (using mouse, keyboard or game controllers).

One of the better players we've encountered during usability testing
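Scoring the meditation session amounts to accumulating, per user, how far each reported position moved from the previous one; the smallest total after a minute wins. A sketch of that bookkeeping (all names assumed):

```java
import java.util.HashMap;
import java.util.Map;

// Accumulates per-user movement; the smallest total after a session wins.
public class MeditationScore {
    final Map<Integer, double[]> lastPos = new HashMap<>(); // id -> {x, y}
    final Map<Integer, Double> movement = new HashMap<>();  // id -> accumulated distance

    void onMove(int id, double x, double y) {
        double[] prev = lastPos.put(id, new double[] {x, y});
        if (prev != null) {
            double d = Math.hypot(x - prev[0], y - prev[1]);
            movement.merge(id, d, Double::sum); // fidgeting adds to the score
        }
    }

    Integer winner() { // user with the least movement
        return movement.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey).orElse(null);
    }

    public static void main(String[] args) {
        MeditationScore s = new MeditationScore();
        s.onMove(1, 0.3, 0.3); s.onMove(1, 0.31, 0.3); // user 1 barely moves
        s.onMove(2, 0.7, 0.7); s.onMove(2, 0.9, 0.5);  // user 2 fidgets
        System.out.println("winner: user " + s.winner()); // user 1
    }
}
```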

3.2.4 Pong game


This scenario is an implementation of Pong's gameplay, where players slide their paddles by moving along the field sides. Players move their paddles to bounce a ball back and forth. The aim for each player is to earn points by bouncing the ball in a direction where the opponent fails to return it. Pong was developed as a two-player game to experiment with assigning user IDs to players and keeping a record of each identity. The implementation of the game faced issues due to the noise caused by lighting conditions, as well as a programming issue that dismissed players if they accidentally walked away from the sides of the field.

Still from video. Testing the Pong game. See the demo: http://vimeo.com/4184980
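Paddle control uses only one axis of the tracked position: the paddle slides along its side of the field to follow the player, clamped to the projected area. A sketch (paddle size and names assumed):

```java
// The paddle follows the player's position along the field side,
// clamped so it never leaves the projected area.
public class PaddleControl {
    static final double PADDLE_HALF = 0.1; // half the paddle length, assumed

    /** Maps the player's y position [0,1] to the paddle centre's y. */
    static double paddleY(double playerY) {
        return Math.max(PADDLE_HALF, Math.min(1.0 - PADDLE_HALF, playerY));
    }

    public static void main(String[] args) {
        System.out.println(paddleY(0.5));  // 0.5: follows the player
        System.out.println(paddleY(0.02)); // 0.1: clamped at the field edge
    }
}
```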

3.3 Installation
We had quite a few requirements for the interactive floor, and the space it occupied was very important to us. We needed to install the projector and connect the computers running the code. The projector had to be a few meters above the actual floor, so we needed a spot that allowed us to be on the floor and still reach the projector and computers. The only suitable spot we found was by the stairs leading up to the 1st floor. The projector ended up on the 1st floor (on top of a table to give it a little more height).

To keep the light out of the projected area we hung curtains by attaching one end to the sides of the stairs. On the first day of the exhibition we improved this by erecting walls around the space to keep even more light out and to prevent people moving the curtains from outside.

Our desired location within Lindholmen Science Park

We also found that by laying the white wall panels flat on the floor (which itself had a grey patterned colour) we ended up with a much clearer image from the projector. This change, however, proved problematic for our tracking module: the resulting infrared image showed people blending in with the floor much more than before, reducing the accuracy of the tracking algorithm. We considered using different materials to cover the floor, but with very little time left we decided to go back to projecting on the grey floor. Two posters were designed to put up at the entrance of the installation in order to explain the work. A brief description of the project was given in the project poster (see below), and the implemented scenarios were explained in a second poster titled Scenarios. Since following the different scenarios was essential to understanding the project, the posters served as mediators, explaining the project as well as its scenarios and gameplay to visitors.

Project poster

Scenarios poster

3.4 Licensing
Our project was largely made possible by adapting existing Free Software to our needs; both OpenFrameworks and Processing are Free Software frameworks in themselves. We therefore recognise the advantages that come with Free Software and have decided to license our own software under the terms of the GNU General Public License version 3.0 or higher, hoping our development efforts will be useful to others as well.

4 Realisation
The project started with work on several project ideas, which we published in our development blog, located at http://idp.mexinetica.com/blog/. Chief among these ideas, which were proposed during brainstorming, were four concepts:

- A "playing field" which interacts with users, with a very basic projected image for "personality", a character to interact with.
- An augmented refrigerator, with a display showing animated characters that reflect the status of different items inside it and react to the user.
- A space shooter game where the spaceship is not directly controlled by the user, but by a computer-controlled character displaying emotions and reacting to user input.
- An application which displays an animated character and tracks whether the user is paying attention; when the user fidgets, the animated character becomes angry. The essence of this idea would return in the end as the "meditation" scenario.

Recombining these ideas, the Interactive Floor was the one project concept that everybody could get involved in in the way they wanted. Creating the device seemed a challenge, but still feasible. After settling on the Interactive Floor we developed two prototypes: the first focused on how the device would look when working (see image); the second was one of the many software frameworks we tried, running a basic version of the tracking algorithm and using Miis (Nintendo Wii avatars) instead of real people to demonstrate basic tracking functionality.

Still from video. This prototype used real video footage with the interactive floor digitally rendered into the images.

Still from video of Miis walking around, which was used as input to demonstrate the tracking algorithm prototype.

After the prototype phase we got feedback from groups with experience in similar projects that we would really need an infra-red light beam to capture user input properly. Without the infra-red light there would be much more noise, or we would have to use natural light, which is much more unstable, and agents projected on the floor would be picked up erroneously as users. To test the concept and our initial tracking code, we used pre-recorded videos of moving objects (e.g. Nintendo Miis) instead of a live video feed and applied the tracking algorithm to the video; and instead of using a projector straight away, we viewed the results on the monitor. We knew that the test environment indoors was very different from the space at the exhibition: the ceiling in the design studios was a lot lower, and we weren't able to fix the projector high enough to give us the projection area we were after. We were therefore unable to test with real people inside the design studios. To get around this problem we placed the beamer and its stand on one of the tables in the design studio to project onto the floor, and we used our feet and a rod with reflective material attached to one end to mimic user movements.

"Feet chasing monster" prototype To test the scenarios most of the time the mouse was used to simulate the input - which was useful, yet we ran into a few issues which we had not realised: The mouse support was coded in such a way that it never sent a "user entered/ user left" notification. It reported position events much more frequently than the webcam did. Multiuser event generation was awkward and scenario-dependent. A multiuser event simulator was partially developed, but it was not finished because we preferred to focus our efforts into the tracking code and the scenarios. The scenarios comprising the interface were developed simultaneously to the tracking software; the first test scenario consisted of a "monster" chasing the user's location reported from the input. The second scenario to be developed was the Prisoner's dilemma, which made us realise the following facts: A player avatar is needed because the tracking is not 100% accurate. The interface for making choices had to be very explicit and able to detect when its graphics were out of bounds (outside of the projected image) The interface should also be able to convey at least some usage instructions by itself Graphics should be simple and with contrasting colours. Finally, the Personzilla and Meditation scenarios' development was started in parallel a few weeks before the exhibition, as the tracking system was considered ready, mainly to fully explore the interaction possibilities but also to debug the tracking system. The main design factor behind Personzilla was multiuser and active interaction, the fact that it should be a game with tanks being crushed was more of a sudden inspiration combined with the fact that people are stepping on the images. The reasons for creating the meditation game were to experiment with longer tracking of multiple users and also to create something a little less conventional than a typical computer game.

5 Evaluation
Carrying out this project was a very interesting experience for us and enabled us to explore a number of areas related to human interaction with such systems, the technologies behind these systems and the issues likely to arise.

5.1 Costs and Free Software


One of the biggest obstacles we came across when researching these types of systems was the cost involved. On the hardware side, finding a suitable projector was difficult, and the cost of buying a more powerful projector (or one with a wider angle) was much too high. On the software side, we could not find any existing solutions or frameworks (free or closed-source) to help developers build applications for these types of systems. One major downside of this was that we had to spend a considerable amount of time building and testing our own framework instead of focusing on applications and user interaction. We were all keen to stick to free software solutions as much as possible, so we focused on free software frameworks with support for image and video analysis. We did eventually find some code which partly implemented some of the basic tracking we were hoping for and ended up building on top of that.

5.2 Sensitivity to Light


Another problem we encountered was changes in light, which affected the tracking system. The tracking software works by taking a snapshot of the floor when it starts and comparing it against subsequent frames to produce a difference image. The difference between these frames, in a controlled environment, should only be the presence of people on the floor. When light conditions change, however, the tracking system easily becomes confused because it is suddenly comparing images of the floor with more light than in the original snapshot. Sometimes the light is identified as a user; sometimes it distorts an actual user's position on the floor. Despite our best efforts to keep light out of the projection area by erecting walls around the space, we still experienced problems with tracking due to changes in light.
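The sketch below shows, on plain grayscale pixel arrays, why this background-subtraction approach is light-sensitive: a person changes a few pixels, but a global light change pushes every pixel over the threshold and looks like a phantom user. The threshold value is an assumption; the real module used OpenCV/OpenFrameworks:

```java
// Frame differencing against a background snapshot, as used conceptually
// by the tracking module. A global light change raises many pixel
// differences above the threshold, which then look like "users".
public class FrameDiff {
    static final int THRESHOLD = 30; // assumed; tuned on site in practice

    /** Marks pixels that differ from the background snapshot. */
    static boolean[] difference(int[] background, int[] frame) {
        boolean[] changed = new boolean[frame.length];
        for (int i = 0; i < frame.length; i++) {
            changed[i] = Math.abs(frame[i] - background[i]) > THRESHOLD;
        }
        return changed;
    }

    static int count(boolean[] b) {
        int n = 0;
        for (boolean v : b) if (v) n++;
        return n;
    }

    public static void main(String[] args) {
        int[] snapshot = {100, 100, 100, 100};
        int[] person   = {100, 240, 240, 100}; // a person covers two pixels
        int[] brighter = {160, 160, 160, 160}; // the whole floor got brighter
        System.out.println(count(difference(snapshot, person)));   // 2: the person
        System.out.println(count(difference(snapshot, brighter))); // 4: phantom "user"
    }
}
```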

5.3 Speed
Another aspect of the system which was important to us was speed. We had two components running simultaneously: a tracking component which was constantly analysing a live video feed from our webcam, and the display/reaction component which was constantly reading from the tracking component and projecting something in response at a high frame rate. This put a great burden on the computer. We managed to lessen the load by splitting these tasks across separate PCs and letting the components communicate via a wireless connection. Even after these changes, we experienced an unexpected speed boost on the last day of the exhibition. We're still not completely sure why, but one likely reason could be the IR filters we used in our modified webcam. The filters block out a lot of light and end up reducing the framerate of the video feed unless we can compensate with enough IR light of our own. We suspect one of these filters came loose, causing more light to enter the webcam and resulting in a speed increase.

5.4 Foot as Pointing Device


Observing people who hadn't been on the interactive floor before, we noticed many using their feet as a kind of pointing/clicking device. Especially in games where objects were projected, people's immediate reaction was to try and stamp on the objects with their feet. This proved to be a problem, as the tracking component could not tell when a user's foot hit the floor and calculated the position based on the user's entire body (not just the foot). While the tracking component could perceive the users' shapes, transmitting this data to the scenarios proved to be unfeasible because it required a much more complex communication protocol; to deal with this problem, a collision detection query was proposed but unfortunately was not carried out due to time constraints. Since it was not possible for the interactive scenarios to accurately detect cases where users stuck their feet out to stamp on objects, we could, and did, make assumptions about users' heights so we could calculate positions around the feet.
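The height assumption can be made concrete with a little geometry: if the camera hangs at height H above its ground point, a body point at height z that is observed on the floor at distance d from that point actually stands at d * (H - z) / H. Taking z as roughly half an assumed user height corrects a body centroid towards the feet. A sketch with invented numbers (the project's actual offset was a manually adjustable setting in the tracking interface):

```java
// Correcting a tracked body centroid toward the feet, assuming a camera
// mounted at height H above its ground point and an average user height.
// All numbers are assumptions for illustration.
public class FootCorrection {
    static final double CAMERA_HEIGHT = 4.0; // metres, assumed mount height
    static final double USER_HEIGHT   = 1.7; // assumed average user height

    /**
     * The camera sees the body centroid (at roughly half the user's height)
     * projected onto the floor at dObserved metres from the camera's ground
     * point; by similar triangles the feet are at dObserved * (H - z) / H.
     */
    static double feetDistance(double dObserved) {
        double z = USER_HEIGHT / 2.0; // height of the body centroid
        return dObserved * (CAMERA_HEIGHT - z) / CAMERA_HEIGHT;
    }

    public static void main(String[] args) {
        // A centroid seen 2.0 m out actually stands ~1.58 m from the ground point.
        System.out.println(feetDistance(2.0));
    }
}
```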

A better approach might have been to integrate shock sensors for footfall recognition (see The Famous Grouse Experience by ART+COM). This would, however, have added some extra complexity to our system.

5.5 Shadows
Shadows were not a big problem, but unlike similar systems our projector was positioned to one side of the projected floor, projecting at a slight angle. This angle meant we had to deal with slightly longer shadows than in systems which position the projector exactly in the center of the floor. In games which required users to look out for projected, moving objects (such as Personzilla and Prisoner's Dilemma) they were a slight nuisance. We discussed two options for reducing shadows: one was to use two projectors projecting from two sides, the other was to move our projector to the center of the floor. The first option was too complicated, as we would have to configure both projectors to align the projected images exactly. The second option was tricky because we had no easy way of securing the projector to prevent it from falling.

5.6 Sound, ambient sound and music


Sound effects were considered first as an obvious complement to the visuals of Personzilla: stepping on a tank should produce some sound. However, we found that sound was not only an "obvious complement" but rather much-needed feedback, as it made the results of the interaction completely unambiguous when people were not familiar with the graphics (no matter how different we tried to make a "live" tank look in comparison to a crushed tank, it was never explicit enough). It also worked around the shadow problems mentioned earlier: users no longer needed to see that the tank was crushed, they knew it from hearing the sound effects. Ambient sound and music, on the other hand, dramatically changed the users' appreciation of the project. Without ambient sound, they would just wander by, see the images on the floor, perhaps crush one or two tanks, and go somewhere else. With war ambient sounds they would spend at least three minutes testing the project, giving the game enough time to become difficult and more interesting. The war sounds created an atmosphere of war which, though good for our game in general, would probably not be the most appropriate for children, so on the second day of the exhibition we decided to change it to a happy, instrumental electronic music soundtrack, with the goal of making the game seem less serious and more like a fun contest.

6 Related Work
Related work falls into two categories: works with similar goals to ours and works which used technologies similar to ours but with different goals in mind.

6.1 Living Surface by Vertigo Systems

Vertigo Systems, a German company, have a similar product called Living Surface. We do not have very much information on technical aspects of the system, particularly the software, but we do know the hardware used is similar to the hardware we used: a projector, video camera, infrared LEDs. We suspect one difference between their system and ours is the use of a wide angle lens on the projector. Their installation in the Universeum projects on a large area of the floor despite the projector being a short distance away from the floor.

6.2 The Famous Grouse Experience by ART+COM

ART+COM, another German company, have worked on similar interactive floor systems for various clients. This particular work was created in 2002 using 6 PCs and 6 projectors, 6 infrared spotlights and 2 cameras for tracking, and 8 shock sensors for footfall recognition! (They also used a separate PC just to control everything.)

6.3 Audience by Chris O'Shea

Chris O'Shea's work, Audience, uses user tracking in a different way. It is an installation made up of a number of mirrors which all point at a particular person and follow him/her around (keeping the mirrors pointed at the user). The idea here is not an interactive floor, but it is related because of the hardware and software used in its implementation. Chris uses free software such as openFrameworks and OpenCV to implement the tracking part of the system. We heard about his work through forums discussing the topic of user tracking using free software and contacted him for more information. The information we found on the forums, and looking at projects like Audience, influenced our decision to use openFrameworks and OpenCV to implement the tracking part of our own system.

7 References
Living Surface: http://livingsurface.de. Accessed: 19 May 2009.
The Famous Grouse Experience: http://www.artcom.de/index.php?lang=en&option=com_acprojects&id=7&Itemid=113&page=6. Accessed: 19 May 2009.
Audience: http://www.chrisoshea.org/projects/audience/. Accessed: 19 May 2009.
OpenCV: http://opencv.willowgarage.com/wiki/. Accessed: 27 May 2009.
OpenFrameworks: http://www.openframeworks.cc/. Accessed: 27 May 2009.

8 Appendices
8.1 Project plan
Week 5: Form groups, brainstorm about project ideas. Present three project proposals.
Week 6: Decide on project. Create prototype(s). Present final project with prototypes.
Week 7: Discuss project possibilities, general approach and software and hardware requirements. Decide on development schedule. Prepare pitch presentation. Present pitch.
Weeks 8-10: Develop software for tracking and interfaces/scenarios. Create mounting prototype. Test different webcams and projectors. Test prototype with soft- and hardware in the design labs.
Week 11: Find testing location in the Science Park. Test setup in the Science Park.
Weeks 12-14: Develop software for tracking and interfaces/scenarios. Test setup in the Science Park.
Weeks 15-17: Create website. Develop software for tracking and interfaces/scenarios. Test setup in the Science Park.
Week 18: Prepare for the exhibition. Make last changes to soft- and hardware.
Week 19: Exhibition time!

8.2 Design document

Interactive Floor
Design Document 4.0

Mohammad Ardavan, Amir Chamsaz, Eelke Boezeman, Keyvan Minoukadeh, Alejandro Valenzuela

Last modified: 2009-05-27

Index
1 Introduction
1.1 Background
1.2 Design Goals
1.3 Related Work
2 Architecture
2.1 Overview
2.2 Software dependencies
2.3 Components
2.4 Data
2.5 Communication
3 Operation
3.1 User types
3.2 Scenarios
3.3 Installation
3.4 Licensing
4 Development
5 References

1 Introduction
1.1 Background
An Interactive Floor is a real video-projection floor display, where images appear and people can interact with them. Compared to typical interfaces, the Interactive Floor offers the user the capability of really being part of the interaction. All the user needs to do is walk into the projected area, and he becomes part of the digital world that is projected around him. The setup consists of a beamer projecting images on the floor and a webcam scanning the projected area for user input. The beamer projects interactive, interacting personalities on the floor that react to the user moving within the projected field. The interactive floor projection is an attractive high-tech application of image projection for meetings and events. A floor projection can be used in almost any place imaginable: the entrance of a venue, the floor of a stand, a dance floor, a table surface. Attention guaranteed; a great tool to attract people and get exposure.

1.2 Design Goals


- Develop a virtual surface by projecting images on the floor.
- Process input from multiple users simultaneously.
- Research limits on efficient projection and tracking with ordinary commercial hardware.
- Present agents that interact with the user as well as among themselves.
- Use these agents to simulate interacting interactive personalities.
- Present various interactive scenarios such as small strategy games and less goal-oriented schemes.
- Research these interaction schemes to find guidelines for creating suitable interfaces for the Interactive Floor.

1.3 Related Work


Vertigo Systems, a German company, has a similar product called Living Surface: http://living-surface.de. We do not have very much information on technical aspects of the system, particularly the software, but we do know the hardware used is similar to the hardware we intend to use: a projector, video camera, infrared LEDs. Although we are not sure, we suspect one difference between their system and ours is the use of a wide angle lens on the projector. Their installation in the Universeum, Göteborg, projects on a large area of the floor despite the projector being a short distance away from the floor.

2 Architecture
2.1 Overview
The following figure shows an overview of the logical design of the hardware and the software.

Figure 1: Hardware and software flow

The webcam captures an image, which the Tracking module converts into user tracking data. This data adheres to a predefined protocol. The user tracking data serves as input to the Interface module, which uses it to project an interface with a beamer. The system works as a loop: the image projected by the Interface module provides an interface for the user, projected on the floor. If the user decides to interact with this interface, the webcam captures his movement. This results in new user tracking data, upon which the Interface module can change the interface accordingly.

2.2 Software dependencies


The following figure shows the software dependencies.

Figure 2: Software dependencies

The Interface Module and interactive scenarios depend on Processing 1.0.3. The Tracking Module depends on OpenFrameworks 0.05 (0.06 in the case of MacOSX) and its implementation of the Optic Flow algorithm.

2.3 Components
The interactive floor is composed of the following hardware and software:
- Input: Webcam, UVC-compliant (with an IR filter for the lens)
- Output: Wide-angle beamer
- Tracking software: Image processing software (OpenFrameworks; custom enhancements to the Optic Flow algorithm)
- Interface software: Graphics, interactive scenarios (Processing)
- Platform: Ubuntu, Debian GNU/Linux and MacOSX are supported

2.4 Data
- Input: Image data obtained from the webcam
- OpenCV-compatible internal representation
- Internal representation after image processing and the Optic Flow algorithm
- Interactive scenarios' internal states

2.5 Communication
The main communication in the system is a cycle between the tracking and interface modules. The tracking module analyses a live video stream of the floor and extracts information such as people's positions, direction of movement and a general level of movement. This information is passed to the separate interface module, which determines the next action of the currently running application. The result of this process is then projected back onto the floor, prompting the user(s) to make his/her next move. The cycle continues until the application comes to an end. Technical details are specified below.

A bitmap buffer in UYVY or JPEG format is received from the camera drivers, which is processed by OpenCV/OpenFrameworks. Subsequently, OpenCV/OpenFrameworks employs filtering techniques to detect people present inside the image, and the Optic Flow algorithm to keep track of their movement, and produces user event data.

User event data is passed from the Tracking module to the Interface module in User Tracking Protocol format through TCP sockets. This format is subject to change as the project develops. Its description follows: a NULL-terminated string of variable length between 45 and 70 bytes, containing the following fields separated by straight pipes:
- User event type; one digit [0: user joined; 1: user moved; 2: user exited; 3: user mood*]
- User number; one or two digits
- User's X position; floating point number [0.0, 1.0], relative to virtual environment width
- User's Y position; floating point number [0.0, 1.0], relative to virtual environment height
- User's X orientation coordinate; floating point number [0.0, 1.0]
- User's Y orientation coordinate; floating point number [0.0, 1.0]
* A measure of a user's activity pattern.

A standard VGA/SVGA/XVGA signal in a compatible resolution and frequency is sent to the beamer.
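Parsing this format is straightforward; a Java sketch with the field order exactly as listed above (the class name and everything else are assumed):

```java
import java.util.Locale;

// Parses one User Tracking Protocol message, e.g. "1|12|0.42|0.77|0.10|0.05".
public class EventParser {
    final int type;        // 0 joined, 1 moved, 2 exited, 3 mood
    final int userNumber;
    final double x, y;     // position, relative to the virtual environment
    final double orientX, orientY;

    EventParser(String message) {
        // Strip the NULL terminator if present, then split on straight pipes.
        String[] f = message.replace("\0", "").split("\\|");
        type = Integer.parseInt(f[0]);
        userNumber = Integer.parseInt(f[1]);
        x = Double.parseDouble(f[2]);
        y = Double.parseDouble(f[3]);
        orientX = Double.parseDouble(f[4]);
        orientY = Double.parseDouble(f[5]);
    }

    public static void main(String[] args) {
        EventParser e = new EventParser("1|12|0.42|0.77|0.10|0.05");
        System.out.println(String.format(Locale.US,
                "user %d moved to (%.2f, %.2f)", e.userNumber, e.x, e.y));
    }
}
```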

3 Operation
3.1 User types
If the system is deployed in an adequate, light-controlled setting, no calibration is necessary. Therefore only one type of user is recognised:

Participant: A participant is someone standing on the projected field. Participants are able to interact with the visible agents, as well as among themselves, in the different scenarios available. Participants can enter and exit the projected field at any time, and we expect their interaction with it to last between 3 and 15 minutes, depending on how interested they are. We expect participants to have a very diverse age and occupation range, as the interactive floor is pitched as a public exhibition; however, it is usually children who are most interested in it.

3.2 Scenarios
The system can be used in a number of different ways depending on the application running. The Tracking module always provides the input in the same format (position of participants, amount of movement) regardless of the application running. Each application can use this input and interpret it in whatever way is relevant to the application. For example, an application which only responds to the amount of movement on the floor can ignore the position values of participants provided by the input module. The scenarios developed for the Interactive Floor showcase much of its functionality but are by no means exhaustive:

Strategy game: Prisoner's Dilemma

Prisoner's Dilemma is a strategy game where two players choose whether to cooperate or defect, without knowing the other player's choice. After each choice (iteration), every player receives a score depending on the combined result of both choices. If both choose to cooperate, each player gets 3 points; if one cooperates and the other defects, the defector gains 5 points while the cooperator gets nothing; if both defect, each player receives 1 point. The goal for each player is to outscore the other player while the number of game iterations is unknown. There are different strategies for playing the game, and each person has their own style of acting in different situations. Our computer-generated agents, too, incorporate these different behaviors in their personality. Agents are projected on the floor and interact amongst each other, and with a user as soon as the user steps into the field. Each agent possesses a specific personality, which could for instance be aggressive, retaliating, forgiving, etc. These personalities are strategies through which the agents interact. As soon as a player steps into the playing field, the system recognizes them and tracks them as they move. When the player encounters an agent they enter a game and score based on their moves. The number of iterations in each game is unknown, in order to reduce the chances of always-aggressive play. Each player tries to maximize their score, and as soon as the game finishes, the space around the winner shines and the loser leaves the field.

Figure 3: Prisoner's dilemma interface

Personzilla
Personzilla is a game where participants become godzilla-like giants in an attempt to protect their egg from evil tanks who will stop at nothing to crack it open until they've been stomped on! The aim of the game is to destroy as many tanks as possible before they can shoot the egg three times. Its easy-to-understand yet challenging and active nature makes it popular, especially with children.

Figure 4: Children playing Personzilla

Meditation Game
In this game the aim is to simply sit on the floor for one minute without moving or fidgeting. The application monitors players as they sit still and records any movement. The person recording the least movement at the end of the 'meditation' session is the winner.

Figure 5: User experience of the Meditation scenario

Pong
This scenario is an implementation of Pong's gameplay, where players slide their paddles by moving along the field sides. Players move their paddles to bounce a ball back and forth. The aim for each player is to earn points by bouncing the ball in a direction where the opponent fails to return it.

Figure 6: Pong game

3.3 Installation
The system, in its most basic setup, consists of the following hardware:
- A projector pointing at a mirror (positioned at an angle) which reflects the image down onto the floor. This is placed a few meters up from the floor to allow a large enough projection area. A custom support must be built for this purpose.
- A UVC-compliant webcam pointing at the projected area to pick up movement of the people below.
- IR LEDs positioned next to the webcam, pointing in the same direction (at the projected area).
- A dedicated computer connected to the projector (VGA cable) and webcam (USB cable).

The software employed in the system consists of:
- A GNU/Linux operating system (such as Debian or Ubuntu) or MacOSX
- The OpenCV library, used for processing images captured from the webcam
- The Tracking module
- The Interface module (scenarios)

Once the hardware has been set up, a ready-to-use live CD or live USB memory stick can be provided to enable usage on a wide range of computers.

3.4 Licensing
Our project was largely made possible by adapting existing Free Software to our needs - both OpenFrameworks and Processing are Free Software frameworks in themselves. Therefore, we recognise the advantages that come with Free Software and have decided to license it under the terms of the GNU General Public License Version 3.0 or higher, hoping our development efforts will be useful to others as well.

4 Development
Week 5: Form groups, brainstorm about project ideas. Present three project proposals.
Week 6: Decide on project. Create prototype(s). Present final project with prototypes.
Week 7: Discuss project possibilities, general approach and software and hardware requirements. Decide on development schedule. Prepare pitch presentation. Present pitch.
Weeks 8-10: Develop software for tracking and interfaces/scenarios. Create mounting prototype. Test different webcams and projectors. Test prototype with soft- and hardware in the design labs.
Week 11: Find testing location in the Science Park. Test setup in the Science Park.
Weeks 12-14: Develop software for tracking and interfaces/scenarios. Test setup in the Science Park.
Weeks 15-17: Create website. Develop software for tracking and interfaces/scenarios. Test setup in the Science Park.
Week 18: Prepare for the exhibition. Make last changes to soft- and hardware.
Week 19: Exhibition time!

5 References
Living Surface: http://livingsurface.de. Accessed: 19 May 2009.
The Famous Grouse Experience: http://www.artcom.de/index.php?lang=en&option=com_acprojects&id=7&Itemid=113&page=6. Accessed: 19 May 2009.
Audience: http://www.chrisoshea.org/projects/audience/. Accessed: 19 May 2009.
OpenCV: http://opencv.willowgarage.com/wiki/. Accessed: 27 May 2009.
OpenFrameworks: http://www.openframeworks.cc/. Accessed: 27 May 2009.
