
Research Design Description: LeapFrog LeapWorld

By: Alex Britez


Research in Games and Simulation
Digital Media Design for Learning
New York University
May 15, 2011

1. Introduction
1.1 Convergence: Real and Virtual
Games are enclosed in what Katie Salen and Eric Zimmerman (2003) describe
as a “magic circle”. Within this “magic circle” the rules of the game take
precedence and interpretation is guided by the context of the game, allowing
players to temporarily separate themselves from the domain of daily life.
However, through the “gamification” of everyday tasks, the line between games
and "real life" is blurring (Schell, 2011).

Shifts in socialization have been felt in areas we would least expect. With the
increase in working households, many parents use media as “a chance to get
their chores done, quiet their kids down, or just have some 'me' time, knowing
that their kids are 'safe' — not playing outside, and less likely to be making
trouble around the house” (Rideout & Hamel, 2006, p. 32). As a result, registered
virtual world accounts for young children have increased dramatically. KZero
(2011) reports that the 6-10 year old demographic has grown from 77 million to
272 million accounts in only two years. Brands such as Barbie and Lego have
used this trend to extend their reach and offer a “third place” (Steinkuehler &
Williams, 2006, p. 885) (the first being the family and the second, school
communities) for playing and socializing with other young consumers. This
allows children's "real life" social interactions with these brands, such as two
young girls playing with their Barbies, to be continued virtually at
Barbiegirls.com. It also allows the toy to assume an online existence (Lim, 2010).

1.2 Designing for Children


When designing virtual world "play spaces" for children, it is important to
understand children's interactions in the non-virtual world. Brouwer-Janse et al.
(1997) describe these "real life" play spaces with the following guidelines:
1. Play spaces should be open-ended, allowing children to combine and
organize elements infinitely.
2. Play spaces should allow children to create patterns, events, stories,
and games.
3. Play spaces should be situated in friendly environments.
4. Playthings should always be accessible and inviting.
5. Play spaces should be flexible so that types of interaction can be tailored
to specific activities, such as modeling, play, or quiet reading.
6. Play spaces should be secure and forgiving, allowing for experimentation.

It is also important to understand how the design choices that are made are read
by the child. Due to their developmental need for safety and security, children
are more prone to be affected by instinctual biases, such as the babyface effect,
friend-or-foe judgments, and the attractiveness bias (Isbister, 2006).

1.3 Virtual Economies


Along with this popularity and increased brand interaction, businesses are
profiting from the virtual economies that many of these environments incorporate
into their game mechanics. According to the Inside Virtual Goods report (2010),
the US virtual goods market will reach $2.1 billion in 2011, and sales from social
games are estimated to have made up more than half of total U.S. virtual goods
revenue in 2010.

If children and adults are willing to invest money in exchange for the virtual
currency needed to purchase these in-game goods, how might we as educators
capitalize on this motivation to have them invest time in academically focused
interactions such as reading or studying? Rewarding people for demonstrating
specific behaviors has been a controversial topic (Kohn, 1993; Schell, 2011);
however, Chris Hecker argues that more research needs to be done on how such
rewards apply to games (Gamasutra, 2010).

1.4 Research Question:


1. How does the customization of a game player's avatar affect a child's
(K-1st grade) overall emotional engagement?
2. When virtual rewards are introduced for exhibiting non-game related
behavior, is there a measurable increase in either virtual world interaction
or the rewarded behavior?
3. Are there any positive or negative implications of eliminating the rewards
earned during "real life" interactions?

2. Game Description
2.1 Background
LeapFrog, known for its popular early-childhood educational toys, has recently
opened LeapWorld, an online virtual world where children can safely practice
many of the educational concepts that they are learning in school and at home.
The target audience for LeapWorld ranges from 4 to 8 years old, and the world is
specifically designed to be accessible to children who have little to no reading
skills.

2.2 Genre
Massively Multiplayer Online Games (MMOGs)
LeapWorld’s core mechanics contain what would be expected from most
Massively Multiplayer Online Games (MMOGs). Children are able to personalize
their avatar and design their home, all in a safe and fun environment. Initially the
available items are limited; however, a player can complete mini-games
throughout the LeapWorld system to earn “LeapWorld Tokens” and exchange
them for more items, including clothing, accessories, and furniture.

Mini Games

As the player explores LeapWorld, they will have the opportunity to play various
mini-games for a chance to win “LeapWorld Tokens”. Prior to starting a mini-game,
the player is taken to an instructional video page. This video explains all the core
mechanics of the mini-game, including the user controls and the game's objective.

2.3 External Hardware Integration


Early-childhood MMOGs are nothing new; there are various examples of virtual
worlds geared towards this target market, including Club Penguin, Webkinz, and
JumpStart, to name a few. Where LeapWorld stands out is in its incorporation of
LeapFrog products such as the Tag Reader, Leapster 2, and Leapster Explorer.
These products allow children to earn LeapWorld Tokens by playing with their
handhelds and later syncing the physical toy to their computer via USB. Once
synced, all the rewards that the child collected are passed to LeapWorld, ready
to be exchanged for new games and items.

LeapWorld with Tag Reader
The LeapFrog Tag Reader is a popular educational product that allows children
who are learning to read to touch words and items in LeapFrog’s library of books.
In doing so, they receive feedback consisting of words, sounds, and other audio
that serves to scaffold the child’s learning experience, freeing up cognitive
resources to comprehend the entire text and making reading fun and interactive.
The Tag Reader also includes various assessment activities and games that are
recorded in LeapFrog’s Learning Path software. This software could be described
as a lightweight learning management application that allows parents to keep
track of their child’s progress with the various toys and products in the LeapFrog
catalog.

Having a Tag Reader allows the child access to LeapWorld, if the child’s parent
chooses to enable it. Once enabled, as the child explores their books using the
Tag Reader, they will begin to accumulate “LeapWorld Tokens” and other
rewards. As mentioned above, these tokens become available in LeapWorld
once the child’s parent syncs the reader to a computer running LeapFrog
Connect via a tethered USB connection.

3. Methods
3.1 Participants

The participants in this study will consist of thirty students (15 female, 15 male)
enrolled in grades K-1. Each student will be selected from urban northeastern
public schools that use the Developmental Reading Assessment (DRA2). The
learner characteristics, based on the DRA2 assessment, will include low reading
ability and average speech comprehension ability. Participants will also be
required to have at least 6 months of prior experience with computer-based
video games.

Due to the participants' age, all parents will have been made aware of the purpose
of this research and will have agreed to allow their children to participate in this study.

Once the thirty participants have been selected, they will be split into three equal
groups consisting of:

Group Name               Count              Avatar                                      Tag Reader
Experimental group (EG)  5 male, 5 female   Fully customizable avatar                   Integrated Tag Reader
Control group (CG1)      5 male, 5 female   Select from list of preconfigured avatars   Integrated Tag Reader
Control group (CG2)      5 male, 5 female   Assigned preconfigured avatar               Non-integrated Tag Reader

3.2 Instrumentation
Developmental Reading Assessment (DRA2)
Each participant will have already taken the DRA2, a criterion-referenced reading
assessment (DRA2, K–3; Beavers, 2006) that is required by many public school
systems around the country. This assessment is used to identify a child's
phonemic awareness, alphabetic principle/phonics, fluency, vocabulary,
comprehension, print concepts, and reading engagement. As part of our initial
participant selection process, we will obtain permission to use their scores on
file at the school.

Elementary Reading Attitude Survey (ERAS)


This public-domain, paper-based survey (Appendix 1) enables teachers to
efficiently and reliably estimate attitudes towards reading. It is specifically
designed using four pictorial icons representing a range of emotions, from happy
to sad. This type of Likert scale works especially well when reading and
comprehension are below average (McKenna, 1990).

According to McKenna (1990), this survey consists of 20 questions and is divided
into two classifications, “recreational” and “academic”. Scoring the instrument
involves giving each option on the Likert scale a numerical value ranging from
1 (disagree) to 4 (absolutely agree). Once complete, the experimenter will have
access to the following values:

1. Recreational Score Total
2. Academic Score Total
3. Full Scale Raw Score = Recreational + Academic
4. Percentile = Full Scale Raw Score / 80
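
To make the scoring procedure concrete, the following Python sketch computes these values from a list of 20 item responses. The assumption that the first ten items form the recreational subscale and the last ten the academic subscale, as well as the function name itself, are illustrative rather than part of the published instrument.

# A minimal ERAS scoring sketch, assuming the first 10 items are "recreational"
# and the last 10 are "academic". Each response is a value from 1 to 4.
def score_eras(responses):
    """Return recreational, academic, full-scale raw score, and score/80."""
    if len(responses) != 20 or not all(1 <= r <= 4 for r in responses):
        raise ValueError("Expected 20 responses, each between 1 and 4.")
    recreational = sum(responses[:10])    # assumed recreational subscale
    academic = sum(responses[10:])        # assumed academic subscale
    full_scale = recreational + academic  # maximum possible score is 80
    percentile = full_scale / 80.0        # the "percentile" value listed above
    return recreational, academic, full_scale, percentile

# Example: a participant who answered mostly positively.
print(score_eras([4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3]))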

Mounted Web Camera with External Microphone


All participants will be recorded using the built-in web camera on an Apple iMac.
Their computer screen and seat will be adjusted to make sure that their faces are
correctly framed in the camera’s viewable area. Since children of this age group
tend to have soft voices, Libby Hanna (1997) of Microsoft suggests placing a
small wireless clip-on microphone as close to the child’s collar as possible to
pick up any subtle feedback from the child.

Screen Recording Software


All activity on the computer will be recorded using ScreenFlow. ScreenFlow
allows the experimenter to capture the contents of the participant’s entire monitor
while simultaneously capturing the web camera, microphone, and computer
audio.

Posture Sensor Seat


Studies have shown that a participant’s posture is a good basis for making
inferences about the player's affective states related to engagement (De Silva &
Bianchi-Berthouze, 2004). To make these correlations, participants will sit on a
chair that transmits posture data through pressure sensors on the seat and back
of the chair.

Galvanic Skin Response (GSR)


Through the use of galvanic skin response, we are able to measure
electrodermal activity (EDA) as an indicator of emotional arousal (Boucsein,
1992). For the purpose of this study, we have decided to use the Affectiva Q
Sensor, which uses a wristband enclosure rather than the more traditional Velcro
finger straps. The wristband alternative allows the participant to have full use of
their fingers for the mouse and keyboard. When triangulating data with GSR, it is
important to note that there is at least a 3-second latency in the GSR response
(Isbister, 2008).
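
To illustrate how this latency could be handled when lining up GSR readings with logged events, the sketch below shifts each event's timestamp forward by the three-second latency before searching for a peak. The window length, threshold, and data format are assumptions made for illustration, not part of the Affectiva software.

# Illustrative alignment of GSR samples with an event of interest, assuming
# samples are exported as (timestamp_in_seconds, value) pairs. The 3-second
# latency noted above is applied before the search window opens.
GSR_LATENCY_S = 3.0   # minimum latency of the skin response
WINDOW_S = 5.0        # assumed length of the search window after the latency

def peaks_after_event(event_time, gsr_samples, threshold):
    """Return samples exceeding `threshold` inside the latency-adjusted window."""
    start = event_time + GSR_LATENCY_S
    end = start + WINDOW_S
    return [(t, v) for (t, v) in gsr_samples if start <= t <= end and v >= threshold]

# Example usage with made-up numbers (event logged at t = 10 s):
gsr = [(10.0, 0.2), (13.5, 0.9), (14.0, 1.1), (20.0, 0.3)]
print(peaks_after_event(10.0, gsr, threshold=0.8))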

Data Logging: In-game Events of Interest


During game play, specific events have been identified that would enable the
experimenter to make correlations between an event of interest and any peaks in
the GSR and posture sensor seat readings.

Events of Interest (EOI):


• Customization start
• Customization stop
• Mini Game: Start
• Mini Game: Failure State
• Mini Game: Win State
• Home: Enter

• Home: Leave
• Purchase Item

Each EOI is packaged with metadata that offers additional information regarding
the event. For example:

• Group Type
• Timestamp
• Experiment ID
• User ID
• Is Avatar Visible?
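
As an illustration of how one of these records might be stored, the sketch below writes each EOI to a JSON-lines log. The field names mirror the metadata listed above, but the file format and function are assumptions for illustration, not LeapWorld's actual logging code.

# A minimal sketch of writing an event-of-interest record to a JSON-lines log.
# The schema mirrors the metadata list above; it is illustrative only.
import json
import time

def log_event(log_file, event_name, group_type, experiment_id, user_id, avatar_visible):
    record = {
        "event": event_name,            # e.g. "Mini Game: Win State"
        "group_type": group_type,       # "EG", "CG1", or "CG2"
        "timestamp": time.time(),       # seconds since the epoch
        "experiment_id": experiment_id,
        "user_id": user_id,
        "is_avatar_visible": avatar_visible,
    }
    log_file.write(json.dumps(record) + "\n")

# Example usage:
with open("eoi_log.jsonl", "a") as f:
    log_event(f, "Customization start", "EG", "EXP-01", "U-015", True)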

Data Logging: External Hardware Usage


Data will be collected from the Tag Reader’s precompiled software, which
collects analytics data on usage.

Data List:
• Timestamp
• User ID
• Session Start
• Session End

4. Procedures
4.1 Set-Up and Planning
Playtesting will be run in a comfortable room with a homely feel (Hanna, 1997). It
will be scheduled for one hour per participant, with ample time for rest in
between. There will also be snacks, such as cookies and sandwiches, to ensure
that no child is hungry when asked to focus (Druin, 1999).

Since this test will run over 45 minutes, children should be asked to take a short
break once the experimenter notices fatigue or jitteriness. This break should
consist of something brief, such as going to the bathroom or grabbing a drink of
water (Hanna, 1997).

Prior to the testing date, the children selected to participate in the study are
asked to complete the 20-question Elementary Reading Attitude Survey (ERAS;
see Appendix 1) to assess how much they enjoy reading prior to the intervention.

4.2 Introductory Discussion: (10 Minutes)


As participants enter the testing laboratory, the experimenter will greet them.
During this initial meeting, it is important for the experimenter to establish a
relationship with the children.

Participants are asked simple open-ended questions in a very informal manner.
The goal of these questions is to make the child comfortable while still asking
relevant questions that may help to screen the child's gaming proficiency.
Some questions may include:
1. What is your favorite game?
2. What game do you want for your birthday?
3. How often do you play video games?
4. What are your favorite websites to play games on?

There may also be questions in this introductory interview that inform and
personalize follow-up questions later in the post-game interview. An example of
such a question is, “What is your favorite TV show or book?” Refer to the
Post-game Interview section to see how this question is leveraged later in the study.

4.3 Overview Discussion: (5 Minutes)


The experimenter will give a quick overview of what will be taking place during
their time in the laboratory. Some of the key points that must be covered in this
time are:

1. Children will be told that everything that they do in the lab is "top secret",
and shouldn't be discussed with others. Parents will then be asked to sign
a non-disclosure agreement, since they will also be exposed to some
confidential designs, or information.
2. The experimenter stresses to the participant that they should be honest
and not worry about hurting his/her feelings. Additionally, the
experimenter should try to motivate the child by telling them "that you
[they] have forgotten what it is like to be a child, and that you [they] need
their help to make a good product for children all around the world"
(Hanna, 1997, p. 12).
3. The experimenter explains the role of both the parent and the child in the
study.

“I want to make it clear that I’m testing the software, not your child. We
want the software to be fun and easy for your child to use on her own,
so I will be asking you to sit back and allow your child to try things out.
I’m right here if she gets stuck, and I will help her out by giving some
hints and asking her to make some guesses.” (Hanna, 1997, p. 12)

4. The staff will disclose to the parents and participants that they will be
videotaping the session and collecting data from the various sensors.
During this time the parents are reassured that the information collected
will be used solely for research purposes.
5. Any known bugs in the software will be shared with the children so
they know what to expect (Isbister, 2008) and aren't discouraged in the
event they run into one during their interaction.
6. The experimenter lets the child know that they will be there the entire time.
This is necessary since the child will sometimes need reassurance and
encouragement, and may be agitated by being alone or by following
directions from a loudspeaker (Druin, 1999).

4.4 Feedback Sensor Setup: (5 Minutes)


Once the overview discussion is concluded, the participant is taken to the
playtesting room. Here the child will be asked to wear a galvanic skin response
(GSR) wristband, which will sense emotional arousal in an unobtrusive manner.
Participants will also be asked to wear a small clip-on microphone to record their
soft voices (Hanna, 1997).

The child is directed to a workstation with a seat that is rigged with a posture
sensor. The child will be asked to take a seat and relax while the experimenter
captures a baseline reading from the GSR.

4.5 Free Play: (5 Minutes)


During this time, participants are given an opportunity to unwind and pick any
game from the laboratory’s library. This allows the experimenter to make sure
that sensor readings are being transmitted from both the galvanic skin response
sensor and the posture sensor, and gives the child ample time to "forget" about
the sensors and camera.

4.6 Playtesting: (25 Minutes)


Participants will be videotaped via the mounted web camera, with audio captured
via the external clip-on wireless microphone. Screen recordings of all the
participant’s in-game interactions will also be stored on the local drive. During
game play, a data log will be updated with all the predefined events of interest.

During their interaction, the experimenter will need to make sure that the
participant completes the following tasks, based on group type:

Task                      EG    CG1   CG2
Design Avatar             X
Explore Home              X     X     X
Complete 3+ mini-games    X     X     X
Shop Virtual Assets       X

Some special instructions:


• The tester will have a script of hints, with varying levels of support for each
task. Each time a hint is used, it will be recorded.
o What do you think those numbers [credits] are for?
o What do you think about your avatar's clothing?
o Wouldn't it be cool if your character's shirt was [favorite color from the
introductory interview]?
• Since young children are accustomed to asking for help from teachers,
parents, and older siblings, they may ask for the experimenter’s help
when exploring a new interface. It is the experimenter’s goal to redirect
these questions with non-leading questions of their own. An example of a
dialog exchange might look like this:
o Child: How do I get back to the game screen?
o Experimenter: How do you think you get back?
o Child: I don't know.
o Experimenter: Look around the screen and see if something looks
like it would close the window?
o Child: Is it this button here?
o Experimenter: What do you think?
• If children start to lose focus or attempt to make irrelevant conversation,
the experimenter should gently remind them to get back on task (Hanna,
1997).
o “Let's try this for a few minutes longer and we will move on to
something different.”
o “I want to see just how much you can do—let’s try some more.”
• Due to the demographic, children that demonstrate difficulty with words
and instructions may require intervention from the experimenter.
• Throughout the playtest the experimenter should offer constant generic
feedback such as:
o "You really worked at that!”
o “Wow! You did that all on your own!”
• If a child is spending too much time on a task, the experimenter will need
to move the child on to the next task.
o “You have some great taste in clothing, let’s move on to something
else and come back to this a little later.”

4.7 Post-game Interview: (10 Minutes)


During this section, the experimenter will ask the participant a few questions.
This is meant to identify some of the individual user preferences that may not be
observable or present in any biometric readings.

General questions:
• What did you think about the game?
• What was your favorite part?
• Was there anything you didn’t like?

Avatar-based questions:
• What did you like/dislike about your character?
• How important are the tokens to you?

Customization (EG only):


• Was there anything you wanted to buy for your avatar, but didn't have
enough money?

• Was it easy to buy the clothing?
• What is your favorite item?
• Out of all the items that you saw in the game, which one did you want to
get? Don’t worry about cost.
• If it cost one “real” dollar, would you ask [NAME OF GUARDIAN WHO
BROUGHT THEM] to buy it for you?
• What if I told you that you could get it for free, but you need to read a
[NAME OF CHILD’S FAVORITE TV CHARACTER] book?

Once these questions have been answered, the interviewer presents the second
phase of the study. Each child will be handed a LeapFrog Tag Reader and
asked to select 10 books of their personal choice from the library. Participants in
both the EG and CG1 are allowed to integrate their Tag Reader with their
LeapWorld account so they can collect rewards for usage. CG2 is not told about
this feature and is only allowed to use the Tag Reader as an independent
product, completely separate from LeapWorld, with no rewards being
transferable to the virtual world.

This concludes the playtesting section of the study. The participants should be
told how helpful they have been, and how their hard work has helped the
experimenter see exactly which things need to be fixed. Before the participants
leave, the parents are given a $50 gift card to Toys ’R Us.

4.8 At Home Testing: LeapWorld with Tag Reader (1 month)


As children interact with both LeapWorld and the Tag Reader, the experimenter
will have access to the data logs acquired by both systems. Participants will also
be asked to complete a bi-weekly Elementary Reading Attitude Survey (ERAS)
during this period to see if there have been any changes in reading attitudes.

4.9 At Home Testing: LeapWorld without Tag Reader (1 month)


Once the first month is over, the EG and CG1 participants will have their Tag
Readers un-synced from LeapWorld. This means that EG and CG1 can no
longer benefit from the virtual rewards earned through the Tag Reader, much
like CG2.

All interactions with both LeapWorld and the Tag Reader will continue to be
logged by both systems and available to the experimenter. The participants will
also be asked to complete a bi-weekly Elementary Reading Attitude Survey
(ERAS) for the remainder of this month.

5. Data Analysis
5.1 Research Question 1

How does the customization of a game player's avatar affect a child's (K-1st
grade) overall emotional engagement?
Results of the playtest session will be used to find any measurable differences
between the three groups (EG, CG1, and CG2). This will be done using both
qualitative and quantitative methods:

• Counting the peaks in the GSR readings and finding the mean peak count
for each group (see the sketch after this list).
• Counting each occurrence of a posture classification and its duration.
• Counting all positive and negative occurrences of observable facial cues
in the video, using the Facial Action Coding System (FACS).
• Ethnographic review of audio and video.
• Analyzing post-playtest interview.
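
The following sketch illustrates the peak-counting step referenced in the first bullet above. It uses a simple threshold-crossing detector; the threshold value and the per-participant baseline dictionary are assumptions for illustration, not the analysis routine of any particular GSR package.

# A simple threshold-crossing peak counter for a GSR trace, plus a helper that
# averages peak counts across the participants in one group. The 0.05 value
# above baseline is an assumed threshold used only for illustration.
def count_peaks(samples, baseline, threshold=0.05):
    """Count upward crossings of (baseline + threshold) in a list of GSR values."""
    peaks, above = 0, False
    for value in samples:
        if not above and value > baseline + threshold:
            peaks += 1
            above = True
        elif above and value <= baseline + threshold:
            above = False
    return peaks

def mean_peaks_per_group(group_traces, baselines):
    """Average the per-participant peak counts for one group."""
    counts = [count_peaks(trace, baselines[pid]) for pid, trace in group_traces.items()]
    return sum(counts) / len(counts)

# Example with made-up data for two participants in one group:
traces = {"U-001": [0.20, 0.28, 0.21, 0.30], "U-002": [0.40, 0.41, 0.50, 0.39]}
print(mean_peaks_per_group(traces, {"U-001": 0.20, "U-002": 0.40}))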

To assist in analyzing the data, a custom user interface will be used to visualize
the data triangulation and help in quickly identifying correlations between the
various events, logs, facial analysis, dialog, and sensor readings. Since there is
always the risk of a child trying to please the experimenter, the behavioral data
collected is much more reliable than a child's response to a question (Hanna,
1997).

Figure: Screen Shot of Triangulation User Interface

By analyzing the color-coded events of both the “Event of Interest” log and the
posture sensor readings, then triangulating those values with the synced video
and screen recording, the experimenter is able to make inferences about events
that caused noticeable shifts in emotion.

By triangulating the data through time, we can filter it based on events and on
whether the customized avatar is visible or not. This information may help
indicate whether customization is more beneficial for third-person point of view
(POV) games than for first-person POV games. Studies have shown that games
that support avatar customization and use a third-person POV, instead of a
first-person POV, show greater increases in heart rate in males (Lim, 2006).
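
A minimal sketch of this filtering step is shown below, reusing the assumed JSON-lines log format from Section 3.2; the field names are the same illustrative ones, not a documented LeapWorld schema.

# Filter logged events by event name and/or avatar visibility, assuming the
# JSON-lines format sketched in Section 3.2 (one record per line).
import json

def filter_events(log_path, event_name=None, avatar_visible=None):
    """Return log records matching the given event name and/or avatar visibility."""
    matches = []
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            if event_name is not None and record["event"] != event_name:
                continue
            if avatar_visible is not None and record["is_avatar_visible"] != avatar_visible:
                continue
            matches.append(record)
    return matches

# Example: all mini-game wins that occurred while the avatar was on screen.
wins_with_avatar = filter_events("eoi_log.jsonl", "Mini Game: Win State", avatar_visible=True)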

Seeing measurable gains in the EG compared with both control groups would be
consistent with Jakobsson’s (2002) idea that the investment of time and energy
in avatar development becomes a type of “social capital”. Dede (1996) also
shows how the safety of role-playing through an avatar allows a participant to
take more risks, and Bruckman (1997) reveals how role-playing provides
interesting learning outcomes.

By identifying whether the EG is more intrinsically motivated than CG1, we can
use that knowledge to see how much (or little) of an effect the extra motivation
has on the following two research questions.

5.2 Research Question 2

When virtual rewards are introduced for exhibiting non-game related behavior, is
there a measurable increase in either virtual world interaction or the rewarded
behavior?

In order to see if there are any measurable increases in either virtual world usage
or behavior, we can look at the data logs, which show any fluctuations in usage
over time. We can then compare the usage over time of EG and CG1, both of
which had a rewards incentive for using the Tag Reader, with that of CG2, which
had no rewards for using the Tag Reader.
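
One way this comparison could be carried out from the Tag Reader session logs described in Section 3.2 is sketched below; the per-day aggregation and the exact field names of the session records are assumptions made for illustration.

# Illustrative aggregation of Tag Reader usage per group per calendar day,
# assuming each session record carries a user ID and start/end times in
# seconds since the epoch (per the data list in Section 3.2).
from collections import defaultdict
from datetime import date

def daily_minutes_by_group(sessions, user_to_group):
    """sessions: list of dicts with 'user_id', 'session_start', and 'session_end'."""
    totals = defaultdict(float)  # (group, day) -> minutes of Tag Reader use
    for s in sessions:
        group = user_to_group[s["user_id"]]
        day = date.fromtimestamp(s["session_start"])
        totals[(group, day)] += (s["session_end"] - s["session_start"]) / 60.0
    return dict(totals)

# Example with two made-up sessions:
sessions = [
    {"user_id": "U-001", "session_start": 1600000000, "session_end": 1600000900},
    {"user_id": "U-020", "session_start": 1600003600, "session_end": 1600004200},
]
print(daily_minutes_by_group(sessions, {"U-001": "EG", "U-020": "CG2"}))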

If there is a measurable increase in either EG or CG1, we could conclude that
introducing an external reward mechanic coupled with an in-game outcome, such
as customizing one's avatar (EG) or simply gaining more points [tokens] (CG1),
can influence the usage of the digital game, the external behavior, or both.

Depending on the results of Research Question 1, I would hypothesize that when
more intrinsic motivation is introduced into a game, the chances of exhibiting the
external behavior increase. This hypothesis is based on Dr. Edward Castronova's
(2003; 2005) work regarding the economics of virtual worlds and how they
motivate players to invest “real money” in exchange for “individuality”.

5.3 Research Question 3

Are there any positive or negative implications of eliminating the rewards earned
during "real life" interactions?

To answer this research question we could analyze the results of our longitudinal
Elementary Reading Attitude Survey and see if there are any shifts in the
recreational score, academic score, full-scale raw score, percentile, or all of the
above. We could then compare those results across all three groups.

1. EG: Elimination of Tag Reader rewards tied to the intrinsic motivation of
customization
2. CG1: Elimination of Tag Reader rewards tied to the extrinsic motivation of
points [tokens with no customization options]
3. CG2: Was never rewarded for using the Tag Reader

My hypothesis for this research question is split between attitudes towards the
game and attitudes towards reading. I predict that once the reward mechanic is
removed from both the EG and CG1 groups, usage of both the game and the
external device will drop substantially. However, I believe that attitudes toward
reading will either stay the same or continue to gradually increase, owing to the
participants' increased self-confidence with reading brought about by the Tag
Reader intervention. Unfortunately, there is no valid research study on this
technology and its effect on reading, other than the company's claims and its
pedagogical approach.

6. Data Application
The results of the research questions proposed in this paper could be leveraged
to create more tightly integrated transmedia storytelling (Jenkins, 2004)
experiences for learners. In doing so, designers could leverage the affordances
of multiple mediums in an effort to help learners successfully transfer their skills
to “real world” applications, all through the same narrative.

Allowing low-prior-knowledge learners to experience content through multiple
modalities and representations, such as text with pictures (Clark & Paivio, 1991),
multimedia animations with narration (Mayer & Anderson, 1991), games (Gee,
2003), simulations (Rieber, 2005), and toys (Billard, 2003), could work
harmoniously to help increase learning.

7. References
Billard, A. (2003). Robota: Clever toy and educational tool. Robotics & Autonomous Systems, 42, 259-269.

Brouwer-Janse, M. D., Suri, J. F., Yawitz, M., deVries, G., Fozard, J. L., & Coleman, R. (1997). User interfaces for young and old. interactions (March-April 1997), 34-46.

Bruckman, A. (1997). MOOSE Crossing: Construction, community, and learning in a networked virtual world for kids. PhD dissertation, MIT.

Castronova, E. (2003). On virtual economies. The International Journal of Computer Game Research, 3(2).

Castronova, E. (2005). Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press.

Clark, J. M., & Paivio, A. (1991). Educational Psychology Review, 3(3).

De Silva, R., & Bianchi-Berthouze, N. (2004). Modelling human affective postures: An information theoretic characterization of posture features. Computer Animation and Virtual Worlds, 15(3-4), 269-276.

Dede, C. (1996). Emerging technologies and distributed learning. American Journal of Distance Education, 10(2), 4-36.

Druin, A., & Solomon, C. (1996). Designing Multimedia Environments for Children: Computers, Creativity, and Kids. New York: John Wiley and Sons.

Facebook. (2011). Facebook statistics. Retrieved from http://www.facebook.com/press/info.php?statistics

Gamasutra. (2010). GDC: Hecker's nightmare scenario - A future of rewarding players for dull tasks. Retrieved from http://www.gamasutra.com/view/news/27646/GDC_Heckers_Nightmare_Scenario__A_Future_Of_Rewarding_Players_For_Dull_Tasks.php

Gee, J. P. (2003). What Video Games Have to Teach Us About Learning and Literacy. New York: Palgrave/Macmillan.

Hanna, L., Neapolitan, D., & Risden, K. (2004). Evaluating computer game concepts with children. Proceedings of the 2004 Conference on Interaction Design and Children: Building a Community. Maryland: ACM Press.

Hanna, L., Risden, K., & Alexander, K. (1997). Guidelines for usability testing with children. interactions, 4(5), 9-14.

Isbister, K., & Schaffer, N. (2008). Game Usability. New York: Morgan Kaufmann.

Jakobsson, M. (2002). From architecture to interacture. Internet Research 3.0: Net/Work/Theory. Maastricht, The Netherlands.

Jenkins, H. (2004). The cultural logic of media convergence. International Journal of Cultural Studies, 7(1), 33-43.

Kohn, A. (1993). Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A's, Praise, and Other Bribes. Boston: Houghton Mifflin.

Lim, S. (2006). The effect of avatar choice and visual POV on game play experiences. Unpublished dissertation, Stanford University.

Mayer, R. E., & Anderson, R. B. (1991). Animations need narrations: An experimental test of a dual-coding hypothesis. Journal of Educational Psychology, 83, 484-490.

McKenna, M. C., & Kear, D. J. (1990). Measuring attitude toward reading: A new tool for teachers. The Reading Teacher, 43, 626-639.

McKenna, M. C., Kear, D. J., & Ellsworth, R. A. (1995). Children's attitudes toward reading: A national survey. Reading Research Quarterly, 30, 934-956.

Rieber, L. P. (2005). Multimedia learning in games, simulations, and microworlds. In R. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 549-567). New York: Cambridge University Press.

Salen, K., & Zimmerman, E. (2003). Rules of Play: Game Design Fundamentals. The MIT Press.

Schell, J. (2011). Jesse Schell: Visions of the Gamepocalypse [video online]. Available at: http://fora.tv/2010/07/27/Jesse_Schell_Visions_of_the_Gamepocalypse [Accessed 12 May 2011].

8. Appendix

Appendix 1: Elementary Reading Attitude Survey (ERAS)
