
I, Robot

I, Robot is the first book in the Robot series by Isaac Asimov. In this collection of linked stories, Asimov introduces the three laws of robotics and shows how they have influenced the development of robots over the years. The book begins with a reporter interviewing Susan Calvin, a robopsychologist who specializes in making robots seem more human. Susan tells the reporter several stories about robots that illustrate these laws and how they have shaped the development and actions of robots over the years. I, Robot is a futuristic book that leaves the reader with a vision of a future that could one day be reality.
Susan Calvin is a robopsychologist who has worked for US Robots and Mechanical Men for most of her career. Now in her seventies, she is being interviewed by a reporter about her experiences with robots. Susan begins the interview with a story about a robot named Robbie, who was a nursemaid to a young girl in the early nineties. The robot was dearly loved by the child it was purchased to care for, but the neighbors, and even the child's mother, felt it could be a danger. For this reason, the robot is sold back to US Robots and Mechanical Men. However, the child has such a hard time without the robot that her father arranges for her to see it again. During this visit, the child falls into danger and the robot saves her, causing the mother to relent and allow the robot to return to the family.
Dr. Calvin continues by describing how robots were banned from Earth, so US Robots began using robots to work on space colonies on other planets. In one story, Dr. Calvin describes how orders caused one robot to become unbalanced, acting drunk, because of a conflict between the second and third laws. A researcher had to place himself in mortal peril to break the robot out of its state. Later, on another space station, a robot became so self-aware that it began to believe it was created by a machine rather than by man, despite a demonstration that showed the robot the truth. This belief, however, did not interfere with the three laws, so the researchers decided to leave the robot alone.
Dr. Calvin also discusses the development of a robot who could read minds. This robot told the
scientists working with it what it knew about them and the people around them. However, it was
soon discovered that this robot had the capacity to lie. In order to adhere to the first law, the
robot would tell the scientists what they wanted to hear rather than the truth to protect their
feelings.
As technology and the use of robots increased, so did the difficulties experienced with robots.
Dr. Calvin recalls one situation in which the first law was modified in certain robots in order to
allow them to behave in a specific way. This modification allowed one robot to hide among
others, making it important that it be identified and removed before this modification could be
discovered. Dr. Calvin had to trick the robot by placing herself in danger and using a radiation
field that only this specific robot would know was not dangerous.
When Dr. Calvin returned to Earth after this experiment, she learned that a rival company had fed their superbrain a specific problem that caused it to malfunction, and that they were now offering US Robots a deal to try the problem on its own superbrain. Believing their brain to be superior, US Robots fed it the problem, and the brain in turn built a spaceship that could jump through hyperspace. To do this, the robot had to disregard the first law to a certain degree, something Dr. Calvin gave it permission to do.

Dr. Calvin continues the interview by telling the reporter about a politician she once had the
occasion to know. This politician was accused by his opponent of being a robot. The man refused to go along with any of the tests his opponent insisted he submit to, on the grounds of personal privacy. However, on the day of the election this politician hit a man in public after being accused once again of being a robot, apparently proving he was not a robot, as a robot could not hurt a human. Dr. Calvin would later theorize, however, that the man the politician hit was himself a robot.
Dr. Calvin ends her story by relating a problem the politician, by now the World Coordinator, was having with the four superbrain robots that aided in governing the world. These robots appeared to be making, or influencing, mistakes in each of the world's regions. The World Coordinator investigated each of these mistakes and came to the conclusion that humans were attempting to influence the superbrains to further the aims of anti-robot groups. He was set to outlaw these groups when Dr. Calvin explained to him that the robots had caused these mistakes themselves to prevent future economic problems with the anti-robot groups, thus adhering to the first law of robotics, protecting humans at all costs, in this case by preventing future wars.
Dr. Calvin's story illustrates for the reporter, and the reader, the development of and adherence to the three laws of robotics.

Settings
The Future! From 1998 to 2064: Earth, Mercury, and some Space Stations
OK, so 1998 isn't the future anymore, but it was the future when Asimov was writing these
stories in the 1940s. And in that future, there were going to be robots, and a lunar base, and
manned expeditions to Mercury and Mars, and instead of cars there are gyros (which are
probably helicopters, not the delicious Greek meat dish). Oh yeah, there's also no Internet,
people don't have cellphones or laptops, and no one is watching cute cats on YouTube, which is
definitely the best thing about life in the future.
So, yeah, we need to cut Asimov some slack in imagining the future: science fiction writers get a lot right when they describe the future, but they usually get even more wrong. Which is fine, since the point isn't to guess what will happen in the future, but more to ask, "if X happens, then what?" And the X in Asimov's case is robots. Also, there's a base on the moon ("Robbie," 62), which we think is pretty awesome. (What sort of sports do people play in low gravity on the moon?) But Asimov doesn't really talk about the lunar base because he's more interested in showing us how robots work or don't work. Everything in these stories is focused on the issue of robots.
For instance, we get to see the mines on Mercury ("Runaround"), and the power converter space
station ("Reason"), and even Hyper Base, where they're developing some sort of warp drive
("Little Lost Robot"). But in all of these cases, Asimov doesn't tell us all that much about these
settings. Does Asimov give us enough information to imagine what it's like to be on a Space
Station? (Like, what does it smell like on a Space Station? Unpleasant, we're guessing.) No, he
doesn't tell us much about that. He only gives us enough information about the setting to understand how the robots function in it. So, Mercury is hot and dangerous, which leads to the problem that Powell and Donovan have with Speedy outside their headquarters.

For us, the most interesting and most detailed setting work happens at the beginning of "Robbie"
and in "The Evitable Conflict." These two stories make an interesting pair: "Robbie" is a story
interested in one household and one little girl, whereas "The Evitable Conflict" is interested in
the whole world and all humans. So we get very precise details in "Robbie", enough to let us imagine a robot in our neighbor's house (since the issue here is one girl and her robot); and in
"The Evitable Conflict" we get some big-picture description of the regions of the world that
should lead us to thinking about the big picture of robots and humans.
But still, wherever the story is set, the focus is always on how robots and humans get along.
Plot Analysis
Since I, Robot is a collection of short stories, we're going to try something a little different here.
Here, we're going to break the plot of the whole book down, but we're going to do so as if this book were all about the robots as a single main character. We're calling this the "Rise of the
Robots" plot.
Initial Situation
Stage Identification: "Robbie"
Explanation/Discussion: Here's the question that I, Robot asks: can robots and humans live
together safely? Even though Robbie is really primitive (he can't even talk), Mrs. Weston is
afraid of him. And even though Robbie is all kinds of awesome, robots still get banned from
Earth (225). So some people clearly think that robots are dangerous to us humans. Are they?
Conflict
Stage Identification: "Runaround," "Reason"
Explanation/Discussion: Well, robots aren't dangerous but they can be a little unpredictable.
Which is strange, because they have the Three Laws that they're supposed to follow. But these
two stories show us that these laws have some wiggle room. This is part of the conflict in the
book, because how can we be sure that robots are safe if these Three Laws have wiggle room?
Complication
Stage Identification: "Catch That Rabbit," "Liar!"
Explanation/Discussion: We could probably put these two stories under conflict. (Actually,
"Catch That Rabbit" is a weird story that doesn't fit anywhere, since the problem there has nothing to do with the Three Laws. Let's forget that story.) But if you think about it, "Liar!" really complicates the whole Three Laws by showing us a robot that wants to follow just one law (the First Law, the really big one) and he can't. This story also ends on a huge downer, with Susan
Calvin purposely driving Herbie insane. So if the question of this book is "can robots and
humans live together safely?," "Liar!" seems to say "no, because humans will put robots in
impossible situations or destroy them." Oops: humans were the real monsters all along.

Climax
Stage Identification: "Little Lost Robot"
Explanation/Discussion: We think "Little Lost Robot" is a climax because this is the story that
most seriously deals with the idea of robot rebellion. Calvin worries that the Nestors may be able
to harm people. But at the end, even though Nestor-10 wants to attack Calvin, he can't. So,
robots seem perfectly safe for people, even really weird robots like the Nestors.
Suspense
Stage Identification: "Escape!"
Explanation/Discussion: But if robots are safe for humans, what's going on in "Escape!"? The
Brain seems to intentionally put humans in danger, and why? For a joke. Of course, it turns out that the Brain didn't really put people in danger and everything is OK. (In fact, even though Calvin solves the mystery, it's not like anything bad would have happened if she hadn't.) So we can all
breathe a little easier.
Denouement
Stage Identification: "Evidence"
Explanation/Discussion: We know from the earlier stories that robots can get along with
humans, but all those situations were under experimental conditions. Here we see the end result
of all these stories: the idea that a robot and a human can live side-by-side in the real world. If
Mrs. Weston in "Robbie" was worried about robots replacing us, here we see that robot
replacements might not be such a bad thing.
Conclusion
Stage Identification: "The Evitable Conflict"
Explanation/Discussion: And here's the kicker: can robots and humans live together safely?
Yes, and in fact that might be the only possible way for humans to live safely after all.
(Because, remember, we're very dangerous, both to robots and to each other.) Of course, this
story raises questions about human freedom and destiny; so, in the end, we have a whole new set
of questions.
So there you have it, in nine easy stories: robots start out as useful servants ("Robbie"); they
become more complex and more problematic ("Runaround," "Liar!"); and they end up as
indispensable partners, or masters ("Evidence," "The Evitable Conflict"). So, I, Robot has an
arc (the rise of the robots) even though it's just a collection of stories.
