
• Unit I: Search - Concept of search and the representation of spaces for search. Classes of search and popular search algorithms. Group 1

• Unit II: Knowledge Representation - Goal behind knowledge representation and representation techniques. Semantics, propositional logic and first-order logic in knowledge representation. Group 2

• Unit III: Machine Learning - Machine learning algorithms. Supervised and unsupervised learning. Group 3

• Unit IV: Neural Networks - Supervised and unsupervised neural network learning rules. Feedforward and backpropagation algorithms. Associative neural networks. Group 1

• Unit V: Fuzzy Logic - Fuzzy set theory and applications. Fuzzy rules, fuzzification, defuzzification and inference systems, Mamdani and Takagi-Sugeno fuzzy controllers. Group 2

• Unit VI: Evolutionary Computing - Natural evolution process and genetic algorithms. Group 3

Chapter 1 Exercises
I’ve chosen the following problems from Chapter 1 to work on.

1.1)
Define in your own words:
(A) Intelligence
(B) Artificial intelligence
(C) Agent
(D) Rationality
(E) Logical reasoning

1.1)
A) Intelligence is the ability to solve problems, no matter how minuscule or
extraordinary they are. Not only should we be able to solve these problems, but
also find improvements to our solutions and continue to expand our knowledge.
Being intelligent is one thing, but being able to expand our intelligence is far
more valuable.
B) Artificial Intelligence is a piece of machinery, programming, code, etc., that
was built by humans with the specific task of solving a problem or many problems.
But as I've stated before, solving problems is one thing; expanding that problem-
solving knowledge is another. Artificial Intelligence should keep records of its
attempts to solve a problem so that it can learn from its progress and its
mistakes.
C) Agents are things that perform actions. They are given instructions and are
expected to follow them, but there's more to it than just following orders.
Agents must also be able to operate on their own, change their actions
depending on their surroundings, maintain themselves over an extended period
of time, and develop their own goals to pursue.
D) Rationality is doing what is expected of you or what is acceptable to do. When
a problem arises there are specific ways that one would solve it; these fall
under rationality. For example: your car runs out of gas while driving on the
highway and you pull over to solve the problem. A rational action would be to call
AAA or a tow service. An irrational action would be to try to siphon gas from a
nearby parked police cruiser.
E) Logical Reasoning is the ability to perceive a problem logically and know
which solution is best to solve it.
1.3) Are reflex actions (such as flinching from a hot stove) rational? Are they
intelligent?

1.3)
Reflex actions are second nature to humans; our brains try to keep us healthy
so we live longer, which includes minimizing damage to our bodies. Even
though we flinch without thinking, I do believe that those actions are rational
because it's beneficial to minimize the amount of time being burned. However, I
do not believe flinching to be intelligent, given that it's second nature and people
do it without thinking. Intelligence requires at least some amount of thought.

1.4) Suppose we extend Evans’ ANALOGY program so that it can score 200 on a
standard IQ test. Would we then have a program more intelligent than a human?
Explain.

1.4) If we have a program that can score a 200 on a standard IQ test, it would
have roughly the same intelligence as a human. I looked up IQ test scoring, and
the highest band is 140 and over. It is possible for a human to score a 200, but it
is very unlikely. The same can be said for the program: it can score a 200 on the
IQ test, just as the human can. Only if we had a program that could always score
a 200 would I believe it to be more intelligent than a human.

1.6) How could introspection—reporting on one’s inner thoughts—be inaccurate?


Could I be wrong about what I’m thinking? Discuss.

1.6) Introspection can be inaccurate in many ways. One of the main reasons is
that everyone thinks in different ways, so the way a person perceives one
person's thoughts may be different from the way they perceive another person's
thoughts, or even the same person's at a different point in time. The way we
think changes over time, so it's hard to keep an accurate report on our thoughts.

1.7) To what extent are the following computer systems instances of artificial
intelligence:
• Supermarket bar code scanners.
• Web search engines.
• Voice-activated telephone menus.
• Internet routing algorithms that respond dynamically to the state of the
network.

1.7) The following items contain at least some artificial intelligence.

Supermarket Bar Code Scanners are tied into a database with records for each
item in a specific store. The scanner itself just reads the code, finds the
corresponding item in the database, and comes up with the correct price. The
scanner has to be able to recognize that there is a bar code present, scan
it, and display the information to the cashier and the customer.
Web Search Engines are connected to every piece of information stored on
the internet. Think of them like an all-knowing entity: you can type anything in
and, if it exists on the web, they will find it for you. The intelligence required
isn't that high, though. It seems like a daunting task for a human, but a piece of
code that searches the web for any text, image, file, etc., containing the
corresponding text is fairly simple. It's just matching what you typed to what's
out there, but on a much larger scale. There are options when using search
engines, but those only refine the search, such as excluding items from a certain
date, only retrieving peer-reviewed articles, or only finding images that contain
pictures of Iron Man.
Voice-Activated Telephone Menus are, in a way, similar to Web Search Engines.
The phone is programmed to hear a voice, listen for key words, and perform an
action based on those words. For example, I take out my phone and say "Galaxy,
call home." My phone picks up that I'm addressing it when I say Galaxy, it knows
to open the phone app when I say call, and it knows which contact to call when I
say home. If I had multiple numbers for home it would ask me to clarify which
number I want to call.
Internet routing algorithms that respond dynamically to the state of the
network react on their own to what's happening in their environment. If there
is too much traffic, they can decide whether or not to open up more
capacity. Internet routing algorithms know which ports are accessible and which
ones are not. Yes, they are programmed to do so, but they react on their own:
there's no one scanning a box of cereal, searching the web for Iron Man, or
telling their phone to call home.
1.9) Why would evolution tend to result in systems that act rationally? What goals
are such systems designed to achieve?

1.9) Acting rationally is beneficial to everyone. Evolution is survival of the
fittest; only the strong (and intelligent) survive. Humans (for the most part) tend
to act rationally, and they design their systems to do the same. We want proper
results as quickly as possible: hunt in a pack, not alone; solve this Rubik's Cube
without repeating any moves. Though the goals may be different, the way we
solve them is the same, in the most rational way.

If there is a solution to a problem then it is most likely rational. If there are
multiple solutions to a problem then some may be more rational than others.

1.10) Is AI a science, or is it engineering? Or neither or both? Explain.

1.10) Artificial Intelligence is most certainly a science, but it would be nothing
without engineering. Computer scientists need somewhere to place their
programs, such as computers, servers, robots, cars, etc. Without engineers they
would have no outlet to test their Artificial Intelligence on. Science and
engineering go hand in hand; they benefit each other. While the engineers build
the machines, the scientists write code for their AI.
1.11) “Surely computers cannot be intelligent—they can do only what their
programmers tell them.” Is the latter statement true, and does it imply the
former?

1.11) The latter statement is true, in a sense. Yes, computers only do what
they're told, but they also learn from what they do. After a certain period of time
they know what works and what doesn't. You can look at two children the same
way. Child A is brought up in a good home, enrolled in school, taught right from
wrong. Child B is raised in a less than favorable neighborhood, not enrolled in
school, and does not have much parental guidance. Child A has a completely
different view of the world from Child B. Children can only do what their parents
(programmers) tell them to do. But does that mean Child B will never succeed in
anything, or that Child A will always do great things? The answer is no; there are
outside factors that affect children (programs). The program performs its tasks,
takes in knowledge, and learns as it goes.

1.12) “Surely animals cannot be intelligent—they can do only what their genes
tell them.” Is the latter statement true, and does it imply the former?

1.12) The latter statement is blatantly false, mostly because I don't believe we've
even begun to really understand animals. Yes, we can watch them and study their
behavior and habits, but that doesn't mean we know why they do the things they
do. Yes, there are certain animals that are smarter than others. For example, a
dolphin is smarter than a sloth. Dolphins can learn commands and remember
people they interact with frequently, while sloths sometimes mistake their arms
for a branch and fall out of trees. Just as intelligence varies from person to
person, it varies from animal to animal, not only between different species, but
also within. There could be two dolphins of the exact same species and one is
more intelligent than the other.

1.13) “Surely animals, humans, and computers cannot be intelligent—they can do


only what their constituent atoms are told to do by the laws of physics.” Is the
latter statement true, and does it imply the former?

1.13) The latter statement is again false. If this were true then humans would
never have invented anything, least of all the airplane. The very first humans had
nothing; they were on equal ground with most animals. But over time they
developed tools, medicine, shelter, transportation, weapons, etc. I want to
specifically focus on the invention of the airplane. Unaided, humans cannot fly.
But the Wright Brothers tirelessly worked on a machine that would allow humans
to soar in the sky like birds. And now, over 100 years later, a human can fly
around the world in mere hours. This is quite a feat, but animals have
accomplished an even more impressive one. They have survived for as long as, if
not longer than, humans with only evolution. They have not built mighty
skyscrapers, airplanes, computers, anything. They simply survive by using their
natural instincts, which to me is very impressive.

1.14) Examine the AI literature to discover whether the following tasks can
currently be solved by computers:
A) Playing a decent game of table tennis (Ping-Pong).
B) Driving in the center of Cairo, Egypt.
C) Driving in Victorville, California.
D) Buying a week’s worth of groceries at the market.
E) Buying a week’s worth of groceries on the Web.
F) Playing a decent game of bridge at a competitive level.
G) Discovering and proving new mathematical theorems.
H) Writing an intentionally funny story.
I) Giving competent legal advice in a specialized area of law.
J) Translating spoken English into spoken Swedish in real time.
K) Performing a complex surgical operation.

For the currently infeasible tasks, try to find out what the difficulties are and
predict when, if ever, they will be overcome.

1.14)
A) Playing a decent game of table tennis (Ping-Pong) – This is solvable by
computers. A robot arm can be fitted with a ping pong paddle and motion
sensors to move to wherever the ball is.

B) Driving in the center of Cairo, Egypt - This is solvable by computers. Google is
working on a self-driving car. It requires a ton of motion sensors in order to
respond to its surroundings. One issue would be refilling the gas tank.

C) Driving in Victorville, California - This is solvable by computers, for the same
reasons as driving in Cairo: a self-driving car with enough sensors can respond
to its surroundings.

D) Buying a week’s worth of groceries at the market – This would be a challenge
for computers right now. The computer would have to know what it (or you)
wants, and it would have to be able to identify foods without bar codes, such as
apples.

E) Buying a week’s worth of groceries on the Web – This is solvable by
computers. All you need to do is tell your computer what you want, what size (if
any), how many, etc. You would also need your payment information accessible to
your computer so it can complete the transaction for you. Aside from picking up
your groceries, everything is done for you.

F) Playing a decent game of bridge at a competitive level – This is solvable by
computers. There are already computers that can play chess at a competitive
level, and since bridge isn’t as complicated as chess it shouldn’t be too hard for
a computer.

G) Discovering and proving new mathematical theorems – This would be a
challenge for computers right now. Computers can prove mathematical theorems,
but discovering them is a whole different story. A computer would have to be
self-aware to discover anything.

H) Writing an intentionally funny story – This would be a challenge for computers
right now. Computers don’t know the concept of comedy. You can input jokes into
a computer, but it won’t know how to write a funny story with new material.

I) Giving competent legal advice in a specialized area of law – This would be a
challenge for computers right now. They would need to know every aspect of the
case and that specific area of the law. It’s easier for a computer to give
quantitative advice than qualitative advice.

J) Translating spoken English into spoken Swedish in real time – This is solvable
by computers.
K) Performing a complex surgical operation – This would be a challenge for
computers right now.

Chapter 2 Exercises
Below are the exercises from Chapter 2 I’ve chosen to work on.
2.1) Suppose that the performance measure is concerned with just the first T
time steps of the environment and ignores everything thereafter. Show that a
rational agent’s action may depend not just on the state of the environment but
also on the time step it has reached.

2.1) A rational agent’s actions vary with the environment, but here they must
also vary with time. Because the performance measure ignores everything after
time step T, an action that only pays off after T is worthless, so in the very same
environment state the rational choice can differ depending on how many steps
remain. For example, an agent one step before T should not begin a long course
of action that only finishes after T, even though beginning that same course of
action earlier in its lifetime would have been rational. An agent can only act on
what it knows, and if its performance is scored only up to T time steps, then its
behavior must be designed around those T steps.

2.2) Let us examine the rationality of various vacuum-cleaner agent functions.


A) Show that the simple vacuum-cleaner agent function described in Figure 2.3 is
indeed rational under the assumptions listed on page 38.
B) Describe a rational agent function for the case in which each movement costs
one point. Does the corresponding agent program require internal state?
C) Discuss possible agent designs for the cases in which clean squares can
become dirty and the geography of the environment is unknown. Does it make
sense for the agent to learn from its experience in these cases? If so, what
should it learn? If not, why not?

2.2)
The assumptions on page 38 state that:
-The performance measure awards one point for each clean square at each time
step, over a “lifetime” of 1000 time steps.
-The “geography” of the environment is known a priori (Figure 2.2) but the dirt
distribution and the initial location of the agent are not. Clean squares stay clean
and sucking cleans the current square. The Left and Right actions move the
agent left and right except when this would take the agent outside the
environment, in which case the agent remains where it is.
-The only available actions are Left , Right, and Suck.
-The agent correctly perceives its location and whether that location contains
dirt.
A) The simple vacuum-cleaner agent is indeed rational under these assumptions
because it accounts for all variables: the map is known, the agent can sense
whether its square is clean or dirty, it knows what to do in each case, and it will
never go out of bounds.
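The agent function of Figure 2.3 can be written out as a short Python sketch; encoding the percept as a (location, status) pair is an assumption of this sketch, not the book's exact code.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex vacuum agent (Figure 2.3): suck if the current
    square is dirty, otherwise move toward the other square."""
    location, status = percept          # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'
```

Under the page 38 assumptions (clean squares stay clean, each clean square scores a point per time step), this agent never wastes a move where it matters, which is why it is rational.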

B) If each move costs one point, then an internal state is required: the agent
must remember which squares it has already cleaned so it can stop moving once
everything is clean, instead of paying a point for every unnecessary move. If
cleaning a square awards one point, an optimal goal would be to clean every
square in as few moves as possible and keep the score at 0 or higher.

C) If clean spaces can become dirty again, then obviously the vacuum should
clean them again. But the vacuum should not be constantly running; this would
waste electricity. The vacuum should map its surroundings and objects so it
knows what the environment looks like each additional time. By learning how
large the area is, it can determine how many times a day it must run to keep
every space clean. Upon starting up it should assume every space is dirty and
start a path around to clean them all, making sure to get every space. The more
the vacuum does this, the better it will know its surroundings and how often to
clean.

2.3) For each of the following assertions, say whether it is true or false and
support your answer with examples or counterexamples where appropriate.
A) An agent that senses only partial information about the state cannot be
perfectly rational.
B) There exist task environments in which no pure reflex agent can behave
rationally.
C) There exists a task environment in which every agent is rational.
D) The input to an agent program is the same as the input to the agent function.
E) Every agent function is implementable by some program/machine
combination.
F) Suppose an agent selects its action uniformly at random from the set of
possible actions. There exists a deterministic task environment in which this
agent is rational.
G) It is possible for a given agent to be perfectly rational in two distinct task
environments.
H) Every agent is rational in an unobservable environment.
I) A perfectly rational poker-playing agent never loses.

2.3)

2.4) For each of the following activities, give a PEAS description of the task
environment and characterize it in terms of the properties listed in Section 2.3.2.
A) Playing soccer.
B) Exploring the subsurface oceans of Titan.
C) Shopping for used AI books on the Internet.
D) Playing a tennis match.
E) Practicing tennis against a wall.
F) Performing a high jump.
G) Knitting a sweater.
H) Bidding on an item at an auction.
2.4)

2.5) Define in your own words the following terms: agent, agent function, agent
program, rationality, autonomy, reflex agent, model-based agent, goal-based
agent, utility-based agent, learning agent.

2.5)
Agent- A system with at least some form of intelligence that perceives and acts.
Agent Function- What an agent is supposed to do: its mapping from percepts to
actions.
Agent Program- The concrete implementation of the agent function in code.
Rationality- Doing the best action given what the agent knows about the
environment and how its performance is judged.
Autonomy- The ability to act on its own: knowing where it is, what it has to do,
etc.
Reflex Agent- Responds directly to percepts in the environment.
Model-Based Agent- Maintains internal knowledge of the workings of the world.
Goal-Based Agent- Knows its goal and decides what actions to take in order to
reach it.
Utility-Based Agent- Weighs the different ways of reaching the goal and picks
the best one.
Learning Agent- Analyzes its experience to make improvements.
2.6) This exercise explores the differences between agent functions and agent
programs.
A) Can there be more than one agent program that implements a given agent
function? Give an example, or show why one is not possible.
B) Are there agent functions that cannot be implemented by any agent program?
C) Given a fixed machine architecture, does each agent program implement
exactly one agent function?
D) Given an architecture with n bits of storage, how many different possible
agent programs are there?
E) Suppose we keep the agent program fixed but speed up the machine by a
factor of two. Does that change the agent function?

2.6)
A) Yes, there can be more than one agent program implementing a given agent
function. As stated above, the function is the purpose and the program is the
code for its implementation; the same function can be coded in many different
ways (for example, in different programming languages), so one function can
have many programs.
B) There exist agent functions that cannot be implemented by any agent
program. For example, an agent function that requires deciding whether an
arbitrary program will ever halt cannot be implemented, because the halting
problem is undecidable.
C) Yes, on a fixed architecture each agent program implements exactly one agent
function: given the same percept sequence, a deterministic program always
produces the same action, even though different percepts may trigger different
reactions.
D) There are 2^n possible agent programs.
E) Speeding up the machine does not change the agent function as long as the
environment is static: the same percept sequence still maps to the same action,
just computed faster. In a real-time environment, speed could matter.

2.7) Write pseudocode agent programs for the goal-based and utility-based
agents.

2.7)
Goal-Based Pseudocode
set tickets_unsold to 50
set tickets_sold to 0
while tickets_unsold is greater than zero:
    sell a ticket for the show
    decrease tickets_unsold by one
    increase tickets_sold by one
stop when tickets_unsold equals zero (the goal is reached)

Utility-Based Pseudocode
set starting_location to (0,0)
set ending_location to (50,50)
create a fifty-by-fifty grid of blocks
set time to 0
randomly mark some blocks as containing traffic, which increases the time to
ending_location

a normal block takes 1 minute to travel
a traffic block takes 5 minutes

start at (0,0) and arrive at (50,50) in the shortest amount of time:
compute which path takes the least total time, avoiding traffic where possible
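The utility-based idea above (fewest minutes from start to goal on a grid where traffic slows you down) can be sketched with Dijkstra's algorithm. The grid size, the 1- and 5-minute block costs, and the function name are assumptions of this sketch.

```python
import heapq

def best_route(n, traffic, start=(0, 0), goal=None):
    """Dijkstra's algorithm on an n-by-n grid: entering a normal block
    costs 1 minute, entering a traffic block costs 5. Returns the
    minimal total travel time from start to goal."""
    goal = goal or (n - 1, n - 1)
    cost = {start: 0}
    frontier = [(0, start)]
    while frontier:
        t, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return t
        if t > cost[(r, c)]:
            continue  # stale queue entry, a cheaper route was found
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < n and 0 <= nc < n:
                step = 5 if (nr, nc) in traffic else 1
                if t + step < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = t + step
                    heapq.heappush(frontier, (t + step, (nr, nc)))
    return None  # goal unreachable
```

For example, on a 3-by-3 grid with no traffic, `best_route(3, set())` gives 4 minutes; marking blocks as traffic raises the best achievable time, which is exactly the utility the agent maximizes against.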

2.8) Implement a performance-measuring environment simulator for the
vacuum-cleaner world depicted in Figure 2.2 and specified on page 38. Your
implementation should be modular so that the sensors, actuators, and
environment characteristics (size, shape, dirt placement, etc.) can be changed
easily. (Note: for some choices of programming language and operating system
there are already implementations in the online code repository.)

2.8)

2.9) Implement a simple reflex agent for the vacuum environment in Exercise 2.8.
Run the environment with this agent for all possible initial dirt configurations
and agent locations. Record the performance score for each configuration and
the overall average score.

2.9)

2.10) Consider a modified version of the vacuum environment in Exercise 2.8, in
which the agent is penalized one point for each movement.
A) Can a simple reflex agent be perfectly rational for this environment? Explain.
B) What about a reflex agent with state? Design such an agent.
C) How do your answers to a and b change if the agent’s percepts give it the
clean/dirty status of every square in the environment?

2.10)
A) A simple reflex agent cannot be perfectly rational in this environment because
the agent never stops and its score will continue downward. It also has no idea
whether there are even any unclean spaces before moving.
B) A reflex agent with state is possible, as long as it keeps track of the
environment; otherwise it will keep moving from space to space. A reflex
agent performs the same action in similar situations, so entering a dirty space,
cleaning it, and moving on is fine, but after moving from a clean space a
stateless agent would continue to move forever. So as long as the agent has
memory of the squares and the environment, it can work. There needs to be a
rule that states "after all squares are clean, stop."
C) If the agent knows whether every square is dirty or clean, it has the option to
take no action, which prevents the score from decreasing. The agent should only
clean dirty squares, and if it has to travel to a dirty square, it should take the
shortest route.
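The stateful agent described in part B can be sketched in Python for the two-square world; the percept encoding, square names, and action names are assumptions of this sketch.

```python
def make_stateful_vacuum_agent():
    """Reflex vacuum agent with internal state for a two-square world
    ('A' and 'B'). Movement is penalized, so the agent only moves while
    a square might still be dirty, and stops (NoOp) once it knows both
    squares are clean."""
    known = {'A': None, 'B': None}  # None means status unknown

    def agent(percept):
        location, status = percept          # e.g. ('A', 'Dirty')
        known[location] = status
        if status == 'Dirty':
            known[location] = 'Clean'       # sucking cleans the square
            return 'Suck'
        other = 'B' if location == 'A' else 'A'
        if known[other] != 'Clean':         # other square may be dirty
            return 'Right' if location == 'A' else 'Left'
        return 'NoOp'                       # everything known clean: stop

    return agent
```

The closure's `known` dictionary is the internal state: it is exactly the "after all squares are clean, stop" rule from part B, and with full observability (part C) the same dictionary could be filled in from the first percept so no exploratory move is ever wasted.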
Chapter 3 Exercises
Below are the problems I’ve chosen to work on from Chapter 3.

3.1 Explain why problem formulation must follow goal formulation.

3.1 Goal formulation is used to steer the agent in the right direction, letting it
ignore irrelevant actions. Problem formulation must follow it because it is based
on the goal: it is the process of deciding what actions and states to consider
given a certain goal. A goal may be set in stone, but how you achieve it can vary;
usually the most efficient way is chosen.

3.2 Your goal is to navigate a robot out of a maze. The robot starts in the center
of the maze facing north. You can turn the robot to face north, east, south, or
west. You can direct the robot to move forward a certain distance, although it
will stop before hitting a wall.
a. Formulate this problem. How large is the state space?
b. In navigating a maze, the only place we need to turn is at the intersection of
two or more corridors. Reformulate this problem using this observation. How
large is the state space now?
c. From each point in the maze, we can move in any of the four directions until
we reach a turning point, and this is the only action we need to do. Reformulate
the problem using these actions. Do we need to keep track of the robot’s
orientation now?
d. In our initial description of the problem we already abstracted from the real
world, restricting actions and removing details. List three such simplifications
we made.

3.2
a.
Initial state: At(0,0), Facing(0,1).
Goal state: At(a,b), where a and b are outside of the maze walls.
Actions:
Turn North: set Facing to (0,1)
Turn South: set Facing to (0,-1)
Turn East: set Facing to (1,0)
Turn West: set Facing to (-1,0)
Move z blocks in the facing direction, stopping before any wall.
Each position allows four facing directions, so the state space is 4 times the
number of positions in the maze.
b.
c.
d.

3.6 Give a complete problem formulation for each of the following. Choose a
formulation that is precise enough to be implemented.
a. Using only four colors, you have to color a planar map in such a way that no
two adjacent regions have the same color.
b. A 3-foot-tall monkey is in a room where some bananas are suspended from the
8-foot ceiling. He would like to get the bananas. The room contains two
stackable, movable, climbable 3-foot-high crates.
c. You have a program that outputs the message “illegal input record” when fed a
certain file of input records. You know that processing of each record is
independent of the other records. You want to discover what record is illegal.
d. You have three jugs, measuring 12 gallons, 8 gallons, and 3 gallons, and a
water faucet. You can fill the jugs up or empty them out from one to another or
onto the ground. You need to measure out exactly one gallon.

3.6
a.
Initial State: All regions are uncolored.
Actions: Assign a color to an uncolored region.
Transition Model: The region is now colored and cannot be colored
again.
Goal Test: All regions are colored, and no two adjacent regions have the same
color.
Cost Function: Number of coloring actions (one per region).
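The formulation in part a can be turned into a small backtracking solver; this is a hedged sketch, and the function name and color set are my own rather than anything from the book.

```python
def four_color(regions, adjacent, colors=("red", "green", "blue", "yellow")):
    """Backtracking search for the map-coloring formulation above:
    color one uncolored region at a time, never giving two adjacent
    regions the same color. Returns a region->color dict, or None."""
    assignment = {}

    def consistent(region, color):
        # No already-colored neighbor may share this color.
        return all(assignment.get(n) != color for n in adjacent[region])

    def backtrack(i):
        if i == len(regions):
            return True                 # goal test: everything colored
        region = regions[i]
        for color in colors:
            if consistent(region, color):
                assignment[region] = color
                if backtrack(i + 1):
                    return True
                del assignment[region]  # undo and try the next color
        return False

    return assignment if backtrack(0) else None
```

Called with a list of region names and an adjacency map (for instance, the mainland states of Australia), it returns one valid four-coloring if the map admits one.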
b.
Initial State: An 8-foot-high room, 2 crates, 1 monkey, bananas.
Actions: Monkey moving and stacking boxes to reach bananas.
Transition Model: the boxes have either moved, been stacked, or both.
Goal Test: The monkey gets the bananas.
Cost Function: Number of actions.
c.
Initial State: All input records
Actions: Searching through records for the illegal record.
Transition Model: Dividing records up into parts and searching each part.
Goal Test: Finding the illegal record.
Cost Function: Number of attempts.
d.
Initial State: Jugs are empty
Actions: Fill jugs up or transfer water between them.
Transition Model: Amount of water in each jug changes.
Goal Test: Is there exactly one gallon?
Cost Function: Number of actions.
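Part d's formulation can be solved mechanically with breadth-first search; the state encoding (gallons currently in each jug) and the action names below are assumptions of this sketch.

```python
from collections import deque

def measure_one_gallon(capacities=(12, 8, 3), goal=1):
    """Breadth-first search over jug states for the formulation above.
    A state records the water in each jug; actions are fill, empty, and
    pour from one jug into another. Returns the shortest action list
    that leaves exactly `goal` gallons in some jug."""
    start = (0, 0, 0)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal in state:               # goal test
            return plan
        moves = []
        for i in range(len(capacities)):
            moves.append((state[:i] + (capacities[i],) + state[i+1:], f"fill {i}"))
            moves.append((state[:i] + (0,) + state[i+1:], f"empty {i}"))
            for j in range(len(capacities)):
                if i != j:              # pour i into j until j is full
                    amount = min(state[i], capacities[j] - state[j])
                    s = list(state)
                    s[i] -= amount
                    s[j] += amount
                    moves.append((tuple(s), f"pour {i} into {j}"))
        for nxt, action in moves:
            if nxt not in seen:         # repeated-state check
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None
```

Because BFS expands states in order of action count, the plan it returns is optimal under the "number of actions" cost function chosen above.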

Extra Point Opportunity to Disprove 1 Gallon in Each Jug

Below I have shown that it is impossible to reach 1 gallon in each jug by using
only this method. Obtaining 1 gallon can be achieved in a small number of moves,
as can obtaining two separate jugs with 1 gallon each. But getting all three jugs
to hold just 1 gallon at once is impossible under these conditions.

In order to get to 1 gallon we can either add (1 + the jug's limit) gallons toward
the 12-, 8-, or 3-gallon jug (this is how we get the second 1 gallon), and only if
there is a jug to hold that extra 1 gallon, or we can dump out from a fuller jug
into an emptier one (this is how we get the first 1 gallon), which can only be done
if there are jugs with available space. Both of these conditions need to be true in
order to obtain the third gallon. The first gallon is obtained because we pour the
12-gallon jug into the 8 and the 3, leaving 1 gallon. Then we empty the 8- and
3-gallon jugs and shift the 1 gallon from the 12 to the 3. We then fill up the 12
and pour it into the 8 to get 4 gallons in the 12. After that we empty the 8-gallon
jug and move the 1 gallon from the 3 to the 8. Finally we pour the 4 gallons from
the 12 into the 3 to obtain our second gallon. After this we can shift gallons
around to get either (12,1,1), (1,8,1), or (1,1,3). From these three configurations
we can try to get the third gallon, but in doing so we will eventually reach a
state that has already been visited, causing us to loop back until there are no
more options.

3.8 On page 68, we said that we would not consider problems with negative path
costs. In this exercise, we explore this decision in more depth.
a. Suppose that actions can have arbitrarily large negative costs; explain why
this possibility would force any optimal algorithm to explore the entire state
space.
b. Does it help if we insist that step costs must be greater than or equal to some
negative constant c? Consider both trees and graphs.
c. Suppose that a set of actions forms a loop in the state space such that
executing the set in some order results in no net change to the state. If all of
these actions have negative cost, what does this imply about the optimal
behavior for an agent in such an environment?
d. One can easily imagine actions with high negative cost, even in domains such
as route finding. For example, some stretches of road might have such beautiful
scenery as to far outweigh the normal costs in terms of time and fuel. Explain, in
precise terms, within the context of state-space search, why humans do not
drive around scenic loops indefinitely, and explain how to define the state space
and actions for route finding so that artificial agents can also avoid looping.
e. Can you think of a real domain in which step costs are such as to cause
looping?

3.8
a. If actions can have arbitrarily large negative costs, an optimal algorithm must
explore the entire state space, because any unexplored action might carry a
negative cost large enough to make its path the best one.
b. It helps only somewhat if every step cost is at least some negative constant c:
in a tree, a path of depth d can then improve the total cost by at most |c| times
d, which bounds how much better a longer path can be. In a graph, however, a
negative-cost loop can still be traversed forever.
c. If a set of actions forms a loop and every action has negative cost, then each
trip around the loop lowers the total path cost, so the "optimal" behavior would
be to go around the loop forever; no finite plan is optimal.
d. Scenic routes do look amazing, but the novelty wears off after a while, which
eventually stops the infinite revisiting of the scene. In state-space terms, the
state should include not just the location but also whether (or how recently) the
scenic stretch has been driven, so its reward is not earned again on every loop;
with that state space defined, the loop no longer has negative net cost and
artificial agents will also avoid looping.
e. Real domains that contain looping include going to work, eating dinner, daily
tasks, etc.

3.9 The missionaries and cannibals problem is usually stated as follows. Three
missionaries and three cannibals are on one side of a river, along with a boat
that can hold one or two people. Find a way to get everyone to the other side
without ever leaving a group of missionaries in one place outnumbered by the
cannibals in that place. This problem is famous in AI because it was the subject
of the first paper that approached problem formulation from an analytical
viewpoint (Amarel, 1968).
a. Formulate the problem precisely, making only those distinctions necessary to
ensure a valid solution. Draw a diagram of the complete state space.
b. Implement and solve the problem optimally using an appropriate search
algorithm. Is it a good idea to check for repeated states?
c. Why do you think people have a hard time solving this puzzle, given that the
state space is so simple?

3.9
a.

b. It is a good idea to check for repeated states, because if a repeated state is
encountered and the search continues, it will loop back toward the initial state.
Since the search space is so small, we can use an optimal search such as
breadth-first search.
c. I believe that people have a difficult time solving this problem because a
majority of the moves are illegal.
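The search in part b can be sketched as a breadth-first search with a repeated-state check; the state encoding (missionaries, cannibals, boat on the left bank) is an assumption of this sketch.

```python
from collections import deque

def missionaries_and_cannibals():
    """Breadth-first search for the missionaries-and-cannibals problem.
    A state (m, c, boat) counts missionaries, cannibals, and the boat
    on the left bank; the goal is everyone on the right bank, with
    missionaries never outnumbered on either bank."""
    def safe(m, c):
        right_m, right_c = 3 - m, 3 - c
        return (m == 0 or m >= c) and (right_m == 0 or right_m >= right_c)

    start, goal = (3, 3, 1), (0, 0, 0)
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (m, c, boat), path = frontier.popleft()
        if (m, c, boat) == goal:
            return path
        sign = -1 if boat else 1        # boat carries people off its bank
        for dm, dc in ((1, 0), (2, 0), (0, 1), (0, 2), (1, 1)):
            nm, nc = m + sign * dm, c + sign * dc
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
                nxt = (nm, nc, 1 - boat)
                if nxt not in seen:     # repeated-state check from part b
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
    return None
```

The `seen` set is the repeated-state check argued for in part b: without it, crossings that undo each other would send the search looping back toward the initial state.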
3.10 Define in your own words the following terms: state, state space, search
tree, search node, goal, action, transition model, and branching factor.

3.10
State- the structure of the current world.
State Space- all states that can be reached from the initial state.
Search Tree- all action sequences branching out from the initial state.
Search Node- a single point in the search tree, represented as a data structure.
Goal- the state that an agent wants to reach.
Action- what moves an agent from state to state, or sometimes back to the same
state.
Transition Model- a description of what every action does.
Branching Factor- the maximum number of successors a node has (which
determines how complex the problem is).

3.11 What’s the difference between a world state, a state description, and a
search node? Why is this distinction useful?

3.11
A world state is what the world looks like, while a state description tells us
about the state in every detail, and a search node is a data representation of the
search. So the world state is the state itself, the state description is information
on it, and the search node is the search data.
