EXPERT SYSTEM
Fall 2017-18
Intelligent Agents
Chapter 2
Prepared By
Sabbir Ahmed
Credits:
Dr. Shamim Akhter
Ahmed Ridwanul Islam
Abdus Salam
Agents
Human agent:
Sensors: eyes, ears, and other organs
Actuators: hands, legs, mouth, and other body parts
Robotic agent:
Sensors: cameras and infrared range finders
Actuators: various motors
Vacuum-cleaner world
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
A vacuum-cleaner agent
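The vacuum-cleaner agent above can be sketched as a short function. This is a minimal illustration of the two-square world, with the percept as a (location, status) pair and the action names from the slide; the function name is our own.

```python
def reflex_vacuum_agent(percept):
    """Map a single (location, status) percept to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"          # clean the current square first
    if location == "A":
        return "Right"         # move to the other square
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```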
Rational agents
An agent should strive to "do the right thing", based on
what it can perceive and
the actions it can perform
The right action is the one that will cause the agent to be
most successful
Rational agents
Rational agent:
Given: the evidence provided by the percept sequence and
whatever built-in knowledge the agent has,
Aim: for each possible percept sequence, select an action
Subject to: maximizing its performance measure.
Rational vs. omniscient agents
An omniscient agent knows the actual outcome of its
actions, and can act accordingly.
An omniscient agent is impossible in practice.
In summary, what is rational at any given time depends on
four things:
The performance measure (degree of success)
The percept sequence (perception history)
The agent's knowledge about the environment
The actions the agent can perform
Ideal Rational agents
Mapping (percept sequences to actions)
Autonomy
If an agent's actions are based entirely on
built-in knowledge (a mapping table), the
agent is said to lack autonomy (e.g., a clock).
Behavior can be based on both:
Its own experience
Built-in knowledge
A truly autonomous intelligent agent should learn
from experience to avoid unsuccessful results
(e.g., the dung beetle).
Rational agents
Rationality is distinct from omniscience
(all-knowing with infinite knowledge)
PEAS: Example 1
Agent: Automated taxi driver
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
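A PEAS description like the one above can be recorded in a small data structure so that different agent designs can be compared side by side. This is an illustrative sketch; the `PEAS` class is our own, not from any library.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Performance measure, Environment, Actuators, Sensors for one agent."""
    agent: str
    performance: list
    environment: list
    actuators: list
    sensors: list

# The automated taxi driver from the slide, as data.
taxi = PEAS(
    agent="Automated taxi driver",
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
print(taxi.agent, "-", len(taxi.sensors), "sensors")
```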
PEAS: Example 2
Agent: Medical diagnosis system
Performance measure: Healthy patient, minimize costs and lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings, patient's answers)
PEAS: Example 3
PEAS: Example 4
Environment types
Fully observable (vs. partially observable): An agent's sensors
give it access to the complete state of the environment at each
point in time.
Environment types
Static (vs. dynamic): The environment is unchanged
while the agent is deliberating.
The environment is semidynamic if the environment itself
does not change with the passage of time but the agent's
performance score does.
Environment types
Environment        Chess with a clock   Chess without a clock   Taxi driving
Fully observable   Yes                  Yes                     No
Deterministic      Strategic            Strategic               No
Episodic           No                   No                      No
Static             Semi                 Yes                     No
Discrete           Yes                  Yes                     No
Single agent       No                   No                      No
Agents and Environments
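The agent-environment coupling can be sketched as a simple loop: the environment yields a percept, the agent returns an action, and the environment applies it. The class and function names below are illustrative, built on the two-square vacuum world.

```python
def run(environment, agent, steps):
    """Couple an agent to an environment for a fixed number of steps."""
    for _ in range(steps):
        percept = environment.percept()   # environment -> agent
        action = agent(percept)           # agent decides
        environment.apply(action)         # agent -> environment

class TwoSquareWorld:
    """Toy vacuum environment with squares A and B, both dirty at start."""
    def __init__(self):
        self.location = "A"
        self.dirty = {"A": True, "B": True}
    def percept(self):
        status = "Dirty" if self.dirty[self.location] else "Clean"
        return (self.location, status)
    def apply(self, action):
        if action == "Suck":
            self.dirty[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

def agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

world = TwoSquareWorld()
run(world, agent, steps=3)
print(world.dirty)  # {'A': False, 'B': False}: both squares clean
```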
Agent functions and programs
An agent is completely specified by the agent
function, which maps percept sequences to actions.
The agent program is a concrete implementation of
that function, running on the agent's architecture.
Table-lookup agent
Drawbacks:
Huge table
Takes a long time to build the table
No autonomy
Even with learning, it would take a long time to learn the table entries
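The drawbacks above are easy to see in a sketch: a table-lookup agent keys the entire percept history into a hand-built table, so even the two-square vacuum world needs an entry for every possible sequence. All names here are illustrative.

```python
# A partial lookup table: each key is a complete percept sequence.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... one entry per possible percept sequence: the table explodes
}

def make_table_agent(table):
    """Return an agent that looks up the full percept history in a table."""
    percepts = []
    def agent(percept):
        percepts.append(percept)               # remember entire history
        return table.get(tuple(percepts), "NoOp")
    return agent

agent = make_table_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck  (looked up under the full history)
```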
Agent types
Simple reflex agents
These agents select actions on the basis of the current
percept, ignoring the rest of the percept history.
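A simple reflex agent is usually driven by condition-action rules: each rule pairs a test on the *current* percept with an action, and earlier percepts are never consulted. The rule list below is an illustrative sketch for the vacuum world.

```python
# Condition-action rules: (predicate on the current percept, action).
rules = [
    (lambda loc, status: status == "Dirty", "Suck"),
    (lambda loc, status: loc == "A", "Right"),
    (lambda loc, status: loc == "B", "Left"),
]

def simple_reflex_agent(percept):
    """Fire the first rule whose condition matches the current percept."""
    loc, status = percept
    for condition, action in rules:
        if condition(loc, status):
            return action
    return "NoOp"  # no rule matched

print(simple_reflex_agent(("B", "Dirty")))  # Suck
print(simple_reflex_agent(("A", "Clean")))  # Right
```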
Model-based reflex agents
These agents maintain internal state that keeps track of the
part of the world the current percept does not reveal, using
a model of how the world evolves.
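The role of internal state can be sketched as follows: this hypothetical model-based vacuum agent remembers which squares it has already seen clean, something no single percept can reveal, and stops once its model says the whole world is clean.

```python
class ModelBasedVacuum:
    """Vacuum agent with internal state: the set of squares known clean."""
    def __init__(self):
        self.cleaned = set()  # internal model of the world

    def __call__(self, percept):
        location, status = percept
        if status == "Dirty":
            self.cleaned.add(location)  # model update: it is about to be clean
            return "Suck"
        self.cleaned.add(location)      # observed clean
        if {"A", "B"} <= self.cleaned:
            return "NoOp"               # model says everything is clean: stop
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuum()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Clean")))  # NoOp: both squares known clean
```

A simple reflex agent in the same situation would keep shuttling between the squares forever; the one-line model (`self.cleaned`) is what lets this agent stop.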
Goal-based agents
These agents combine a model of the world with goal
information that describes desirable situations, and choose
actions that will achieve their goals.
Utility-based agents
These agents use a utility function that maps a state (or a
sequence of states) to a degree of desirability, allowing
rational choices when goals alone are inadequate (for
example, when goals conflict or success is uncertain).
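Utility-based action selection can be sketched as scoring the predicted outcome of each action and picking the best. The toy model and utility function below are illustrative assumptions, not part of any standard framework: states are distances from a goal, and utility is higher the closer the agent is.

```python
def choose_action(state, actions, predict, utility):
    """Pick the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(predict(state, a)))

# Toy world model: moving forward or back changes distance to the goal.
def predict(state, action):
    return state + {"forward": -1, "back": +1, "stay": 0}[action]

# Utility: distance 0 (the goal) is best; farther away is worse.
def utility(state):
    return -abs(state)

print(choose_action(3, ["forward", "back", "stay"], predict, utility))
# forward: predicted distances are 2, 4, 3, and 2 has the highest utility
```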
Learning agents
A learning agent has a learning element that improves the
performance element using feedback from a critic, and a
problem generator that suggests exploratory actions.