Artificial Intelligence
build and understand intelligent entities or agents
Two main approaches: engineering versus cognitive modeling
What is Artificial Intelligence?
John McCarthy, who coined the term Artificial Intelligence in 1956,
defines it as "the science and engineering of making intelligent
machines", especially intelligent computer programs.
Artificial Intelligence (AI) is the intelligence of machines and
the branch of computer science that aims to create it.
Intelligence is the computational part of the ability to achieve
goals in the world. Varying kinds and degrees of intelligence occur
in people, many animals and some machines.
AI is the study of the mental faculties through the use of
computational models.
AI is the study of how to make computers do things which,
at the moment, people do better.
AI is the study and design of intelligent agents, where an
intelligent agent is a system that perceives its environment and
takes actions that maximize its chances of success.
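The perceive-act definition above can be sketched in code. This is a minimal illustrative sketch, not a standard API; the thermostat example and all names are made up for illustration.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent maps what it perceives to an action."""
    @abstractmethod
    def act(self, percept):
        ...

class ThermostatAgent(Agent):
    # A trivially "intelligent" agent: perceives the temperature and
    # acts so as to achieve its goal (keep the room near the target).
    def __init__(self, target):
        self.target = target

    def act(self, percept):
        if percept < self.target - 1:
            return "heat"
        if percept > self.target + 1:
            return "cool"
        return "off"

agent = ThermostatAgent(target=21)
print(agent.act(17))  # a cold room triggers heating
```

Even this toy agent fits the definition: it perceives its environment (a temperature reading) and selects the action that best furthers its goal.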
What's involved in Intelligence?
Ability to interact with the real world
to perceive, understand, and act
e.g., speech recognition, understanding, and synthesis
e.g., image understanding
e.g., ability to take actions, have an effect
Mundane Tasks
Formal Tasks
Expert Tasks
Academic Disciplines relevant to AI
Can computers recognize speech?
Hiro: [screams, then sees who it is] You gave me a heart attack!
Baymax: Clear.
Conclusion:
NO, normal speech is too complex to accurately recognize
YES, for restricted problems (small vocabulary, single speaker)
Can Computers Understand speech?
Understanding is different from recognition:
Time flies like an arrow
assume the computer can recognize all the words
how many different interpretations are there?
1. time passes quickly like an arrow?
2. command: time the flies the way an arrow times the flies
3. command: only time those flies which are like an arrow
4. time-flies are fond of arrows
only 1. makes any sense,
but how could a computer figure this out?
clearly humans use a lot of implicit commonsense knowledge in
communication
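The ambiguity of "Time flies like an arrow" can be made concrete with a toy tagger. The lexicon and the grammar "templates" below are deliberately crude stand-ins (not a real parser); they only show that even after every word is recognized, several structurally valid readings survive.

```python
from itertools import product

# Toy lexicon: each word admits several parts of speech.
# (ADJ for "time" covers the compound reading "time-flies".)
LEXICON = {
    "time":  {"NOUN", "VERB", "ADJ"},
    "flies": {"NOUN", "VERB"},
    "like":  {"PREP", "VERB"},
    "an":    {"DET"},
    "arrow": {"NOUN"},
}
SENTENCE = ["time", "flies", "like", "an", "arrow"]

# Crude sentence templates standing in for real parse structures.
TEMPLATES = {
    ("NOUN", "VERB", "PREP", "DET", "NOUN"),  # time passes quickly, like an arrow
    ("VERB", "NOUN", "PREP", "DET", "NOUN"),  # command: time the flies...
    ("ADJ",  "NOUN", "VERB", "DET", "NOUN"),  # time-flies are fond of arrows
}

# Enumerate every tag assignment and keep those matching a template.
readings = [
    tags for tags in product(*(LEXICON[w] for w in SENTENCE))
    if tags in TEMPLATES
]
print(len(readings))  # more than one reading: the sentence is ambiguous
```

Recognizing the words is the easy part; choosing among the surviving readings requires exactly the commonsense knowledge the slide mentions, which is not in the grammar at all.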
Can computers see?
Conclusion:
mostly NO: computers can only see certain types of objects under limited
circumstances
YES for certain constrained problems (e.g., face recognition)
Can computers plan and make optimal decisions?
Intelligence
involves solving problems and making decisions and plans
e.g., you want to take a holiday in Brazil
you need to decide on dates, flights
you need to get to the airport, etc
involves a sequence of decisions, plans, and actions
Post Office
automatic address recognition and sorting of mail
Banks
automatic check readers, signature verification systems
automated loan application classification
Customer Service
automatic voice recognition
The Web
Identifying your age, gender, location, from your Web surfing
Automated fraud detection
Digital Cameras
Automated face detection and focusing
Computer Games
Intelligent characters/agents
Hard or Strong AI
Isn't there a solid definition of intelligence that doesn't depend on relating it to human
intelligence?
Not yet. The problem is that we cannot yet characterize in general what kinds of
computational procedures we want to call intelligent. We understand some of the
mechanisms of intelligence and not others.
Many of today's AI systems are connected to large databases, they deal
with legacy data, they talk to networks, they handle noise and data
corruption with style and grace, they are implemented in popular
languages, and they run on standard operating systems.
Requires:
Natural language
Knowledge representation
Automated reasoning
Machine learning
computer vision and robotics for the total Turing test
Acting humanly: Turing test
Turing (1950), "Computing Machinery and Intelligence"
Problems
Humans don't behave rationally
Limitations
Does not account for an agent's uncertainty about the world
E.g., difficult to couple to vision or speech systems
Has no way to represent goals, costs, etc. (important aspects of real-world environments)
Acting rationally: rational agent
Decision theory/Economics
Set of future states of the world
Set of possible actions an agent can take
Utility = gain to an agent for each action/state pair
The right thing: that which is expected to maximize goal
achievement, given the available information
An agent acts rationally if it selects the action that maximizes its
utility
Or expected utility if there is uncertainty
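The definitions above can be sketched directly: a rational agent computes the expected utility of each action over the possible outcome states and picks the maximizing action. The umbrella scenario, its probabilities, and its utilities below are made-up numbers for illustration.

```python
# P(state | action): distribution over outcome states for each action.
OUTCOMES = {
    "take_umbrella": {"dry": 0.98, "wet": 0.02},
    "no_umbrella":   {"dry": 0.60, "wet": 0.40},
}

# Utility = gain to the agent for each (action, state) pair.
UTILITY = {
    ("take_umbrella", "dry"): 8,   # dry, but had to carry an umbrella
    ("take_umbrella", "wet"): 2,
    ("no_umbrella",  "dry"): 10,
    ("no_umbrella",  "wet"): 0,
}

def expected_utility(action):
    """Sum of utility weighted by the probability of each outcome state."""
    return sum(p * UTILITY[(action, s)] for s, p in OUTCOMES[action].items())

def rational_action():
    """The rational choice: the action maximizing expected utility."""
    return max(OUTCOMES, key=expected_utility)

print(rational_action())  # prints "take_umbrella"
```

Here EU(take_umbrella) = 0.98·8 + 0.02·2 = 7.88 beats EU(no_umbrella) = 0.60·10 + 0.40·0 = 6.0, so the rational agent takes the umbrella even though the best single outcome (dry, no umbrella) lies on the other action.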
Human agent:
eyes, ears, and other organs for sensors;
hands, legs, mouth, and other body parts for
actuators
Robotic agent:
cameras and infrared range finders for sensors; various
motors for actuators
Agents and environments
PEAS:
Performance measure
Environment
Actuators
Sensors
Example: Agent = taxi driver
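A PEAS description is just a structured record; the taxi-driver entries below follow the classic textbook example, while the `PEAS` class itself is an illustrative sketch, not a standard API.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Task-environment description: Performance, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer", "engine sensors"],
)
print(taxi.sensors)
```

Writing the PEAS description down first is what later pins down which environment type (observable? stochastic? dynamic?) the agent designer must cope with.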
task              observable  determ./    episodic/   static/  discrete/    agents
environment                   stochastic  sequential  dynamic  continuous
crossword puzzle  fully       determ.     sequential  static   discrete     single
chess with clock  fully       strategic   sequential  semi     discrete     multi
poker             partial     stochastic  sequential  static   discrete     multi
backgammon        fully       stochastic  sequential  static   discrete     multi
taxi driving      partial     stochastic  sequential  dynamic  continuous   multi
medical diagnosis partial     stochastic  sequential  dynamic  continuous   single
image analysis    fully       determ.     episodic    semi     continuous   single
part-picking robot partial    stochastic  episodic    dynamic  continuous   single
refinery controller partial   stochastic  sequential  dynamic  continuous   single
interactive Eng. tutor partial stochastic sequential  dynamic  discrete     multi
Goal-based agents
Utility-based agents
Learning agents
Table-driven agent:
the current state of the decision process is a table lookup
indexed by the entire percept history
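The table-driven idea can be sketched in a few lines; the toy table entries below are illustrative. The key point is visible in the code: the table is keyed by the *whole* percept history, so its size explodes combinatorially with the length of that history.

```python
class TableDrivenAgent:
    """Looks up the action for the entire percept history in a table."""
    def __init__(self, table):
        self.table = table      # keys: tuples of all percepts seen so far
        self.percepts = []

    def act(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "noop")

# Toy table for a two-step vacuum-like world (illustrative entries).
table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move-right",
}

agent = TableDrivenAgent(table)
print(agent.act("dirty"))   # prints "suck"
print(agent.act("clean"))   # prints "move-right"
```

Every additional time step multiplies the number of table entries needed by the number of possible percepts, which is why table-driven agents are a thought experiment rather than a design.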
Simple reflex agents
No memory: fails if the environment
is partially observable
Example:
if car-in-front-is-braking then initiate-braking
Simple reflex agents
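The braking rule above is exactly a condition-action rule. A minimal sketch (with illustrative rules and percept fields) makes the structure explicit: the agent inspects only the *current* percept, with no memory.

```python
# Condition-action rules: (condition on current percept, action).
RULES = [
    (lambda p: p["car_in_front_is_braking"], "initiate-braking"),
    (lambda p: p["light"] == "red", "stop"),
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percept):
            return action
    return "drive-on"

print(simple_reflex_agent({"car_in_front_is_braking": True, "light": "green"}))
```

Because the decision depends only on the percept passed in, the agent is blind to anything not currently observable: the failure mode named on the slide.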
Model-based reflex agents
Model the state of the world by maintaining:
a model of how the world changes
a description of how its own actions change the world
the current world state
Example:
how the world evolves independently of the agent
an overtaking car generally will be closer behind than it was a moment
ago
how the agent's own actions affect the world
when the agent turns the steering clockwise, the car turns to the right.
After driving for five minutes northbound on the freeway, one is usually
about five miles north of where one was five minutes ago.
The details of how models and states are represented vary widely
depending on the type of environment and the particular technology used
in the agent design.
Model-based reflex agents
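A model-based reflex agent can be sketched as a simple-reflex agent plus an internal state that is updated from the last action and the new percept. The robot world below (positions, obstacles) is made up for illustration; only the update-then-act structure matters.

```python
class ModelBasedAgent:
    """Keeps internal state, updated via a (trivial) model of the world."""
    def __init__(self):
        self.state = {"position": 0, "obstacle": False}  # internal model
        self.last_action = None

    def update_state(self, percept):
        # How the agent's own action changed the world:
        if self.last_action == "forward":
            self.state["position"] += 1
        # Fold the new percept into the model:
        self.state["obstacle"] = percept["obstacle_ahead"]

    def act(self, percept):
        self.update_state(percept)
        action = "turn" if self.state["obstacle"] else "forward"
        self.last_action = action
        return action

agent = ModelBasedAgent()
agent.act({"obstacle_ahead": False})   # moves forward
agent.act({"obstacle_ahead": False})   # model now records position 1
print(agent.state["position"])         # prints 1
```

Unlike the simple reflex agent, this one carries information across time steps, so it can still act sensibly when the current percept alone underdetermines the world.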
Goal-based agents
Goals provide reason to prefer one action over the other.
We need to predict the future: we need to plan & search
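Planning toward a goal can be sketched as search over a state graph: the holiday example becomes "find an action sequence from home to Brazil". The states and actions below are a made-up toy graph; breadth-first search stands in for the planner.

```python
from collections import deque

# Toy state graph: state -> {action: successor state} (illustrative).
GRAPH = {
    "home":    {"pack": "packed"},
    "packed":  {"drive": "airport"},
    "airport": {"fly": "Brazil"},
}

def plan(start, goal):
    """Breadth-first search for a shortest action sequence reaching the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in GRAPH.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

print(plan("home", "Brazil"))  # prints ['pack', 'drive', 'fly']
```

The goal is what lets the agent prefer "pack" over doing nothing: actions are chosen because the search predicts they lead to a goal state, not because a reflex rule fires.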
Utility-based agents
[Learning agent diagram: the critic evaluates the current world state; the learning element changes the action rules; the performance element (the old agent) models the world and decides on the actions to be taken; the problem generator suggests explorations.]
Summary
Conceptions of AI span two major axes:
Thinking vs. Acting; Human-like vs. Rational
Textbook (and this course) adopt Acting Rationally
Esther Dyson: AI as raisin bread
Small sweet nuggets of intelligent control points
Embedded in the matrix of an engineered system
Rational Agent as the organizing theme
Acts to maximize expected performance measure
Task Environment: PEAS
Environment types: yield design constraints