
CS 785, Fall 2001

George Tecuci
tecuci@cs.gmu.edu
http://lalab.gmu.edu/

Learning Agents Laboratory


Department of Computer Science
George Mason University

How are agents built?

[Figure: the domain expert and the knowledge engineer interact through dialog; the knowledge engineer programs the intelligent agent, which consists of an inference engine and a knowledge base and which generates results.]

A knowledge engineer attempts to understand how a subject matter expert reasons and solves problems, and then encodes the acquired expertise into the agent's knowledge base.

The expert analyzes the solutions generated by the agent (and often the knowledge base itself) to identify errors, and the knowledge engineer corrects the knowledge base.

A Scenario for Manual Knowledge Acquisition

Adapted from: B.G. Buchanan, D. Barstow, R. Bechtel, J. Bennett, W. Clancey, C. Kulikowski, T. Mitchell, and D.A. Waterman, Constructing an Expert System, in F. Hayes-Roth, D. Waterman and D. Lenat (eds), Building Expert Systems, Addison-Wesley, 1983, pp. 127-168.


Identification of a problem

The director of ORNL faces a problem. EPA regulations forbid the discharge of quantities of oil or hazardous chemicals into or upon waters of the United States, when this discharge violates specified quality standards. ORNL has approximately 2000 buildings on a 200-square-mile government reservation, with 93 discharge sites entering White Oak Creek. Oil and hazardous chemicals are stored and used extensively at ORNL. The problem is to detect, monitor, and contain spills of these materials.

Investigated solution
Develop a computer system that incorporates the expertise of people familiar with spill detection and containment (i.e. a knowledge-based system, expert system, or agent).
A knowledge engineer is assigned the job of building the system.
The knowledge engineer becomes familiar with the problem and the domain.
The knowledge engineer finds an expert on the subject who agrees to collaborate in building the system.

Scope the problem to solve: specify requirements

The knowledge engineer and the expert have a series of meetings to better identify the problem and to characterize it informally. They decide to concentrate on identifying, locating, and containing the spill:

When an accidental inland spill of an oil or chemical occurs, an emergency situation may exist, depending on the properties and quantity of the substance released, the location of the substance, and whether or not the substance enters a body of water.
The observer of a spill should:
1. characterize the spill and the probable hazards,
2. contain the spill material,
3. locate the source of the spill and stop any further release,
4. notify the Department of Environmental Management.

Understanding the expertise domain

The knowledge engineer schedules numerous meetings with the expert to uncover basic concepts, primitive relations, and definitions needed to talk about and understand this problem and its solutions. The following is a sample dialogue between the knowledge engineer and the expert:

KE: Suppose you were told that a spill had been detected in White Oak Creek one mile before it enters White Oak Lake. What would you do to contain the spill?
SME: That depends on a number of factors. I would need to find the source in order to prevent the possibility of further contamination, probably by checking drains and manholes for signs of the spill material. And it helps to know what the spilled material is.
KE: How can you tell what it is?
SME: Sometimes you can tell what the substance is by its smell. Sometimes you can tell by its color, but that's not always reliable since dyes are used a lot nowadays. Oil, however, floats on the surface and forms a silvery film, while acids dissolve completely in the water. Once you discover the type of material spilled, you can eliminate any building that either doesn't store the material at all or doesn't store enough of it to account for the spill.

Identify the basic concepts of the domain


As a result of such dialogues, the knowledge engineer identifies a set of concepts and features used in this problem:

Task: Identification of spill material

Attributes of spill:
  Type of spill: oil, acid
  Location of spill: <a set of drains and manholes>
  Volume of spill: <a number of liters>

Attributes of material:
  Color: silvery, clear, etc.
  Odor: pungent/choking, etc.
  Does it dissolve?
  Possible locations: <a set of buildings>
  Amount stored: <a number of liters>

Choosing the system-building language or tool

During conceptualization, the knowledge engineer also considers a general system-building language or tool for implementing the knowledge-based system.
It was determined that the data are well-structured and fairly reliable, and that the decision processes involve feedback and parallel decisions. This suggests the use of a rule-based language.
Therefore the knowledge engineer decides to use the rule-based language ROSIE.
ROSIE provides a general (rule-based) inference engine, as well as a formalism for representing the knowledge in the form of assertions about objects and inference rules.
ROSIE could be regarded as a very general expert system shell.

Represent the domain concepts: object ontology

The knowledge engineer attempts to represent the concepts in ROSIE's formalism:

ASSERT each of BUILDING 3023 and BUILDING 3024 is a building.
ASSERT s6-1 is a source in BUILDING 3023.
ASSERT s6-2 is a source in BUILDING 3024.
ASSERT s6-1 does hold 2000 gallons of gasoline.
ASSERT s6-2 does hold 50 gallons of acetic acid.
ASSERT each of d6-1 and d6-2 is a drain.
ASSERT each of m6-1 and m6-2 is a manhole.
ASSERT any drain is a location and any manhole is a location.
ASSERT each of diesel oil, hydraulic oil, transformer oil and gasoline is an oil.
ASSERT each of sulfuric acid, hydrochloric acid and acetic acid is an acid.
ASSERT every oil is a possible-material of the spill
  and every acid is a possible-material of the spill.
ASSERT the spill does smell of [some material, e.g. gasoline, vinegar, diesel oil].
ASSERT the spill does have [some odor, e.g., a pungent/choking, no] odor.
ASSERT the odor of the spill [is, is not] known.
ASSERT the spill does form [some appearance, e.g., a silvery film, no film].
ASSERT the spill [does, does not] dissolve in water.

Define the problem solving rules

The knowledge engineer now uses the identified concepts to represent the expert's method of determining the spill material as a set of ROSIE rules:

To determine-spill-material:

[1] IF the spill does not dissolve in water
       and the spill does form a silvery film,
    THEN let the spill be oil.

[2] IF the spill does dissolve in water
       and the spill does form no film,
    THEN let the spill be acid.

(continued on next page)

Define the problem solving rules (cont.)

(continued from previous page)

[3] IF the spill = oil
       and the odor of the spill is known,
    THEN choose situation:
       IF the spill does smell of gasoline,
       THEN let the material of the spill be gasoline with certainty .9;
       IF the spill does smell of diesel oil,
       THEN let the material of the spill be diesel oil with certainty .8.

[4] IF the spill = acid
       and the odor of the spill is known,
    THEN choose situation:
       IF the spill does have a pungent/choking odor,
       THEN let the material of the spill be hydrochloric acid with certainty .7;
       IF the spill does smell of vinegar,
       THEN let the material of the spill be acetic acid with certainty .8.

End.
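The same rules can be traced in a minimal Python sketch. The Spill fields and the function below are illustrative names, not part of ROSIE; the conditions and certainty values mirror the rules above.

from dataclasses import dataclass

@dataclass
class Spill:
    dissolves_in_water: bool
    film: str           # e.g. "silvery film" or "no film"
    odor_known: bool
    smell: str          # e.g. "gasoline", "vinegar", "pungent/choking"

def determine_spill_material(spill):
    """Return (material, certainty) pairs suggested by rules [1]-[4]."""
    suggestions = []
    # Rules [1] and [2]: determine the type of the spill.
    if not spill.dissolves_in_water and spill.film == "silvery film":
        spill_type = "oil"
    elif spill.dissolves_in_water and spill.film == "no film":
        spill_type = "acid"
    else:
        return suggestions
    # Rules [3] and [4]: determine the material from the odor.
    if spill_type == "oil" and spill.odor_known:
        if spill.smell == "gasoline":
            suggestions.append(("gasoline", 0.9))
        elif spill.smell == "diesel oil":
            suggestions.append(("diesel oil", 0.8))
    elif spill_type == "acid" and spill.odor_known:
        if spill.smell == "pungent/choking":
            suggestions.append(("hydrochloric acid", 0.7))
        elif spill.smell == "vinegar":
            suggestions.append(("acetic acid", 0.8))
    return suggestions

print(determine_spill_material(Spill(
    dissolves_in_water=False, film="silvery film",
    odor_known=True, smell="gasoline")))    # [('gasoline', 0.9)]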

Verifying the problem solving rules

The knowledge engineer shows the rules to the expert and asks for reactions:

KE: Here are some rules I think capture your explanation about determining the type of material spilled and eliminating possible spill sources. What do you think?
SME: Uh-huh (long pause). Yes, that begins to capture it. Of course if the material is silver nitrate it will dissolve only partially in the water.
KE: I see. Well, let's add that information to the knowledge base and see what it looks like.

Refinement of the knowledge base

The knowledge engineer may now revise the knowledge base by reformulating basic domain concepts and refining the rules.

Delete:  ASSERT the spill [does, does not] dissolve in water.

Add:     ASSERT the solubility of the spill is [some level - high, moderate, low].

Modify:  [1]   IF the solubility of the spill is low
                  and the spill does form a silvery film,
               THEN let the spill be oil.

Add:     [1.5] IF the solubility of the spill is moderate,
               THEN let the material of the spill be silver-nitrate
                    with certainty .6.

Main phases of the agent development process

Defining the problem to solve and the system to be built: requirements specification
Understanding the expertise domain
Choosing or building an agent building tool: inference engine and representation formalism
Development of the object ontology
Development of problem solving rules or methods
Refinement of the knowledge base

There are feedback loops among all phases.

Elicitation of the expert's conception of a domain

By eliciting the expert's conception of his/her expertise domain we mean determining which concepts apply in the domain, what they mean, what their relative place in the domain is, what criteria differentiate similar concepts, and what organizational structure gives these concepts coherence for the expert.

Elicitation methodology
(based primarily on Gammack, 1987)

1. Concept elicitation
   (elicit the concepts of the domain, i.e. an agreed vocabulary)
2. Structure elicitation: the card-sort method
   (elicit some structure for the concepts)
3. Structure representation
   (formally represent that structure in a semantic network)
4. Transformation of the representation
   (transform the representation to be used for some desired purpose)

Concept elicitation methods

Ask the expert to prepare an introductory talk outlining the whole domain, and to deliver it as a tutorial session to the knowledge engineer.
Tape-record a lecture.
Ask the expert to generate a list of typical concepts and then systematically probe for more relevant information (e.g. using free association).
Identify concepts from the index of an expert's book.

Concept elicitation methods (cont.)

Unstructured interview of the expert
The questions and the alternative responses are open-ended.
Example (the interview illustrated before in the spill application):
KE: Suppose you were told that a spill had been detected in White Oak Creek one mile before it enters White Oak Lake. What would you do to contain the spill?
SME: That depends on a number of factors. I would need to find the source in order to prevent the possibility of further contamination, probably by ...
Used when the KE wants to explore an issue.
Difficult to plan and conduct.

Concept elicitation methods (cont.)

Structured interview of the expert
The questions are fixed in advance.
Types of structured questions:
Multiple-choice questions (offer specific choices, faster tabulation, and less bias due to the way the answers are ordered)
Dichotomous (yes/no) questions
Ranking scale questions (ask the expert to arrange items in a list in order of their importance or preference)
Used when the KE wants specific information.
It is goal-oriented.

Concept elicitation methods (cont.)

Protocol analysis (think-aloud technique)
Systematic collection and analysis of the thought processes or problem-solving methods of an expert.
Protocols (cases, scenarios) are collected by asking experts to solve problems and to verbalize what goes through their minds, stating directly what they think. The solving process is carried out in an automatic fashion while the expert talks.
The knowledge engineer does not interrupt or ask questions.
Structuring the elicited information occurs later, when the knowledge engineer analyzes the protocol.

Illustration

Elicitation experiment in the domain of domestic gas-fired hot water and central heating systems (Gammack, 1987).
An initial interview resulted in about 90 nouns or compound nouns, both concrete and abstract in nature.
The expert edited this list by removing synonyms, slips of the tongue, and other aberrant terms, which reduced the list to 75 familiar concepts.

Illustration (cont.)

The expert initially considered the dictionary definition of these concepts to be adequate, but since there is no guarantee that the expert's own definition necessarily matches the dictionary one, a personal definition of the concepts was given. This produced a few new concepts, such as "fluid", "safety", and "room".
The definitions indicated that sometimes a concept went beyond the level of detail given in a general purpose dictionary, and sometimes it meant one very specific idea in the context of the domain.
This illustrates an important issue: much human expertise is likely to consist in the personal and semantic associations (connotative meaning) that an expert brings to domain concepts, and may result in the invention or appropriation of personalized terms to describe esoteric or subtle domain phenomena.

Illustration (cont.)

The domain glossary obtained characterized the component parts of a central heating system, such as thermostats and radiators, but also included general physical terms such as heat and gravity.
A second pass through the transcript yielded 42 relational concepts, usually verbs (contains, heats, connects to, etc.). These concepts will be used later to label relationships between the discovered concepts.

Features of the concept elicitation methods

Strengths
gives the knowledge engineer an orientation to the domain
generates much knowledge cheaply and naturally
not a significant effort for the expert

Weaknesses
incomplete and arbitrary coverage
the knowledge engineer needs appropriate training and/or social skills

Structure elicitation: The Card-Sort Method
(elicit the hierarchical organization of the concepts)

Type the concepts on small individual index cards.
Ask the expert to group together the related concepts into as many small groups as possible.
Ask the expert to label each of the groups.
Ask the expert to combine the groups into slightly larger groups, and to label them.
The result will be a hierarchical organization of the concepts, which can be recorded as sketched below.
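The outcome of a card sort can be recorded very simply. The following minimal Python sketch stores hand-entered groupings and labels (taken from the illustration on the next slide):

hierarchy = {
    "control electricity": {
        "electric time controls": ["Satchwell", "time switch", "programmer"],
        "thermostat": ["thermostat", "set point", "rotary control knob"],
        "gas control": ["gas control valve", "solenoid"],
    }
}

def print_hierarchy(node, indent=0):
    """Print the labeled groups and their member concepts."""
    for label, members in node.items():
        print(" " * indent + label)
        if isinstance(members, dict):
            print_hierarchy(members, indent + 2)
        else:
            for concept in members:
                print(" " * (indent + 2) + concept)

print_hierarchy(hierarchy)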

The Card-sort method: illustration

[Figure: part of the hierarchy of concepts from the card-sort method. Concepts such as Satchwell, time switch, programmer, thermostat, set point, rotary control knob, gas control valve, solenoid, electrical system, electrical supply, electrical contact, fuse, pump, and motorized valve are grouped under labels such as electric time controls, thermostat, gas control, electrical supply, electrical components, and mechanical components, which combine into the larger group control electricity.]

Features of the Card-sort method

Strengths
gives clusters of concepts and hierarchical organization
splits large domains into manageable sub-areas
easy to do and widely applicable

Weaknesses
incomplete and unguided
strict hierarchy is usually too restrictive


Structure representation

Represent the acquired concepts in a semantic network and acquire additional structural knowledge (see the sketch below):
Ask the expert to sort the concepts by considering each concept C as a reference, and identifying those related to it.
Ask the expert to order the concepts related to C along a scale from 0 to 100, marked at the side of a table. The values are read off the scale and entered in a data matrix.
Generate a network from the matrix, where the nodes are the concepts and the weighted links represent proximities.
For each pair of concepts identified as related, ask the expert what that relationship is.
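A minimal Python sketch of the last two steps, assuming a hand-entered proximity matrix and an illustrative cut-off of 60; the concepts and ratings below are made up for the example:

concepts = ["radiator", "primary circuit", "air", "header tank"]
proximity = {
    ("radiator", "primary circuit"): 85,
    ("radiator", "air"): 70,
    ("primary circuit", "header tank"): 40,
}

THRESHOLD = 60  # assumed cut-off below which pairs are not linked

links = [(a, b, p) for (a, b), p in proximity.items() if p >= THRESHOLD]
for a, b, p in links:
    # The knowledge engineer would now ask the expert to name each link,
    # e.g. part-of(radiator, primary circuit), warms(radiator, air).
    print(f"{a} -- {b} (proximity {p}): what is this relationship?")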

Structure representation: illustration

[Figure: a fragment of the network elicited for the central heating domain. Its nodes include domestic plumbing system, pipe, water supply, flow, header tank, water expansion, gas control valve, main gas supply, thermostat, radiator control valve, heat, boiler, electrical supply, electrical contact, feedback loop, thermal circuit, pilot light, time switch, gravity, radiator, primary circuit, motorized valve, air, hot water cylinder, and immersion heater, connected by weighted proximity links.]

Developing the representation

For each pair of concepts identified by the expert as relatable, ask what that relationship was.
This task produced 248 relationships. This number was effectively reduced to around 124 due to symmetry.
Examples of elicited relationships:
part-of (radiator, primary circuit)
feeds (water supply, header tank)
warms (radiator, air)

Sometimes relations were not so direct:
"boiler supplies heat [that] causes water expansion [that] requires header tank"
This suggests the relationship:
necessitates (boiler, header tank)

Features of the structure representation method


Strengths
gives information on the domain structure in the
form of a network
shows which links are likely to be meaningful
organizes the elicitation of semantic relationships

Weaknesses
results depend on various parameter settings
requires more time from the expert
combinatorial explosion limits its applicability


Transformation of the representation

The elicited knowledge needs to be formally represented in the representation language of an expert system shell, such as ROSIE.
Additional problem solving knowledge needs to be elicited, depending on the type of system to be built (e.g. question-answering system, diagnostic system, repair system, etc.).

Elicitation based on the personal construct theory


The personal construct theory

What is a repertory grid

Elicitation of repertory grids

Grid analysis

Features of the repertory grid approach



The personal construct theory

A model of human thinking developed in 1955 by the psychologist George Kelly for use in psychiatry.
Basic idea of the theory:
Each person is a scientist with a personal model of the world around him. He creates personal constructs that classify his personal observations or experience of the world, developing theories that allow him to anticipate, and to act in accordance with his anticipation.
A personal construct is therefore an attribute whose values can distinguish one subgroup of objects from another.
This theory was used to develop techniques for eliciting a subject matter expert's personal constructs with respect to his domain of expertise.

Personal constructs: illustration

Example of constructs (or dichotomous distinctions) characterizing an employee for the purpose of staff appraisal:
intelligent - dim
mild - abrasive
ideas person - staid
Each person can be rated according to the above constructs, e.g. John is mild (i.e. John is not abrasive).
The rating can be more refined, along a scale between the poles of the construct:
very mild | mild | neutral | abrasive | very abrasive

What is a repertory grid

A repertory grid is a representation of a person's (or expert's) view of a particular problem. It is a two-way classification of a set of elements based on a set of constructs.
Example of a repertory grid for staff appraisal (left pole rated 1, right pole rated 5):

                         Dick Liz Bob Paul Ann Don Mary Ruth Rob
intelligent               1    1   4   5    3   3   5    2    3   dim
willing                   1    2   4   5    1   1   4    3    1   unwilling
new boy                   1    2   3   5    4   4   4    1    4   old sweats
little supervision reqd   3    1   4   5    2   1   5    2    2   need supervision
motivated                 1    1   4   5    2   2   5    3    3   less motivated
reliable                  3    2   2   5    1   1   5    1    2   not so reliable
mild                      3    4   5   2    2   3   1    5    4   abrasive
ideas men                 1    1   5   4    2   3   1    3    4   staid

Elicitation of repertory grids: Sample session with the Pegasus system

Type in your purpose for doing this grid: staff appraisal
Name some of the elements: Dick, Liz, Bob, Paul, Ann, Don, Mary
{Next, the system will elicit constructs from the user using the triad method.}

The triad method (or the minimal context method)
The elements are presented in groups of three, three being the lowest number that will produce both a similarity and a difference. The subject is asked to say in what way two are alike and thereby different from the third. This is the emergent pole of the construct.
The implicit pole may be elicited by the difference method (in what way does the singleton differ from the pair) or by the opposite method (what would be the opposite of the description of the pair).
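A minimal Python sketch of presenting triads; it simply enumerates three-element combinations, which is an assumption of the sketch rather than Pegasus's actual selection strategy:

from itertools import combinations

elements = ["Dick", "Liz", "Bob", "Paul", "Ann", "Don", "Mary"]

for triad in combinations(elements, 3):
    print(f"Triad: {triad}")
    print("Which one is different from the other two, and why?")
    break  # show only the first triad, e.g. (Dick, Liz, Bob)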

Elicitation of repertory grids: Sample session

Triad for elicitation of qualities: Dick, Liz, Bob
Can you choose two of these elements which are in some way alike and different from the other one? Yes
Which is the different one? Bob
Now I want you to think about what you have in mind when you separate the pair from the other one. How can you describe the two ends or poles of the scale which discriminates Dick and Liz on the left pole from Bob at the right pole?
left pole rated 1: intelligent
right pole rated 5: dim

Elicitation of repertory grids: Sample session

According to how you feel about the considered persons, please assign to each of them a provisional value from 1 to 5:
Dick 1, Liz 1, Bob 5, Paul 5, Ann 3, Don 3, Mary 5, Ruth 4, Rob 5
At this point, Pegasus has built the following grid:

                         Dick Liz Bob Paul Ann Don Mary Ruth Rob
intelligent               1    1   4   5    3   3   5    2    3   dim

Elicitation of repertory grids: Sample session

The session will continue, with Pegasus presenting other triads for construct elicitation, and the user defining the corresponding constructs.
The current grid is the eight-construct grid shown earlier (intelligent - dim through ideas men - staid, over the nine elements).

Elicitation of repertory grids: Sample session

After several constructs have been built, Pegasus may direct the user in defining new constructs that distinguish between the elements that are very similar with respect to the current constructs (the grid is unchanged from the previous slide).

Elicitation of repertory grids: Sample session

Ann and Don are matched at the 90% level. This means that so far you have not distinguished between Ann and Don.
Do you want to split this? Yes
Think of a construct which separates Ann from Don, with Ann on the left pole and Don on the right pole.
left pole rated 1: self starters
right pole rated 5: need a push
According to how you feel about the considered persons, please assign to each of them a provisional value from 1 to 5:
Dick 2, Liz 1, Bob 5, Paul 5, Ann 1, Don 5, Mary 5

Elicitation of repertory grids: Sample session

Pegasus may also direct the user in defining new elements that distinguish between the constructs that are very similar with respect to the current elements (the grid now also contains the self starters - need a push construct).

Elicitation of repertory grids: Sample session

The two constructs you called
intelligent - dim
little supervision reqd - need supervision
are matched at the 66 percent level. This means that most of the time you are saying intelligent you are also saying little supervision required, and most of the time you are saying dim you are also saying need supervision.
Think of another element which is either intelligent and needs supervision, or little supervision required and dim.
Do you know such a person? John
Type in the ratings for this element on each construct. Left pole rated 1, right pole rated 5.
intelligent - dim: 5
willing - unwilling: 2
new boy - old sweats: 3
little supervision reqd - need supervision: 3
motivated - less motivated: 2
...
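A plausible Python sketch of the similarity score behind messages such as "Ann and Don are matched at the 90% level". This particular formula (100% minus the normalized sum of rating differences) is an assumption of the sketch; it reproduces the reported Ann-Don match on the eight-construct grid above, but Pegasus's exact formula may differ.

def match_percent(ratings_a, ratings_b, scale_range=4):
    """Similarity of two rating vectors on a 1-5 scale, as a percentage."""
    diffs = sum(abs(a - b) for a, b in zip(ratings_a, ratings_b))
    return 100 * (1 - diffs / (len(ratings_a) * scale_range))

ann = [3, 1, 4, 2, 2, 1, 2, 2]   # Ann's column over the eight constructs
don = [3, 1, 4, 1, 2, 1, 3, 3]   # Don's column
print(round(match_percent(ann, don)))   # 91, i.e. about the 90% level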

Elicitation of repertory grids: Sample session

The final grid is (left pole rated 1, right pole rated 5):

                         Dick Liz Bob Paul Ann Don Mary Ruth Rob John
intelligent               1    1   4   5    3   3   5    2    3    5   dim
willing                   1    2   4   5    1   1   4    3    1    2   unwilling
new boy                   1    2   3   5    4   4   4    1    4    3   old sweats
little supervision reqd   3    1   4   5    2   1   5    2    2    3   need supervision
motivated                 1    1   4   5    2   2   5    3    3    2   less motivated
reliable                  3    2   2   5    1   1   5    1    2    3   not so reliable
mild                      3    4   5   2    2   3   1    5    4    5   abrasive
ideas men                 1    1   5   4    2   3   1    3    4    4   staid
self starters             2    1   5   5    1   3   5    3    4    5   need a push
creative                  1    1   5   5    2   3   4    3    4    5   non-creative
helpful                   4    3   4   2    3   5   1    4    5    5   unhelpful
professional              1    2   3   3    2   1   5    2    4    4   less professional
overall rating high       2    1   3   4    1   2   5    2    3    4   overall rating low
messers                   2    2   5   4    3   5   1    5    3    1   tidy

Grid analysis: inferring new knowledge from grids

A repertory grid can be viewed as a set of feature vectors, each characterizing an element along the dimensions indicated by the constructs (see the final grid above).

Hierarchical clustering of repertory grids

Clusters similar elements and similar constructs, prompting the expert to name the clusters. In the example, a cluster of constructs around ideas men - staid and self starters - need a push was named "research oriented" by the expert. A sketch of such clustering follows.
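A minimal Python sketch of clustering the grid's constructs, in the spirit of FOCUS, using SciPy's agglomerative clustering over a few rating rows of the final grid; the distance metric and linkage method are illustrative choices, not necessarily those of the original tools.

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

constructs = ["intelligent", "willing", "ideas men", "self starters", "creative"]
ratings = np.array([          # rows from the final grid (Dick..John)
    [1, 1, 4, 5, 3, 3, 5, 2, 3, 5],   # intelligent - dim
    [1, 2, 4, 5, 1, 1, 4, 3, 1, 2],   # willing - unwilling
    [1, 1, 5, 4, 2, 3, 1, 3, 4, 4],   # ideas men - staid
    [2, 1, 5, 5, 1, 3, 5, 3, 4, 5],   # self starters - need a push
    [1, 1, 5, 5, 2, 3, 4, 3, 4, 5],   # creative - non-creative
])

# City-block distance between rating rows, then single-link clustering.
tree = linkage(pdist(ratings, metric="cityblock"), method="single")
print(tree)   # each row merges two clusters; the expert names the merges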

Rule induction from repertory grids

The description of each element in the grid is a positive or a negative example of an output attribute (e.g. overall rating high): in the final grid above, the elements rated toward the left pole of the overall rating high - overall rating low construct (such as Dick, Liz, Ann, Don, and Ruth) are positive examples, and the elements rated toward its right pole (such as Paul, Mary, and John) are negative examples.

Rule induction from repertory grids (cont.)

The description of each element in the grid is a positive or a negative example of an output attribute (e.g. overall rating high):

overall-rating-high(Dick) <-
    intelligent(Dick), willing(Dick), new-boy(Dick), little-sprv-req(Dick),
    motivated(Dick), ideas-man(Dick), self-starters(Dick), ...

not overall-rating-high(Paul) <-
    dim(Paul), unwilling(Paul), experienced(Paul), need-supervision(Paul),
    less-motivated(Paul), not-so-reliable(Paul), mild(Paul), ...

A rule for the output attribute is learned through empirical induction from such examples:

overall-rating-high(x) <-
    intelligent(x), little-sprv-reqd(x), reliable(x),
    self-starters(x), creative(x), professional(x)
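A minimal Python sketch of such induction over three constructs of the final grid. The thresholds (ratings of 3 or less count as the left pole, 4 or more as the right pole, and positive examples are elements rated 1-2 on the output construct) are crude assumptions of the sketch; INDUCT's actual algorithm is more elaborate.

grid = {   # construct name: (left pole, right pole, ratings for Dick..John)
    "intelligent":   ("intelligent", "dim",           [1,1,4,5,3,3,5,2,3,5]),
    "reliable":      ("reliable", "not so reliable",  [3,2,2,5,1,1,5,1,2,3]),
    "self starters": ("self starters", "need a push", [2,1,5,5,1,3,5,3,4,5]),
    "rating":        ("overall rating high", "overall rating low",
                                                      [2,1,3,4,1,2,5,2,3,4]),
}
elements = ["Dick","Liz","Bob","Paul","Ann","Don","Mary","Ruth","Rob","John"]
TARGET = "rating"

def feature_poles(i):
    """Construct poles that apply to element i (excluding the target)."""
    out = set()
    for name, (left, right, r) in grid.items():
        if name == TARGET:
            continue
        out.add(left if r[i] <= 3 else right)
    return out

target_ratings = grid[TARGET][2]
pos = [feature_poles(i) for i in range(len(elements)) if target_ratings[i] <= 2]
neg = [feature_poles(i) for i in range(len(elements)) if target_ratings[i] >= 4]

# Poles common to every positive example and absent from every negative one.
rule = set.intersection(*pos) - set().union(*neg)
print("overall-rating-high(x) <-", sorted(rule))
# overall-rating-high(x) <- ['intelligent', 'self starters']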

KSS0: An integrated knowledge elicitation and inductive learning system
(Gaines and Shaw, 1992)

Consists of the following modules:
ELICIT elicits repertory grids from the expert
FOCUS hierarchically clusters elements and constructs, prompting the expert to add higher-level constructs structuring the domain
PRINCOM spatially clusters elements and constructs, prompting the expert to add higher-level constructs structuring the domain
SOCIO compares the structures for the same domain generated by different experts
INDUCT induces rules from the repertory grid
EXPORT transfers the results of grid elicitation and analysis to an expert system shell

Features of the repertory grid approach

repertory grids can be easily elicited from a subject matter expert
other concepts and inference rules can be learned from repertory grids, although the number of examples is small
more complex knowledge structures are difficult to generate from repertory grids, since the grids are oriented toward representing declarative attribute-based knowledge

Knowledge acquisition for role-limiting methods


Role-limiting problem solving methods

Propose-and-revise role-limiting problem solving method

SALT: an elicitation tool for propose-and-revise systems

Knowledge elicitation, refinement, and compilation

Features of the role-limiting approach



Role-limiting problem solving methods

Problem solving is the identification, selection, and implementation of a sequence of actions that accomplish a task within a specific domain.
A problem solving method provides a means of identifying, at each step, candidate actions. It provides one or more mechanisms for selecting among candidate actions and ensures that the selected action is implemented.
A role-limiting problem solving method predefines the task-related control knowledge used. It typically consists of a simple loop over a sequence of five to ten steps. Within a step there is no control, that is, it makes no difference in what order the actions are performed.
The method also defines the roles the task-specific knowledge must play and the forms in which that knowledge should be represented, and therefore facilitates the acquisition of this knowledge from the expert.
The price paid for these assumptions is a more limited applicability.

The propose-and-revise role-limiting problem solving method

For design applications.
Input: a list of parameters representing customer requirements.
Output: a list of quantities, ordering codes and other parameters for all equipment required, and an equipment layout.
Method (creates a design by proposing values for design parameters, checking constraints on those parameters, and revising values if constraints on proposed parameters are violated; see the sketch below):

1. Extend the design and identify constraints on the extension just formed.
2. Identify constraint violations; if none, go to step 1.
3. Suggest potential fixes for a constraint violation.
4. Select the least costly fix not yet attempted.
5. Tentatively modify the design and identify constraints on the modification just formed.
6. Identify constraint violations due to the revision; if any, go to 4.
7. Remove relationships incompatible with the revision.
8. If the design is incomplete, go to 1.
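A minimal Python sketch of this loop (step 7 is omitted, and step 4 is approximated by trying fixes in cost order); the data structures are assumptions of the sketch, not the OPS5 encoding that SALT generates:

def propose_and_revise(parameters, procedures, constraints, fixes):
    """parameters: ordered list of design-parameter names.
    procedures[p](design) -> proposed value for p.
    constraints[p]: list of predicates over the design.
    fixes[p]: list of (cost, revise_fn) pairs; revise_fn mutates the design."""
    design = {}
    for p in parameters:
        design[p] = procedures[p](design)                 # step 1
        while any(not c(design) for c in constraints.get(p, [])):   # step 2
            for cost, revise in sorted(fixes.get(p, []),  # steps 3-4
                                       key=lambda f: f[0]):
                revise(design)                            # step 5
                if all(c(design) for c in constraints.get(p, [])):  # step 6
                    break
            else:
                raise ValueError(f"no fix removes the violation on {p}")
    return design                                         # step 8

# Toy usage: propose a car-jamb-return and fix a (made-up) maximum violation.
design = propose_and_revise(
    parameters=["stringer-quantity", "car-jamb-return"],
    procedures={"stringer-quantity": lambda d: 2,
                "car-jamb-return": lambda d: 30},
    constraints={"car-jamb-return":
                 [lambda d: d["car-jamb-return"] <= 14 * d["stringer-quantity"]]},
    fixes={"car-jamb-return":
           [(4, lambda d: d.update({"stringer-quantity":
                                    d["stringer-quantity"] + 1}))]},
)
print(design)   # {'stringer-quantity': 3, 'car-jamb-return': 30}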

Knowledge roles

There are three roles that knowledge plays in this method:
PROPOSE-A-DESIGN-EXTENSION
IDENTIFY-A-CONSTRAINT on a part of the design
PROPOSE-A-FIX for a constraint violation
There are three types of knowledge pieces (one for each role):
PROCEDURE to determine the value of a design parameter
CONSTRAINT to identify limits on the value of a design parameter
FIX to suggest revisions in response to a constraint violation
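The three kinds of knowledge pieces can be pictured as simple records. A minimal Python sketch, with field names following the SALT prompts shown on the next slides (the encoding itself is illustrative):

from dataclasses import dataclass

@dataclass
class Procedure:                 # fills the PROPOSE-A-DESIGN-EXTENSION role
    name: str
    precondition: str
    procedure: str               # "calculation" or "database-lookup"
    formula: str
    justification: str

@dataclass
class Constraint:                # fills the IDENTIFY-A-CONSTRAINT role
    constrained_value: str
    constraint_type: str         # e.g. "maximum"
    name: str
    precondition: str
    formula: str
    justification: str

@dataclass
class Fix:                       # fills the PROPOSE-A-FIX role
    violated_constraint: str
    value_to_change: str
    change_type: str             # e.g. "increase"
    step_size: int
    preference_rating: int       # cost on the 1-12 preference scale
    reason: str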

SALT: an elicitation tool for propose-and-revise systems
(Marcus and McDermott, 1989)

1. Elicit PROCEDURE, CONSTRAINT, and FIX knowledge pieces through a menu-driven dialog.
2. Build a dependency network that expresses the dependencies between the design parameters from the elicited knowledge pieces.
3. Ask the user to supply a procedure for each design parameter that has no associated procedure, a constraint for each design parameter, and a fix for each constraint violation.
4. Detect the cycles in the dependency network. For each cycle, ask the user to supply a procedure for determining an initial value of a parameter in the cycle.
5. Compile the declarative knowledge pieces into rules for OPS5.

Elicitation of the knowledge pieces

In order to elicit knowledge pieces, SALT displays a schema of prompts for information associated with each type of knowledge role (SALT's prompts on the left, the expert's answers on the right):

Calculation PROCEDURE for "car-jamb-return":
1 Name:           car-jamb-return
2 Precondition:   door-opening = center
3 Procedure:      calculation
4 Formula:        [platform-width - opening-width] / 2
5 Justification:  center-opening doors look best when centered on the platform.

Elicitation of the knowledge pieces (cont.)

Look-up PROCEDURE for "machine-model":
1 Name:                     machine-model
2 Precondition:             none
3 Procedure:                database-lookup
4 Table name:               machine
5 Column with needed value: model
6 Parameter test:           max-load >= suspended-load
7 Parameter test:           done
8 Ordering column:          height
9 Optimal:                  smallest
10 Justification:           this procedure is taken from standards manual iiia, p. 139.

Elicitation of the knowledge pieces (cont.)

For any design parameter defined, the user should also define a constraint on the possible values of the parameter.

CONSTRAINT for "car-jamb-return":
1 Constrained value: car-jamb-return
2 Constraint type:   maximum
3 Constraint name:   maximum-car-jamb-return
4 Precondition:      door-opening = side
5 Procedure:         calculation
6 Formula:           panel-width * stringer-quantity
7 Justification:     this procedure is taken from installation manual i, p. 12b.

Elicitation of the knowledge pieces (cont.)

For any CONSTRAINT that can be violated, the user has to define a FIX procedure that suggests a potential fix for the violation.

FIX for the violation of "maximum-car-jamb-return":
1 Violated constraint:    maximum-car-jamb-return
2 Value to change:        stringer-quantity
3 Change type:            increase
4 Step type:              by-step
5 Step size:              1
6 Preference rating:      4
7 Reason for preference:  changes minor equipment sizing

Criteria for selecting FIX knowledge pieces


1 Causes no problem
2 Increases maintenance requirements
3 Makes installation difficult
4 Changes minor equipment sizing
5 Violates minor equipment constraint
6 Changes minor contract specifications
7 Requires special part design
8 Changes major equipment sizing
9 Changes the building dimensions
10 Changes major contract specifications
11 Increases maintenance costs
12 Compromises system performance

Building of the dependency network

[Figure: a fragment of the dependency network. door-opening, platform-width and opening-width contribute to car-jamb-return; door-opening, panel-width and stringer-quantity contribute to maximum-car-jamb-return, which constrains car-jamb-return and, when violated, suggests a revision of stringer-quantity. The link types are contributes-to, constrains, and suggests-revision-of.]

Detection of cycles

hoist-cable-quantity = suspended-load / hoist-cable-strength
hoist-cable-weight = hoist-cable-unit-weight * hoist-cable-quantity * hoist-cable-length
cable-weight = hoist-cable-weight + comp-cable-weight
suspended-load = cable-weight + car-weight

[Figure: the corresponding dependency network over hoist-cable-strength, hoist-cable-quantity, hoist-cable-unit-weight, hoist-cable-weight, hoist-cable-length, comp-cable-weight, cable-weight, car-weight, suspended-load, and machine-model. It contains the cycle hoist-cable-quantity -> hoist-cable-weight -> cable-weight -> suspended-load -> hoist-cable-quantity. A sketch of cycle detection follows.]
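A minimal Python sketch of detecting such a cycle with a depth-first search over the "depends on" graph encoded from the four formulas above:

deps = {
    "hoist-cable-quantity": ["suspended-load", "hoist-cable-strength"],
    "hoist-cable-weight": ["hoist-cable-unit-weight", "hoist-cable-quantity",
                           "hoist-cable-length"],
    "cable-weight": ["hoist-cable-weight", "comp-cable-weight"],
    "suspended-load": ["cable-weight", "car-weight"],
}

def find_cycle(graph):
    """Return one dependency cycle as a list of nodes, or None."""
    def dfs(node, path):
        if node in path:
            return path[path.index(node):] + [node]
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [node])
            if cycle:
                return cycle
        return None
    for start in graph:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print(find_cycle(deps))
# ['hoist-cable-quantity', 'suspended-load', 'cable-weight',
#  'hoist-cable-weight', 'hoist-cable-quantity']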

Cycle elimination

Ask for a PROCEDURE which provides a first estimate for one of the parameters in the loop:
1 Name:                     hoist-cable-quantity
2 Precondition:             none
3 Procedure:                database-lookup
4 Table name:               hoist-cable
5 Column with needed value: quantity
6 Parameter test:           max-load > car-weight
7 Parameter test:           done
8 Ordering column:          quantity
9 Optimal:                  smallest
10 Justification:           this estimate is the smallest hoist cable quantity that can be used on any job.

Change the role of the original procedure for hoist-cable-quantity from PROCEDURE to CONSTRAINT: the formula becomes the constraint minimum-hoist-cable-quantity on hoist-cable-quantity, where
minimum-hoist-cable-quantity = suspended-load / hoist-cable-strength

Ask for a FIX knowledge piece corresponding to the violation of this constraint:
1 Violated constraint:    minimum-hoist-cable-quantity
2 Value to change:        hoist-cable-quantity
3 Change type:            increase
4 Step type:              same
5 Preference rating:      4
6 Reason for preference:  changes minor equipment sizing

Updated dependency network

[Figure: the updated dependency network. hoist-cable-quantity now receives an initial estimate, the formula suspended-load / hoist-cable-strength has become the constraint minimum-hoist-cable-quantity on it, and the cycle has been eliminated; machine-model depends on suspended-load as before.]

Compiling the knowledge base

SALT proceduralizes the domain-specific knowledge base into rules written in OPS5 (see the sketch below). For example, the PROCEDURE for "car-jamb-return":
1 Name:           car-jamb-return
2 Precondition:   door-opening = center
3 Procedure:      calculation
4 Formula:        [platform-width - opening-width] / 2
5 Justification:  center-opening doors look best when centered on the platform.

is compiled into:

IF
  Values are available for door-opening, platform-width and opening-width, and
  The value of door-opening is center, and
  There is no value for car-jamb-return,
THEN
  Calculate the result of the formula [platform-width - opening-width] / 2.
  Assign the result of this calculation as the value of car-jamb-return.
  Leave a trace that door-opening, platform-width and opening-width contributed to car-jamb-return.
  Leave a declarative representation of the details of the precondition and calculation and its justification.
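What "compiling" amounts to can be sketched in Python: the declarative schema becomes an executable rule that fires only when its inputs are available, its precondition holds, and the parameter is still unset, as in the rule above. The Python encoding is illustrative; SALT generates OPS5, not Python.

def compile_procedure(target, inputs, precondition, formula):
    """Return a rule: design -> True if it extended the design."""
    def rule(design):
        if all(p in design for p in inputs) \
                and precondition(design) and target not in design:
            design[target] = formula(design)
            design.setdefault("trace", []).append((inputs, target))
            return True
        return False
    return rule

car_jamb_return = compile_procedure(
    target="car-jamb-return",
    inputs=["door-opening", "platform-width", "opening-width"],
    precondition=lambda d: d["door-opening"] == "center",
    formula=lambda d: (d["platform-width"] - d["opening-width"]) / 2,
)

design = {"door-opening": "center", "platform-width": 70, "opening-width": 42}
car_jamb_return(design)
print(design["car-jamb-return"])   # 14.0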

Compiling the knowledge base (cont.)

CONSTRAINT for "car-jamb-return":
1 Constrained value: car-jamb-return
2 Constraint type:   maximum
3 Constraint name:   maximum-car-jamb-return
4 Precondition:      door-opening = side
5 Procedure:         calculation
6 Formula:           panel-width * stringer-quantity
7 Justification:     this procedure is from installation manual i, p. 12b.

is compiled into:

IF
  Values are available for door-opening, panel-width, and stringer-quantity, and
  The value of door-opening is side, and
  There is no value for maximum-car-jamb-return,
THEN
  Calculate the result of the formula [panel-width * stringer-quantity].
  Assign the result of this calculation as the value of maximum-car-jamb-return.
  Identify this value as a constraint of type maximum on car-jamb-return.
  Leave a trace that door-opening, panel-width and stringer-quantity contributed to maximum-car-jamb-return.
  Leave a declarative representation of the details of the precondition and calculation and its justification.

Compiling the knowledge base (cont.)

FIX for the violation of "maximum-car-jamb-return":
1 Violated constraint:    maximum-car-jamb-return
2 Value to change:        stringer-quantity
3 Change type:            increase
4 Step type:              by-step
5 Step size:              1
6 Preference rating:      4
7 Reason for preference:  changes minor equipment sizing

is compiled (together with the other fixes for this violation) into:

IF
  There has been a violation of maximum-car-jamb-return,
THEN
  Try an increase of stringer-quantity by-steps of 1. This costs 4 because it changes minor equipment sizing.
  Try a substitution of side for door-opening. This costs 8 because it changes major equipment sizing.
  Try a decrease of platform-width by-steps of 2in. This costs 10 because it changes major contract specifications.

Compiling the knowledge base (cont.)

Three rule types are used to explore the success of a proposed fix or fix combination in a lookahead context, before extending the proposed design on the basis of the proposed revision.

IF
  maximum-car-jamb-return has been violated, and
  The problem-solver has decided on which changes to try,
THEN
  FIND car-jamb-return and FIND maximum-car-jamb-return.

IF
  The active command is to FIND car-jamb-return, and
  Any of door-opening, platform-width or opening-width has been revised, and
  The most recently derived value (mrdv) of door-opening is center, and
  There is no revised value for car-jamb-return,
THEN
  Calculate the formula [mrdv of platform-width - mrdv of opening-width] / 2.
  Assign the result of this calculation as the value of car-jamb-return.
  Mark this value as revised.

IF
  The active command is to FIND maximum-car-jamb-return, and
  Any of door-opening, panel-width or stringer-quantity has been revised, and
  The mrdv of door-opening is side, and
  There is no revised value for maximum-car-jamb-return,
THEN
  Calculate the result of the formula [mrdv of panel-width * mrdv of stringer-quantity].
  Assign the result of this calculation as the value of maximum on car-jamb-return.
  Mark this value as revised.

Features of the role-limiting approach


Strengths
the type and form of the necessary knowledge
are well-defined
building the KB reduces to a menu-driven
knowledge elicitation

Weaknesses
based on a specific problem solving method that
has a limited domain of applicability
defining the knowledge pieces is not easy


Why it is hard to build agents

The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem solving knowledge. This takes time and effort.
Experts express their knowledge informally, using natural language, visual representations and common sense, often omitting essential details that are considered obvious. This form of knowledge is very different from the one in which knowledge has to be represented in the knowledge base (which is formal, precise, and complete).
This modeling and representation of knowledge is long, painful and inefficient.

Limiting factors in developing intelligent agents

Limited ability to reuse previously developed knowledge
The knowledge acquisition bottleneck
The knowledge maintenance bottleneck
The scalability of the agent building process
Finding the right balance between using general tools and developing domain specific modules
Portability of the tools and of the developed agents

Advanced approaches to KB and agent development

Problem:
Limited ability to reuse previously developed knowledge

Solution:
Ontology reuse (import, merge, export, OKBC protocol, CYC)

Example:
Ontologies of military units and equipment developed for a particular military planning agent could be reused by a course of action critiquing agent or another military agent.

Advanced approaches to KB and agent development

Problem:
The knowledge acquisition bottleneck

Solution:
Automation of knowledge acquisition through machine learning

Example:
A subject matter expert teaching an agent through examples and explanations, similarly to how the expert would teach an apprentice.

Advanced approaches to KB and agent development

Problem:
The knowledge maintenance bottleneck

Solution:
Use of machine learning methods by the agent, to continuously update its knowledge base in response to changes in the application domain or in the requirements of the application.

Example:
A subject matter expert providing feedback to the agent and guiding it to update its knowledge base.

Remark:
Currently, software maintenance is four times more expensive than software development. With learning agents that are directly taught by humans, there is no longer a distinction between building the agent and maintaining it.

Advanced approaches to KB and agent development

Problem:
Finding the right balance between using general tools and developing domain specific modules

Solution:
A customizable learning agent shell. It is applicable to a wide variety of application domains and requires limited customization.

Example:
The Disciple learning agent shell

Learning agent shell

The learning agent shell is a tool for building agents. It contains a general problem solving engine, a learning engine, and an empty knowledge base.
Building an agent for a specific application consists in customizing the shell for that application and in developing the knowledge base. The learning engine facilitates the building of the knowledge base by subject matter experts and knowledge engineers.

[Figure: the shell's interface connects to the problem solving and learning engines, which operate on the knowledge base (ontology + rules).]

Disciple learning agent shell

The Disciple learning agent shell can use imported ontological knowledge and can be taught directly by subject matter experts to become a knowledge-based assistant.

[Figure: through the interface, mixed-initiative reasoning takes place between the expert, who has the knowledge to be formalized, and the agent, which knows how to formalize it. The expert teaches the agent to perform various tasks in a way that resembles how the expert would teach a person. The agent learns from the expert, building, verifying and improving its knowledge base (ontology + rules).]

Main phases of the agent development process

Modeling of the problem solving process
Customization of the learning agent shell
Ontology import
Ontology refinement
Agent teaching
Mixed-initiative problem solving and learning

There are feedback loops among all phases.

Recommended reading

G. Tecuci, Lecture Notes on Systematic Elicitation of Expert Knowledge (required reading).
B.G. Buchanan, D. Barstow, R. Bechtel, J. Bennett, W. Clancey, C. Kulikowski, T. Mitchell, and D.A. Waterman, Constructing an Expert System, in F. Hayes-Roth, D. Waterman and D. Lenat (eds), Building Expert Systems, Addison-Wesley, 1983, pp. 127-168.
John G. Gammack, Different Techniques and Different Aspects on Declarative Knowledge, in Alison L. Kidd (ed), Knowledge Acquisition for Expert Systems: A Practical Handbook, Plenum Press, 1987.
M.L.G. Shaw and B.R. Gaines, An Interactive Knowledge Elicitation Technique Using Personal Construct Technology, in Alison L. Kidd (ed), Knowledge Acquisition for Expert Systems: A Practical Handbook, Plenum Press, 1987.
S. Marcus and J. McDermott, SALT: A Knowledge Acquisition Language for Propose-and-Revise Systems, Artificial Intelligence, 39 (1989), pp. 1-37. Also in B. Buchanan and D. Wilkins (eds), Readings in Knowledge Acquisition and Learning: Automating the Construction and the Improvement of Programs, Morgan Kaufmann, 1993.
