PICS
Publications of the Institute of Cognitive Science
Volume 2-2013
ISSN: 1610-5389
Computational Creativity,
Concept Invention,
and General Intelligence
Proceedings
Volume Editors
Tarek R. Besold
Institute of Cognitive Science
University of Osnabrueck
Kai-Uwe Kühnberger
Institute of Cognitive Science
University of Osnabrueck
Marco Schorlemmer
IIIA-CSIC, Barcelona
Alan Smaill
School of Informatics
University of Edinburgh
This volume contains the proceedings of the workshop “Computational Creativity, Concept
Invention, and General Intelligence (C3GI)” at IJCAI-2013.
Preface
This volume contains the proceedings of the 2nd International Workshop for Computational
Creativity, Concept Invention, and General Intelligence held in conjunction with IJCAI 2013
in Beijing, China.
Continuing the mission started by the 1st edition of the workshop series in 2012 at
Montpellier, the aim of this workshop is to bring together researchers from different fields
of AI working on computational models for creativity, concept formation, concept
discovery, idea generation, and their overall relation and role to general intelligence as well
as researchers focusing on application areas, like computer-aided innovation.
Together with a growing community of researchers in both academia and industry, we are
convinced that it is time to revitalize the old AI dream of “Thinking Machines”, a dream
that has been almost completely abandoned for decades. Fortunately, more and more
researchers presently recognize the necessity and feasibility of returning to the original
goal of creating systems with human-like intelligence. By continuing the discussions
started one year ago at the 2012 edition of the workshop, we are now working towards
establishing the workshop series as an annual event gathering leading researchers in
computational creativity, general intelligence, and concept invention whose work is in
direct connection to our overall goal, providing and generating additional valuable support
in further turning the current spirit of optimism into a lasting research endeavor.
Committee Co-Chairs
Committee Members
Full presentations:
How Models of Creativity and Analogy Need to Answer the Tailorability Concern
J. Licato, S. Bringsjord & N. Sundar G.
Compact presentations:
Simon Ellis • Naveen Sundar G. • Joseph Valerio • Selmer Bringsjord • Jonas Braasch
elliss5@rpi.edu • govinn@rpi.edu • valerj3@rpi.edu • selmer@rpi.edu • braasj@rpi.edu
Department of Cognitive Science • Department of Computer Science
The Rensselaer AI & Reasoning (RAIR) Lab • Rensselaer Polytechnic Institute (RPI) • Troy NY 12180 USA
φ ::=  p : Boolean | ¬φ | φ ∧ ψ | φ ∨ ψ | φ → ψ | φ ↔ ψ | ∀x : S. φ | ∃x : S. φ
       | P(a, t, φ) | K(a, t, φ) | C(t, φ) | S(a, b, t, φ) | S(a, t, φ)
       | B(a, t, φ) | D(a, t, holds(f, t′)) | I(a, t, happens(action(a∗, α), t′))
       | O(a, t, φ, happens(action(a∗, α), t′))

Figure 2: Rules of Inference of DC EC∗

[R1]   C(t, P(a, t, φ) → K(a, t, φ))
[R2]   C(t, K(a, t, φ) → B(a, t, φ))
[R3]   From C(t, φ) and t ≤ t1, …, t ≤ tn, infer K(a1, t1, … K(an, tn, φ) …)
[R4]   From K(a, t, φ), infer φ
[R5]   From t1 ≤ t3 and t2 ≤ t3, infer C(t, K(a, t1, φ1 → φ2) → (K(a, t2, φ1) → K(a, t3, φ2)))
[R6]   From t1 ≤ t3 and t2 ≤ t3, infer C(t, B(a, t1, φ1 → φ2) → (B(a, t2, φ1) → B(a, t3, φ2)))
[R7]   From t1 ≤ t3 and t2 ≤ t3, infer C(t, C(t1, φ1 → φ2) → (C(t2, φ1) → C(t3, φ2)))
[R8]   C(t, ∀x. φ → φ[x ↦ t])
[R9]   C(t, φ1 ↔ φ2 → ¬φ2 → ¬φ1)
[R10]  C(t, [φ1 ∧ … ∧ φn → φ] → [φ1 → … → φn → φ])
[R11a] From B(a, t, φ) and B(a, t, φ → ψ), infer B(a, t, ψ)
[R11b] From B(a, t, φ) and B(a, t, ψ), infer B(a, t, ψ ∧ φ)
[R12]  From S(s, h, t, φ), infer B(h, t, B(s, t, φ))
[R13]  From I(a, t, happens(action(a∗, α), t′)), infer P(a, t, happens(action(a∗, α), t))
[R14]  From B(a, t, φ), B(a, t, O(a∗, t, φ, happens(action(a∗, α), t′))), and O(a, t, φ, happens(action(a∗, α), t′)), infer K(a, t, I(a∗, t, happens(action(a∗, α), t′)))
[R15]  From φ ↔ ψ, infer O(a, t, φ, γ) ↔ O(a, t, ψ, γ)

Figure 3: Signature of the Music Calculus

S ::=  Recommendation ⊆ Action | Person ⊆ Agent | Level | Label
f ::=  MaxLevel : Person × Level → Boolean
       | Level : Person × Level → Boolean
       | WithinOne : Level × Level → Boolean
       | LessThan : Level × Level → Boolean
       | a, b, c, d, e, f, g : Level
       | Tutti : Label → Boolean
       | low, high : Label

…orous,’ or ‘logical,’ or ‘formal.’ Even merely trying to characterize these two concepts brings an immediate contrast: logic seems well-defined, well-understood; intuition elusive and illusory. But for us, intuition is simply a rapid form of non-intellectual processing containing no explicit reasoning (whether that reasoning is deductive, inductive, analogical, or abductive), essentially what Pollock [1995] has called “quick and dirty” cognition. The same basic idea, as we understand it, can be traced back to Herb Simon [1972], whose notion of “satisficing” corresponds to what we have in mind for intuition: rapid, heuristic-driven decision-making and action employed when logic-based cognition is too slow to get the job done. (See also [Bringsjord and Ferrucci, 2000].)

As to the nature of intuition, Raidl and Lubart [2000] enumerate three essential types:

1. “Socioaffective intuition,” concerning interpersonal relations and operating typically when a person seeks to understand a situation or another person.
2. “Applied intuition,” directed towards the solution of a problem or accomplishment of a task.
3. “Free intuition,” which arises in reaction neither to socioaffective stimulus nor to a specific search process, but rather involves “a feeling of foreboding concerning the future.”

It is doubtless the third type of intuition which has caused the diminution of intuition’s perceived value: how can something so intangible as a ‘premonition’ have any scientific value? Yet all three forms of intuition, including the second type, which is most immediately relevant to our work, have their bases in a structured, if unconscious, analysis and interpretation of perceived information. In our case, the perceived information is structured.

Because of its time-based nature, music is a good example where timely decisions have to be made, which immediately implies, on the Simon-Pollock view here affirmed, that intuition is needed. If time does not allow the deduction of a logical answer in the realm of music, humans must exploit mechanisms to make a quick “best guess” using a sub-logical mechanism. Of course, we seek a system that does more than make a random guess, by using an informed approach that at least approximates the correct answer.

Our agent, CAIRA, includes a logic- and music-calculus-based reasoning system, Handle, so it can “understand” basic concepts of music and use a hypothesis-driven approach to perform with other musicians. The agent uses logic to determine the current ensemble state from observing acoustical features in real time. CAIRA assesses the current ensemble state of an improvisatory music group, thereby addressing questions of who is playing a solo or whether it is likely that the improvisation will come to an end soon (see Table 1). For this purpose, the agent analyzes the relationships between the tension arcs of each musician, including the tension arc measured from CAIRA’s own acoustic performance. The tension arc describes the current perceived tension of an improvisation, instrumentally estimated using acoustical features such as loudness and roughness [Braasch et al., 2011].

However, it often takes CAIRA too long to determine the current ensemble state using logic, so we conceived a process to simulate intuition, one that builds on a database of all currently known ensemble-state configurations. If an ensemble state is unknown for an occurring tension arc configuration (a vector with individual tension arcs for each musician), the process guesses the correct answer by taking into account the known states and making an informed guess using the eight nearest neighbors and distance weighting.

3 CAIRA Overview

Put briefly, CAIRA, the Creative Artificially-Intuitive and Reasoning Agent, the full architecture of which was presented at C3GI 2012, is a system able to provide guidance to ensembles, compose music, and perform in an improvisational fashion. A major goal of the CAIRA project is to develop an understanding of the relationship between logic and intuition within the scope of musical performances.
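The nearest-neighbor “intuition” step described above can be sketched as follows. This is our illustrative reconstruction, not the authors’ code: the state labels, the Euclidean metric, and the database contents are invented for the example; only the use of the eight nearest neighbors with distance weighting comes from the text.

```python
import math
from collections import defaultdict

def guess_ensemble_state(query, known, k=8):
    """Guess the ensemble state for an unseen tension-arc vector.

    query -- tension-arc vector, one value per musician
    known -- list of (tension_vector, state_label) pairs
    Returns the label with the largest distance-weighted vote
    among the k nearest known configurations.
    """
    def dist(u, v):
        # Euclidean distance between tension-arc vectors.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    neighbors = sorted(known, key=lambda kv: dist(query, kv[0]))[:k]
    votes = defaultdict(float)
    for vec, label in neighbors:
        votes[label] += 1.0 / (dist(query, vec) + 1e-9)  # closer neighbors weigh more
    return max(votes, key=votes.get)

# Hypothetical database: tension triples (one per musician) -> state label.
db = [((1, 1, 1), "tutti-low"), ((2, 1, 1), "tutti-low"),
      ((7, 2, 2), "solo-1"),    ((6, 1, 2), "solo-1"),
      ((2, 7, 1), "solo-2"),    ((1, 6, 2), "solo-2"),
      ((7, 7, 7), "tutti-high"), ((6, 7, 6), "tutti-high")]

print(guess_ensemble_state((7, 1, 2), db))  # near the "solo-1" examples
```

The inverse-distance weighting makes an exact or near-exact match dominate the vote, while a configuration far from everything known still yields the most plausible label rather than a random one.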
Instead of using a simple audio-to-MIDI converter, the agent uses standard techniques of Computational Auditory Scene Analysis (CASA), including pitch perception, tracking of rhythmical structures, and timbre and texture recognition. This approach allows CAIRA to extract further parameters related to sonic textures and gestures in addition to traditional music parameters such as duration, pitch, and volume. This multi-level architecture enables CAIRA to process sound using bottom-up processes simulating intuitive listening and music performance skills, as well as top-down processes in the form of logic-based reasoning. The low-level stages are characterized by a Hidden Markov Model (HMM) to recognize musical gestures and an evolutionary algorithm to create new material from memorized sound events. The evolutionary algorithm presents audio material processed from the input sound, which the agent trains itself on during a given session, or from audio material that has been learned by the agent in a prior live session. The material is analyzed using the HMM machine listening tools and CASA modules, restructured through the evolutionary algorithms, and then presented in the context of what is being played live by the other musicians.

CAIRA consists of two standalone modules, FILTER and Handle.

3.1 FILTER

The artificially-intuitive listening and music performance processes of CAIRA are simulated using the Freely Improvising, Learning and Transforming Evolutionary Recombination system (FILTER) [van Nort et al., 2009; van Nort et al., 2010; van Nort et al., 2012], which uses a Hidden Markov Model (HMM) for sonic gesture recognition and Genetic Algorithms (GA) for the creation of sonic material. In the first step, the system extracts spectral and temporal sound features on a continuous basis and tracks onsets and offsets from a filtered version of the signal. The analyzed cues are processed through a set of parallel HMM-based gesture recognizers. The recognizer determines a vector of probabilities in relation to a dictionary of reference gestures. The vector analysis is used to determine parameters related to maximum likelihood and confidence, and the data is then used to set the crossover, fitness, mutation, and evolution rate of the genetic algorithm, which acts on the parameter output space [van Nort et al., 2009].

Figure 4: Handle Architecture

…system as formerly. The MATLAB client interfaces with Filter and MaxMSP via a simple set of OpenAudio commands sent periodically via the network, which provide information on the current tension level of the three players as integers in the range 1-7. These values are used to look up the current ‘perceived state’ of the performance for Filter, and the appropriate information is returned. If, however, the state has not been encountered, a best guess is made, currently using a weighted nearest-neighbor heuristic, and the state is marked for formal calculation using an implementation of the music calculus in the Lisp-based SNARK automated theorem prover. Due to the length of time it takes to perform the operations, all logic calculations are done off-line between performances. An overview of the instantiation of the Handle architecture corresponding to the work reported on herein is given in Figure 5.

Figure 5: Instantiation of Handle Architecture
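The look-up-then-guess protocol just described can be caricatured in a few lines. The sketch below is ours, not the paper’s MATLAB client: the table contents and names are invented; only the workflow follows the text — exact-match lookup over tension levels in the range 1-7, a weighted-nearest-neighbor fallback, and marking unknown states for off-line formal calculation (in CAIRA, via the music calculus in SNARK).

```python
def perceived_state(levels, table, pending):
    """Return the perceived state for a tuple of tension levels (1-7).

    Known configurations are answered from `table`; unknown ones get a
    distance-weighted nearest-neighbor best guess and are queued in
    `pending` for formal resolution between performances.
    """
    if levels in table:                      # exact match: answer directly
        return table[levels]

    # Fallback: distance-weighted vote over all known configurations.
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    votes = {}
    for vec, state in table.items():
        votes[state] = votes.get(state, 0.0) + 1.0 / (dist(levels, vec) + 1e-9)
    guess = max(votes, key=votes.get)
    pending.append(levels)                   # resolve formally off-line later
    return guess

table = {(1, 1, 1): "tutti", (7, 2, 2): "solo-A", (2, 7, 2): "solo-B"}
pending = []
print(perceived_state((7, 2, 2), table, pending))  # known -> "solo-A"
print(perceived_state((6, 2, 2), table, pending))  # unknown -> best guess, queued
```

The point of the split is latency: the table answers in constant time during performance, while the expensive theorem-proving step only runs between performances on the queued configurations.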
…to believe, via some process, that the composer believed that the performance’s having this attribute would result in a pleasing experience for the listener, then the conductor may start believing likewise. Naturally, this “epistemic chain” would result in performances that have proportional tempo.

The reasoning pattern above can be computationally realized using the following proof in DC EC∗. We first present…
Abstract

In this paper, we describe how newly created metaphor concepts will be invented and represented by a humanoid robot. The humanoid robot will invent these new concepts during interaction with a child in a pretense play scenario. We demonstrate the feasibility of this concept invention and representation approach by evaluating a knowledge-based metaphor discovery program using an open source reasoning engine.

1 Introduction

The ability of a human to use her creativity to invent new concepts or artifacts is a hallmark of human intelligence. In order for a humanoid robot to attain human-like intelligence, it must first be able to represent and incorporate creative new concepts into its knowledge base and be able to make new inferences using this new knowledge. We propose using PyCLIPS, a Python-language interface to the CLIPS knowledge-based programming language [Giarratano and Riley, 1998], as a tool to represent and reason about these newly discovered creative concepts by a robot.

In [Williams, 2013] we described a newly proposed approach towards investigating how social humanoid robots can learn creative conceptualizations through interaction with children in metaphor-guided pretense play using a componential creativity framework. We describe this metaphor-guided pretense play approach and then illustrate it by describing a social robot interaction pretense play scenario between a child and a humanoid robot. The rest of this paper is organized as follows. First we review the creativity framework that we combine with our metaphor-guided pretense play creativity approach [Williams, 2013]. We then describe how our humanoid robot will learn new creative concepts using metaphors learned in a human-robot interaction scenario. Then we describe how CLIPS will be used to represent and reason about new creatively learned concepts.

1.1 Creativity Framework for Concept Invention

Concept invention is the process of discovering new concepts that may require new interpretations and mappings in a given situation [Reffat, 2002]. Creativity is closely related to concept invention and is the quality that is marked by the ability or power to produce through imaginative skill. Creativity has been defined as the ability to generate novel and valuable ideas (e.g. concepts, theories, interpretations, or stories) [Boden, 2009]. We use a componential framework for creativity that is based on three factors: domain-relevant skills, creativity-relevant skills, and task motivation [Amabile, 1983]. Amabile [1983] presents a schematic for the creative process based on these skills and motivation that includes:
1. Problem or task representation
2. Preparation
3. Response generation
4. Response validation
5. Outcome
We incorporate this framework with a metaphor-guided pretense play approach for creative concept invention and semantic discovery.

1.2 Concept Invention through Metaphor Learning and Pretense Play

In this section we describe how metaphor is related to creativity, or concept invention. According to Indurkhya [1992], a “metaphor is an unconventional way of describing (or representing) an object, event, or situation (real or imagined) as another object, event or situation”. [Gentner et al., 2001] describe the critical steps for using metaphors for problem-solving. One must extract a variety of unfamiliar concepts from remote domains where possible relationships with the current task may not be initially apparent. Then there must be a mapping of high-level or deep relationships between the metaphor concept and the problem. Performing generalization and abstraction techniques may make this matching possible. Next, secondary relationships may be discarded, leaving only structural correspondence between the metaphor source and the problem. Finally, the structural matches, or correspondences, associated with the metaphor source are transferred and applied to the problem, which leads to a novel solution. Using metaphors is an example of a creativity-relevant skill of generating hypotheses that can lead to set-breaking and novel ideas. These metaphors can be used as heuristics to organize problem solving or design thinking to solve loosely defined design problems [Rowe, 1987; Antoniades, 1992].

2 Concept Invention in an Interactive Healthy Snack Food Design Game

We propose using the context of a healthy snack food design game to demonstrate how humanoid robots can invent new concepts through interaction with children. [Morris et al., 2012] have developed a system for generating novel recipes and use their system, PIERRE, to explore theoretical concepts in computational creativity. They show that a single inspiring set can be separated into two sets used for generation and evaluation to result in greater novelty. Other artificially intelligent food recipe generators include CHEF [Hammond, 1986], which used case-based reasoning to plan a recipe, and Julia [Hinrichs, 1992], which also used case-based reasoning but to plan out a meal.

Children can create meaning from their social interactions collectively through pretense play [Janes, 2002]. They can creatively take on imaginary roles, act out stories, and change toys into tools or instructions. In metaphor-guided pretense play [Williams, 2013], the child is given the pretense play role of head chef with a set of healthy food items such as grapes, bananas, celery, soy butter, and raisins (Figure 1). The humanoid robot “customer” presents a “seed” image to the child “chef”. However, the seed image is not a picture of food but rather a picture of another, unrelated subject, such as a muddy tree log lying on the ground with a line of ants walking over it.

…how the child manipulates the objects and composes them. It will then be up to the child to use her available food objects to create the concept conveyed by the picture. We anticipate, and will test in future experiments, that the child will create a new metaphor for the subject using the food items. The resulting metaphors may consist of a stick of celery (or “log”) with soy butter (or “dirt”) to glue the raisins on (or “bugs”). The child may name the new food item “bugs on a log”. The robot can respond with “Yummy! What do you call it?” and then begin the verbal exchange process to incorporate the novel concept along with the operators to produce the concept for future re-use.

3 Creative Concept Invention and Representation

We target the problem of metaphor learning in the context of a pretense play setting. In this setting, the robot presents a picture not involving food items to a child and asks the child to create a dish based on the picture. The underlying idea is that the child will see the objects in the picture as metaphors for the food items it has been given, and create a dish based on the metaphor. The robot then uses a description of the dish given by the child to recreate those metaphors itself.

A humanoid robot’s ontology consists of the concepts known by the robot, or agent, and the interrelationships between those concepts. The concepts can include attributes, actions, metaphors, and analogies. In a system in which more than one agent exists, or a multi-agent system (MAS), each agent Ai has a set of concepts !i that it knows about.

Eq. 1) A = {A1, A2, …, Ai}
Eq. 2) For each Ai there exists ! = K(Ai, !i)
Figure 1: (a) A sequence of letters, A, B, . . . , was written on a tablet screen by the interviewer until BB. All participants
continued it until BD. (b) The screenshot shows three possible branches of the sequence as proposed and written by one partic-
ipant. (c) The metaphor of a pyramid guides the vertical and diagonal application of the ABCD series. (d) This continuation
temporarily defers coping with the A-problem and later rectifies this by looping the initially linear progression.
aspect (the inception of position expansion) one may bring up a conception (e.g. the pyramid metaphor) and thereby modify an earlier one (e.g. the linear succession). The new conception then guides in constructing further detail (e.g. fill center digits). Similarly, figure 1-(d) indicates that the deferral of the A-problem leads to completing the linear sequence with AA, AB, AC, AD, followed in turn by BA, BB, BC, BD. This modifies the linear continuation conception to become circular, or a loop.

2.2 Stimulating Structural Construction: Study 2 (Mental Imagery Construction)

Imagery is widely considered to be essential to creative processes [Finke et al., 1989]. The evidence given in [Schneider et al., 2013b] shows that imagery does not operate with static images, but requires frequently reconstructing them. Apparently, one can mentally assemble and recognize a figure; a process which is, according to Hofstadter, based on “higher-level perception” and thought of as “the core of cognition” [Hofstadter, 2001]. The assembly suggested in figure 2 is given by verbal description in [Schneider et al., 2013b], where the task is to count all triangles in it. Participants easily found some of the triangles, but struggled to find all. They readily ‘remembered’ the whole figure, but apparently had to ‘reassemble’ varying parts of it in order to consider ‘relations’ between lines that would ‘form’ a triangle. In the presented task, assembling parts is guided in that a prototype – a triangle in this case – ‘consummates’ relevant material. In the case where a prototype dominates thought development we want to speak of construction as a special type of assembly. When such a construction fails (and nothing is being recognized), one attempts a new construction.

Figure 2: The verbal description (inception) easily affects the synthesis and apprehension of this figure, but analyzing it in order to find all triangles is much harder.

2.3 Aspects of Construction via Intellection

The first study investigated the inductive learning of a concept. In study 2, the participants’ reports indicate that mental images are situations that need thoughtful intellection. That is, intellections that have to be construed, and continuously reconstructed, to discover relations. In both studies, participants construed and operated with an intellection of the given stimuli representing different aspects of it. Thus, an ‘intellection’ here means a structurally hierarchized apprehension of a situation, which gives an immediate directive of what could be done. An intellection is carried out until the ‘step’ where it fails to ‘proceed forward’ (in the model, this corresponds to a concept being produced). One then has to reconstruct and test the new ideation, where reconstruction usually starts with a very similar approach as before. The process can be viewed as an attempt to ‘fulfill’ the specification given by the initial stimuli. The construction does not search and find, but stimulates and instantiates a certain specification (inception) and thereby creates something new. Mutual modification of structures of different coarseness is another essential property of intellection construction. Construction is guided by comprehensive structures, but a comprehensive structure can also be modified as a result of working on structure details.
3 Cognition-Based Concept Construction

From a cognitive science point of view, we claim that creative production of conceptions, in the human mind, is a (potentially unbounded) mechanism of cognition that utilizes simpler sub-mechanisms in the construction and organization of conceptual entities. So the simulation of this mechanism (in a computational model of creativity, for instance) would be a process that needs to reflect the functionalities of other interrelated (cognitive) sub-processes; each of which is
1. intermediate: an ultimate (creative) production has to go through a non-empty set of the sub-processes, and
2. interacting: one sub-process, that may be involved in a construction process, affects the functioning and outcome of another.
In what follows, the previously discussed aspects are transformed into a proposal of a model that can eventually accomplish (creative) production of concepts. Based on the way conceptions are constructed in humans (the experimental results and our claims in section 2), the model views (creative) concept construction as a life-cycle that utilizes cognitive processes in carrying out conceptual intellections.

3.1 General Model Assumptions

We do not want to restrict our presentation to one specific model or another, because the conceptual ideas here are intended to apply to a range of models that satisfy some assumptions. We will not be designing a computational model of creativity, but rather proposing our view about how cognitively-inspired, computational models may employ cognitive processes to achieve (creative) production of concepts. Building this kind of model requires reducing the computing of (creative) construction of concepts, in particular, to computationally-plausible cognitive mechanisms. For example, an agent (of a model that satisfies the assumptions) can update an already-existing concept, say CT, by making an analogy to a well-known concept, say CS. The update can happen because analogy-making may involve an analogical transfer that enriches the ‘structure’ of CT (humans would also update their understanding of CT in a similar analogy-making situation). Another example is the utilization of concept blending in creating new concepts and recognizing unprecedented concept combinations.¹ General assumptions about how we conceive the model are given in the following.

Let us first assume that one’s ultimate goal is to design a cognitively-inspired model of computational creativity and general intelligence. We also assume that the considered model provides the ability to represent knowledge in the form of conceptual entities. The organization of knowledge in some form of concept domains follows the language-of-thought hypothesis [Fodor, 1983], where concepts can be provided in modular groups to serve in simulating several processes. A model of this kind helps the acquisition, organization, and development of knowledge using concept entities. This aims at imitating aspects of knowledge emergence in humans that are based on their ability to acquire knowledge from the environment, and to use their experience in refining their understanding of repeatedly-encountered conceptions (cf. [Abdel-Fattah and Krumnack, 2013; Abdel-Fattah, 2012]). In any case, we assume that the storage and organization of knowledge in structured concepts is unavoidable for showing human-like thinking or creative behavior by this model type.

3.2 The Life-Cycle of Conceptual Intellections

A proposed life-cycle for concept construction in artificial systems is explained below and illustrated in figure 3. According to our proposal, a conceptual intellection computationally simulates a (creative) construction of conceptions; a (fulfillment) mechanism that is always incremental and potentially unbounded. The system should always build and modify its (created) results, be they concept entities, concepts, or arrangements thereof. The act of grasping conceptions caused by the stimulating entities (in a lively active human being’s mind) may never stop. So, a point where (creative) construction stops should generally be undesirable in a computationally running system. This does not have to conflict with notions concerning boundedness (e.g. bounded rationality [Gigerenzer and Selten, 2002; Simon, 1984], knowledge and resources [Wang, 2011], etc.) because these notions are concerned with how the cognitive (natural or artificial) agent thinks and behaves as its limited resources allow (e.g. the processing and storage capacities).

Based on the given studies, we propose that the life-cycle of a conceptual intellection (that is, of constructing and updating the concepts) consists of four facets: (1) inception, (2) arrangement, (3) production, and (4) adaptation. Each facet constitutes specific cognitive processes. The depiction in figure 3 metaphorically uses a staircase object² with four sides, each corresponding to a facet of the potentially unbounded construction process. Each facet’s stair-steps metaphorically correspond to interrelated cognitive processes (not all of them can be shown), which may be involved in this facet’s functioning.

The utilized metaphor in the given figure tries to pictorially deliver our own message that:

The more a system uses knowledge during the progressive inception (from brainstorming to identification of context-dependent pieces of knowledge), the more the system organizes and arranges the collected knowledge (during the progressive arrangement) before the ultimate (creative) productions are constructed, then finally adapted to the overall cognitive state (before a new life-cycle starts).

Using stair-steps implicitly reflects that the ‘proceeding’ through the staircase may go both ways at some times, causing some processes to be retracted and others re-executed.

¹ Although these examples and figure 3 exemplify some suggested mechanisms, such as analogy-making and concept blending, we do not intend here to give a list of the cognitive mechanisms. Related discussions can be found in [Abdel-Fattah and Krumnack, 2013; Abdel-Fattah et al., 2012; Abdel-Fattah, 2012].

² In fact, the staircase object, depicted in figure 3, ought to be seen as a spiral of repeating stair-steps going up indefinitely. This is also illustrated by the spiral and the up-arrow, drawn to the left of the staircase object in the same figure.
Figure 3: An illustration of the suggested life-cycle for (creative) construction of concepts. The visible number of stair-steps in
each of the four turns is irrelevant (some steps may even be skipped when ‘proceeding’ through the staircase). The ‘proceeding’
is, in turn, depicted by a spiral to the left, where the up-arrow indicates that the life-cycle generally moves forward (up the stairs).
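The four-facet life-cycle depicted in figure 3 can be caricatured as a processing loop. The sketch below is our own illustration of the proposal, not the authors’ model: we assume each facet is a function from a working set of concepts to a revised one, and the facet bodies are deliberately trivial placeholders (the “production” here is a naive blend of two entities).

```python
# Toy sketch of the inception -> arrangement -> production -> adaptation cycle.
def inception(concepts, stimuli):
    # Stimulating entities implant or update conceptual entities.
    return concepts | set(stimuli)

def arrangement(concepts):
    # Organize (here: simply order) the currently existing entities.
    return sorted(concepts)

def production(arranged):
    # Construct a (creative) product, e.g. a blend of two entities.
    return "+".join(arranged[:2]) if len(arranged) >= 2 else None

def adaptation(concepts, product):
    # A produced concept may later become entrenched in the knowledge set.
    return concepts | {product} if product else concepts

def life_cycle(concepts, stimuli):
    concepts = inception(concepts, stimuli)
    product = production(arrangement(concepts))
    return adaptation(concepts, product), product

concepts, product = life_cycle({"log"}, ["bug", "dirt"])
print(product)  # the two first arranged entities, blended
```

Calling `life_cycle` repeatedly on its own output mirrors the open-ended, potentially unbounded character of the proposal: each pass can entrench the previous pass’s product as ordinary knowledge.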
At the beginning of a conceptual intellection life-cycle, stimulating entities are instantiated. The stimulating entities may have already existed (and are then activated somehow), or are received as new input. They can have structural relations tying them together (or to other knowledge entities). The inception facet initiates the construction of concepts, which will lead to the creation of a new concept, or to the modification and the hierarchical arrangement of already-existing concepts. We elaborate more on the cycle’s facets in the following:

1. The ‘inception’ facet involves activating the set of processes that leads to implanting and updating conceptual entities (which form the agent’s knowledge concepts; cf. section 3.1 and [Abdel-Fattah and Krumnack, 2013; Abdel-Fattah, 2012]). The inception can eventually result in the model having its set of concepts updated. This is because either (i) the existing conceptual entities are modified, or (ii) new entities are acquired. In the former case, the links tying some of the conceptual entities are affected and, consequently, the concepts involving them are also affected. The concepts can also be affected in the latter case, and new concepts may also be created.

2. The ‘arrangement’ facet follows the ‘inception’ facet, and is responsible for organizing (currently existing) conceptual entities. (Refer to the discussions in section 2.2 about hierarchical assembly and the attempts to re-construct when the assembly fails.)

3. The ‘production’ facet starts with the ‘construction’ process, which constructs a concept (after assembling and arranging its components) and ends with such a concept having its components enriched (the fulfillment of the initial model specifications; cf. section 2.3). As a process, ‘construction’ is not the crowning result of the ‘production’ facet, but rather its starting step (in a given cycle). Newly constructed concepts may be re-constructed and enriched in several ways (e.g. through analogical transfer). It is worth noting that, in the current proposal, ‘what counts as a creative construction and what counts as not’ is not specified and is left to another discussion. This is mainly because the knowledge concepts can always be updated, though not considered new or ‘really creative’ concepts.

4. The fourth facet, ‘adaptation’, parallels the deeper, insightful understanding of conceptions (by humans), and reflects the changes in an input or a current context. There is always the possibility that what might have once been considered just a recently produced concept by the model (e.g. as a result of conceptual blending) becomes a well-entrenched concept later (perhaps after repeated³ usage).

³ The idea of entrenchment has been elaborated on in [Abdel-Fattah and Krumnack, 2013; Abdel-Fattah, 2012].

Digression – The Never-Ending Staircase

The visualization in figure 3 makes use of an attention-exciting construction instead of, for instance, a dull circle in two dimensions (which might as well have achieved the same representational goal). It may be interesting for some readers to see classical ideas represented in unusual ways, but there is more to this issue than only drawing an unusual, seemingly-complex shape.

The shape of an impossible triangle, famously known as the Penrose triangle, was originally created for fun by Roger Penrose and Oscar Reutersvärd. Later, Roger Penrose and his father, Lionel Penrose, created the impossible staircase, and presented an article about the two impossible shapes in the British Journal of Psychology in 1958 (see [Penrose and Penrose, 1958]). M. C. Escher artistically implemented the two shapes in creating two of his most popular (and very creative) drawings, namely “Waterfall” (lithograph, 1961 [Hofstadter,
1999, fig. 5] and [Seckel, 2004, p. 92]) and “Ascending and Descending” (lithograph, 1960 [Hofstadter, 1999, fig. 6] and [Seckel, 2004, p. 91]). A version of the latter drawing has been theatrically directed by Christopher Nolan in his masterpiece, “Inception” [Nolan and Nolan, 2010], which uses the structure to represent impossible constructions. In fact, the credit for using the term “inception”, as the initial process within the conceptual intellection life-cycle, is also due to Christopher Nolan, whose work in [Nolan and Nolan, 2010] inspired us to use the impossible staircase shape in figure 3 to reflect a similar meaning.

This article is concerned with creative constructions and the inception process. According to our proposal, the former may be performed indefinitely in successive steps, whereas the latter is the most essential facet of the given life-cycle. Hence this particular visualization is used.

3.3 Related Literature

In addition to the earlier discussions, the ideas of the proposed model are further supported by work in the literature. The suggested (four-facet) cycle agrees with the “Geneplore model” [Finke et al., 1992], which proposes that creative ideas can only be produced in an “exploratory phase”, after an individual has already constructed mental representations, called “pre-inventive structures”, in the first “generative phase”. Such pre-inventing, in our view, initiates the inception facet and finishes before creative construction starts. Usage of the term “inception” in our proposal is thus akin to “pre-inventing”. Additionally, evidence is given in [Smith et al., 1995, pp. 157–178] that already-existing concepts greatly affect the (re-)structuring of newly developed ideas. Moreover, Weisberg argues that (human) creativity only involves “ordinary cognitive processes” that yield “extraordinary results” [Weisberg, 1993], which consolidates our motivation to focus on cognitively-inspired models.

On the one hand, there are similarities between our proposal and the mental models theory, as proposed by Johnson-Laird and others (cf. [Johnson-Laird, 2006; 1995; 1983; Gentner and Stevens, 1983]). The reader may, for example, recognize that our view of what we call “intellections” seems analogous to Johnson-Laird’s view of what he calls “models”: “Models are the natural way in which the human mind constructs reality, conceives alternatives to it, and searches out the consequences of assumptions” [Johnson-Laird, 1995, p. 999]. In addition, the general results of the many psychological experiments given in [Johnson-Laird and Byrne, 1991], for example, may be considered yet further support for our proposal about how humans construct their understandings. On the other hand, our approach differs from the theory of Johnson-Laird in at least the following essential points. Whilst mental model experiments are mainly concerned with testing one particular ability of participants (namely, their ability to reason ‘deductively’), the first study in section 2, for example, investigated the ‘inductive’ learning of a concept. In our view, asking the participants to construct arguments, to show that one conclusion or another follows from what is already given, is different from giving them inceptions and then allowing them to freely continue their conception constructions. Creativity is revealed in the latter case, and not necessarily in the former. In fact, it is not even desirable that the participants reflect a creative ability (according to the aims of the tests in the former case). Another difference is that our proposed model has the potential to utilize computationally-plausible cognitive mechanisms in achieving artificial systems that realize the model. We are neither concerned with only testing deductive reasoning, nor do we need results that only obey rules of deductive reasoning. Rather, our main concern is in testing (creative) concept construction, which seems to involve cognitive capacities other than deduction. For instance, concept blending possibly allows for unfamiliar resulting concepts by blending two (or more) familiar concepts, thus opening the door to non-deductively achieving thinking outside the box. Also, given two concepts, people do not usually see one, and only one, analogical mapping between the two concepts’ entities. Counterfactual reasoning is yet a third example that emphasizes the importance of utilizing mechanisms other than deductive reasoning in (creative) concept construction (actually, deduction may not work at all when computationally analyzing counterfactual conditionals [Abdel-Fattah et al., 2013]).

4 Concluding Remarks

It is not surprising that many ideas of what is currently regarded as standard in (good old-fashioned) artificial intelligence (AI) have already been laid out in Newell and Simon’s article [Newell and Simon, 1976]. The article very early anticipated improvements that seem inevitable from today’s perspective. According to Newell and Simon’s pragmatic definition, an intelligent organism is required to be ‘adaptive’ to real-world ‘situations’. Newell and Simon inspired the scientific community to begin studying human-like intelligence and its ability to adapt. However, since the beginning of AI, it was “search” that they proposed as a cornerstone of computing [Newell and Simon, 1976]. Less importance has been given to other computationally-plausible cognitive mechanisms (which may not need, or be based on, search). The notion of search is partially motivated by the subjective observation that one initially explores different ideas while actively working on a problem.

Creativity, on the other hand, is a complex general intelligence ability that is closely connected with the utilization of various cognitive mechanisms. Creativity may definitely need searching in performing specific sub-processes, such as finding analogies to newly available concepts before modifying (or adapting) them. But, after all, it certainly differs from mere search in several aspects. Our trial in this article claims that investigating, testing, and implementing cognitive mechanisms needs to attract more attention, in particular when one is interested in modeling computational creativity.

A model of computational creativity is expected to differ greatly from a classical AI model. An essential difference between creativity and AI is that AI programs have to access huge amounts of (remarkably meaningless or unsorted) data before getting (meaningful and interconnected) results, whereas creativity reflects the general intelligence characteristic of humans, who are much more selective and adaptive in this regard. For example, humans can not only induce appropriate meanings of unknown words, both within and without a given context (e.g. interpreting unprecedentedly-seen noun combinations that appear in general phrases; cf. [Abdel-Fattah and Krumnack, 2013]), but can also produce words that reflect particular meanings or descriptively summarize specific situations (e.g. giving the name “Facebook” to the online social networking service, yet a “face book” is, physically, a directory that helps people get to know each other). As a further example, consider the analysis of counterfactual conditionals. Humans can flexibly restrict the judgement of whether or not a given counterfactual conditional is verifiable, although the conditional might indicate many, or possibly infinite, possibilities (cf. [Abdel-Fattah et al., 2013]). In essence, the ‘selectivity’ and ‘adaptivity’ capacities of humans are richly adorned with insightful constructions, regardless of whether or not humans already have a vast amount of knowledge. Knowledge and experience certainly help in increasing these capacities: the more their knowledge, the more humans can appropriately select, adapt, and (creatively) construct.

A cognitively-inspired model of creativity and general intelligence should be designed to reflect, and benefit from, the essential differences between how a cognitive being and how a classical AI analogue solve the same problems. By a classical AI analogue we mean a classical AI model that solves the problems ingeniously, regardless of whether or not the way of solving agrees with that of a generally intelligent, cognitive being. To base the models on representably-modifiable conceptual constructions would hopefully be a promising solution to achieving this design goal. We claim that the design of a model of creativity should be inspired by the behavior of humans in situations where construction and adaptation of conceptions are needed, and can be implemented in the model by employing specific⁴ cognitive mechanisms. The model should as well allow for several (bidirectional) paths, sometimes even ‘bridges’, to construct and modify conceptions in a potentially unbounded process of construction, without crossing the exponential bounds that search may require. Such a potentially unbounded construction process is proposed here as a ‘looping’ process. It is therefore worth quoting part of Hofstadter’s comments on what he calls “the central cognitive loop” (described in [Hofstadter, 2001]):

“The broad-stroke pathway meandering through the limitless space of potential ideas [... goes around] and around in [...] a loop, alternating between fishing in long-term memory and unpacking and reperceiving in short-term memory, rolls the process of cognition.” [Hofstadter, 2001, p. 521]

From where the authors stand, creativity produces uncommon outcomes that provide contemporary applications of previously known constructs to new contexts. We propose that (computational) creativity should be phrased in terms of the ability of an (artificial) intelligent system to construct conceptions or modify them, using the already-existing knowledge about former concepts; a process that is both computationally plausible and cognitively inspired (cf. [Abdel-Fattah and Krumnack, 2013; Abdel-Fattah et al., 2012]), though potentially unbounded. Such systems treat the behavior ‘to create’ as that of bringing into existence a concept that has not previously been ‘experienced’, or that is significantly ‘dissimilar’ to another analogue (or even its older version). But, in essence, creativity cannot be about creating something from nothing (computationally speaking, at least); rather, it is triggered by ‘inceptions’ every now and then.

⁴ Again, the precise list of cognitive mechanisms cannot be extensively discussed in this article.

Acknowledgments

The authors would like to thank the three anonymous reviewers of this article’s first draft for their valuable opinions and encouraging comments. The concerns they raised remarkably helped in improving the presentation of the given ideas in some parts of the article.

References

[Abdel-Fattah and Krumnack, 2013] A. Abdel-Fattah and U. Krumnack. Creating analogy-based interpretations of blended noun concepts. In AAAI 2013 Spring Symposium: Creativity and (Early) Cognitive Development. AAAI Press, Palo Alto, California, 2013.

[Abdel-Fattah et al., 2012] A. Abdel-Fattah, T. Besold, and K.-U. Kühnberger. Creativity, Cognitive Mechanisms, and Logic. In J. Bach, B. Goertzel, and M. Iklé, editors, Proc. of the 5th Conference on Artificial General Intelligence, Oxford, volume 7716 of LNCS, pages 1–10. Springer, 2012.

[Abdel-Fattah et al., 2013] Ahmed M. H. Abdel-Fattah, Ulf Krumnack, and Kai-Uwe Kühnberger. Utilizing Cognitive Mechanisms in the Analysis of Counterfactual Conditionals by AGI Systems. In Kühnberger et al. [2013], pages 1–10.

[Abdel-Fattah, 2012] A. Abdel-Fattah. On the feasibility of concept blending in interpreting novel noun compounds. In Proceedings of the Workshop “Computational Creativity, Concept Invention, and General Intelligence”, volume 01-2012, pages 27–32. Institute of Cognitive Science, 2012.

[Boden, 1996] Margaret A. Boden. Artificial Intelligence. Handbook of Perception and Cognition. Elsevier Science, 1996.

[Finke et al., 1989] Ronald A. Finke, Steven Pinker, and Martha J. Farah. Reinterpreting Visual Patterns in Mental Imagery. Cognitive Science, 13:51–78, 1989.

[Finke et al., 1992] R. A. Finke, T. B. Ward, and S. M. Smith. Creative Cognition: Theory, Research and Applications. Bradford Books, 1992.

[Fodor, 1983] Jerry A. Fodor. The Modularity of Mind: An Essay on Faculty Psychology. Bradford Books. MIT Press, 1983.

[Gentner and Stevens, 1983] Dedre Gentner and Albert L. Stevens. Mental Models. L. Erlbaum Associates, 1983.
[Gigerenzer and Selten, 2002] G. Gigerenzer and R. Selten. Bounded Rationality: The Adaptive Toolbox. Dahlem workshop reports. MIT Press, 2002.

[Hofstadter, 1999] Douglas R. Hofstadter. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1999.

[Hofstadter, 2001] Douglas R. Hofstadter. Analogy as the Core of Cognition. In Keith J. Holyoak, Dedre Gentner, and Boicho Kokinov, editors, The analogical mind: Perspectives from cognitive science, pages 499–538. MIT Press, Cambridge, MA, 2001.

[Johnson-Laird and Byrne, 1991] Philip N. Johnson-Laird and Ruth M. J. Byrne. Deduction. Essays in Cognitive Psychology. Lawrence Erlbaum Associates, 1991.

[Johnson-Laird, 1983] Philip N. Johnson-Laird. Mental models: towards a cognitive science of language, inference, and consciousness. Harvard University Press, Cambridge, MA, USA, 1983.

[Johnson-Laird, 1995] Philip N. Johnson-Laird. Mental Models, Deductive Reasoning, and the Brain, pages 999–1008. The MIT Press, 3rd edition, 1995.

[Johnson-Laird, 2006] Philip N. Johnson-Laird. How We Reason. Oxford University Press, 2006.

[Kühnberger et al., 2013] K.-U. Kühnberger, S. Rudolph, and P. Wang, editors. Artificial General Intelligence – 6th International Conference, AGI 2013, Beijing, China, July 31 – August 3, 2013. Proceedings, volume 7999 of Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg, 2013.

[McCormack and d’Inverno, 2012] Jon McCormack and Mark d’Inverno. Computers and Creativity. Springer-Verlag GmbH, 2012.

[Newell and Simon, 1976] Allen Newell and Herbert A. Simon. Computer science as empirical inquiry: symbols and search. Communications of the ACM, 19(3):113–126, March 1976.

[Nolan and Nolan, 2010] Christopher Nolan and Jonathan Nolan. Inception: The Shooting Script. Insight Editions, 2010.

[Penrose and Penrose, 1958] L. S. Penrose and R. Penrose. Impossible objects: A special type of visual illusion. British Journal of Psychology, 49:31–33, 1958.

[Schneider et al., 2013a] Stefan Schneider, Ahmed M. H. Abdel-Fattah, Benjamin Angerer, and Felix Weber. Model Construction in General Intelligence. In Kühnberger et al. [2013], pages 109–118.

[Schneider et al., 2013b] Stefan Schneider, Ulf Krumnack, and Kai-Uwe Kühnberger. Dynamic Assembly of Figures in Visuospatial Reasoning. In Proceedings of the Workshop SHAPES 2.0: The Shape of Things. Co-located with UNI-LOG 2013, Rio de Janeiro, Brazil, 2013.

[Schneider, 2012] Stefan Schneider. The cognitive structure of the enumeration algorithm: Making up a positional number system. Master’s thesis, University of Osnabrück, 2012.

[Seckel, 2004] A. Seckel. Masters of Deception: Escher, Dalí & the Artists of Optical Illusion. Sterling Publishing Company Incorporated, 2004.

[Simon, 1984] H. A. Simon. Models of Bounded Rationality. MIT Press, 1984.

[Smith et al., 1995] S. M. Smith, T. B. Ward, and R. A. Finke. The Creative Cognition Approach. A Bradford book. MIT Press, 1995.

[Wang, 2011] Pei Wang. The Assumption on Knowledge and Resources in Models of Rationality. International Journal of Machine Consciousness (IJMC), 3:193–218, 2011.

[Weisberg, 1993] R. W. Weisberg. Creativity: beyond the myth of genius. Books in psychology. W. H. Freeman & Company, 1993.
Square, zero, kitchen: Start
Insights on cross-linguistic conceptual encoding
Francesca Quattri
The Hong Kong Polytechnic University
Hong Kong
francesca.quattri@connect.polyu.hk
“like”, “as” or by specific verbs like “resembles”. In order to select similes from our data set, we used a manually created set of such words (markers) used in Japanese. This allowed us to retrieve 12,214 similes, on which we performed some statistical tests. By using the JUMAN morphological parser¹ we separated and ranked 3,752 unique part-of-speech patterns that can be helpful while generating figurative expressions. The dependency parser KNP’s dictionary² and the semantic role tagger ASA³ were then used in order to rank the most popular categories and words needed to set weights for deciding on the best simile candidate in the generation process. The most frequent semantic categories characteristic of figurative speech were colors, shapes, and patterns. The words with the highest frequency are shown in Table 2 (grouped by part of speech).

Nouns          Verbs               Adjectives
eye (524)      to look like (236)  white (265)
voice (437)    to flow (204)       black (156)
water (338)    to shine (197)      beautiful (138)
sound (330)    to stand (182)      cold (135)
face (325)     to look (169)       heavy (107)
heart (309)    to fall down (161)  dark (101)
breast (282)   to put out (146)    sharp (101)
light (265)    to go (145)         big (96)
sky (235)      to move (139)       small (93)
head (218)     to feel (137)       detailed (86)

Table 2: Top 10 nouns, verbs and adjectives out of 10,658 morphological tokens found in the corpus (similes only).

For comparison, the semantic categories characteristic of random web text (3,500 sentences from a blog corpus) were mostly places, currencies, names, dates, organizations, and education, and the words most characteristic of a random web text were as follows.

Nouns: learning, media, bad, work, information, method, enterprise, understanding, company, strength, area, necessity, relationship, usage, utilization, direction, United States, system, administration, thought, two, city, money, district, caution

Verbs: to be visible, to know, to be divided

Adjectives: individual, many, old

Further semantic analysis data can broaden the system’s knowledge and also become helpful for recognizing figurative speech, because when users’ utterances are understood as metaphorical or idiomatic expressions, they need to be processed by using different comprehension strategies.

3 Example Usage: Novel Similes Generation

The topic of machine-generated metaphors is not quite as popular as the understanding task. Most first trials were limited to narrow categories of target (topic/tenor) and source (vehicle), as in [Hisano, 1996]. Ten years later, [Abe et al., 2006] tackled the problem of metaphorical data insufficiency by using statistical analysis of language data to represent large-scale human language knowledge stochastically. In order to examine the applicability of a generation task, the experimenter must conduct a metaphor generation task with a huge number of concepts; therefore Abe et al. used a Japanese newspaper corpus as a base for their language model. Researchers also use the Web as a source for their models [Veale and Hao, 2007; Masui et al., 2010] and utilize the latest thesauri and ontologies, such as WordNet [Miller, 1995], to build sophisticated generation algorithms [Huang et al., 2013]. Numerous examples extracted from Onai’s dictionary could be helpful for all existing approaches. Therefore we are planning to test the most popular approaches in the near future. For the basic preparations we have used the 12,214 similes mentioned in the previous section. Currently we are working with Ortony’s salience imbalance theory [Ortony, 1979], which predicts possible source–target shared attributes and their positions in each ranking. Together, these concepts imply that low–high topic–source pairings should cause increases in the salience of topic attributes. In Figure 1 we show the idea of two lists of attributes that describe a word in order of occurrences. So “sweet honey” is more natural than “sweet voice”, but the same adjective can describe both nouns. However, as Ortony’s theory suggests, if two adjectives are from the top or bottom of the lists (the distance between them increases), it is less likely that they can form an apt simile. We propose a method for calculating this distance in the following section.

3.1 Toward Calculating The Salience Imbalance

To observe which attributes could be used for a ground combining a source and a target (as “strong” in man strong as a bear), we experimented with two attribute lists. Preliminary tests with different data sets suggested that it is fairly probable to find common attributes between distant source and target nouns using the Japanese case frames database [Kawahara and Kurohashi, 2001], but as it is mostly verb-centered, we also created attribute lists for nouns used in our

¹ Juman System, a User-Extensible Morphological Analyzer for Japanese. Version 7.0: http://nlp.ist.i.kyoto-u.ac.jp/index.php?Juman
² Japanese Dependency and Case Structure Analyzer KNP 4.0: http://nlp.ist.i.kyoto-u.ac.jp/index.php?KNP
³ ASA 1.0 – Semantic Role Tagger for Japanese Language: http://cl.it.okayama-u.ac.jp/study/project/asa/
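The marker-based simile selection and part-of-speech-pattern ranking described in this section can be sketched roughly as follows. The English markers, the toy dictionary tagger, and the miniature corpus below are illustrative stand-ins for the Japanese marker list, JUMAN, and the actual data set:

```python
# Rough sketch of marker-based simile selection and POS-pattern ranking.
# English markers and a toy dictionary tagger stand in for the Japanese
# marker list and the JUMAN morphological parser used in the paper.
from collections import Counter

MARKERS = {"like", "as", "resembles"}  # illustrative comparison markers

def select_similes(sentences):
    """Keep only sentences containing at least one comparison marker."""
    return [s for s in sentences if MARKERS & set(s.lower().split())]

def pos_pattern(sentence, tag):
    """Map a sentence to its part-of-speech pattern with a given tagger."""
    return tuple(tag(w) for w in sentence.lower().split())

def rank_patterns(similes, tag):
    """Count and rank the unique POS patterns of the selected similes."""
    return Counter(pos_pattern(s, tag) for s in similes).most_common()

# Toy tagger (a real system would call a morphological parser instead).
TAGS = {"like": "MARK", "as": "MARK", "resembles": "MARK",
        "sweet": "ADJ", "cold": "ADJ", "voice": "N", "honey": "N",
        "ice": "N", "hand": "N", "his": "DET"}
tag = lambda w: TAGS.get(w, "X")

corpus = ["voice sweet as honey", "hand cold like ice", "his voice"]
similes = select_similes(corpus)
print(len(similes))                    # 2
print(rank_patterns(similes, tag)[0])  # (('N', 'ADJ', 'MARK', 'N'), 2)
```

The ranked patterns would then play the role of the 3,752 part-of-speech patterns mentioned above, guiding which surface shapes of similes are worth generating.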
Figure 1: An example of attribute lists for a source–target pair, created by ranking adjective–noun bigram occurrences in a blog corpus. Salience imbalance should, according to Ortony’s theory, occur between attributes placed higher and lower on such lists.

Table 3: Sixteen metaphors which had a common Ground in the Kyoto Case Frames-based attribute lists, sorted in order of distance, which is the difference between the attribute usualness values calculated from the Ground’s position in both lists.

Table 4: Results of the understandability and aptness evaluation experiment. Lines in gray show similes which had both understandability and aptness higher than the averages.
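The distance used in Tables 3 and 5 is only described informally (the difference between attribute usualness values calculated from the Ground’s position in both lists). A toy version, with invented bigram counts and list positions standing in for the usualness values, might look like:

```python
# Toy sketch of the salience-imbalance distance: rank each noun's
# adjective attributes by bigram frequency, then compare the ground
# adjective's position in the source and target lists.
# The bigram counts below are invented for illustration only.

def attribute_list(bigram_counts, noun):
    """Adjectives that occur with `noun`, ranked by descending frequency."""
    pairs = [(adj, c) for (adj, n), c in bigram_counts.items() if n == noun]
    return [adj for adj, _ in sorted(pairs, key=lambda p: -p[1])]

def salience_distance(bigram_counts, ground, source, target):
    """Difference between the ground's rank in the two attribute lists."""
    src = attribute_list(bigram_counts, source)
    tgt = attribute_list(bigram_counts, target)
    if ground not in src or ground not in tgt:
        return None  # no shared ground attribute
    return abs(src.index(ground) - tgt.index(ground))

counts = {("sweet", "honey"): 120, ("golden", "honey"): 30,
          ("sweet", "voice"): 5, ("deep", "voice"): 90,
          ("soft", "voice"): 60}
# "sweet" is highly salient for honey but lowly salient for voice,
# so "voice sweet as honey" gets a non-zero imbalance distance:
print(salience_distance(counts, "sweet", "voice", "honey"))  # 2
```

With real data, the distance would be computed from corpus-derived usualness values rather than toy ranks, and candidates could then be filtered with thresholds such as the D > 5 and D < 100 conditions reported for Table 5.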
have performed multiple additional experiments with different parameters to see which conditions are helping and which are not. The results for one strict set of conditions are shown in Table 5. In most cases, if semi-correct results are counted as positive, the newly generated similes were significantly better than random generation, but further filtering of semantically strange outputs is needed.

4 Conclusions and Future Work

In this short paper we have introduced new possibilities for figurative speech generation by using a new, vast collection of Japanese metaphorical expressions. Because this is an early stage of our study, we have performed only preliminary tests and experiments in order to get a grasp of which tools and other repositories can be combined before we start applying the data to known theories about humans’ ability to use examples while explaining physical and abstract objects. We have already started to work with more simile patterns, also including verbs, to fully utilize the Kyoto Frames database. We are experimenting with N-gram frequencies of target, ground and source triplets to create vectors which should help us discover more statistical dependencies. We are also testing WordNet and ConceptNet [Liu and Singh, 2004] as a source for further calculation of semantic dependencies, and will show the latest progress during the workshop.

References

[Abe et al., 2006] Keiga Abe, Kayo Sakamoto, and Masanori Nakagawa. A computational model of the metaphor generation process. In Proceedings of the 28th Annual Conference of the Cognitive Science Society, pages 937–942, 2006.

[Bowdle and Gentner, 2004] Brian Bowdle and Dedre Gentner. The career of metaphor. Psychological Review, 111(1):193–216, 2004.

[Carbonell, 1982] Jaime Carbonell. Metaphor: An inescapable phenomenon in natural language comprehension. In Wendy Lehnert and Martin Ringle, editors, Strategies for Natural Language Processing, pages 415–434. Lawrence Erlbaum, 1982.

[Gentner, 1983] Dedre Gentner. Structure mapping: A theoretical framework for analogy. Cognitive Science, 7:155–170, 1983.

[Glucksberg, 2001] Sam Glucksberg. Understanding figurative language: From metaphors to idioms. Oxford University Press, 2001.

[Hisano, 1996] Masaki Hisano. Influence of topic characteristics on metaphor generation. In Proceedings of the 38th Annual Conference of the Japanese Association of Educational Psychology (in Japanese), page 449, 1996.

[Huang et al., 2013] Xiaoxi Huang, Huaxin Huang, Beishui Liao, and Chihua Xu. An ontology-based approach to metaphor cognitive computation. Minds and Machines, 23:105–121, 2013.

[Jones and Estes, 2005] Lara Jones and Zachary Estes. Metaphor comprehension as attributive categorization. Journal of Memory and Language, 53:110–124, 2005.

[Kawahara and Kurohashi, 2001] Daisuke Kawahara and Sadao Kurohashi. Japanese case frame construction by coupling the verb and its closest case component. In Proceedings of the First International Conference on Human Language Technology Research, HLT ’01, pages 1–7, Stroudsburg, PA, USA, 2001. Association for Computational Linguistics.

[Liu and Singh, 2004] Hugo Liu and Push Singh. ConceptNet: A practical commonsense reasoning toolkit. BT Technology Journal, 22:211–226, 2004.

[Masui et al., 2010] Fumito Masui, Rafal Rzepka, Yasutomo Kimura, Jun-Ichi Fukumoto, and Kenji Araki. WWW-based figurative descriptions for Japanese word. Journal of Japan Society for Fuzzy Theory and Intelligent Informatics, 22(6):707–719, 2010.

[Mazur et al., 2012] Michal Mazur, Rafal Rzepka, and Kenji Araki. Proposal for a conversational English tutoring system that encourages user engagement. In Proceedings of the 19th International Conference on Computers in Education, pages 10–12. Asia-Pacific Society for Computers in Education (ICCE2011), 2012.

[Mcglone, 1996] Matthew S. Mcglone. Conceptual Metaphors and Figurative Language Interpretation: Food for Thought? Journal of Memory and Language, 35(4):544–565, August 1996.

[Miller, 1995] George A. Miller. WordNet: A lexical database for English. Communications of the ACM, 38:39–41, 1995.

[Onai, 2005] Hajime Onai. Great Dictionary of 33800 Japanese Metaphors and Synonyms (in Japanese). Kodansha, 2005.

[Ortony, 1979] Andrew Ortony. Beyond literal similarity. Psychological Review, 86:161–180, 1979.

[Utsumi and Kuwabara, 2005] Akira Utsumi and Yuu Kuwabara. Interpretive diversity as a source of metaphor–simile distinction. In Proceedings of the 27th Annual Meeting of the Cognitive Science Society, pages 2230–2235, 2005.

[Veale and Hao, 2007] Tony Veale and Yanfen Hao. Comprehending and generating apt metaphors: a web-driven, case-based approach to figurative language. In Proceedings of the 22nd National Conference on Artificial Intelligence – Volume 2, AAAI ’07, pages 1471–1476. AAAI Press, 2007.
English translation | Japanese in Roman letters
Lips as red as they were licking blood | Chi-o nameru you-na akai kuchibiru
Lip as thin as somebody press a blade against it | Ha-o oshitsukeru you-na usui kuchibiru
Voice so sweet as somebody was sucking up honey | Mitsu-wo suiageru you-na amai koe
Wife as young as somebody was talking to daughter | Musume-ni hanasu you-na wakai tsuma
Wife so young that resembles daughter | Musume-ni niru you-na wakai tsuma
Woman so rough like she was dropping fire | Hi-o otosu you-na hageshii onna
Sleep as deep as somebody was avoiding death | Shi-o kaihi-suru you-na fukai nemuri
Hand as cold as somebody was biting ice | Koori-o kamu youna tsumetai te
Hand as cold as somebody was putting it into ice | Koori-ni ireru you-na tsumetai te
Skin as thin as somebody was cutting off paper | Kami-o kiritoru you-na usui hifu
Skin as thin as somebody was peeling off paper | Kami-o hagu you-na usui hifu
Skin as thin as somebody was scratching off paper | Kami-o hikkaku you-na usui hifu
Skin as thin as somebody could stick paper (to it) | Kami-ni haritsukeru you-na usui hifu
Face as long as somebody was being put on a horse (back) | Uma-ni noseru you-na nagai kao
Face as long as somebody was aiming at a horse | Uma-ni ateru you-na nagai kao
Face as long as somebody was separated from a horse | Uma-kara hanasu you-na nagai kao
Wife as young as somebody was passing to daughter | Musume-ni watasu you-na wakai tsuma
Wife so young that you could get used to daughter | Musume-ni nareru you-na wakai tsuma
Wife so young that she could take away daughter | Musume-wo ubau you-na wakai tsuma
Wife so young that you could find in daughter | Musume-ni mitsukeru you-na wakai tsuma
Wife so young that you could be worried about daughter | Musume-ni kizukau you-na wakai tsuma
Wife so young that you could let her go to daughter | Musume-ni hanasu you-na wakai tsuma
Table 5: Examples from a Kyoto frames-based generation experiment for candidates with the same particles and grounds, where there were more than 30 verb–source pairs and 30 grounds in the Ameba corpus. The conditions for the salience imbalance distance values were D < 100 and D > 5. The first group of examples was classified by the authors as correct, the second as semi-correct, and the third as incorrect, although there was no agreement in the case of a few examples from the last group, suggesting that imagination plays a big role in evaluating novel metaphors.