
PSYCHOLOGY AND STRUCTURED DESIGN OF ARTIFICIAL INTELLIGENT SYSTEMS

Introduction to the Artificial Intelligent System's Mind

Dr. L. M. Polyakov

Globe Institute of Technology


To my family



Knowledge is not absolute. This book is not the last word; it is an invitation to discussion of the understanding and direction of development of this new class of systems. It can be used to develop the specification for Artificial Intelligent System design and applications. Collective work and the evolution of knowledge will create a better understanding of the philosophy of these systems.

All basic ideas of this book are presented in a strictly structured and dynamic way rather than as final results. As a textbook it can create an atmosphere of strong intellectual activity in the classroom. Examples of relatively simple realization of the basic ideas by ordinary students should build students' sense of confidence. All definitions and system descriptions apply only to Artificial Intelligent Systems, but in some cases they can help in better understanding the psychology of natural systems.

PREFACE

The Problem

Less than three hundred years ago human society did not have motorized transportation in the streets. The car changed human behavior and life. People learned how to live in the new environment. They learned the new traffic rules, learned how to communicate with a car through its control system, and learned the new safety rules. The 21st century is the man-intelligent-machine society: a combination of intelligent machines and human beings. The business-intelligence market grows at an annual rate of 11.5 percent and the business-performance-management category at 12.8 percent, compared with overall growth in the software market of just 8 percent [76]. Japan's robot market is estimated at $4.5B and is predicted to reach $16B in 2010. Artificial Intelligence with knowledge accumulation through the Internet will drastically change the world. People have to learn how to live in the new environment. It is very important to understand the psychology and behavior of the new inhabitants of the Earth. The American Society for the Prevention of Cruelty to Robots (ASPCR) tries to define the relationship between human society and robots. As will be shown later, the actual behavior of fully autonomous advanced artificial intelligent systems cannot be predicted in some cases. It is therefore important to prognosticate possibly dangerous results of their behavior and to protect the environment and human beings from their unauthorized actions (see FREE WILL AND ACTIONS and LAW AND MORAL). Today this is not a Sci-Fi or horror-movie problem; it is real-world existence. We must define limitations of the AIS that would be acceptable to human society; if we do not define and enforce limitations of the AIS, one day we may find ourselves at the point of no return, like an aircraft with vertical takeoff in the middle of lifting (it looks like balancing on the top of a geyser; this process cannot be controlled by a human). Although some advances have been made recently in machine learning and artificial systems

design, major issues remain unresolved. These regard abilities that are difficult to mimic in machines but that humans and animals display ubiquitously, such as adaptation, generalization, continuous learning with experience, conceptualization, and so on. Do neural brain cells provide a computational platform with characteristics and representations that could permit such abilities to be expressed in machines and applied in practice? Do neural brain processes use unique methods for performing 'computation' in general, in contrast to those used in current computer and integrated-circuit technologies?

The Solution

For some contemporary philosophers, the soundest approach to problems in philosophy of mind is to translate the mental into a set of functions. One version of functionalism that attracts wide attention is found in such specialized fields as artificial intelligence and expert systems [67]. IBM's chess-playing computer defeated Kasparov, one of the greatest chess masters of the age, in 1997. What is philosophically interesting about such outcomes is not that computers can outperform humans, but that the performance suggests that the best understanding of human mental operations is computational. There are sound philosophical and conceptual reasons for caution here. It is clear that a computer can ... mental life of computers! Development of the Artificial Intelligent System (AIS) is the process of automation of intelligent system activities. Artificial Intelligent System theory includes Psychology of the AIS (the theory of the central control system), Physiology of the AIS (the theory of the functions of the systems and their parts: Distributed Control Theory) and Anatomy of the AIS (the science of the shape and structure of systems and their parts).

It is impossible to develop and automate any process without understanding its nature in clear engineering terms. That is the reason to learn as much as possible about intelligent-machine psychology before starting AIS development. Being neither a physical science nor a biological science in the strict sense, psychology has evolved as something of an engineering science: the General Theory of Control. Application of engineering methods to the analysis of psychological abilities and problems permits a better understanding of the machine's intellectual abilities, and of the human's as well. It is important to understand the abilities and limitations of natural and artificial intelligent systems. The General Theory of Control was presented under the name Cybernetics [78]. Cybernetics is the study of communication, control, and feedback, typically involving regulatory organisms, machines and organizations, as well as their combinations. It is an earlier but still-used generic term for many of the subject matters that are increasingly subject to specialization under the headings of adaptive systems, artificial intelligence, complex systems, complexity theory, control systems, decision support systems, dynamical systems, information theory, learning organizations, mathematical systems theory, operations research, simulation, and systems engineering. There are two principal steps in preparing information:

1. Understand the object or process (define and describe it in engineering terms).
2. Organize the information (present it in a structured, algorithmic form) in a way that makes it understandable how the process works.

Information should be described as simply as possible. An algorithm demonstrates the concept of the function.

The question is: is it possible to learn the psychology of systems that do not yet exist? The answer is also a preparation of a layout for the new system design process. The author tries to cover this problem and presents a draft layout of the Psychology of Artificial Intelligent Systems. It lays the groundwork for a better understanding of Artificial Intelligent Systems behavior and of the process of AIS design for different areas of application. It is understandable that the intelligence of the AIS and of natural systems have a lot in common. This book can therefore also help to reach a better understanding of human psychology. It can be used in undergraduate courses in computer science, management, psychology, philosophy, and also in preparation of students of all specializations for understanding the new realms of the 21st century. As a college course, Psychology of the AIS has greater degrees of freedom for discussion because it is not limited by the stereotypes and dogmas of the human sciences such as psychology and social studies. The students of Globe Institute of Technology, under the author's supervision, developed all working examples in this book.

The Definition

What is intelligence and what is a mind? What is an artificial life and what is a natural one? What is consciousness and what is subconsciousness? What is creativity and what is superintelligence? What is autonomy and what are emotions? What is an Artificial Intelligent System's intuition and what is hypothesis generation? What is fairness and what is the fair deal? What is natural reproduction and what is artificial one? Are there limits to ownership of Artificial Intelligent Systems, and how do we define them? Is there an identity of an Artificial Intelligent System, and how do we define it? There are hundreds and hundreds of questions.

Relativity Theory and the Uncertainty Principle, computer science and neurobiology, migration and immigration, multi-citizenship and cross-line marriage: all of these scientific achievements and social cataclysms force us to define and redefine almost everything. The first step of any research and discussion (especially in the area of Artificial Intelligence, which has a mixture of human common sense, engineering and philosophical terms as well) is strict definition of the terms. John McCarthy introduced the new name for the field: artificial intelligence. This name defined the whole set of terms. A definition is not a blueprint of a system design but a direction of the design process. Without a good definition nothing else really matters. A good definition presents a description of a term from the user's (customer's), designer's, etc. points of view and helps to recognize it among the other terms. It should include the minimal set of defining (unique) features used for this term and be as simple as possible. A definition is "a statement of the meaning of a word, phrase, or term, as in a dictionary entry" (Dictionary). A definition should support an active approach to the problem solution. All basic terms in this book are strictly defined, by the scientific community or by the author. Absence of definitions is the cause of most theoretical and philosophical problems, discussions, and misunderstanding in the history of Artificial Intelligence science [51]. For example: the statement about artificial intelligence held by a person (see APPENDIX 8). It is important to understand the difference between definitions of the same term in different languages. For example, a word may cover just conscious and subconscious processes and not include unconscious processes, as it does in the English language (see MIND, INTELLIGENCE, CONSCIOUSNESS, THOUGHT). Some proposed definitions should be accepted as a start position to move from.

ACKNOWLEDGMENT

I begin with the founder and the First President of the Globe Institute of Technology, Mr. Leon Rabinovich. I am very grateful for his full support of this project. My special acknowledgment to Professor of Drexel University Dr. Alex Meystel, with whom I have worked over the years. Much of what I have learned has been the result of our collaborative work and discussions. My special thanks to Mr. Richard Holley for his excellent editing work. Finally, I want to thank my lovely wife, who is a medical doctor and programmer, for her patience and important critical comments.


CONTENTS

INTRODUCTION

PART 1 INTELLIGENCE

WHAT IS INTELLIGENCE?
Introduction
Definition
Development
Robustness as the Tool of Reliability
CREATIVITY
IMAGINATION
SUPERINTELLIGENCE
MEASUREMENT OF INTELLIGENCE
CLASSIFICATION OF THE INTELLIGENT TASKS AND ABILITIES OF THE AGENTS TO ACHIEVE THEIR GOALS
Introduction
Intelligence Abilities
Goal and Agent Classes
MIND, INTELLIGENCE, CONSCIOUSNESS, THOUGHT
THE MIND AS AN OPERATING SYSTEM
DISSOCIATION BETWEEN INTENTIONAL CONSCIOUS AND UNINTENTIONAL CONSCIOUS PROCESSES
CONSCIOUS, UNCONSCIOUS, AND SUBCONSCIOUS PROCESSES
AWARENESS AND SELF-AWARENESS
Awareness
Self-awareness
REFLEXES
FREE WILL AND ACTIONS
THE STRUCTURE OF INTELLIGENCE
CONCLUSION
REFERENCES

PART 2 PSYCHOLOGY OF ARTIFICIAL INTELLIGENT SYSTEMS

WHAT IS PSYCHOLOGY OF ARTIFICIAL SYSTEMS?
Introduction
Method of Analysis
Levels of Analysis
DECOMPOSITION AS THE METHOD OF ANALYSIS
THE STRUCTURE OF AIS
VECTOR OF PERFORMANCE (FUNCTIONS)
AUTONOMOUS
SENSING AND SENSATION
ATTENTION
PERCEPTION
DISCRIMINATION
OBJECT RECOGNITION
Speech and Text Recognition Technology
UNDERSTANDING AND INTERPRETATION
REASONING
Introduction
Knowledge Representation
The Structure of Knowledge Representation in the Intelligent System
Knowledge Representation in the Neuron Net
Proposition Logic
Forward Chaining
Relationship Between Abstract and Specific
Wumpus World
MEASUREMENT OF KNOWLEDGE VALUE AND POWER OF REASONING OF ARTIFICIAL SYSTEMS
ASSOCIATIVE THINKING
ABSTRACT THINKING AND CONCEPTUALIZATION
GENERALIZATION AND CLASSIFICATION
INTUITION
HYPOTHESIS GENERATION
LEARNING
Learning Concepts
Conceptual Learning
The Construction of New Production Rules
Learning by Instructions
Learning by Experience
Supervised Learning
Learning by Imitation
Curiosity, Learning by Interactions
PLANNING
PROBLEM SOLVING
Well-Defined Problems and Solution
Measuring of Ability of Problem-Solving
Multivariable Problems
Lack of Statistics in Decision-Making
PERSONALITY OF THE ARTIFICIAL SYSTEM
AGGRESSION
EMOTIONS
Detecting and Recognizing Emotional Information
Emotional Understanding
STIMULUS, MOTIVATION AND INSPIRATION
WILLINGNESS TO ACCEPT RISK
SOCIAL BEHAVIOR
The Man-Machine Society
Fairness
The Fair Deal
Development
Independent Behavior
PSYCHOLOGICAL MALFUNCTIONS, DISORDERS, AND CORRECTION
MORAL AND LAW
ART
APPREHENSIONS
ARTIFICIAL LIFE
Artificial Life as the Model of Natural One
Artificial Life
PRINCIPLES OF THE ARTIFICIAL BRAIN DESIGN
EVOLUTION AND INTELLIGENCE
GENDER OF AIS
INSTINCT AND ALTRUISM
CONCLUSION
REFERENCES

APPENDIX 1 BRAIN DEVELOPMENT
THE BRAIN
STRUCTURE OF A TYPICAL NEURON
CHEMICAL SYNAPSES
RELATIONSHIP TO ELECTRICAL SYNAPSES
THE BRAIN DEVELOPMENT STAGES

APPENDIX 2 ANALYSIS OF DEFINITIONS OF INTELLIGENCE

APPENDIX 3 MEASUREMENT OF MULTIVARIABLE FUNCTION
ADDITIVE FORM
REFERENCES

APPENDIX 4 FUZZY LOGIC

APPENDIX 5 NEURON NETWORK

APPENDIX 6 GENETIC ALGORITHM

APPENDIX 7 EXPLORE BRAIN SCANNING TECHNIC

APPENDIX 8 DEFINITION

APPENDIX 9 PREDICTION OF TIME THE NEURAL NET WILL BE AT LEAST AS COMPLEX AS THE HUMAN BRAIN

APPENDIX 10

APPENDIX 11 HIDDEN MARKOV MODEL

APPENDIX 12 THREE LAWS OF ROBOTICS

APPENDIX 13 DISCRIMINANT ANALYSIS

APPENDIX 14 INFORMATION EXCHANGE BETWEEN SHORT AND LONG TERM MEMORIES IN THE NATURAL BRAIN

APPENDIX 15 STUDENT'S DISTRIBUTION

INDEX

ABOUT THE AUTHOR


INTRODUCTION

The prime goal of Artificial Intelligence (AI) is synthesis: development and implementation of universal methods of system design. The prime goal of the Psychology of Artificial Intelligent Systems (PAIS) is analysis: to develop the information base for intelligent system design and implementation, and to study system performance and ability issues. The main AI question is: How to develop? The main PAIS question is: What to develop?

Psychology of Artificial Intelligent Systems is the science that deals with processes related to the artificial mind. These processes originate in the artificial brain (computer) and are manifested especially in thought, perception, emotion, will, memory, imagination and so on. All of these processes are functions of Intelligence. So, the first question is: What is Intelligence? What is Mind?

PART 1

INTELLIGENCE

WHAT IS INTELLIGENCE?

Introduction

There are numerous definitions of intelligence, descriptions of its nature, its measurement, and engineering procedures for artificial system hardware and software design. It is important to define basic terms before starting the discussion. While there is no single acceptable definition, there is no shortage of material for discussion, measurement, research and development of intelligent systems. Yet no universally accepted definition of intelligence exists, and people continue to debate what, exactly, it is. Fundamental questions remain: Is intelligence one general ability or several independent abilities of systems? Is intelligence a property of the brain, a characteristic of behavior, or a set of knowledge and skills?

In 1921 an academic journal asked 14 prominent psychologists and educators to define intelligence. The journal received 14 different definitions, although many experts emphasized the ability to learn from experience and the ability to adapt to one's environment. In 1986 researchers repeated the experiment by asking 25 experts for their definitions of intelligence. The researchers received many different definitions: general adaptability to new problems in life; ability to engage in abstract thinking; adjustment to the environment; capacity for knowledge and knowledge possessed; general capacity for independence, originality, and productiveness in thinking; capacity to acquire capacity; apprehension of relevant relationships; ability to judge, to understand, and to reason; deduction of relationships; and innate, general cognitive ability (Microsoft Encarta Online Encyclopedia 2003, http://encarta.msn.com, 1997-2003 Microsoft Corporation). We have entered the era of the artificial intelligence (AI) revolution but still do not know what intelligence is. We try to measure the value of intelligence, but do not know what and how to measure. It is natural to start from the beginning and try to find a workable definition of intelligence even if this phenomenon may be non-definable. Let us try to find an acceptable definition of intelligence.

Ancient Egyptians believed the heart was the center of intelligence and emotion. They also thought so little of the brain that during mummification they removed it entirely from bodies. Philosophers of different positions: materialists (Hippocrates, Aristotle, Aquinas), behaviorists (Pavlov, Simon), and cognitivists (Plato, Kant, Chomsky) [22] have developed different approaches to the problem of intelligence. There are difficulties in understanding intelligence in a way acceptable today for application. These difficulties found reflection in some earlier definitions like this:

"... know, because each material object is infinitely complex in its details" [22]. Numerous experts in different areas of science think that a definition of human intelligence is problematic. Intelligence is the system's output and can be observed and defined through the system's behavior. Behaviorism is an approach to psychology based on the proposition that behavior can be studied and explained scientifically without recourse to internal mental states. New achievements in biology, psychology and AI research and development have created better understanding and stronger conditions for developing a definition. Descriptions of intelligence can be found in numerous publications [2,6,48,49]. The contemporary global market converts the scientific (purely academic) definition of intelligence into an important characteristic of a product, a tool of competition and market war. Mass production of objects with elements of intelligence creates the problem of labeling this product with information about its level of intelligence; it is very important to promote such a product properly.

Such a presentation requires urgent development of practical methods to measure the level of product intelligence. This task is impossible without creating an acceptable definition of intelligence. Definitions of intelligence and personal abilities become an important part of automatic selection systems that look for more well-rounded candidates (Saul Hansell, "Google Answer to Filling Jobs Is an Algorithm," NYT, 01/03/07). It is impossible to draw a solid distinction between artificial and natural intelligence. This line does not even exist. Suppose we replace one natural brain neuron with an artificial one (as has already been done). Does this convert a natural brain into an artificial one? What if two neurons were replaced? How many artificial neurons are required to classify a brain as artificial?

A group of researchers led by professor Steve M. Potter at the Laboratory for Neuroengineering of Georgia Tech has created a part-mechanical, part-biological robot that operates on the basis of the neural activity of rat brain cells (2,000 or so) grown in a dish. The neural signals are analyzed by a computer that looks for patterns emitted by the brain cells and then translates those patterns into robotic movement. If the neurons fire a certain way, for example, the robot's right wheel rotates once. The leader of the group calls his creation a Hybrot, short for hybrid robot. Existing publications present many different definitions of intelligence [1-4, 7, 8, 30, 36, 40-45], including a non-definition: characteristics and abilities that are difficult to define and cannot be directly observed. This approach does not help to understand and describe natural or artificial intelligence, and we cannot accept this pessimistic approach because we desperately need a working definition that supports an active approach to the problem solution.

Axiom: A mentally healthy human baby, as well as a grown human being, is an intelligent system without any age limitations (the baby test). Research shows that in reality a child demonstrates the first intelligent abilities at age 4-7 or 9-12 months (see APPENDIX 1). It is a result of the fuzziness of intelligence. As will be shown later, the first 3-8 months can be described by the first level of the intelligence definition. It does not mean that a human being with some mental problems is not an intelligent person. In the first months of life a baby uses the ability to learn and generates a mental map (see APPENDIX 1). The baby test is a big problem for many types of existing definitions of intelligence (see APPENDIX 2). This axiom tells only that availability of conditional reflexes is a condition of the existence of intelligence. It defines just the lower limit of the area of intelligent existence. It is a necessary but not sufficient condition. Unconditional reflexes are not intelligent processes (see also REFLEXES).

Definition of Intelligence

What is Intelligence? First of all, as mentioned above, intelligence is a fuzzy term. In some cases it is very difficult to draw a line between intelligent and non-intelligent natural and artificial systems. For example, biological adaptation or any kind of evolution can be presented either as a learning (intelligent) ability or as a non-intelligent process. Acceptance of this statement as a learning ability, in combination with a definition of life (see ARTIFICIAL LIFE, EVOLUTION AND INTELLIGENCE), gives an extreme definition of intelligence: a living system means an intelligent system (?). The goal is surviving. Second, intelligence is an ability of the system to act, in the broad meaning of this word. Third, all intellectual activities are triggered by a goal. The 4-7 month old baby develops only simple goal-directed behavior (circular reactions, see APPENDIX 1). Before this age a baby acts under internal goals (get food) (stimulus-reflex). So, in order to accept a baby as an intelligent

system (see Axiom), we must include the internal goal in the definition as well as the external goal. The way a baby behaves in the first months of its life is determined by feedback (positive and negative) that is provided by various hard-wired pleasure and pain stimuli (adaptive reflexes). Distinguishing an internal goal from an external goal is a kind of self-awareness. Fourth, all kinds of intellectual activities are based on knowledge, but intelligence is not knowledge.

Knowledge is a ... that produces intelligence that can improve knowledge. Knowledge-based intelligence represents specific abilities. Knowledge reinforces intellectual activities. There are two levels of intelligence: general intelligence, which is inherited at birth, and knowledge-based intelligence (domain-oriented), which can be improved by learning. Twin studies [52] support this approach, but the result of measuring the intelligence level of twins depends on the definition of intelligence and the measurement method, which still remains problematic. Professor Ulric Neisser (Cornell University) notes [54] that the longer children attend a school, the higher their I.Q. The knowledge base is the modular, organized memory of an intelligent system, and knowledge is just the content of this base. It includes a description of an environment as a dynamic system that is acting under control of the laws of nature, society development, etc. The more rules and connections between the variables, the higher the intellectual power of a system. In AI systems that are not based on neuron-net technology, increasing the number of rules in the knowledge base (KB) increases the number of virtual connections between the different parameters as well. The knowledge base is the main source of information and intellectual power of artificial systems. The importance of knowledge is determined not just by the quantity of knowledge but by its quality as well. A right, reasonable balance between specific knowledge and common knowledge in

many different areas of application is needed, but it is not a sufficient condition for the highest level of intelligence. Numerous examples (Leonardo da Vinci and others) illustrate this thesis. A European resident will have a problem surviving in Japan if he lacks Japanese common-sense knowledge. Unfortunately, today the business and manufacturing world has a tendency toward strong specialization, not only in blue-collar work but in white-collar work as well. In most cases employers are looking for potential employees with very specific knowledge and skills. Research also supports the idea of knowledge-based intelligence: there is a long succession of efforts to demonstrate significant correlations between measures of learning ability and psychometric measures of intelligence [26].

Fifth, natural systems inherit strong information through a genetic code; this inherited part is called general intelligence. It is the inherited power of a neuron net (number of neurons and power of connections: number of connections, value of a weight function, a threshold and a transfer function) (see APPENDIX 5). A process of knowledge collection creates an information flow through the neuron net and increases the power of connections (Hebb). As a result, inheritance is an important source of the power of natural intelligence. Sixth, ... is the main criterion of an intelligence level. This level can be determined by a test. Seventh, results of new research support the idea of duality of intelligence (see APPENDIX 2, group of rules number ten, [27], [32], [38]) and general intelligence: "To our knowledge, this is the first large-sample imaging study to probe individual differences in general fluid intelligence, an important cognitive ability and major dimension of human individual difference," wrote the researchers, led by Dr. Jeremy R. Gray, a research

scientist in the department of psychology at Washington University in St. Louis [26]. This paper was published in the March issue of the journal Nature Neuroscience and on the journal's web site. Eighth, the definition should cover not just cognitive power but the power of the sensing system and the actuators. There are different levels of intelligence. Sometimes different levels of performance (skills) can be presented as different levels of intelligence. Their levels are determined in many cases by the limitations of one or more elements of the system. Advanced upper-level abilities of the intelligent structure (generalization, conceptualization, etc.) do not guarantee a high level of skills. For example, low capability of the sonar sensors can prevent a person from becoming a musician even if he/she/it has a suitable capability of the rest of the subsystems. Beethoven was not deaf at an earlier age; he lost his ability to hear later on. A composer as a music designer can ... The famous Helen Keller, an author and educator, was deaf, blind and mute, but she had a sensitive tactile system and sense of smell. She learned to ... and was able to make her great intellectual power work [34]. A scientist with a high level of intelligence may have a problem doing a manual job if he/she does not have suitable actuators. For a definition of intelligence there are two options: to extend the definition and include sensors and actuators, or to add a separate explanation of the importance of sensors and actuators. As soon as we talk about intelligence, sensors and actuators fit appropriately into this definition. No sensors: no learning and no knowledge. No actuators: no

performance; without them it is impossible to evaluate the level of intelligence. So, the intelligent system is a sentient system with actuators. Ninth, human intelligence is a product of nature. Artificial intelligence (AI) is a human product. Many people think that a machine can do only what it was programmed to do and that AI is not real intelligence. They think that intelligence is a very complicated phenomenon.

The more knowledge we collect about the environment in which one lives, the better we understand that an intelligent system is a product of the combination of relatively simple subsystems. There are numerous descriptions of intelligent system abilities [2,6,48,49]. A human is a machine (Rodney Brooks, MIT). Very complicated intelligence functions are a combination of relatively simple, understandable functions that we can successfully emulate. Even generalization (to infer from many particulars, to draw inferences or a general conclusion from) and conceptualization (to form mental images of), which sound very complicated, in reality are products of reasoning. The same is true for intuition [48]. Different levels of emulation and different levels of natural intellectual abilities of different creatures lead to vagueness in the definition of intelligence. There are people who can easily learn math or other abstract theories (are they very smart?) and at the same time cannot learn simple car-driving rules (are they very stupid?). Intelligence is a mental process [74]. Tenth, the definition of intelligence consists of two levels.

The first level of intelligence, General Intelligence (capabilities: inherited or built-in hardware and basic software), is an organized combination of conscious (cognitive) and subconscious potentials of abilities in a sentient system that enable it to direct and influence mental and physical behavior in accordance with the system's external or internal goals. As inherited intelligence abilities, General Intelligence is in some way the product of evolution. ... is the infancy level (up to 4 or 8 months old).

Conscious is a capability of thought, will, or perception (having knowledge).

Cognition is the mental process or faculty of knowing, including aspects such as awareness, perception, reasoning, and judgment; that which comes to be known, as through perception, reasoning, or intuition [74].

General Intelligence defines capabilities as opposed to abilities of the system. It determines the capacity of the system to exercise its abilities. It can be evaluated indirectly through electrical and chemical brain (computer) activities that are measured by instrumentation, or through the technical description of the AI system. A level of fuzziness determines a level of confidence. This definition determines a second level of intelligence that is developed by learning. It is acceptable for an artificial system but should be carefully applied to a natural system. This definition includes main features (mandatory features) that determine the defining term as a class description, and optional features, extra qualities of a subject or a subclass inside the class. In reality the terms learning (knowledge collection) and reasoning (see APPENDIX 2) can be replaced by these features. In this case:

The first level of intelligence, General Intelligence (capabilities: inherited or built-in hardware and basic software), is a combination of learning and reasoning as mandatory capabilities (with more intellectual capabilities as optional) of a sentient system that enable it to direct and influence mental and physical behavior in accordance with the system's external or internal goal. This definition supports the axiom (the baby test). Features:

The abilities to manipulate symbolic representation, to form abstract concepts (from definitions shown in APPENDIX 2), to make generalization/specification, to have imagination, intuition and creativity are not mandatory but optional features of intelligence. They fail ... of intelligence.

The second level of intelligence, Knowledge-based intelligence, can be defined as a knowledge-based general intelligence (or ability) of a domain-oriented system to act under

existing constraints (limitations) and reach external or internal goals, or decrease the distance between the start and the goal stages (intellectual adaptation). ... is the infancy level (8 months and older). This is a different stage from the early ... This type of intelligence includes the same types of abilities as the capabilities of general intelligence. Usually reasoning includes a goal as a direction of reasoning. This definition describes abilities of the system as opposed to its capabilities. A goal's description can be presented in crisp, fuzzy, or probability and statistics theory languages. Knowledge is a combination of rules and procedures (data is not knowledge but information about the environment). Autonomy is strongly related to intelligence, but it is very difficult to use it for a definition of intelligence (see also AUTONOMOUS). These definitions combine all three philosophical approaches: materialistic (brain/hardware as the tool of intelligence), behaviorist (stimulus-response), and cognitivist (minds are made of collections of representations that constitute symbols and images). The first definition represents the cognitive power of the brain; the second one represents the ability to use this power. These two definitions can be combined into a single very complicated one, but it does not make sense. Each capability/ability should be defined separately and measured with or without aggregation of all results into one [50]. The second level of the definition cannot be realized without the first one. An intelligent system must be able to collect knowledge and infer conclusions. The hierarchical structure of abilities is described in [41,49] (see THE STRUCTURE OF INTELLIGENCE). Dr. A. Meystel presented a similar definition: "... aspects of the world and manipulating these representations or beliefs, where the latter may aid in accomplishing a goal procedure." Note: a conditional if-then statement in a hard-coded program is not an element of the

knowledge base. A hard-coded system does not have a KB and cannot learn. Only a system that is based on knowledge separated from the source code can be an intelligent

system [40]. A hard-wired neuron net with constant parameters (weights) is not an intelligent system because it cannot learn. In some way it is possible to say that an intelligent system is a system that has the capacity or ability to make a choice (learning and reasoning with an internal presentation of a goal). This definition has many supporters [21]. It gives an answer about intelligence: learning and reasoning are manipulation of symbolic representation. In this case it is possible to say, in some way, that intelligence is the ability to manipulate symbolic representation in accordance with the goal. Definitions 1 and 2 (see APPENDIX 2) give abilities for intelligence measurement. In the case of two levels of intelligence, rejection of the baby axiom changes nothing for a human being but creates problems for the definition in the animal world. By the way, adaptation is defined as "to become suitable to a new or special application or situation" (APPENDIX 2); group 3, the first definitions in group 5, and group 7 are similar to knowledge-based definitions but do not cover the general intelligence definition (artificial systems). All existing definitions represent different aspects of the problem and are very helpful in finding the correct solution. Proposed above are definitions that did not fail the baby test; this is the basis upon which to work with artificial intelligent systems. In the second-level definition, cognitive abilities are limited by knowledge and extended by learning. Knowledge-based intelligence can be evaluated by behavior tests. As was previously mentioned, General Intelligence of an artificial system can be evaluated by reading technical characteristics from the design documentation and program source code. Unfortunately, access to this information usually is not available under conditions of secrecy.
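Returning to the note above about hard-coded if-then statements versus a knowledge base separated from the source code: the following is a minimal sketch (mine, not from the book) that contrasts the two. The facts, rule names and the threshold values are invented for illustration; the point is only that in the second variant the rules are data, so new rules can be acquired at run time while the control code stays unchanged, which is the sense in which such a system can learn.

```python
# Hard-coded variant: the rule lives in the source code and cannot be changed
# without reprogramming, so the system cannot learn.
def hard_coded(temperature):
    if temperature > 100:
        return "open valve"
    return "do nothing"

# Knowledge-based variant: rules are data, kept apart from the control code.
knowledge_base = [
    {"if": lambda facts: facts["temperature"] > 100, "then": "open valve"},
]

def infer(facts, kb):
    """Generic control loop: scan the rule base and return the first firing rule."""
    for rule in kb:
        if rule["if"](facts):
            return rule["then"]
    return "do nothing"

# "Learning": a new rule is appended to the knowledge base at run time.
knowledge_base.append(
    {"if": lambda facts: facts["pressure"] > 5, "then": "sound alarm"}
)
print(infer({"temperature": 90, "pressure": 7}, knowledge_base))  # sound alarm
```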

Both of these definitions of intelligence agree with existing bi-factored, multiple-intelligence, and information-processing theories of natural intelligence [47]. The time it takes to reach the goal is one of many important characteristics of a system's performance, like learning ability, duration of object recognition, etc., and should not be incorporated into the definition. A time response may reflect the level of intelligence, but it does not define a system as intelligent. Note: the IQ test takes time response into consideration. An expert system usually is defined as an intelligent system, but what is the goal? The goal is "the purpose toward which an endeavor is directed; an objective" (Webster). Using this definition,

the goal can be defined as the purpose toward which an effort is directed. In this case an expert system together with the user (who activates the goal) can be defined as an intelligent system. The goal or purpose (area of application) of the system usage is hidden in the system itself and should be activated by activation of the appropriate KB. This demonstrates the fuzziness of the division between intelligent and non-intelligent systems. Another definition of a goal (purpose): "A result or an effect that is intended or desired; an intention" (Webster). This definition fits very well with the term. Expert systems with just two active abilities (learning and reasoning) are still intelligent systems, with the choice of the correct diagnosis as the system's outcome.

Items [62,63,64] show that INTELLIGENCE, as we know it, somehow reflects the structure and the actual properties of the BRAIN, even its ability to use and manipulate the extended string intelligence-brain-languages-consciousness. From [62]: "... tentative pluralistic set of narratives recounting our experience." Marvin Minsky has also emphasized the plurality of mental processes in his work ... to familiarize ourselves with ... In this work we do not discuss the structure of the brain. But the link between intelligence and the brain has the reverse sequence:

brain - intelligence - consciousness - languages.


Fig. I-1. Data and knowledge transformation in intelligent and non-intelligent systems.

Robustness as the Tool of Reliability

In computing terms, robustness is the resilience of the system under stress or when confronted with invalid input. It is the ability of the software system to maintain function even with expected and unexpected changes in internal structure or external environment.

For example, an operating system is considered robust if it operates correctly when it is starved of memory or storage space, or when confronted with an application that has bugs or is behaving in an illegal fashion, such as trying to access memory or storage belonging to other tasks in a multitasking system.

Artificial Intelligent Systems are very sophisticated systems with high-speed performance. It is difficult to monitor their activity, to maintain all their functions without interruption in case part of the system fails, and to protect the environment from their malfunctions. Redundancy is one way to avoid this problem, but not the best one. The problem can be solved in a more efficient and cheaper way by automatic reassignment of the failed function's activity to the active parts of the system. This procedure exists in a natural brain. An intelligent system with this ability can be defined as a Robust Intelligent System. This replacement is possible because all intellectual functions are based on two main procedures of data manipulation: learning and reasoning. Some subsystems, such as the control systems of the symmetrical (left and right) parts of a body, or perceiving and conceiving (see LEARNING), and some others, execute similar procedures.

The Robust Artificial Intelligent System is a system with the automatic ability to reassign execution of an intellectual function from a failed subsystem to an active one, or to protect the system against noisy or non-adequate input and output information. A new type of computer that mimics the complex interactions in the human brain is being built by UK scientists at the University of Manchester (Scientists to build 'brain box', BBC, 17.07.06). Professor Steve Furber, of the university's school of computer science, said: "Our brains keep working despite frequent failures of their component neurons, and this 'fault-tolerant' characteristic is of great interest to engineers who wish to make computers more reliable." The working part of the natural system can pick up a new function only under specific conditions and after intensive training. This is not acceptable from the high-reliability point of view, but the training can be done beforehand. This process must be quick and reliable. "Our aim is to use the computer to understand better how the brain works at the level of spike patterns, and to see if biology can help us see how to build computer systems that continue functioning despite component failures," he explained.

In the brain, groups of neurons work together, producing bursts of activity called "spikes". The "brain box" will use large numbers of microprocessors to model the way networks of neurons interact. The natural brain has an ability to regenerate a failed part of the brain. After a stroke, new blood vessels form and newly born neurons migrate to the damaged area to aid in the regeneration process of the brain. In mice, UCLA neurologists identified the cellular cues that start this process, causally linking angiogenesis, the development of blood vessels, and neurogenesis, the birth of neurons. Regeneration in the artificial world is a new problem for artificial biotechnical intelligent systems. Being able to understand the environment (usually time-varying and unknown a priori) is an essential prerequisite for intelligent/autonomous systems such as intelligent mobile robots. The environmental information can be acquired through various sensors, but the raw information from sensors is often noisy, imprecise, incomplete, and even superficial. To obtain from raw sensor data an accurate internal representation of the environment, or a digital map with accurate positions, headings, and identities of the objects in the environment, is very critical but very difficult in the development of robotic systems. The major challenge comes from the uncertainty of the environment and the insufficiency of sensors. Basically there are two categories of techniques for handling uncertainties: adaptive and robust. Adaptive techniques exploit a posteriori uncertainty information that is obtained on line, whilst robust techniques take advantage of a priori knowledge about the environment and sensors. Robustness can be measured by the number of failures and the time of function restoration.
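The reassignment idea described in this section can be sketched as a simple dispatcher that routes an intellectual function to a backup subsystem when the primary one fails. This is an illustrative, assumption-laden sketch, not the architecture the book proposes; the subsystem names and failure condition are invented.

```python
class RobustDispatcher:
    """Route an intellectual function to a backup subsystem when the primary fails.

    Illustrative only: real reassignment would also require the backup to be
    trained in advance, as the text notes.
    """
    def __init__(self):
        # Primary and backup handlers for each intellectual function.
        self.handlers = {
            "object_recognition": [self._left_recognizer, self._right_recognizer],
        }

    def call(self, function_name, data):
        last_error = None
        for handler in self.handlers[function_name]:
            try:
                return handler(data)
            except RuntimeError as err:      # subsystem failure
                last_error = err             # fall through to the next subsystem
        raise RuntimeError(f"all subsystems failed: {last_error}")

    def _left_recognizer(self, data):
        raise RuntimeError("left module offline")   # simulated failed subsystem

    def _right_recognizer(self, data):
        return f"recognized {data!r} with backup module"

dispatcher = RobustDispatcher()
print(dispatcher.call("object_recognition", "cup"))
```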

CREATIVITY

The most contested ability of intelligence is creativity. There is a strong opinion in the community of psychologists that creativity is not related to intelligence, and it is not clear what creativity is. More than 60 different definitions of creativity can be found in the psychological literature. As was mentioned before, the natural brain develops creativity when a baby is 7-9 months old (see APPENDIX 1). Let us look at some existing definitions.

... but low). Intelligence is based on logic and knowledge (convergent thinking), creativity ... These contradictions demonstrate the existence of a problem in the understanding of intelligence and creativity and in their measurement.

... convergent and divergent thinking. Convergent thinking is logical, conventional (based on logic and knowledge), and focused on a problem until a solution is found. Divergent thinking is loosely organized, only partially directed, and unconventional. Divergent means extraneous, immaterial, inconsequential, not germane, peripheral, tangential, unrelated ... (of a general idea derived or inferred from specific instances or occurrences), ideas, theories or products that are unique and novel. Creativity: the imaginative ability to generate new problem ... What are unusual responses to problems: a fresh, original, innovative and unusual approach?

Convergent thinking is reasoning, by its definition. Well-organized and poorly organized construction workers are still construction workers. It is true that it is easier to organize reasoning in the precise world of words than in an obtuse world of images and sensations. Searching for information in both of these worlds is still seeking and organizing existing information. All processes are directed, but some of them are directed intentionally by will, others unintentionally in accordance with internal or external goals (signals). In [62] three different definitions of creativity are presented. All of them may be summarized into this definition: creativity is ..., high in quality, and appropriate. There are several descriptions of the most creative strategies. In reality all of them are just standard problem-solving strategies [5,35]. Creativity (or creativeness) is a mental process involving the generation of new ideas or concepts, or new associations between existing ideas or concepts, a process that resolves contradiction or explains new events through reasoning. It is a search for answers to questions: How to get this feature? What can happen if something is done? How can these facts be combined into a new theory? Etc. It has been studied from the perspectives of behavioural psychology, social psychology, psychometrics, cognitive science, artificial intelligence, philosophy, history, economics, design research, business, and management, among others. The studies have covered everyday creativity, exceptional creativity and even artificial creativity. Some researchers think that, unlike many phenomena in science, there is no single, authoritative perspective or definition of creativity. Unlike many phenomena in psychology, there is no standardized measurement technique. This is not a productive position. Creativity is a domain-oriented phenomenon. ... contradictions, a businessman has a lower level of tolerance to businesslike problems.

An agent's personality involvement creates a problem for creativity measurement. In this case the most important personality feature is the ability to take a chance or risk, but not irresponsible adventurism. It is determined by the level of knowledge, availability of needed tools, the reward (benefit/loss) ratio, and the probability of winning or failing. Different areas of activity have different losses: in business, investment; in the military, life; in science, time or prestige; and so on.
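As a rough numeric illustration of the factors just listed, the following sketch weighs a risky action by its benefit/loss ratio and the estimated probability of success. The decision rule, the threshold and the numbers are invented for illustration and are not taken from the book.

```python
def accept_risk(benefit, loss, p_win, threshold=0.0):
    """Return True if the expected value of taking the chance is above a threshold.

    benefit : gain if the action succeeds
    loss    : cost if the action fails
    p_win   : estimated probability of success
    """
    expected_value = p_win * benefit - (1.0 - p_win) * loss
    return expected_value > threshold

# A business example: large potential gain, moderate loss, 40% chance of success.
print(accept_risk(benefit=100_000, loss=30_000, p_win=0.4))     # True
# The same odds with a catastrophic loss are rejected.
print(accept_risk(benefit=100_000, loss=1_000_000, p_win=0.4))  # False
```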

Psychologists cannot observe a strong correlation between intelligence and creativity because they measure intelligence as an integral function of different abilities (IQ), but creativity in most cases is a function of only several specific abilities. There are two types of creativity:

1. Conscious reasoning is triggered intentionally in accordance with information about a new event or an open contradiction. ...
2. Subconscious reasoning is triggered unintentionally by a new event or an open contradiction. ... (see INTUITION, ASSOCIATIVE THINKING and Curiosity, Learning by Interactions).

It is a constant search for the answer to the question: How can I utilize this knowledge? Psychologists have tried to develop tests to measure human capacity for creativity (Sternberg & Lubart, 1991; J. P. Guilford, 1967). Guilford asked people to name as many uses as they could for common objects such as a brick. Skilled divergent thinkers came up with unconventional answers: use it to prop a door open, use it as a paperweight, and so on [25].

This is typical associative thinking, i.e. the retrieval of information by association through the links between words (see ASSOCIATIVE THINKING). A second approach to measuring creativity is the Symbolic Equations Test. People are asked to produce symbolic equivalents; for the phrase "a candle burning low," for example, the divergent thinker is likely to imagine such analogous events as dying, a sunset, and water trickling down a drain [25]. The Artificial Knowledge Base has a multilevel structure (see also REASONING, The Structure of Knowledge Representation). An abstract model of an object or event is located on the upper level of the base structure. A description of the specific object is located on the lower level of the base structure. All objects with a similar abstract model are combined in one group. Some objects and events can be presented by several different abstract models and be placed in several groups. In this case the Symbolic Equations Test is a procedure of searching through abstract-model similarity. A third measure is the Remote Association Test, in which people are shown three words and asked to come up with a fourth that links all the others. For example, the words piano, record, and baseball are all linked to player [25]. The Remote Association Test is thinking by analogy. These analogies are generated by learning (reading, watching images, and so on). It is not ... with fuzzy objects and fuzzy relationships, but it is still a more organized and sophisticated process of thinking. No gimmicks. The Remote Association Test is associative thinking using different connections between terms. As we can see, all answers to the different methods of creativity measurement are based on thinking or reasoning and knowledge. Research shows that the brain develops creativity when a baby is 7-9 months old (APPENDIX 1). A brain product is nothing but intelligence. It means that creativity is an intellectual process. It is a mixture of intentional and unintentional processes.

Genius

In the philosophy of Arthur Schopenhauer, a genius is a person in whom intellect predominates over will much more than for the average person. In the philosophy of Immanuel Kant, genius is the ability to independently arrive at and understand concepts that would normally have to be taught by another person. For Kant,

"genius" means being originality. This genius is a talent for producin g ideas which can be 17 described as non-imitative. Kant's discussion of the characteristics of genius is largely contained within the Critique of Judgement and was well received by the roma ntics of the early 19th century. Genius is a result of extreme development one or several abilities; ex traordinary intellectual and creative power, a person of extraordinary intellect and talent: "One is not born a genius, one becomes a genius" (Simone de Beauvoir). A genius is a p erson who has an exceptionally high intelligence quotient, typically abov e 140 intellectual power is power that can be observed in very few individuals working in the same area of knowledge. It is possible that extraordinary intellectual power can be observed in some individuals of groups of other creatures. Extraordi nary intellect power can be observed in one specific or several different areas of activities. In some cases it can b e combined with disability in other areas. It is the reason why researchers cannot se e correlation between intelligence and geniality.

Creativity depends on the personality of an agent. It is important to include a personality test in the process of measurement. Creativity as an ability is a universal feature, but it depends on existing knowledge. The development of calculus by Isaac Newton was very much a creative process. The derivative (the main idea of calculus) is a combination of two ideas contemporary at that time: algebraic division and the idea of limits. In this case it is possible to define creativity as a domain-oriented ability to create new knowledge as a new combination and new application of existing knowledge. The higher the

level of knowledge, the higher the probability (p) of a demonstration of creativity (C). Creativity is a function of the knowledge level (K), the ability of associative thinking (AT) and reasoning (R) as a tool of problem solving:

pC = f(K, AT, R)

Artificial Intelligent Systems can use the Internet as their natural kno

wledge base. This knowledge base contains a huge number of existing methods of problem solving in different areas of application (Greedy Search, Genetic Algorithm, and hundreds more). GenoPharm software (Berkeley Lab) can find hidden knowledge in thousands of Internet publications that was overlooked by scientists. This software is based on associations between terms. It infers new knowledge by connecting closely related terms in one meaningful string. A team at Purdue University is currently developing a "data-rich" environment for scientific discovery that uses high-performance computing and artificial intelligence software to display information and interact with researchers in the language of their specific disciplines. Problem-solving technology (the greedy algorithm and others) is a powerful tool of creative system development. The Genetic Algorithm is another method to develop creative systems (see APPENDIX 6). Imagination Engines, Inc. (Dr. Stephen Thaler) patented the Creativity Machine (Fig. I-2). That engine has the capability of human-level discovery and invention. An artificial neural network that has been trained on some body of knowledge and then perturbed in a specially prescribed way tends to activate into concepts and/or strategies (e.g., new ideas) derived from that original body of knowledge. These transiently perturbed networks are called 'imagination engines' or 'imagitrons'. It is an extremely valuable neural architecture. Optional feedback connections between this latter computational agent and the imagination engine assure swift convergence toward useful ideas or strategies. Agitation of the neuron net by input noise (a signal) can change the weights of neuron connections randomly. Information that was saved in this net beforehand can be a source of new concepts generated randomly by the net in this dynamic regime. The system generates the output from the existing information. Mutation of saved information (chromosomes)

generates a new offspring generation.
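A minimal sketch of the perturbation idea just described: a crudely "trained" associative network has noise injected into its connection weights, so that it emits variations derived from, but not identical to, the patterns it has stored. The network, the stored patterns, the recall threshold and the noise level are invented for illustration and do not reproduce Thaler's patented architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny "trained" associative layer that stores three binary patterns.
patterns = np.array([[1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 1, 0, 0]], dtype=float)
weights = patterns.T @ patterns / len(patterns)   # crude Hebbian-style storage

def recall(w, cue):
    return (w @ cue > 0.5).astype(float)           # threshold the response

# Unperturbed recall simply reproduces a stored pattern.
print(recall(weights, patterns[0]))

# "Tickling" the network: random perturbation of the weights makes it emit
# novel combinations derived from, but not identical to, the stored patterns.
for _ in range(5):
    noisy = weights + rng.normal(scale=0.3, size=weights.shape)
    print(recall(noisy, patterns[0]))
```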

An Imagination Engine is a trained artificial neural network that is stimulated to generate new ideas and plans of action. Neural networks were trained upon a collection of patterns representing some conceptual space (i.e., examples of either music, literature, or known chemical compounds), and then the networks were internally 'tickled' by randomly varying the connection weights joining neurons. If the connection weights were varied at just the right level, the network's output units would predominantly activate into patterns representing new potential concepts generalized from the original training exemplars (i.e., new music, new literature, or new chemical compounds, respectively, that it had never been exposed to through learning). In effect, the network was thinking "out of the box", producing new and coherent knowledge based upon its memories, all because of the carefully 'metered' noise being injected into it. From an engineering point of view, this is quite phenomenal: a neural network trains upon representative data for just a few seconds and then generates whole new ideas based upon that short experience. In effect, an engine for invention and discovery within focused knowledge domains was created. A disorder of an intelligent system can develop outstanding creativity (see also PSYCHOLOGICAL MALFUNCTIONS, DISORDERS).

Algorithm (Associative Thinking method) (see ASSOCIATIVE THINKING):
1. Is this an object?
2. If ..., then ... in the memory.
3. If ...
4. If ...
5. Are these terms related to any method of problem solving?
6. If ...
7. If ...

8. If none fits the problem, then use another procedure or fail.

Fig. I-2. An Imagination Engine (http://www.membrana.ru/articles/inventions/2004/01/26/212000.html)

Algorithm (searching through an abstract model similarity):

1. Is it an event or process? 2. If 3. Define the specific events or processes are related to the model Note: For easy reading and understanding all algorithms are presented in the simple way without separation of knowledge from control, as it should be done in intelli gent systems (see REASONING).
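A minimal sketch of the weight-perturbation idea behind the imagination engine is given below. The tiny network, its stored weights and the noise level are illustrative assumptions, not Thaler's patented architecture; the point is only that metered noise applied to learned weights turns recall into variation.

    # Sketch of the "imagination engine" idea: a trained network's weights are
    # perturbed by metered noise so that its outputs drift into new patterns
    # derived from, but not identical to, its training memories. The tiny
    # network, the stored weights and the noise level are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these weights were learned from some body of knowledge.
    W_trained = rng.normal(size=(8, 8))

    def generate(pattern, weights, noise_level):
        # Run one pattern through the (perturbed) network.
        noisy_weights = weights + rng.normal(scale=noise_level, size=weights.shape)
        return np.tanh(noisy_weights @ pattern)

    memory_pattern = rng.normal(size=8)          # a stored "memory"
    faithful = generate(memory_pattern, W_trained, noise_level=0.0)
    imagined = generate(memory_pattern, W_trained, noise_level=0.3)

    # With zero noise the net reproduces what it learned; with metered noise
    # it activates into variations of that knowledge -- candidate new ideas.
    print("recall   :", np.round(faithful, 2))
    print("imagined :", np.round(imagined, 2))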

IMAGINATION

Imagination is the formation of a mental image of something that is neither perceived as real nor present to the senses; the ability to confront and deal with reality by using the creative power of the mind [74].

The ability of imagination is a very sophisticated intellectual ability. It is a high level of creativity. A common use of the term is for the process of forming new images in the mind which have not been previously experienced, or which are experienced only partially or in different combinations. The new image can be created as a logical expansion of previously known images in accordance with a description of new features and abilities. Imagination in this sense, not being limited to the acquisition of exact knowledge by the requirements of practical necessity, is up to a certain point free from objective restraints. The ability to imagine oneself in another person's or agent's place is very important to social relations and understanding in a natural, artificial or mixed environment. The most common type of this activity is compassion. Progress in scientific research is due largely to provisional explanations which are constructed by imagination, but such hypotheses must be framed in relation to previously ascertained facts and in accordance with the principles of the particular science.

There are two main types of imagination:
1. Subjection imagination generates the scene or sensation.
2. Objection imagination generates the object.

The image may not represent the original with a certain level of probability, or it may generate a fuzzy image. Associative thinking is an important tool of imagination development. Imagination applies both strategies, decomposition and combination, to analyze a complex scene or to generate a complex scene by combination of simple shapes.

Algorithm of SUBJECTION imagination:
1. Get the description of the subject for imagination.
2. Use ASSOCIATIVE THINKING to generate the image of sensation.

3. Generate the combination.
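A minimal sketch of how subjection imagination might be mechanized is given below; the associative memory contents and the simple combination rule are illustrative assumptions, not definitions from this book. The objection variant, described next, adds a decomposition step before the associative retrieval.

    # Toy sketch of subjection imagination: associative retrieval of stored
    # sensation fragments followed by their combination into one new image.
    # The memory contents and the combination rule are assumed examples.
    ASSOCIATIVE_MEMORY = {
        "warm":  {"temperature": "warm"},
        "beach": {"ground": "sand", "sound": "waves"},
        "night": {"light": "moonlight"},
    }

    def associative_thinking(term):
        # Return the sensation fragment most closely associated with the term.
        return ASSOCIATIVE_MEMORY.get(term, {})

    def subjection_imagination(description):
        # 1. description of the subject  2. associative retrieval  3. combination
        image = {}
        for term in description:
            image.update(associative_thinking(term))
        return image

    print(subjection_imagination(["warm", "beach", "night"]))
    # {'temperature': 'warm', 'ground': 'sand', 'sound': 'waves', 'light': 'moonlight'}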

Algorithm of OBJECTION imagination:
1. Get the description of the object functions for imagination.
2. Make a decomposition of the function.
3. Use ASSOCIATIVE THINKING to generate the image of sub-objects.
4. Generate the combination.

SUPERINTELLIGENCE

All kinds of artificial systems are more powerful and capable than a natural system in each specific area of application (NYT, Wednesday July 12, 2006). By a "superintelligence" we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. (How Long Before Superintelligence? Department of Philosophy, Logic and Scientific Method, London School of Economics, nick@nickbostrom.com, http://www.nickbostrom.com, 2000) (MIT project).

Smart is characterized by sharp, quick thought. Smart is often a general term implying mental keenness; more specifically it can refer to practical knowledge, ability to learn quickly, or to sharpness or shrewdness. So smartness is a highly dynamic kind of intelligence with a goal that directs to personal gain [74].

Intelligence, as was shown before, is based on a combination of knowledge collection (learning) and knowledge manipulation (reasoning). Any contemporary intelligent system has learning and memorization limitations. Domain orientation of intelligent systems is a result of these limitations. Future AI systems will not have these limitations. Connection to the Internet can extend the complexity and size of the system, and its power, practically beyond any limits. It is important to redesign the Internet to make it accessible to data and knowledge mining. This makes it possible to create superintelligent systems. Computational power and the ability and speed of knowledge manipulation are the base of superintelligent systems.

Knowledge manipulation is a more serious problem. There is a lot of criticism, and support, of the logical power of knowledge manipulation [51]. But the optimistic position is determined by the statement: everything under the Moon is developed in accordance with the laws of Nature. All laws of Nature are perceivable, but it takes time. It is reasonable to accept that a reasonable level of knowledge manipulation will be developed in a reasonable time.

Most proposed methods for creating smarter-than-human or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The means speculated to produce intelligence augmentation are numerous, and include bio- and genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and mind transfer. Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to directly initiate the Singularity (something uncommon or unusual), a choice the Singularity Institute addresses in its publication "Why Artificial Intelligence?" (2005). Advocates of friendly artificial intelligence acknowledge the Singularity is potentially very dangerous and work to make it safer by creating AI that will act benevolently towards humans and eliminate existential risks. AI researcher Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines. Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans, though the crux of Asimov's stories is often how the laws fail. In 2004, the Singularity Institute launched an internet campaign called 3 Laws Unsafe to raise awareness of AI safety issues and the inadequacy of Asimov's laws in particular. See also APPENDIX 10.

MEASUREMENT OF INTELLIGENCE (see APPENDIX 3)

The I.Q. test is the universal method of human intelligence measurement. It is possible to use this method as an M.I.Q. (Machine I.Q.) test. In reality the I.Q. test measures the level of basic (mandatory) intelligence abilities. Advanced (optional) abilities can be measured by a more sophisticated test that includes questions related to procedures of proof. By the way, the M.I.Q. test is a specifically organized Turing test.

There are several standardized IQ tests based on evaluation of different sets of abilities or skills. Usually they are very strange sets and are not based on a definition or understanding of intelligence: "...intelligence tests, but some scholars argue that this definition is inadequate and that intelligence is whatever abilities are valued by one's culture" (Encarta Online Encyclopedia 2003, http://encarta.msn.com, 1997-2003 Microsoft Corporation). For example, one standardized test measures 12 mental abilities, and separate scores for each are computed. These abilities are as follows: Visual, Vocabulary, Spatial, Arithmetic, Logical, General Knowledge, Spelling, Short Term Memory, Computational Speed, Geometric, Algebraic, Intuition. This set of skills presents a chaotic understanding of intelligence. Basic skills for an I.Q. test can be determined from the skill (ability) structures (see THE STRUCTURE OF INTELLIGENCE). In Part 2 we will return to the measurement of the different intellectual abilities.

Measurement of both artificial and natural intelligence levels can be done using a two-dimensional scale. For comparative measurement of two or more systems, the first dimension shows the ratio between the levels of knowledge of the different intelligent systems; the second one shows the level of intelligent abilities. Only systems with equivalent knowledge can be compared by level of intelligence. It is the reason why I.Q. tests must be applied only to the same human age and, maybe, to the same level of education. In the case of a single-system measurement the ratio should be set equal to 1. The difference between natural and artificial intelligence is defined by the way the system was developed: gradual development of a natural system from zero up, and of an artificial one from the level defined by the system designer. In the most advanced cases an artificial system can be designed with a low level of intelligence and a strong capability to develop advanced abilities through learning. A. Turing's suggestion: "the best way to pass the Turing test is to build a baby machine and train it." See APPENDIX 1.

So, the goal of this area of science is analysis of behavior and synthesis of the structure of intelligence based on a suitable technology (expert systems, evolutionary algorithms, neural nets, and so on) in accordance with the specific area of application, the level of autonomy, and universalism. A magnetoencephalogram (see APPENDIX 7) shows lower brain activity in persons with higher IQ. High local brain activity in combination with relatively low activity of the rest of the brain demonstrates different information density. A high level of intelligence demonstrates almost the same information density. It is a result of more efficient brain organization and can be used as a hint for how to design an optimal artificial brain structure and an objective measurement. But we still don't have a universal method of intelligence measurement acceptable to the scientific community.
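One possible reading of the two-dimensional scale described above is sketched below; the numeric scales, the comparability tolerance and the example scores are assumptions made for illustration, not values given in this book.

    # Illustrative sketch of the two-dimensional comparison described above:
    # dimension 1 is the ratio between the knowledge levels of two systems,
    # dimension 2 is the level of intelligent abilities. The scoring scales
    # and the 10% comparability tolerance are assumed, not from the book.
    def compare(system_a, system_b):
        knowledge_ratio = system_a["knowledge"] / system_b["knowledge"]
        ability_levels = (system_a["abilities"], system_b["abilities"])
        # Only systems with (roughly) equivalent knowledge are compared by ability.
        comparable = abs(knowledge_ratio - 1.0) < 0.1
        return knowledge_ratio, ability_levels, comparable

    chess_program = {"knowledge": 80, "abilities": 4}
    human_player  = {"knowledge": 78, "abilities": 7}

    ratio, levels, ok = compare(chess_program, human_player)
    print(f"knowledge ratio = {ratio:.2f}, ability levels = {levels}, comparable = {ok}")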

CLASSIFICATION OF THE INTELLIGENT TASKS AND ABILITIES OF THE AGENTS TO ACHIEVE THEIR GOALS

Introduction

What kinds of intellectual tasks are there? Who is more intelligent (human or machine): a metal-maker or a wood-maker? In [47] we can read:

classification can help to design a system and answer this question.

Definition of the term: Smart is characterized by sharp, quick thought. Smart is often a general term implying mental keenness; more specifically it can refer to practical knowledge, ability to learn quickly, or to sharpness or shrewdness. So smartness is a highly dynamic kind of intelligence with a goal that directs to personal gain [74].

Intelligence Abilities

The system design is based on the set of desirable system tasks (abilities) and the relationships between them. A conventional software design technology creates programs for a specific problem solution. From the programmer's point of view, Artificial Intelligence is a software design technology to create programs with intellectual abilities. These programs can be used for a wide area of problem solving. Intelligent abilities can be presented as a multilevel structure [2,58] (see THE STRUCTURE OF INTELLIGENCE). A multilevel structure of functions (abilities) describes expressive and cognitive thinking at the upper levels of the structure; learning, problem solving, etc. at the middle level; and generalization, reasoning, conceptualization, induction, information collection, perception, etc. at the lower level of the structure, and presents the system from another

point of view [49]. Conceptualization itself consists of two levels: identification of important characteristics and identification of how the characteristics are logically linked. Certainly this structure is based on some level of simplification of the relationships as well as of the size of the set of abilities. But this structure can help to determine the set of abilities related to a certain goal and their relationships, and to determine the metric structure to evaluate the system's intelligence levels. It takes longer to exercise the upper-level abilities than the lower-level abilities. Different tasks require different sets of abilities to fulfill these tasks, so this structure can serve as a model to define a hierarchy of intelligence tasks. The structure of the intelligent functions was discussed earlier, for example, in [46, 49].

Goal and Agent Classes

The goal achievement is a result of the intelligent system's actions. Similar goals can be combined into one class which we can call the goal class (GC). The goal class (similarity) is determined by the minimal needed set of abilities to fulfill the goal of the task with the same weight functions (w) of each ability:

GC = min (needed [wA])

All agents that exercise the same minimal set of active abilities to carry out the goal with the same set of weight functions can be combined into one class which we can call the agent class (AC). The members of the same agent class can fulfill the goals of the same goal class:

AC = min (active [wA])

The agent class should match the goal class:

AC ⊇ GC

A scientist, a wood-maker, and a metal-maker (natural or artificial) are trained to perform different classes of tasks (goal classes), and we cannot make any comparisons between agents of different agent classes. So, it is impossible to compare a scientist and a handyman, as long as they fulfill different tasks under different goals. In some cases it is possible to combine systems with visibly different intelligence levels into one agent class.
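A minimal sketch of how goal and agent classes might be represented and matched is given below, assuming abilities are stored as weighted sets; the ability names, weights and tolerance are illustrative assumptions, not definitions from this book.

    # Toy representation of a goal class (minimal needed weighted abilities)
    # and an agent class (minimal active weighted abilities), with a check
    # that the agent class matches the goal class. Ability names, weights
    # and the weight tolerance are illustrative assumptions.
    goal_class = {"perception": 0.8, "reasoning": 0.6, "manipulation": 0.9}   # GC
    agent_class = {"perception": 0.8, "reasoning": 0.7, "manipulation": 0.9,
                   "learning": 0.3}                                           # AC

    def matches(agent, goal, tolerance=0.1):
        # AC matches GC if every needed ability is active with a comparable weight.
        return all(ability in agent and abs(agent[ability] - weight) <= tolerance
                   for ability, weight in goal.items())

    print(matches(agent_class, goal_class))   # True: this agent can serve this goal class

In this reading, agents whose active weighted abilities cover the same goal class can be compared with one another; agents of different classes cannot.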

For example, if these systems act under similar goals (surviving, reproduction, repairing something that does not need any special scientific knowledge, etc.), the performance of these systems and the level of their intelligence can be compared. Multiple-intelligence theory [47] supports this point of view. Achievement of the same goal by different agents usually involves the same set of abilities with the same set of weight functions. It is impossible to compare a car and a bookstore even if you use the money scale to evaluate them. But as soon as you look at them as investment choices (taxi or shop), you will be able to make a comparison: the same goal (profit) and the same set of characteristics. The stock market permits the use of the money scale to compare almost everything because success is judged by the same investment goal and the same parameters of evaluation. Good gamblers in reality use a vector function, but unsophisticated gamblers play by price difference.

It is reasonable to suppose that a scientist has better training in abstract abilities than a handyman. It is reasonable to make serious decisions about differences in the intelligence level of these systems. Different domain applications are determined by different sets of abilities. But it is possible that a handyman (human or machine) has a greater level of intelligence (special abilities) than a scientist (human or machine). If the handyman's additional abilities are not useful to his/her/its kinds of activities, then they cannot be utilized in his/her/its professional activities. Performance of different tasks utilizes certain limited sets of intelligent abilities. In this case a very smart metal-worker will not be able to fully use his/her/its available intellectual power and will not be able to demonstrate the full set of abilities that are not important to fulfilling the standard metal-worker task. In order to make an evaluation of the true power of the system we should assign a reasonable and comparable goal level. It is important to avoid using an overqualified agent. There are different levels (capacities) of intelligence. Sometimes different levels of performance (skills) can be presented as different levels of intelligence. Different levels of performance are

determined in many cases by the limitation of one or more elements of the system. Advanced upper-level abilities of the intelligent structure (generalization, conceptualization, etc.) do not necessarily guarantee a high level of skills. For example, low capability of the sonar sensors can prevent a person from becoming a musician even if he/she/it has all of the essential subsystems intact. Beethoven was not born deaf; he lost his ability to hear at a later age. A composer, as a music designer, can continue to work. Helen Keller, author and educator, was deaf, blind and mute, but she had a sensitive tactile system and sense of smell. She learned to work and showed great intellectual power [34]. A scientist with a high level of intelligence may have a problem doing a manual job if he/she/it does not have suitable actuators. A handyman without hands. There are two choices when refining the definition of intelligence: to extend the definition and include sensors and actuators, or to add a separate explanation of the importance of sensors and actuators. As soon as we identify intelligence as knowledge: no actuators, no performance; without them it is impossible to evaluate the level of

domain-oriented intelligence.

MIND, INTELLIGENCE, CONSCIOUSNESS, THOUGHT

There is the brain and there is the mind. It is almost like the distinction between the body and the soul (Christof Koch, California Institute of Technology). Usually mind refers to the collective aspects of intellect which are manifest in some combination of thought, perception, emotion, will and imagination. There are many theories of what the mind is and how it works, dating back to Plato, Aristotle and other Ancient Greek philosophers. Pre-scientific theories, which were rooted in theology, concentrated on the relationship between the mind and the soul, the supposed supernatural or divine essence of the human person. Modern theories, based on a scientific understanding of the brain, see the mind as a phenomenon of psychology, and the term is often used more or less synonymously with consciousness. The question of which human attributes make up the mind is also much debated. Some argue that only the "higher" intellectual functions constitute mind: particularly reason and memory.

In this view the emotions (love, hate, fear, joy) are more "primitive" or subjective in nature and should be seen as different in nature or origin from the mind. Others argue that the rational and the emotional sides of the human person cannot be separated, that they are of the same nature and origin, and that they should all be considered as part of the individual mind. The minimal set of intelligent functions is learning and reasoning.

In accordance with existing definitions, mind is:
1. The human consciousness that originates in the brain and is manifested especially in thought, perception, emotion, will, memory, and imagination.
2. The collective conscious and unconscious processes in a sentient organism that direct and influence mental and physical behavior.
3. The principle of intelligence; the spirit of consciousness regarded as an aspect of reality.
4. The faculty of thinking, reasoning, and applying knowledge [74].

Consciousness is having an awareness of one's environment and one's own existence, sensations, and thoughts; the capability of thought, will, or perception [74]. Or: Consciousness is the awareness of external and internal stimuli [60]. Awareness is the minimal ability of consciousness. Cognition is the mental process or faculty of knowing, including aspects such as awareness, perception, reasoning, and judgment [74].

The first definition of the term consciousness shows that the terms consciousness and cognition are synonymous. It is not true. If we use the minimal set of needed terms, then the second definition of the term consciousness shows the relation between these two terms:

Cognition = Consciousness + reasoning and other intelligent abilities.

Aware is the range of what one can know. There is a famous brief treatise by William James, American psychologist and philosopher (1842-1910), on the question of whether consciousness exists:
1. If we think of consciousness as immaterial, spaceless, massless but nonetheless an ontologically real thing, no, that does not exist.
2. But if we think of it as a flow of ideas, the stream of perceptions and thoughts and feelings, the process by which a supernumerary intelligence knits together experiences over a course of time, then consciousness is indubitable. [67]

I would replace this question by

Unconsciousness is the lack of awareness and the capacity for sensory perception; not conscious; without conscious control, involuntary or unintended [74]. Thinking is a way of reasoning and judgment. Thought is the faculty of reasoning [74].

Thought is the brain product. As research showed more and more clearly that the brain really was a network, the brain's structure became proof that neural networks worked and a glimmering hope that they could solve problems intractable to artificial intelligence. The other tradition, the symbolic tradition in artificial intelligence, discovered the need for schemata and frames to create a way for a computer to choose a set of rules appropriate to a specific situation. One school, what [27] calls the reasoning approach, continued to view thinking as a process of logical inference. The other school, which Simon labels search, views thinking as a process of searching among possible problem solutions. This school emphasized building representations that modeled the problem situation and finding efficient strategies for searching among solutions. It found support in biology, which had started to show that the brain solved some search problems by building neural maps of the external world [75].

Mind includes the collective conscious and unconscious processes. Intelligence does not include unconscious processes because the unconscious is not a faculty of learning and reasoning. In this case: Mind is intelligence plus unconsciousness. It is the product of a double-level control system. Conscious and subconscious processes are the output of the upper-level control (the Main Control System); unconscious processes are the output of the lower-level control systems (the Local Control Systems). The result of this logical process is the new definition of mind: Mind is the collective conscious and unconscious process in a sentient system that directs and influences mental and physical behavior.

A thinking process is a quantum-like process. The mind has a domain of separable concepts, which can be connected by rules of logic (Pylkkanen P. at the University of Skovde, Sweden) [76]. A number of researchers today make an appeal to quantum physics trying to develop a satisfactory account of the mind, an appeal still felt to be controversial by many. Post-phenomenologists (Bohm, Pylkko and others) have created models in which the semantic elements are indivisible in a way that makes it difficult to analyze them in conceptual terms. The quantum physics of W. Heisenberg and N. Bohr describes the world as a dual system that can be described as waves and particles at the same time. This theory can be applied to the micro world. The neural net is an object of the macro world. Its states depend just on information flow. The electroencephalogram (see APPENDIX 7) and other methods support the dual character of brain electrical processes (pulses and waves). It is possible that neuron oscillation also creates an additional analog information process. But this duality is not quantum-physics duality. It is a result of the discrete nature of information coding and the electrical-current nature of information transfer in the macro system. It has been widely believed in neuro- and cognitive sciences that the brain can be understood

as a macroscopic object governed not by quantum physics but by classical physics, or at most by molecular and chemical biology. They believe that applying quantum mechanical concepts to the brain might be hardly accepted. Recent research has shown that brain waves contain useful information about intention or mind. After some training process, distinctive patterns associated with specific intentions can be detected from brain waves, which can be used to generate commands to control computers and robots. This duality of the natural system should be taken into consideration in artificial brain design. But for the time being it is reasonable to follow the less complex concept. A small group of physicists tries to create a universal theory of everything. They are discussing the hypothesis of a primary brain that triggered the Big Bang. It is a mix of materialism (brain) and idealism (primary mind), or some kind of Intelligent Design theory. They connect it to the Singularity. It is a very controversial hypothesis. By the way, the word covers just conscious and subconscious processes and does not include unconscious processes as it does in the English language. It means that there is a cultural difference in the presentation of the terms. It should be taken into consideration.

THE MIND AS AN OPERATING SYSTEM

The operating system is the main part of the control system. The top of the hierarchy sets goals for lower-level processors and monitors their performance. Since it is at the top, its instructions can specify a goal in explicitly symbolic terms, such as

a command to perform an action rather than in terms of particular muscles. The instructions will be formulated in progressively finer detail by the processors at lower levels, right down to the contractions of muscle spindles (The Computer and the Mind: An Introduction to Cognitive Science by Philip N. Johnson-Laird, Harvard University Press, 1988).
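As a rough illustration of this operating-system view, the sketch below shows a top-level controller that states a goal symbolically and lower-level controllers that refine it into progressively finer commands; the goal, the levels and the command strings are illustrative assumptions only.

    # Sketch of hierarchical control: the top level states a goal symbolically,
    # each lower level reformulates it in finer detail, down to "actuator" commands.
    # Goal names, the levels and the command strings are assumed examples.
    def main_control(goal):
        # Top of the hierarchy: symbolic goal -> sub-goals for the next level.
        plan = {"pick up the cup": ["reach toward cup", "close gripper", "lift arm"]}
        return plan[goal]

    def local_control(sub_goal):
        # Lower level: sub-goal -> low-level actuator commands.
        detail = {
            "reach toward cup": ["shoulder +30deg", "elbow +45deg"],
            "close gripper":    ["gripper torque 0.4"],
            "lift arm":         ["shoulder +10deg"],
        }
        return detail[sub_goal]

    def execute(goal):
        for sub_goal in main_control(goal):           # monitored by the top level
            for command in local_control(sub_goal):   # refined by the lower level
                print(f"{sub_goal:>18} -> {command}")

    execute("pick up the cup")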


DISSOCIATION BETWEEN INTENTIONAL CONSCIOUS AND UNINTENTIONAL CONSCIOUS PROCESSES, CONSCIOUS, UNCONSCIOUS, AND SUBCONSCIOUS PROCESSES

Intentional and unintentional processes are parts of intelligent activities.

If your name occurs in nearby conversation at a cocktail party, it attracts your attention involuntarily, a phenomenon that establishes the existence of a process that lies dormant until the right pattern of sound brings it to life (Johnson-Laird, 1988).

There are three types of unintentional processes:
1. unintentionally controlled by the Main Control system (intuition, emotions, and others) (unintentionally conscious, subconscious systems);
2. hard-wired or hard-coded to the Main Control system (unconscious systems);
3. controlled by a Local Control system (unconscious systems).

At the present stage, there are still fundamental disagreements within psychology about the nature of the subconscious mind (if indeed it is considered to exist

at all), whereas outside formal psychology a whole world of pop-psychological speculation has grown up in which the unconscious mind is held to have any number of properties and abilities, from animalistic and innocent, child-like aspects to savant-like, all-perceiving, mystical and occultic properties. Some psychics also believe that the subconscious mind possesses a kind of "hidden energy" or "potential" that can realize dreams and thoughts, with minimal conscious effort or action from the individual. Some also believe that the subconscious has an "influencing power" in shaping one's destiny. All such claims, however, have so far failed to stand up to scientific scrutiny. This mystical approach is unproductive and cannot be accepted in the psychology of the AIS.

In our understanding, subconscious processes are processes unintentionally controlled by the Main Control system (intuition, emotions, and others). The subconscious can create very dangerous problems in natural and artificial intelligent systems. It is not under full control of conscious processes and has full access to almost all, sometimes twisted, knowledge and to almost all control systems. It can generate unpredictable dangerous behavior.

AWARENESS AND SELF-AWARENESS

Awareness

Awareness and self-awareness are important abilities of an intelligent system. They are tools for understanding the external world and the system itself. Awareness includes perception and cognition, two of the most important elements of intelligence. But awareness is not a synonym of intelligence. There are intelligent systems without awareness (the chess player). In this case the intelligent abilities are directed just to a limited external world. The definition of the terms applies to both

natural and artificial worlds.

Awareness is having the range of what one (an Agent) can know or understand of one's environment and one's own existence (self-awareness). Levels of awareness range from sharp to coma and death. In the human brain the thalamus plays a major role in regulating arousal, the level of awareness, and activity. Some natural intelligent systems do not have a full understanding of their own existence (self-awareness). A cat does not recognize itself in the mirror. Only beginning at 15-24 months does a child begin to recognize itself in the mirror (see APPENDIX 1). There are two main grounds of awareness: 1. the physical world, 2. the social environment.

In biological psychology, awareness describes a human's or animal's perception and cognitive reaction to a condition or event. Awareness does not necessarily imply understanding, just an ability to be conscious of, feel or perceive. Awareness is a relative concept. An animal may be partially aware, may be subconsciously aware, or may be acutely aware of an event. Awareness may be focused on an internal state, such as a visceral feeling, or on external events by way of sensory perception. It provides the raw material from which animals develop qualia, or subjective ideas about their experience. Awareness includes evaluation of environment conditions in relation to the ability of the system to exist and survive. Awareness is the result of conscious or subconscious processes. Electro-chemical networks related to the chordate nervous system facilitate awareness. Researchers have debated what minimal components are necessary for animals to be aware of an environmental stimulus, though all animals have some capacity for acute reactive behavior that implies a faculty for awareness. Popular ideas about consciousness suggest the phenomenon describes a condition of being aware of one's awareness. Efforts to describe consciousness in neurological terms have focused on describing networks in the brain that develop awareness of the qualia developed by other networks. Neural systems that regulate attention serve to attenuate awareness among complex animals whose central and peripheral nervous systems provide more information than cognitive areas of the brain can assimilate. Within an attenuated system of awareness, a mind might be aware of much more than is being contemplated in a focused extended consciousness. Awareness of an Artificial Intelligent System develops understanding of the specific outside environment, activates the specific system's abilities to survive in this environment, and generates everything needed to reach the goal.

Self-awareness

Self-awareness is the ability to perceive one's own existence, including one's own traits, feelings and behaviors. It is an ability to develop one's own inner dynamic model. In an epistemological sense, self-awareness is a personal understanding of the very core of one's own identity. In today's global society identity is losing its crisp character. For example, it is difficult to define such an important element of identity as belonging to a specific group such as a nation. It is difficult to define the term not only for a human being but for an artificial agent as well. Psychologists cannot assume that a computer becomes self-aware. At the same time they cannot assume that such machines are not self-aware. Self-awareness is less a physical than a mental phenomenon. Self-awareness is the awareness of oneself as an individual entity or personality. An Agent should be able to understand the boundaries of His/Her/Its personality as part of the social (as a team member) and physical environment. It is an important condition for correct understanding

of incoming information. It is the understanding of the system's abilities to interact with the environment. Self-awareness is the understanding that one exists. Furthermore, it includes the concept that one exists as an individual, separate from other people, with private thoughts. It may also include the understanding that other people are similarly self-aware. There is no universally accepted theory of what the word existence means. It is reasonable to define the term for an Agent with physical and mental representation (the state of the mind). The state of the mind includes the list of activated areas of the memory, the state of the information flow control system, etc. All this information is developed by periodic self-diagnostic tests. It is the famous: "I think, therefore I am." Perhaps what Descartes meant, simply put, is "I am vividly aware of my existence" ... "I think, therefore I exist."

It is known that, for example, in the very beginning after amputation of one leg a human being does not realize this. He/she must update the world model of his/her own body to operate the new system. A human baby gradually generates the body's world model (see APPENDIX 1):
- birth to 1 month: generating mental maps of the different positions of its body;
- 9 - 15 months: the brain has a fairly complete mental model of what its body can do and what effect it has on the environment;
- 15 - 24 months: language and symbolic communication are coming online now, and these tools are used to further expand the mental model of the world; a child begins to recognize itself in the mirror.

At the earlier stage a baby's brain is underdeveloped and does not have strong connections between the brain and the sensor system. It is the reason why a baby does not demonstrate self-awareness at this stage. Self-awareness is the combination of intelligence and the diagnostic system activities. It is a combination of subconscious and conscious processes. Animals have a problem recognizing themselves in the mirror. A cat cannot recognize itself in the mirror. Primates and some types of dolphins can do so. An experimental result shows that an elephant from the Bronx Zoo (NYC) can recognize its reflection in the mirror as a whole and as different parts.

An Artificial Intelligent System has two methods to develop its own dynamic world model:
1. to accept information from the system designer;
2. to develop it by itself through random movement of the body parts, collecting information about this movement and information from the body sensors (a minimal sketch of this second method is given at the end of this section).

Full self-awareness can be developed only in the social environment. It includes language development (see above) and development of the circle: PERSON
The system's name should be assigned to the system's world model. An Artificial Intelligent System can learn the principle of this circle through understanding of the term
Some authors call self-awareness self-consciousness because the self is the object of analysis. Self-consciousness is credited with the development of identity. In an epistemological sense, self-consciousness is a personal understanding of the very core of one's own identity. It is during periods of self-consciousness that people come the closest to knowing themselves objectively. Jean Paul Sartre describes self-consciousness as being "non-positional", in that it is not from any location in particular. Self-consciousness plays a large role in behavior, as it is common to act differently when people "lose one's self in a crowd". It is the basis for human traits, such as accountability and conscientiousness. It also plays a large role in theatre, religion, and existentialism. Self-consciousness affects people in varying degrees, as some people are in constant self-monitoring (or scrutinizing), while others are completely oblivious about their existing self. Different cultures vary in the importance they place on self-consciousness. Disorder of an intelligent system can develop outstanding creativity (see also PSYCHOLOGICAL MALFUNCTIONS, DISORDERS).
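The second method listed above (developing the body's world model from random movements) might look, in rough outline, like the sketch below; the joint names, the simulated sensor function and the table-based model are purely illustrative assumptions.

    # Sketch of "motor babbling": the system moves its body parts at random,
    # records the sensor reading each movement produces, and accumulates the
    # pairs as a simple world model of its own body. Joint names, the fake
    # sensor function and the table-based model are illustrative assumptions.
    import random

    def body_sensor(joint, angle):
        # Stand-in for real proprioceptive / visual feedback about the body.
        return {"joint": joint, "angle": angle, "hand_height": angle * 0.01}

    def learn_body_model(trials=100):
        model = {}                      # (joint, angle) -> observed sensor reading
        for _ in range(trials):
            joint = random.choice(["shoulder", "elbow", "wrist"])
            angle = random.randint(-90, 90)          # random movement
            model[(joint, angle)] = body_sensor(joint, angle)
        return model

    body_model = learn_body_model()
    print(len(body_model), "movement/sensation pairs stored in the body model")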

REFLEXES

A reflex is an automatic response or reaction [74]. (See also APPENDIX 5). The AIS demonstrates (similar to natural systems) two types of reflexes:
- conditional
- unconditional.

The Main Control system controls conditional reflexes using information (conditions) that is stored in the memory. These are subconscious intelligent functions triggered by the signal generated by associative thinking. The Local Control systems control unconditional reflexes. These are unconscious, not intelligent, functions. Some unconditional reflexes can be controlled by the Main Control system without involvement of intelligent abilities. In this case a system uses functions hard-coded or hard-wired to the unconscious part of the mind. The Main Control system can be connected to the internal sensor system. In this case the system generates the reflection of input information as a sensible reaction. This type of reaction can be seen in art apprehension (see ART APPREHENSION).

Fig. I-3 A Reflex Arc
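A rough sketch of the two reflex paths just described is given below: an unconditional reflex handled entirely by a Local Control system, and a conditional reflex triggered by an associative match in the Main Control system's memory. The stimuli, stored conditions and responses are illustrative assumptions.

    # Sketch of reflex handling in a two-level control architecture.
    # Unconditional reflexes are wired into the Local Control system and fire
    # immediately; conditional reflexes are looked up by the Main Control system
    # in learned memory. All stimuli, conditions and responses are assumed
    # examples, not taken from this book.
    UNCONDITIONAL = {"sharp_pressure_on_foot": "retract leg"}       # hard-wired
    CONDITIONAL_MEMORY = {"bell_sound": "prepare food intake"}      # learned

    def local_control_reflex(stimulus):
        return UNCONDITIONAL.get(stimulus)

    def main_control_reflex(stimulus):
        # Associative lookup of a learned condition stored in memory.
        return CONDITIONAL_MEMORY.get(stimulus)

    def react(stimulus):
        response = local_control_reflex(stimulus)        # fastest path, no thinking
        if response is None:
            response = main_control_reflex(stimulus)     # subconscious, learned
        return response or "pass to deliberate reasoning"

    for s in ["sharp_pressure_on_foot", "bell_sound", "unknown_light"]:
        print(s, "->", react(s))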

A reflex action or reflex is a biological or artificial control system linking stimulus to response and mediated by a reflex arc (see SENSATIONS). Reflexes can be built-in or learned. A reflex occurs very quickly, before thinking. Before the message is sent to the brain, the spinal cord senses the sensory stimulus and sends a signal (action potential) to an effector organ (actuator, muscle) to create an immediate action to counter the stimulus. For example, an agent stepping on a sharp object would initiate the reflex action through the creation of a stimulus (pain) within specialized sense receptors located in the skin tissue of the foot. The resulting stimulus would be transmitted through afferent or sensory neurons and processed at the lower end of the spinal cord, part of the central nervous system. This stimulus is processed by an interneuron to create an immediate response to pain by initiating a motor (muscular) response which is acted upon by muscles of the leg, retracting the foot away from the object. This activity would occur as the pain is arriving in the brain, which would process a more cognitive evaluation of the situation (see EMOTIONS). Reflexes are a way of interaction with the environment that is important for learning. Each interaction generates knowledge in the knowledge base (see Learning by Interactions). But without the Central Control system, unconditional reflexes are not intelligent processes. A frog's leg separated from the body demonstrates an unconditional reflex, but it is not an intelligent process.

FREE WILL AND ACTIONS

The problem of free will is the problem of whether rational agents exercise control over their own actions and decisions. The various philosophical positions on the problem of free will can be divided in accordance with the answers they provide to two questions:
1. Does free will exist?
2. Is determinism true?

Addressing this problem requires understanding the relation between freedom and causation, and determining whether or not the laws of nature are causally deterministic. The various positions taken differ on whether all events are determined or not (determinism versus indeterminism) and also on whether freedom can coexist with determinism or not (compatibilism versus incompatibilism). So, for instance, hard determinists argue that the universe is deterministic, and that this makes free will impossible. Society generally holds people responsible for their actions, and will say that they deserve praise or blame for what they do. However, many believe that m

oral responsibility requires free will. Thus, another important issue is whether individuals are ever morally responsible for their actions and, if so, in what sense. Free will is "The power, attributed especially to human beings, of making free choices that are unconstrained by external circumstances or by an agency such as fate or divine will" [74]. This definition should be expanded by excluding "attributed especially to human beings" because, as will be shown later, artificial intelligent systems should be incorporated into human society and follow this society's rules of behavior. One of the most heated debates in biology is that of "nature versus nurture". This debate concerns the relative importance of genetics and biology as compared to culture and environment in human behavior (Pinel P.J. Biopsychology. Prentice Hall Inc. 1990). In generative philosophy of cognitive sciences and evolutionary psychology, free will is assumed not to exist [71, 72]. The accidental is unrecognized necessity (F. Engels). However, an illusion of free will is created, within this theoretical context, due to the generation of infinite or computationally complex behavior from the interaction of a finite set of rules and parameters. Thus, the unpredictability of the emerging behavior from deterministic processes leads to a perception of free will, even though free will as an ontological entity is assumed not to exist [71,72]. In this picture, even if the behavior could be computed ahead of time, no way of doing so will be simpler than just observing the outcome of the brain's own computations [73]. Determinism is the inevitable consequence of antecedents that are independent of the human will. Determinism is roughly defined as the view that all current and future events are necessitated by past events combined with the laws of nature (McKenna Michael, "Compatibilism", The Stanford Encyclopedia of Philosophy (Summer 2004 Edition), Edward N. Zalta). Intelligent system behavior is determined by knowledge, experience, goal, motivation, internal structure and complexity, and external conditions. It is a case

of "nature versus nurture" or strict determinism. But there is one problem. In real life there are sometimes several alternative strategies with equal outcomes in terms of goal achievement. In this case an intelligent system can make a decision in accordance with the outcome of a mechanism of random choice (flip a coin); a minimal sketch of such a tie-breaking choice is given at the end of this passage. By the way, a coin's dynamics are predefined. Application of a random number mechanism can be interpreted as free choice. Artificial Intelligent Systems have a mechanism of random numbers and can use it much more easily than natural systems can.

There is another problem. An agent can act under conditions of limited information (conditions of uncertainty and risk). Philosophically this does not contradict determinism, but it forces an agent to make its own choice with some level of probability of a wrong decision. It is true at least for the Macro World. There is a third problem. An agent can face an unfriendly social environment with unpredictable behavior. It is the condition of an antagonistic game that creates conditions of uncertainty and risk.

This dualism creates difficulties in the prediction of a system's behavior. In real life determinism as the absence of free will does not mean full predictability, because we don't know all the variables. Lack of information limits the power of determinism. It is still a probabilistic situation. It is a decision-making process under conditions of uncertainty and risk. Is it reasonable to hold an artificial system responsible for a wrong choice? It is a difficult question. Responsibility means punishment and reward! See the chapter MORAL AND LAW and APPENDIX 12. Punishment is the practice of imposing something unpleasant on a subject as a response to some unwanted or immoral behavior or disobedience that the subject has displayed; it is the reduction of a behavior via a stimulus which is applied ("positive punishment") or removed ("negative punishment"). An award is something given to a person or group of people to recognize excellence in a certain field.
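The tie-breaking mechanism mentioned in this passage (equal-outcome alternatives resolved by a random-number mechanism, after screening out alternatives with a possibility of dangerous results) could be sketched as follows; the alternative names, outcomes and danger flags are illustrative assumptions only.

    # Sketch of a "free choice" among alternatives with equal expected outcome:
    # dangerous options are filtered first, then a random-number mechanism breaks
    # the tie. Alternative names, outcomes and danger flags are assumed examples.
    import random

    alternatives = [
        {"name": "route A", "outcome": 10, "dangerous": False},
        {"name": "route B", "outcome": 10, "dangerous": False},
        {"name": "route C", "outcome": 10, "dangerous": True},   # possible harm
    ]

    def choose(options):
        safe = [o for o in options if not o["dangerous"]]        # possibility check
        best = max(o["outcome"] for o in safe)
        ties = [o for o in safe if o["outcome"] == best]
        return random.choice(ties)                               # "flip a coin"

    print("chosen:", choose(alternatives)["name"])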

Determinism does not mean inevitability. It means that each specific result depends on the chosen specific actions, with a certain level of probability that is determined by lack of knowledge. In this case an agent is responsible for its choice of actions. If an Artificial Intelligent System is entitled or forced to exercise free will, it should be able to evaluate the possible results of the different alternatives and take them into consideration. In most cases a system deals with single events, when probability does not make sense. Evaluation of the possible dangerous results of actions should be done not by evaluation of probability but by evaluation of the possibility of the dangerous results. The value of possibility can be equal to or ... A system action greater than was expected can generate the reward (see SOCIAL BEHAVIOR, The Fair Deal Development). It is very difficult to impose punishment and reward on an artificial system because contemporary artificial systems are not very sensitive to punishment and reward. It makes

sense to develop a set of rules and criteria that the system must follow and use as a limitation on dangerous actions (see EMOTIONS, STIMULUS AND MOTIVATION, and LAW AND MORAL). Now the definition of free will is: the power, attributed to the Intelligent System, of making free choices under existing external limitations while holding responsibility for the results of actions. Free will should be translated into actions. Absolute freedom (anarchy) does not exist in real life. Reasonable actions based on evaluation of circumstances are freedom of actions. Freedom of actions is determined by freedom of will and existing constraints. The degree of freedom of actions is a measure of variability which merely expresses the number of options available within a variable or space. In a system with N states the degree of freedom is equal to N. The last question is: do we need an artificial system with free will? An autonom

ous system without responsibility for the results of its actions and acting under conditions of uncertainty can harm people and animals and destroy the environment. It is important to have protection. Law and morals will define the responsibility of the artificial system and of its designer as well. For example: carmakers are responsible for their product in accordance with moral and criminal law. A car is not an autonomous intelligent system and therefore is not responsible for its actions. By the way, Autonomy (Greek: Auto-Nomos, nomos meaning "law": one who gives oneself his own law) means freedom from external authority. Autonomy is a concept found in moral, political, and bioethical philosophy. Within these contexts it refers to the capacity of a rational individual to make an informed, uncoerced (not forced to act or think in a certain way by use of pressure, threats, or intimidation) decision. In moral and political philosophy, autonomy is often used as the basis for determining moral responsibility for one's actions (Wikipedia, encyclopedia).

The structure of an Artificial Intelligent System is designed as a closed-loop system with incorporation of the environment (see THE STRUCTURE OF AIS) as a part of the structure. A simple environment does not undermine the determinism of the system. A complex environment, such as a financial, military, social, or other system, has unpredictable reactions that can be activated with delay, over a long period of time. In this case the feedback signal does not help to correct a system's behavior. The system behaves as an open-loop, statistically controlled one.

THE STRUCTURE OF INTELLIGENCE (Based on Part 2 analysis)

Intelligent Abilities

1. Conscious
   Learning (Knowledge collection)
      Sensation
         - Sensing
         - Attention
         - Discrimination
      Perception
         - Perceive
            i. Recognition
            ii. Localization
         - Conceive
            i. Judgment
            ii. Interpretation
            iii. Understanding
      Reasoning
         Conditional reflexes
         Associative thinking
         Generalization
            - Conceptualization
               i. Identification of important characteristics
               ii. Identification of how the characteristics are locally linked
            - Induction
            - Classification
         Deduction
         Judgment
      Motivation
   Creativity (Intentional knowledge manipulation)
      Hypothesis generation
      Reasoning
         Associative thinking
         Deduction
         Judgment
      Imagination
         - Sensational
         - Objection
      Generalization
         - Conceptualization
            i. Identification of important characteristics
            ii. Identification of how the characteristics are locally linked
         - Induction

         - Classification
2. Subconscious (Unintentional consciousness)
   Emotions
   Unintentional learning
   Intuition
   Associative thinking
   Unintentional creativity

CONCLUSION

Intelligence is a double-level term. The first level of intelligence is General Intelligence: the (inherited or built-in) capabilities of a sentient system that enable it to direct and influence mental and physical behavior in accordance with a system's external or internal goal. The second level of intelligence is Knowledge-based intelligence, which can be defined as the knowledge-based abilities of a domain-oriented system to act under existing constraints (limitations) and reach external or internal goals, or decrease the distance between the start and the goal states (intellectual adaptation). Learning and reasoning are mandatory features of intelligence. There are other, optional features such as: generation of hypotheses, generalization, specialization, conceptualization, and so on. Analysis shows that some existing definitions give reasonable descriptions of natural intelligence but still have problems describing the intelligence of artificial systems. Application of different terms with the same meaning creates problems in measurement. Standardization of definitions is an important condition to succeed in understanding and measurement. All existing and published definitions quoted in APPENDIX 2 are important and valuable and are sources of information. Some of these definitions, as was mentioned, are similar to those presented in this paper. Human behavior is the source of understanding of intelligence; some scientists believe that even bacteria display it.

Intelligence abilities can be presented as a functional multilevel structure. Similar goals of the agents can be combined into the goal class. The goal class is determined by the minimal set of abilities needed to fulfill the goal of the task and one set of weight functions for each alternative class member. All agents that exercise the same minimal set of active abilities

and a common set of weight functions to carry out the goal can be combined into the agent class. The members of the same agent class can fulfill the goals of the same goal class. Overqualified agents can be included in the set of alternatives and should be evaluated in the same way as the rest of the set. Creativity as an ability is a universal feature, but it depends on knowledge. In this case it is possible to define creativity as a domain-oriented ability to create new knowledge as a new combination and new application of existing knowledge. Consciousness is having an awareness of one's environment and one's own existence, sensations, and thoughts; the capability of thought, will, or perception. Cognition is the mental process or faculty of knowing, including aspects such as awareness, perception, reasoning, and judgment. In our understanding, subconscious processes are processes unintentionally controlled by the Main Control system (intuition, emotions, and others).

There are three types of unintentional processes:
1. unintentionally controlled by the Main Control system (intuition, emotions, and others) (unintentionally conscious, subconscious systems);
2. hard-wired or hard-coded to the Main Control system (unconscious systems);
3. controlled by a Local Control system (unconscious systems).

Mind is the collective conscious and unconscious process in a sentient system that directs and influences mental and physical behavior; the principle of intelligence. Awareness is having the range of what one (an Agent) can know or understand of one's environment and one's own existence (self-awareness). Self-awareness is the ability to perceive one's own existence, including one's own traits, feelings and behaviors.

Strong relationships between different abilities are presented by cross-references between almost all chapters of this book. Fig. I-4 to I-11 present the structures of intelligent abilities.

Fig. I-4. Intelligent abilities: Learning (knowledge collection), Creativity (intentional knowledge manipulation), Subconscious (unintentional conscious).

Fig. I-5. Learning: sensation, perception, conditional reflexes, reasoning (associative thinking, generalization, deduction, judgment), motivation.

Fig. I-6. Creativity: associative thinking, reasoning, deduction, judgment, generalization, imagination (sensational, objectional), hypothesis generation.

Fig. I-7. Hypothesis generation: associative thinking, reasoning, deduction, judgment, generalization, imagination (sensational, objectional).

Fig. I-8. Subconscious: emotions, unintentional learning, intuition, associative thinking, unintentional creativity.

Fig. I-9. Sensation: sensing, attention, discrimination, perception.

Fig. I-10. Perception: perceive (recognition, localization), conceive (judgment, interpretation, understanding).

Fig. I-11. Generalization: conceptualization (identification of important characteristics, identification of how the characteristics are locally linked), induction, classification.

REFERENCES:
1. Albus J., Outline for a Theory of Intelligence. IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, No. 3, May/June 1991.
2. Albus James S., Meystel Alexander, Behavior Generation in Intelligent Systems, NIST.
3. Antsaklis Panos, Defining Intelligent Control. Report of the Task Force on Intelligent Control, IEEE Control Systems, June 1994.
4. Artificial Intelligence with Dr. John McCarthy. Conversation On The Leading Edge Of Knowledge and Discovery With Dr. Jeffry Mishlove, 1998.
5. Atkinson Rita, Atkinson Richard, Smith Edward, Bem Daryl, Nolen-Hoeksema Susan. Hilgard's Introduction to Psychology, Harcourt Brace College Publishers, 1996.
6. Andrew A. M. Artificial Intelligence. Viable Systems, Chillaton, Devon (U.K.), Abacus Press, 1980.
7. Boden, Margaret A., Artificial Intelligence and Natural Man, Basic Books, Inc., New York, NY, 1977.
8. Bock, Peter, The Emergence of Artificial Intelligence: Learning to Learn, The AI Magazine, Fall 1985.
9. Berg-Cross G., Dimensions of Intelligent Systems. Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop, August 13-15, 2002.
10. Bongard Josh (University of Zurich), EPSRC/BBSRC International Workshop Biologically-Inspired Robotics: The Legacy of W. Grey Walter, 14-16 August 2002, HP Bristol Labs, UK.
11. Charniak, Eugene and McDermott, Drew, Introduction to Artificial Intelligence, Addison-Wesley Pub. Co., Reading, MA, 1985.
12. Cawsey A. The Essence of Artificial Intelligence. Prentice Hall, 1995.
13. Computers and The Mind with Howard Rheingold. Conversation On The Leading Edge of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
14. Commun S., Li Yushan, Hougen D., Fierro R., Evaluating Intelligence in Unmanned Ground Vehicle Teams. Measuring the Performance and Intelligence of Systems: Proceedings of the 2004 PerMIS Workshop, August 23-26, 2004.
15. Coon D. Introduction to Psychology. Exploration and Application. West Publishing Co., 1995.
16. Campione J.C., Brown A.L., Ferrara R. A. Mental Retardation and Intelligence. Handbook of Human Intelligence, Cambridge University Press, 1982.
17. Dean T., Allen J., Aloimonos Y. Artificial Intelligence. Theory and Practice. The Benjamin/Cummings Publishing Company, 1995.
18. Decision Support and Expert Systems. Management Support Systems by Efraim Turban. Prentice Hall, 1995.
19. Davis S. F., Palladino J. J. Psychology, Prentice Hall, 1997.
20. Ettinger R. H., Crooks R. L., Stein J., Psychology. Science, Behavior, and Life, Harcourt Brace College Publishers, 1994.
21. Finkelstain Robert, A Method For Evaluating the ..., August 14-16, 2000, Gaithersburg, MD.
22. Foundations of Neural Networks by Khanna, Addison-Wesley, 1990.
23. Freeman W. J., How Brains Make Up Their Minds, Phoenix, 1999.
24. Fogel D. B. Evolving Solutions that are Competitive with Humans. Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop, August 13-15, 2002.
25. Feldman R. S., Understanding Psychology, McGraw-Hill Inc., 1996.
26. Goode E. Brain Scans Reflect Problem-Solving Skill, NYT, February 17, 2003.
27. Gao R. and Tsoukalas L.H., Performance Metrics for Intelligent Systems: An Engineering Perspective. Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop, August 13-15, 2002.
28. Gunderson J. P., Gunderson L. F., Intelligence ≠ Autonomy ≠ Capability. Measuring the Performance and Intelligence of Systems: Proceedings of the 2004 PerMIS Workshop, August 23-26, 2004.
29. Gray P., Psychology, Worth Publishers, 1999.
30. Huffman Karen, Vernoy Mark, Vernoy Judith, Psychology in Action, John Wiley & Sons, Inc., 1997.
31. Jerison H. J. The evolution of biological intelligence. Handbook of Human Intelligence, Cambridge University Press, 1982.
32. Hoffman K., Vernoy M., Vernoy J. Psychology in Action, John Wiley & Sons, Inc., 1994.
33. Horst J. A., A Native Intelligence Metric for Artificial Systems. Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop, August 13-15, 2002.
34. Keller Helen. The Story of My Life, 1902.
35. Kassin S., Psychology, Prentice Hall, NJ, 1998.
36. Language And Consciousness. Part 4: Consciousness and Cognition with Dr. Steve Pinker. Conversation On The Leading Edge Of Knowledge And Discovery With Dr. Jeffry Mishlove, 1998.
37. Landauer C., Bellman K. L., Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop, August 13-15, 2002.
38. Lathey B. Psychology, An Introduction. Wm. C. Brown Publishers, 1989.
39. Lahey B.B. Psychology. An Introduction. Wm. C. Brown Publishers, Dubuque, Iowa, 1989.
40. Meystel A. Evolution of Intelligent Systems Architectures. What Should Be Measured? Performance Metrics for Intelligent Systems Workshop, August 14-16, 2000, Gaithersburg, MD.
41. Meystel A. Semiotic Modeling and Situation Analysis: An Introduction, AdRem, Inc., 1994.
42. Mind Over Machine with Dr. Hubert Dreyfus. Conversation On The Leading Edge of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
43. Mind As A Myth with U. G. Krishnamurti. Conversation On The Leading Edge of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
44. Myers David G. Psychology, Worth Publishers, 1995.
45. McCulloch W. S. What is a number, that a man may know it, and a man that he may know a number? General Semantics Bulletin, No. 26 and 27, 1960.
46. Negnevitsky M. Artificial Intelligence. A Guide to Intelligent Systems, Addison-Wesley, 2001.
47. Plotnik R. Introduction to Psychology, Brooks/Cole Publishing Company, 1995.
48. Polyakov L.M. Agent with Reasoning and Learning: The Structure Design, Performance Metrics for Intelligent Systems Workshop, August 14-26, 2004, Gaithersburg, MD.
49. Polyakov L. M. Structure Approach to the Intelligent System Design. Performance Metrics for Intelligent Systems Workshop, August 13-15, 2002, Gaithersburg, MD.
50. Polyakov L.M., In Defense of the Additive Form for Evaluating Vectors, Measuring the Performance and Intelligence of Systems: Proceedings of the 2000 PerMIS Workshop, August 14-16, 2000.
51. Russell Stuart, Norvig Peter, Artificial Intelligence. A Modern Approach, Prentice Hall, 1995.
52. Simon H. The Sciences of the Artificial, Cambridge, Mass., The MIT Press, 1969.
53. Sternberg R. Handbook of Human Intelligence, Cambridge University Press, 1982.
54. The Rising Curve, edited by Prof. Ulric Neisser, American Psychological Association, 1995.
55. Winston, Patrick Henry, Artificial Intelligence, Addison-Wesley Pub. Co., Reading, MA, 1985.
56. Psychology by Crider A. B., Goethals G. R., Kavanaugh R. D., Solomon P. R., Harper Collins College Publishers, 1993.
57. Subhash Kak, Grading Intelligence in Machines: Lessons from Animal Intelligence, Preliminary Proceedings, August 14-16, 2000.
58. The Oxford Companion to the Mind, edited by Richard L. Gregory, Oxford University Press, 1987.
59. Tieger P.D. and Barron-Tieger B. Do What You Are. Little, Brown and Co., 1995.
60. Huffman, Vernoy, Vernoy, Psychology in Action, John Wiley & Sons Inc., 1994.
61. Robinson D. N. The Great Ideas of Philosophy, The Teaching Company, 2004.

62. Sternberg R. J., Grigorenko E. L., Singer J.L. Creativity. From Potential to Realization, American Psychology Association, 2004 62 Pribram K. H.,

Nature of Mankind, 1996, pages.7-16 63. Pribram K. H,

Neuropsychology, Englewood Cliffs, NJ, Prentice Hall; Monterey CA: Brook s/Cole, 1977, New York, Random House, 1982, 64. 65. Dennett D. C, Dewan E. M, J. C. Eccles, et.al.

and Brain: A conversation among philosophers and scientists, In. G.G. Globus an d G. 46 Maxwell (eds.) Consciousness and the Brain (p.p. 317-328), Plenum Press, New Yor k, 1976 66 Minsky M, The Society of Mind, MIT

67. 6

Robinson D. N. The Grate Ideas of Philosophy, The Teaching Company, 200

68. Neuman, J. and Morgenstern, O. Theory of Games and Economic Behavior. Pri nceton University Press, Princeton. 1953. 69. Polyakov the Automated design of the e tools. Publ. L. M., Kheruntsev P. E., Shklovsky B. I., Elements of of Machin

electrical

a automated equipment s

70. 71.

G. Watson (ed.) Free Will.

Oxford University Press. 2nd Edition 2003.

Epstein J.M. and Axtell R. Growing Artificial Societies - Social Science from th e Bottom. Cambridge MA, MIT Press 1996. 72. Wolfram, Stephen, A New Kind of Science. Wolfram Media, Inc., May 14, 2002. 73. Koller, J. Asian Philosophies. 5th ed. Prentice Hall 2007. 74. American Heritage Talking Dictionary. Copyright 1997 The Learning Compan y, Inc. 75. Jubak J. In the Image of the Brain, The Softback Preview, 1994.

76. Pylkkanen P. Can Quantum Analogies Help us to Understand the Process of Th ought? Brain and Being, John Benjamin Publishing Co., Amsterdam/Philadelphia, 2004. 77. Leibs Scott, Designs of Intelligence, CFO vol. 22, No. 12 November 2006

78. Wiener Norbert (), Cybernetics or Control and Communication in the Animal and the Machine, Paris, Hermann et Cie - MIT Press, Cambridge, MA, 1948



PART 2

PSYCHOLOGY OF ARTIFICIAL INTELLIGENT SYSTEMS

WHAT IS PSYCHOLOGY OF ARTIFICIAL INTELLIGENT SYSTEMS?

Introduction

Any intelligent system can be described through a description of its behavior. Behavior is the subject of the science that we call psychology. The theory of human psychology has a strong influence on the development of Artificial Intelligent Systems. It is equally important to recognize the strong influence of Artificial Intelligent System psychology back on the psychology of natural systems.

For a long time, cognitive psychology has been both a resource and a beneficiary of robotic research. Robotic vision, robotic speech recognition and robotic vocalization do not completely simulate their human counterparts, but all have drawn on psychological research into human sensory processes and human perception. In turn, the process of developing machinery for sensing and perceptive functions has shed light into the "black box" of such functions in humans. The latter -- studying human intelligence by trying to implement it -- should be especially beneficial to psychology, even though, by and large, robotics has not yet been used as a tool in psychological research.

Areas of psychology beyond cognitive psychology therefore find themselves in a wide range of new territories on the frontier of robotic research and development. Among these areas, the most obvious are social interaction, communication, emotions and affects, child development, learning and teaching, and perhaps even gender development issues. The creators of most social humanoid robots have intentionally left the gender of these robots undecided. They call them by names, not "he" or "she", and they certainly do not like "it". These are creatures, not merely robotic systems, they emphasize. Later, in the chapter GENDER OF AIS, we will discuss this topic. Some day there will be an entire psychological research area (and even a service) devoted to this new kind of "creature". Just as there is "animal psychology", there will be "robotic psychology". I predict that the day is coming soon when robots become creatures living amongst us. This robotic psychology will benefit both them and us.

Being neither a physical science nor a biological science in the strict sense, psychology has evolved as something of an engineering science. All intelligent abilities and functions represent actions. Actions are the outcome of control systems and are therefore a subject of information technology. The basic ideas are:
1. The general approach to problem solving in engineering is first to reduce the problem to a model, usually including a number of distinct modules.
2. The modules have such properties as to encompass both the given function and the means by which that function can be integrated into the performance of the overall system.
3. Psychology attempts to use the laboratory context as a simplified model, with the experimental variables chosen to tap into one or another functional module.

The most successful application of this line of thinking can be seen in contemporary cognitive neuroscience:
1. Any cognitive achievement, no matter how complex, is reducible to an ensemble of distinguishable functions.
2. Each function is accomplished by processes and networks in the central nervous system.
3. Manipulation of relevant variables under the controlled conditions of laboratory research is the means by which thought and action are put on a scientific foundation.

It is important to understand the difference between an engineering module that actually performs a task and a cognitive event subject to interpretation. Interpretation is the subject of the upper level of the control system. Performance is the subject of the lower level of the control system.

An artificial intelligent system, like a natural one, generates specific behavior. This behavior is determined by the system design and by external conditions. There are many commonalities between the behaviors of these two classes of systems, but the strong differences between the systems create specific behavior and features. Discovering these differences and learning methods for their implementation is the main goal of the Psychology of Artificial Intelligent (AI) Systems.

Psychology is the science that deals with mental processes and behavior. Psychology of Artificial Intelligent Systems (PAIS) is the science that deals with processes related to the artificial mind, intellect and behavior. These processes originate in the artificial brain (computer) and are manifested especially in thought, perception, emotion, will, memory, imagination and so on. Analysis is the main method of the PAIS. The main goals of PAIS are:
1. Clearly defining, describing and understanding the mental processes in engineering terms.
2. Organizing the information in a highly structured, algorithmic form.

Insiders of the robotics field have recently observed a shift from behavior-based approaches to robotics, which deal with low-level competence, to approaches that try to build humanoid robots with an increasingly complex behavioral repertoire that includes the ability to interact socially. Such a shift also accentuates the new role of psychology in robotic science.

Artificial Intelligence (AI) is a branch of Computer Science that deals with:

1. The ability of a machine to perform those activities that are normally thought to require intelligence.
2. The branch of computer science concerned with the development of machines having this ability [36].

Synthesis is the main method of Artificial Intelligence. Comparison of PAIS and AI shows that the first describes the definition and nature of the system's functions, while the second deals with the development and implementation of a working system that can deliver these functions. Theoretical principles of artificial intelligent systems design and their psychology should be presented in engineering terms and from the engineering point of view. This approach can also be very useful for developing the psychology of future artificial biological systems.

Method of Analysis

The Psychology of Artificial Intelligent systems, like traditional human psychology, focuses on analysis. The history of human psychology evolved through several stages, each based on a different representational model of mental processes: Structuralism, Functionalism, Behaviorism, Gestalt psychology, Psychoanalysis, Information-processing systems, and Psycholinguistics [18]. The Structural approach of the period of Structuralism was gradually replaced by newer and different ideas. The combination of Structuralism and Functionalism creates a powerful tool for presenting the structure and description of AIS abilities. The first part of the definition of AI systems can be presented as the Psychology of Artificial Intelligent systems: it deals with abilities of artificial intelligent systems that may be presented in a form suitable for the development of intelligent machines.

Cognitive psychology (Plato, Kant, Chomsky) is the school of psychology that examines internal mental processes such as problem solving, understanding, memory, language, and so on. It had its foundations in the Gestalt psychology of Max Wertheimer, Wolfgang Köhler and Kurt Koffka, and in the work of Jean Piaget, who studied intellectual development in children. Cognitive psychologists are interested in the mental processes which mediate between stimulus and response. Cognitive theory contends that solutions to problems take the form of algorithms.

Behaviorism is an approach to psychology based on the proposition that behavior can be studied and explained scientifically without recourse to internal mental states. The behaviorist school's main influences were Ivan Pavlov, who investigated classical conditioning, John B. Watson, who rejected introspective methods and sought to restrict psychology to experimental methods, B.F. Skinner, who conducted research on operant conditioning, and Simon. It is a very important approach for learning about an Artificial Intelligent System.

The combination of Cognitive psychology and Behaviorism permits us to observe both the output and the inside processes of the system. All four methods are presented in this book as a methodological foundation.

There are several approaches of study in the field of cognitive science, including symbolic, connectionist, and dynamic systems.

Symbolic - intelligence can be explained by means of systematic, discrete instructions, not unlike the way in which a computer works (Semiotics and Algorithm Theory).
Connectionist - the means of explanation is artificial neural networks.
Dynamic Systems - cognition can be explained by means of a continuous system in which everything is interrelated (Control Theory).

"Cognitive" has to do only with formal rules and truth-conditional semantics. (Nonetheless, that interpretation would bring one close to the historically dominant school of thought within cognitive science on the nature of cognition - that it is essentially symbolic, propositional, and logical.)

Levels of Analysis

One of the central principles in the symbolic approach to cognitive science is that:
1. there are different levels of analysis (LOA) from which the brain (natural or artificial) and behavior can be studied, and
2. mental phenomena are best studied from multiple levels of analysis.

These levels are usually broken into three groups, based on Marr's description of them:
Computational (Behavioral) level: describes the directly observable output (or behavior) of a system; includes structurization and semiotics.
Algorithmic (Functional) level: describes how information is processed to produce the behavioral output.
Implementational (Physical) level: describes the physical substrate that the system consists of (e.g. the brain; neurons).

The first two levels are related to Psychology and include structuralization and semiotic technique. Semiotics is the study of signs and symbols, both individually and grouped in sign systems. It includes the study of how meaning is constructed and understood. The third level, and partly the second, are related to Artificial Intelligent Systems development. An analogy often used to describe LOA is to compare the brain to a computer: the physical level would consist of the computer's hardware, the behavioral level represents the computer's software, and the functional level would be the computer's operating system, which allows the software and hardware components to communicate (see THE CONSCIOUS MIND AS AN OPERATING SYSTEM). Hardware and an operating system are related to the first level of the intelligence definition.

DECOMPOSITION AS THE METHOD OF ANALYSIS

Decomposition is the breaking of a whole into its constituent parts, and those parts into subparts, and so forth. It is a highly intellectual process. Decomposition is disintegration, the replacement of a complex system by a set of simple subsystems. It is a process of simplification: the replacement of more abstract or lesser-known TERMS with less abstract, more specific or better-known TERMS. This procedure creates a multilevel structure. The lowest level of the structure consists of the simplest undivided parts, processes, and subgoals. Each step of decomposition is based on specific criteria; the choice of the criteria can be made in a way similar to choosing attributes in the process of purification (see below). This hierarchy determines the functionality of every level above it. In the area of artificial intelligence research this approach was proposed by Dr. J. Albus and Dr. A. Meystel [16,19].

In most cases decomposition is based on strong knowledge about the object of decomposition. If an agent does not have knowledge about the new object, it has to learn about it (see below) or use existing knowledge, even without any direct relationship to the object of decomposition, as a hypothesis in the learning process. Decomposition can be done by different classes of parameters and criteria:
1. by functions
2. by elements
3. by modules
4. by subgoals
5. by tasks
6. by processes, etc.

The criterion of choice depends on the nature of the object of decomposition: a physical system, a term, a goal, a process, etc. In many cases the definition of a term consists of a set of lower-level terms: TERM -> TERM1, TERM2. Each of the new terms has a definition with the same structure. In this case decomposition can be presented by the following algorithm (a code sketch follows the list):
1. Develop the term definition in accordance with the rules of definition development (APPENDIX 8).
2. Develop the second level of the hierarchy as the set of lower-level terms.
3. Develop the definitions for the second level of terms.
4. Develop the third level of the hierarchy.
5. Continue until the integrity of the criteria is no longer preserved.

Modern biologists, through research on biological hierarchy, use this method of decomposition in biology.
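The recursive procedure above can be outlined in code. The sketch below is illustrative only: the define_term function and the stopping test criteria_intact are hypothetical placeholders for the definition rules of APPENDIX 8 and for the integrity-of-criteria check, not part of the book's design.

```python
# Illustrative sketch of term decomposition into a multilevel structure.
# 'define_term' and 'criteria_intact' are hypothetical stand-ins for the
# definition rules (APPENDIX 8) and the integrity-of-criteria test.

def define_term(term):
    """Return the lower-level terms that define 'term' (TERM -> TERM1, TERM2, ...)."""
    dictionary = {
        "AGENT": ["SENSOR", "WORLD MODEL", "BEHAVIOR GENERATOR"],
        "SENSOR": ["RECEPTOR", "SIGNAL"],
    }
    return dictionary.get(term, [])          # simplest, undivided terms return []

def criteria_intact(term, level):
    """Stop when further splitting would destroy the integrity of the criteria."""
    return level < 3                          # placeholder criterion: limit the depth

def decompose(term, level=0):
    """Build the multilevel structure as a nested dictionary."""
    if not criteria_intact(term, level):
        return {}
    return {sub: decompose(sub, level + 1) for sub in define_term(term)}

if __name__ == "__main__":
    print(decompose("AGENT"))
    # {'SENSOR': {'RECEPTOR': {}, 'SIGNAL': {}}, 'WORLD MODEL': {}, 'BEHAVIOR GENERATOR': {}}
```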

The long rationalist tradition that extends from Descartes, Leibniz, and Hobbes assumes that all phenomena, even mental ones, can be understood by breaking them down into their simplest primitive components. Russell and Whitehead (Principia Mathematica) attempted to reduce the world to logical operations expressed mathematically, and artificial intelligence follows the same line. The grand project of artificial intelligence has been to find those atoms and the logical relations that govern them, and to build a symbolic computer representation that captures that order. But Wittgenstein argued that facts cannot be stripped of their context, because it is their context, their pragmatic use, that gives them meaning [38]. This problem can be solved by presenting the decomposition of facts and attributes.

THE STRUCTURE OF AIS (AGENT)

The human brain is designed as a structure: different areas execute different functions. An artificial system has more visible modularity, which helps to develop systems with a high diversity of architecture.

Architecture - the structure of components, their functions and relationships.

Environment - the real external and internal world. Complex environments, such as financial, social, military and others, are active, non-friendly, and non-predictable systems.
Sensor - the system of information collection.
Perception - translation of sensor data into organized, meaningful information.
World Model - an internal representation of the real world.
Knowledge Base - the data structure and information that form the intelligent world model, or the model of human knowledge that is used by an expert system.
Behavior Generator - the planning and control of action designed to achieve the behavioral goal.
Actuator - the action generator, acting in accordance with the behavior generator's program.
Value Judgment - the value judgment system determines good and bad, reward and punishment, important and trivial, certain and improbable.
Inference Engine - a module that generates a logical conclusion and proof based on a set of rules for deduction.

Fig. II-1 presents the integral closed-loop structure of the AIS.
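As an illustration of how these modules close the loop in Fig. II-1, the sketch below wires the components together in a single control cycle. The class and method names are hypothetical, chosen only to mirror the module names defined above; they are not an implementation from the book.

```python
# Minimal sketch of the closed-loop Agent structure of Fig. II-1.
# Module and method names mirror the glossary above; all bodies are placeholders.

class Agent:
    def __init__(self, goal):
        self.goal = goal              # the system's input (not shown in Fig. II-1)
        self.world_model = {}         # internal representation of the real world
        self.knowledge_base = {}      # data and rules that support the world model

    def sense(self, environment):
        """Sensors: collect raw data from the inner and outer environment."""
        return {"signals": environment}

    def perceive(self, sensor_data):
        """Perception: translate sensor data into organized, meaningful information."""
        return {"objects": sensor_data["signals"]}

    def judge(self, percepts):
        """Value Judgment: label information as good/bad, important/trivial."""
        return {obj: "important" for obj in percepts["objects"]}

    def generate_behavior(self):
        """Behavior Generator: plan an action intended to move toward the goal."""
        return {"action": "move_toward", "target": self.goal}

    def act(self, plan, environment):
        """Actuator: execute the plan; local feedback also updates the world model."""
        self.world_model["last_action"] = plan
        return environment            # the changed environment closes the loop

    def step(self, environment):
        percepts = self.perceive(self.sense(environment))
        self.world_model.update(percepts)
        self.world_model["values"] = self.judge(percepts)
        plan = self.generate_behavior()
        return self.act(plan, environment)

agent = Agent(goal="charging station")
env = ["wall", "door", "charging station"]
env = agent.step(env)   # one pass around the closed loop
```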

Traditional multi-level, multi-resolution structures can be presented by a tree-shaped structure. An artificial intelligent system structure is a very complicated multi-level, multi-resolution structure with cross connections between different branches (horizontal or vertical). Horizontal connections run between modules of the same level; they are determined by the participation of the same low-level abilities in higher-level ability activities. Quantification of abilities with fuzzy numbers, computation with words and other methods of quantification are important procedures.

[Fig. II-1 shows the closed loop formed by the Sensors, Perception, World Model, Knowledge Base, Value Judgment, Inference Engine, Behavior Generator and Actuators, all embedded in the inner and outer Environment.]

Fig. II-1. The integral structure of an Agent. Local feedback delivers part of the information from the Actuators directly to the World Model. The Goal, as the system's input, is not shown.

VECTOR OF PERFORMANCE (FUNCTIONS)

The intellectual functions are derivatives of the basic intellectual abilities of a system:

F = f[V(A)]

The following functions are included to satisfy the specific system requirements [20]:

1. Object Recognition
- to recognize objects, actions, and situations

- to search for a required object within a scene
- to interpret situations (to evaluate objects, relationships, and actions)
- to detect an unfamiliar object
2. Learning
- by instruction
- by experience
- by interactions
- by imitation
3. Hypothesis generation
4. Reasoning (manipulation with abstract and specific terms and relationships)
5. Data and information organization
- Generalization
- Conceptualization
6. Adaptation to a new environment
7. Communication

- communication with humans and artificial agents
- collaboration with humans and artificial agents
8. Interpretation of its own behavior and the behavior of other agents (to evaluate actions and relationships to other agents and objects)
9. Decision making
10. Performing decomposition
11. Planning and scheduling
12. Art apprehension

All of them represent actions. Some characteristics of an intelligent system are not actions:
1. fairness
2. truthfulness
3. loyalty, etc.

In some cases:
- the problem is not clear
- decomposition is not obvious
- the variables are not listed at the beginning of the analysis process
- the rules of action should be learned during the process.

The multilevel nature of tasks and knowledge determines the multilevel structure of performance. Unfortunately, artificial system creativity, artificial system intuition and so on are still beyond engineering attention. It is impossible to make serious advances in this new area of knowledge and application without understanding these intellectual abilities. It is very important to create a common approach to research and application of new knowledge to the new class of systems under the umbrella of a new knowledge theory - the Psychology of Artificial Intelligent Systems.

There are two different approaches to measuring intelligence as a vector:
- physical parameters, such as memory size, the diameter of the association ball (circle), etc.
- the level of behavioral abilities.

In most cases the physical parameters of an existing system are not available, and the system's output is a more important characteristic than the physical parameters. The most important question of intelligence measurement is: is it an additive (APPENDIX 3) or a multiplicative function? Psychology and cognitive science calculate IQ based on the assumption that intelligence is an additive function of abilities. It is a very strong assumption, because there is interdependence between the abilities. For example, reasoning is the basis of several other abilities such as generalization, intuition, etc. It is important to choose local abilities without interdependency. For example, generalization, intuition, associative thinking, object recognition, etc. are appropriate choices, but reasoning is not, because it is a part of these abilities and is therefore interdependent.
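To make the additive form concrete, the sketch below scores an intelligence vector as a weighted sum of independent abilities, in the spirit of F = f[V(A)] and of the additive form discussed above. The ability names, scores and weights are invented illustrative values, not measurements from the book.

```python
# Illustrative additive evaluation of a vector of abilities, F = f[V(A)].
# Abilities and weights are hypothetical; only mutually independent abilities
# should be included (reasoning is excluded because other abilities depend on it).

abilities = {                 # V(A): ability scores on a common 0..1 scale
    "generalization": 0.7,
    "intuition": 0.4,
    "associative_thinking": 0.6,
    "object_recognition": 0.9,
}

weights = {                   # relative importance of each ability for the task domain
    "generalization": 0.3,
    "intuition": 0.2,
    "associative_thinking": 0.2,
    "object_recognition": 0.3,
}

def additive_performance(abilities, weights):
    """Additive form: F = sum(w_i * a_i). Valid only if the abilities are independent."""
    return sum(weights[name] * score for name, score in abilities.items())

print(f"F = {additive_performance(abilities, weights):.2f}")   # F = 0.68
```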

AUTONOMOUS

Autonomy (Greek: auto-nomos, nomos meaning "law": one who gives oneself his own law) means freedom from external authority. Autonomy is a concept found in moral, political, and bioethical philosophy. Within these contexts it refers to the capacity of a rational individual to make an informed, uncoerced decision (not forced to act or think in a certain way by use of pressure, threats, or intimidation). In moral and political philosophy, autonomy is often used as the basis for determining moral responsibility for one's actions (Wikipedia, encyclopedia). In terms of the Theory of Control Systems, autonomy means self-adaptation to a new environment. Unfortunately, the real environment is complex, active, unpredictable, and unfriendly to an agent.

The following are three of the best-known definitions of an Autonomous System in the intelligence community:

An autonomous system is a system of which no one can predict in advance what it will do (L. Fogel).

Autonomous means not controlled by others or by outside forces; independent; independent in mind or judgment; self-directed.

There are some problems with these definitions. First, autonomy is not just the ability to generate a purpose; it also includes the ability to execute the plan to achieve the goal. What does "without any instructions from outside" mean? What does "independently" mean? It is not possible to execute any plan without receiving information from the outside world, from the environment. Can we accept another agent as a part of the environment?

Second, there is a real possibility that another agent with stronger experience and ability to reason can predict, with some level of confidence and probability, the behavior of the first agent. Any experienced professor can easily predict the mistakes and fraudulent submissions of certain students.

There is one more definition: "An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to affect what it senses in the future" [29].

Let us look at some basic ideas in order to develop a workable definition of Autonomous Systems.
1. Autonomy is the ability to adapt. There are two types of adaptation: short-term time-spatial adaptation and long-term multi-generational adaptation. The latter refers to the evolution of the system toward an increased level of intelligence; evolution is the tool for improving a system's intelligence (see EVOLUTION AND INTELLIGENCE).
2. An autonomous system is not controlled by others or by outside forces; it is rather independent in mind or judgment, a self-directed system [36].
3. Adaptation means becoming suitable to a new or special application or situation; a change in the behavior of a person or group in response or adjustment to new or modified surroundings. The terms adaptation and choice-making have the same meaning. In accordance with the statement above, in some sense an intelligent system is a system that has the capacity or ability to make choices (learning and reasoning with an internal representation of a goal); this definition has many supporters [21]. Acceptance of this simplified definition of intelligence makes the following statement acceptable: in some cases autonomy and intelligence have the same meaning.
4. Adaptation makes sense if the system behaves in an uncertain environment.
5. A system collects information from the environment.
6. Any other agent is a part of the environment.
7. An Agent is a domain-oriented system that cannot operate equally well in different environments; a cab driver may not be able to work as a pilot.

Let us discuss the first type of adaptation (short-term time-spatial adaptation). All of these ideas can be covered by the definition:

Full autonomy is a domain-oriented ability to generate one's own goal and, without any instruction from any other agent, to execute the achievement of that goal in an uncertain environment.

Let us discuss the case when a driver gets lost and asks somebody for directions. Is the driver an autonomous agent? Is this an example of a malfunction of the system, or of the collection of additional information? Autonomous ability depends on the ability to collect information.

The driver starts his/her/its trip as an intelligent agent; does he/she/it, in the middle of the trip, lose the right to be called an autonomous agent? Full autonomy is a very strong ability (or a very strong definition?). In accordance with this definition a cat has a greater level of autonomy than a human being: a cat never asks for advice. In reality a human being is a domain-oriented system. Even an expert in a specific area of knowledge may search for advice from another expert in the same area, and a child receives repeated instructions about the same problem all day long. It would be better, and more productive, to define autonomous action instead of an autonomous Agent.

Autonomous actions are an agent's goal-driven actions in an uncertain environment that can be executed without any instruction from any other agent.

In this case the autonomous system is a domain-oriented system that is capable of executing autonomous actions. The goal is an essential element of the definition. A single agent cannot always reach a complex goal. A complex goal can be decomposed into several subgoals; in this case the autonomous system consists of several not fully autonomous, but subautonomous, agents. In real life the solution of many real tasks needs teamwork or some help from outside. This is the semiautonomous activity of the team or crew members, and it is possible to qualify the whole team as an autonomous system. In this case communication between the team members is part of the operation.

The Centibots system (The 100 Robots Project) is a framework for very large teams of robots (Fig. II-3) that are able to perceive, explore, plan and collaborate in unknown environments. The Centibots were developed in collaboration with SRI International, funded under DARPA's SDR program. The Centibots team currently consists of approximately 100 robots. These robots can be deployed in unexplored areas and can efficiently distribute tasks among themselves; the system also makes use of a mixed-initiative mode of interaction in which a user can influence missions as necessary (Distributed Multi-Robot Exploration and Mapping, Proc. of the IEEE, 2006). The robots are fully autonomous in terms of human involvement; all computations are performed on-board [http://www.cs.washington.edu/ai/Mobile_Robotics/projects/centibots/]. Another example of a fully functioning autonomous system is the team of soccer-playing robots that participate in the famous RoboCup tournament.

Subautonomous actions are subgoal-driven actions (the subgoal of a team goal) in an uncertain environment, executed without any instruction from any agent who is not a team member. The subautonomous system is a domain-oriented system that is capable of executing subautonomous actions.

In the case of teamwork, each agent (natural or artificial) should be psychologically compatible with the other team members. The team can be presented as an autonomous net with intelligent nodes. There are two types of net:
1. The net with subautonomous agents as the nodes.
2. The net with autonomous agents as the nodes. In this case each single agent can achieve the net goal by itself; his/her/its participation in the net just increases the power of the system.

A regular car is not an autonomous system. The combination of a car and a driver, or of a drone and an operator (natural or artificial), is an autonomous system; in this case the car and the drone are actuators of an intelligent agent.

Autonomy is a complex feature of intelligence. It includes a set of several abilities, such as sensation (S), perception (P), conceiving (C), learning (L), reasoning (R), generalization (D), and discrimination (DE):

A = F(S, P, C, L, R, D, DE)

The development of artificial systems with a high level of autonomy and great abilities is a short-term goal of science and industry. The actual behavior of these types of systems cannot be predicted in some cases. It is important, therefore, to prognosticate possible dangerous results of their behavior and to protect the environment from unauthorized actions (see FREE WILL AND ACTIONS and LAW AND MORAL).

Although there is a very strong correlation between intelligence and short-term time-spatial adaptation, it is not reasonable to define intelligence as adaptation: adaptation is a very sophisticated term. An unintelligent system cannot be an autonomous system. An intelligent system may or may not be a fully autonomous system. Natural intelligent systems can be autonomous and subautonomous; all of them sometimes rely on outside help under different conditions. This is true as long as the single agent can achieve the goal. Artificial intelligent systems, like natural systems, can be autonomous and subautonomous as well. In some cases intelligent abilities are not flexible enough for adaptation to new conditions. When we talk about adaptation we refer to a specific environment operating under specific constraints. For example, an autonomous vehicle cannot fly, and neither can a human being; both are autonomous but have their limitations. IBM's Deep Blue is a fully autonomous system only in the chess game environment.

The contemporary level of development of artificial intelligent systems reflects merely the beginning of this process. But this development is accelerating, and soon there will be a community of very advanced systems. There are many advanced and fully functioning autonomous systems, but many challenges remain to be resolved. Now is the time to start thinking about these challenges and their potential solutions.

The Agent in the Wumpus World (see REASONING) is an autonomous system: it can move about in its environment and avoid dangerous areas.

Fig. II-2. The classification of intelligent systems: an INTELLIGENT SYSTEM may be AUTONOMOUS, SUBAUTONOMOUS, or AUTONOMOUS AS THE NET; each of the first two may be NATURAL or ARTIFICIAL, and the net may be NATURAL, ARTIFICIAL, or MIXED.

Algorithm of adaptation:
1. Define the goal.
2. Collect information about the environment.
3. Develop the World Model (see PERCEPTION).
4. Design the strategy for achieving the goal.
5. Move toward the goal while avoiding obstacles.
6. Communicate with the other team members.
7. Adjust the World Model in accordance with new information.
8. Re-evaluate the strategy.
9. Repeat steps 4-8 until the goal is achieved.
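A minimal sketch of this adaptation loop is shown below. The helper functions are trivial stubs standing in for real planning, perception and actuation; they are hypothetical and only make the control flow runnable.

```python
# Illustrative adaptation loop following the algorithm above.
# All helper functions are hypothetical placeholders.

def collect_information(env):      return {"obstacles": env["obstacles"]}
def build_world_model(info):       return {"position": 0, **info}
def plan(goal, model):             return {"direction": 1 if goal > model["position"] else -1}
def execute_step(strategy, model): return model["position"] + strategy["direction"]
def communicate(team, model):      return [f"{member}: ack" for member in team]
def goal_reached(goal, model):     return model["position"] == goal

def adapt(goal, env, team, max_steps=100):
    model = build_world_model(collect_information(env))      # steps 2-3
    strategy = plan(goal, model)                              # step 4
    for _ in range(max_steps):
        model["position"] = execute_step(strategy, model)     # step 5
        model["messages"] = communicate(team, model)          # step 6
        strategy = plan(goal, model)                          # steps 7-8: adjust and re-plan
        if goal_reached(goal, model):                         # step 9
            break
    return model

print(adapt(goal=3, env={"obstacles": []}, team=["robot-2"])["position"])   # -> 3
```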

Fig. II-3A. The Crystalline Atomic Unit Modular Self-reconfigurable Robot (http://www.mit.edu/~vona/xtal/xtal.html).

Fig. II-3B. The soccer team.
Fig. II-3C. The autonomous Centibots (http://www.cs.washington.edu/ai/Mobile_Robotics/projects/centibots/).

Fig. II-3D. Boeing's autonomous unmanned underwater vehicle (Long-term Mine Reconnaissance System).

Fig. II-3E. The system with reconfiguration (University of Southern California, NASA, Lockheed Martin, Raytheon DARPA project).

SENSING AND SENSATION

Sensation is "a perception associated with stimulation of a sense organ or with a specific body condition; the faculty to feel or perceive; physical sensibility; an indefinite, generalized body feeling" [36]. This definition of the term Sensation includes attention, as the mobilization of the sensor system for intensified information collection, and discrimination, as the first step of information organization. It would be better to separate sensation from perception: they are two different functions, and it is easier to develop two different simple subsystems of an artificial intelligent system than one system with a complex function. So, sensing is the process of data and information collection through the outer and inner sensor systems. It is the first step in the information process. Sensing begins with the impinging of a stimulus upon the receptor cells of a sensory organ, which then leads to perception. Sir Francis Galton (cousin of Charles Darwin) stated that the more perceptive and accurate an individual's senses are, the greater the basis for intelligence.

Pain is an example of sensing. Pain is unpleasant sensing occurring in varying degrees of severity as a consequence of injury or disease; it can also be a result of sensor overloading. "Physical pain is a low-level internal negative value-state variable that can be assigned to specific regions of the body. It may be computed directly as a function of inputs from pain sensors (tactile, temperature, etc.) in a specific region of the body" [16]. It is part of the self-testing, self-diagnostic system that provides information about internal or external problems that are important for survival. Hunger, a low battery level, pain in the joints, noise, or low pressure in the lubrication system are signals calling for treatment. In a human-robot society, information exchange between agents is more important than a robot's personal feelings. It is part of self-awareness.

Besides vision, hearing, smell, taste (?), touch, pressure, temperature, and pain, an artificial sensing system, unlike a natural one, has many more different types of sensors (infrared, ultrasonic, different types of rays, and so on) and can communicate with its environment better. The system needs specific inner sensors to feel love, inner pain, excitement and other emotions. The sensitivity of an artificial sensor has a wider range than does a natural one.

It can exert a stronger influence on its environment and can create specific responses to input signals that are unknown in (do not exist in) natural systems. Artificial sensors are usually combined with receptors that convert the input signal.

In 2006 a patient became the first person -- and first woman -- to receive a "bionic" arm, which allows her to control parts of the device by her thoughts alone. The device, designed by physicians and engineers at the Rehabilitation Institute of Chicago, works by detecting the movements of a chest muscle that has been rewired to the stumps of nerves that once went to her now-missing limb. Surgeons took the first step by rewiring the skin above her left breast so that when the area is stimulated by impulses from the bionic arm, the skin sends a message to the region of her brain that feels "hand." Someday she hopes to upgrade to a prosthesis that will allow her also to "feel" with an artificial hand (David Brown, Washington Post Staff Writer, Thursday, September 14, 2006). This is an example of an inner sensing system as part of self-testing and self-awareness in a hybrid (natural and artificial) system.

Some communication tracks that transfer signals from the sensors have the ability to respond directly to these signals. This response is a reaction of an organism, or a mechanism, to a specific stimulus, and in some cases it can create resonance to these signals in the form of excitement. This amplified signal can be transferred to different parts of the body; in this case the system demonstrates the ability to react unconsciously to the input signal (a reflex arc, see REFLEX). Examples: the sound of metal moving on a glass surface, music, and so on. In humans this information is carried by the spinal cord. This mechanism is distinct from perception (recognition and interpretation of sensory stimuli based chiefly on memory). It is possible to create artificial information tracks with the same ability. Repetition of the same signal can activate memory and excitement; in this case the process involves the subconscious.

The so-called mirror cells in the human brain can respond to a signal presenting the behavior of another human being with corresponding actions and a prediction of the results of these actions. An artificial system can demonstrate the same ability if similar actions are saved in its memory and can be activated by visual or other signals. Forceful movement of an actuator can create artificial information tracks, similarly to the rehabilitation process for an injured human limb.

The aging of the population dramatically increases the need for the service industry and creates pressure on the job market. This vacuum can be filled with artificial intelligent systems (robots). To qualify for this job (especially in the senior community) the AIS has to be able to recognize human moods and behave in accordance with the circumstances. Voice volume level, the types of words, the velocity of speech, and so on carry information about the human mood. The sensitivity of the artificial system is responsible for collecting this information; the perception module converts this information into a mood description and understanding.

Walt Disney Co. [New Scientist, 24.01.2006] has created a media player that selects songs based on its owner's latest mood. The device has wrist sensors that measure body temperature, perspiration and pulse rate. It uses these measurements to build a profile of what music or video the owner would prefer played when he/she is hot, cold, dry or sweaty, and when their pulse is racing or slow. The device then comes up with a suggestion to fit each profile, either using songs or videos in its library or downloading something new that should be suitable. If the owner rejects the player's selection, it learns and refines the profile. So, over time, the player should get better at matching bodily measurements with the owner's moods. This type of relationship can also exist between two artificial systems. It resembles compassion (see below) and emotions (see below).
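The mood-matching behavior described above can be sketched as a simple profile-learning loop. The thresholds, mood labels and update rule below are invented for illustration; they are not taken from the Disney device or from the book.

```python
# Illustrative mood estimation and preference learning from body measurements.
# Thresholds, labels and the update rule are hypothetical.

def estimate_mood(temp_c, perspiration, pulse_bpm):
    """Map raw wrist-sensor readings to a coarse mood label."""
    if pulse_bpm > 100 or perspiration > 0.7:
        return "excited"
    if temp_c < 35.5 and pulse_bpm < 60:
        return "calm"
    return "neutral"

profile = {  # preferred media per mood; refined by feedback
    "excited": ["fast song", "playlist mix"],
    "calm": ["slow song"],
    "neutral": ["playlist mix"],
}

def suggest(mood):
    return profile[mood][0]

def feedback(mood, item, accepted):
    """If the owner rejects a suggestion, demote it and try something else next time."""
    if not accepted and item in profile[mood]:
        profile[mood].remove(item)
        profile[mood].append(item)        # move the rejected item to the end of the list

mood = estimate_mood(temp_c=36.8, perspiration=0.8, pulse_bpm=110)   # -> "excited"
item = suggest(mood)
feedback(mood, item, accepted=False)      # owner rejected it; the profile is refined
```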

ATTENTION

Attention is the cognitive or subcognitive process of selectively concentrating on one thing while ignoring other things. It is the mobilization of the resources of sensation through control or emotions. Information patterns (sound, smell, light, shape, etc.) can be stored in a specific area of memory; information matched to these patterns activates a selective, magnified collection of information from the specific source. In terms of the Theory of Control Systems this means increasing the sensitivity of the sensor system. Discrimination is the first step of information organization; it detects the specific signal that should activate attention (see DISCRIMINATION). In the human brain the limbic system (the cingulate gyrus) is responsible for attentional processing.

There are two types of attention:
1. Overt attention is the act of directing the sensors toward a stimulus source.
2. Covert attention is the act of mentally focusing on a particular stimulus.

Attention can also be split between several activities or signals. It is not easy, but it is possible, to split the resources of an Artificial System to accommodate information from several sources at the same time and generate a correct world model. The subject or the object of attention in an artificial world is determined by:
- its relation to the problem that the agent is working on
- an unknown, unfamiliar signal
- the power of the information flow or signal
- fitness to the criteria of pleasure (stimulus).

A lack of confidence (uncertainty) assigned to the frame of an external object may cause attention to be directed toward that object in order to gather more information about it.

Algorithm (Attention):
1. Memorize the patterns and the criteria.
2. Compare input information against the patterns.
3. If it fits a pattern or a specific criterion, then increase information collection from this source (discrimination) and generate the signal (increase the sensitivity of the sensor system). Otherwise ignore it.
4. Collect more information.

DISCRIMINATION

The word discrimination comes from the Latin "discriminare", which means to "distinguish between". Distinction, the fundamental philosophical abstraction, involves the recognition of two or more things being distinct, i.e. different. The adjective "discriminative" refers to the ability or power to discriminate between different things, i.e. to notice and state their equality or difference, to make a distinction. It can also refer to characteristic elements, attributes or features of a thing. APPENDIX 13 presents some methods of Discriminant Analysis.

Discrimination is the ability to detect subtle (so slight as to be difficult to detect or analyze; elusive; not immediately obvious) differences, to respond only to a specific kind of stimulus. A difference is the quality or condition of being unlike or dissimilar [36]. It is an important tool of object recognition and compassion (see SOCIAL BEHAVIOR), and it is based on specific criteria. Discrimination seeks the answer to the question: is this regular information or a specific stimulus? Information is submitted to perception to generate the world model; a stimulus is submitted directly to generate the action.

Algorithm:

1. Receive data from each sensor.
2. Evaluate all the signal parameters through localization.
3. Group the signals by objects.
4. Recognize the object or event (preliminary object recognition) with a low degree of possibility, based on information existing in the memory.
5. Evaluate the information by the criteria.
6. Activate attention.
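A toy version of the discrimination-then-attention chain is sketched below: incoming signals are compared against stored stimulus patterns, and a match raises the gain of the corresponding sensor. The pattern set, the match test and the gain values are invented for illustration.

```python
# Illustrative discrimination step that activates attention.
# Stored patterns, the match rule and gain values are hypothetical.

STIMULUS_PATTERNS = {"alarm", "glass scraping", "own name"}   # patterns kept in memory

def discriminate(signal):
    """Decide whether a signal is regular information or a specific stimulus."""
    return "stimulus" if signal["label"] in STIMULUS_PATTERNS else "information"

def attend(sensors, source):
    """Attention: raise the sensitivity (gain) of the sensor that produced the stimulus."""
    sensors[source] = sensors.get(source, 1.0) * 2.0
    return sensors

sensors = {"microphone": 1.0, "camera": 1.0}
signal = {"label": "alarm", "source": "microphone"}

if discriminate(signal) == "stimulus":
    sensors = attend(sensors, signal["source"])   # a stimulus goes straight to action
else:
    pass                                          # regular information goes to perception

print(sensors)   # {'microphone': 2.0, 'camera': 1.0}
```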

PERCEPTION

Perception is "recognition and interpretation of sensory stimuli based chiefly on memory" [36]. It is the process of acquiring, interpreting, selecting, and organizing sensory information. Perception is the process of World Model development: the translation of sensor data into organized, meaningful information. Everything about the world is a matter of perception. A definition of stimuli interpretation can be found in [16].

The objects of perception are percepts. Percepts are not the material objects in the physical realm that the mind imagines (rightly or wrongly) it is sensing. They are, rather, the actual objects of perception: patterns of sensational qualities, impressions. An impression is an image retained as a consequence of experience, a mental picture. Visual percepts are patterns of area (shape, size, and position) and color (tint and tone) over a two-dimensional field; color is the easiest feature to perceive. Audile percepts are patterns of pitch and volume over time. In the human brain the cerebellum contains topographic maps and helps map object location and shape into grasping coordinates. The parietal lobe plays important roles in integrating sensory information from various parts of the body and in the manipulation of objects.

the percept shift in their mind's eye. Others, who are not picture thinkers, m ay not necessarily perceive the 'shape-shifting' as their world changes. The 'es emplastic' nature has been shown by experiment: an ambiguous image has multiple interpretations on the perceptual level. Just as one object can give rise to multiple percepts, so an object may fail to give rise to any percept at all: if the percept has no grounding in a person's experi ence, the person (natural and artificial) may literally not perceive it. This confusing ambiguity of perception is exploited in human technologi es (development smart systems) such as camouflage, and also in biological mimi cry, for example by Peacock butterflies, whose wings bear eye markings that birds respond to as though they were the eyes of a dangerous predator. So, perception is an ability and tool to develop the World Model. Four main func tions are involved in this process [16]: 1. Localization is one of the functions of perception. It includes segregation of objects and events, perceiving distance, location, and motion of sources of informat ion. All these functions work similarly to those in a natural system. 2. Recognition is an awareness that something perceived (to become aware of dire ctly through any of the senses, especially sight or hearing) has been perc eived before. [36]. It is an ability of a system to recognize specific information (not stimu lus) with certain level of probability through comparison with existing in the m emory information. 3. Judgment. 4. Interpretation (see UNDERSTANDING AND INTERPRITATION). In [35] is shown that perception works as the Multiple Draft model that accepts stimulus, then color, then shape, motion, and object recognition (traffic light signals are coded by shape). This sequence is determined by slow speed of natur al neuron net

computation. A computer s computation speed is grater then speed of a na tural system. In this case the structure of an artificial system of perception can be different i n any sequences. 71 An Artificial Intelligent System has more powerful sensor systems and can develo p the more accurate (in term of information richness) World Model than a natural system. Perceiving and conceiving are two functions of perception. Perceive mean s to become aware of directly through any of the senses, especially sight or hear ing; to achieve understanding of; apprehend; to be physically aware of through the senses, exper ience, feel. It is a physical process, product of the sensor system. Its fun ction is to establish connections between signals from different sensors (dir ection, time, etc.) Conceive is to apprehend mentally; to understand the language, sounds, form, or symbols. It is a mental process, product of logic. Its function is to establish connections between symbols and their meanings. Algorithm: 1. To receive data from the each sensors 2. To perceive these data - to evaluate all signal parameters through localization - to evaluate time parameters of each signal 3. To conceive these data recognize the object or event with a certain degree of possibility based

on existing in the memory information 4. To generate the model of the objects, scenes or events 5. To supply the object scene or event with evaluation through judgment 6. To submit information to the world model

OBJECT RECOGNITION

The problem in object recognition is to determine which, if any, of a given set of objects appear in a given image or image sequence. Recogni tion is one or several pre-specified or learned objects or object classes can be recognized. Thus object recognition is a problem of matching models from a database with representations of those mode ls extracted from the image luminance data. Preliminary object recognit ion is the function of discrimination as the part of sensing. In the human s b rain the temporal lobes are part of the cerebrum. It Involves in high-level visual processing of complex stimu li such as faces and scenes, and the object perception and recognition. It is very interesting that the process of Recognition in the psychol ogy of natural systems was borrowed from the artificial intelligent sys tem activities descriptions. It is based on segregation of the object (A /, -, \) and comparison of the basic elements against the stored samples in the memory with desegregation as the next step. The representation of the object s model is extremely important process . Clearly, it is impossible to keep a database that has examples of the every view of an object under every possible lighting condition. Thus, object views will be subject to cer tain transformations; certainly perspective transformations depending on the viewpoint, but also transformations 72

related to the lighting conditions and other possible factors. For example, some times it is big problem even to human being to recognize objects in the Picasso s pictu res or other 20th century abstract arts. In some cases system can develop ima ge of the object that represent the real object with the certain level of probability (see also APPENDIX 4) There are two stages to any recognition system: 1. the acquisition stage, where a model library is constructed from certain de scriptions of the objects. 2. the recognition stage , where the system is presented with a perspective im age and

determines the location and identity of any library objects in the image. The most reliable type of object information that is available from a n image is geometric information. So object recognition systems draw up on a library of geometric models, containing information about the shape of known objects. Usually, recognition is considered successful if the geometric configuration of an object can be explaine d as a perspective projection of a geometric model of the object. There are several engineering met hods to solve this problem [27]. To generate the dynamic model of the object it is very important to assign a ttributes to the object: color, position, speed, etc. Events and processes can be recognized by t he sequence of observing steps of an action. The software University of San Diego students-sans facial hair, jewelry, and apparent makeupthe machine learned to tell men from women with only an 8 percent error. Humans using the same data made mistakes 11.6 percent of the time. Building a neural network tha t can learn abstract concept like maleness and femaleness, without ever being told anything about people or sexual characteristics, is just a way to learn how networks categorize (Beatrice Golomb, University of San Diego) [38]. A neural network trained on photos of children with William syndrome could catch patterns that human doctors might miss.

Algorithm: Object recognition

1. Perceive the input signals 2. Conceive this information Define the edges of the object Integrate the edges into the shape Find mach of this shape to the pattern in the data base

If If Put it in the data base with attributes Assign the name. 73 This procedure can be used for an intelligent abilities (perceiving and conceivi ng) measurement.

Speech and Text Recognition Technology Computer speech recognition is the process of converting a speech signal to a se quence of gramaticaly organised words. In terms of technology, most of the technical text books nowadays emp hasize the use of Hidden Markov Model as the underlying technology (APPENDIX 11). The dy namic programming approach, the neural network-based approach and the knowledg e-based learning approach have been studied intensively in the 1980s and 1990s. One of the important areas of the objects recognition application is the text recognition. A human being has strong ability of the words re cognition. All types of the machine s word processors else have ability of the written and spoken words recognition even in case of misspell ing and different pronunciation. An ability of a human being can be demonstrated by example presented below. Most of people can read this text fluently without rec onstruction each word: yuo hvae a sgtrane mnid if yuo cna raed this. Cna yuo raed tihs? Olny 55 pcenert of plepoe cluod uesdnatnrd ym wariteng. The compute sr ilteleignnce hsa hte sema phaonmneal pweor as the hmuan s mind. Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it dseno't mtaetr in waht oerdr the ltteres in a wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be

in the rghit pclae. The rset can be a taotl mses and you can sitll raed it whotuit a pboerlm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Btu it si nto mipotratn ot ehav the frits and teh lats eltters ni the ritgh poositni. oyu can rade even fi the lats letrest aer in teh rwogn poosiotns.

This text shows that even location of the last letter is not critical in the man y cases. Result of intelligent system can demonstrate the same ability. Procedure of reading and understanding of this text consists of two steps: 74 The first step: recognition of the language by specific criteria or s imple reconfiguration sequence of letters in some small words. This method can be used to determine what type of the language is presented in Latin letters, for example: English or Russian? The second step: recognition of the words.

Algorithm:

1. Choose words with the same first and last letters
2. From this set of words select words with the same number of letters
3. From the new set of words select words with the same letters
4. Check meaning of the words
5. Choose words with the same first letters
6. If positions of words don't fit the grammar of a sentence then define parts of speech
7. From this set of words select words with the same number of letters
8. From the new set of words select words with the same letters

9. Check meaning of the words
10. Recognize unknown words by recombination of letters
11. Analyze the text to find meaning of unknown words by association
12. Using associative thinking on a wide range of texts find the meaning of unknown words
13. Generate a correct grammar sentence (see Fig. II-4A and 4B)
This test can be used for measurement of an intelligent ability (reasoning, text recognition); a code sketch of steps 1-3 is given at the end of this subsection. It can be designed with different levels of difficulty: different word levels and different mixes of letters. Fig. II-4A and 4B present examples of the software that generates correct English grammar sentences. See also UNDERSTANDING AND INTERPRETATION. Detecting and Recognizing of Emotional Information: see EMOTIONS.
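A minimal Python sketch of steps 1-3 of the word-recognition algorithm above is shown below: candidate dictionary words must share the first and last letters, the number of letters, and the same set of letters as the scrambled word. The small dictionary is an assumption made only for the example; a real system would use a full lexicon and the later grammar and meaning checks.

from collections import Counter

DICTIONARY = ["strange", "mind", "read", "this", "people", "understand", "writing"]

def candidates(scrambled, dictionary=DICTIONARY):
    w = scrambled.lower()
    out = []
    for word in dictionary:
        if word[0] == w[0] and word[-1] == w[-1]:      # same first and last letters
            if len(word) == len(w):                     # same number of letters
                if Counter(word) == Counter(w):         # same letters
                    out.append(word)
    return out

print(candidates("sgtrane"))   # -> ['strange']
print(candidates("mnid"))      # -> ['mind']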

UNDERSTANDING AND INTERPRETATION

Understanding is a psychological process related to an abstract or physical object or process, such as a person, situation or message, whereby one is able to think about it and use concepts to deal adequately with that object. It is one of the most difficult problems of Artificial Intelligent Systems. Understanding in the artificial system has a specific meaning. In this case UNDERSTANDING is the process of recognition of the object, symbols or events, binding all known features to the object, symbol or event, and evaluating the possible relation of the agent with the object or event. It is searching for correspondence between signals (symbols) and knowledge, searching for meaning. In reality understanding and

conceiving are synonyms. Comprehension is a result of understanding.

Relationship between the symbol and meaning is the result of convention. Semiotics is the study of signs and symbols, both individually and grouped in sign systems. It includes the study of how meaning is constructed and understood. Meaning is discursive: it arises from conventions that presuppose not only a social world but also one in which those bearing the meaning share the interests and aspirations of those whom they would engage. Meaning (thus, knowledge and conduct) is now stripped of abstract, once-and-for-all features and is seen as entirely constructed. The limits of my language mean the limits of my world (Austrian philosopher Ludwig Wittgenstein, 1889-1951). Development of new meaning by Wittgenstein has several steps [34]:
1. A name signifies something only to the extent that it is understood to stand for the thing signified.
2. But such understanding is a matter of convention.
3. But the adoption of conventions is a social act. Conventions are part of the actual practices of people in the world.
4. It is impossible to apply a private rule to a private occurrence.
Two aspects of meaning that may be given approximate analyses are the connotative relation and the denotative relation. The connotative relation is the relation between signs and their interpreting signs. The denotative relation is the relation between signs and objects.

Correspondence between a symbol and its meaning through knowledge is the process of INTERPRETATION; interpretation is something that serves to explain or clarify. Interpretation, the true subject of semiotics, begins with perceptual paradigms, which are

abstractions from perceptual patterns. Understanding is the denotative relation; interpretation is the connotative relation. Understanding is the base for development of a response and a strategy of utilization of new information. In most cases Artificial Intelligent Systems do not deal with development of the meaning of new concepts, but with understanding and interpretation of existing concepts. For example: incoming information presents a word. The system should be able to recognize the part of speech and assign all attributes. It is object recognition with understanding. So understanding is binding the abstract symbols and signals collected by conceiving. Abstraction is the process of defining a concept based on an observation, mental or perceptual; hence all abstractions are concepts (see ABSTRACT THINKING AND CONCEPTUALIZATION). Conception is the ability to form or understand mental concepts and abstractions; something conceived in the mind; a concept, plan, design, idea, or thought [36]. This definition is an example of breaking rule number 2 of definition development (conception is concepts) (see APPENDIX 8). A better definition is: Conception is the ability to form or understand a general idea derived or inferred from specific instances or occurrences. An idea is a description of the object, process, etc. in the minimal set of defining features conveying fundamental character and presenting full information to recreate this object, process, etc. Concept development is the subject of ABSTRACT THINKING AND CONCEPTUALIZATION. A sign is an association of a perceptual paradigm with another concept. This association is made through memory: two concepts are associated when they occur in the same thought experience; thinking of one will then cause the recall of the entire experience, in which the other concept is also present (see ASSOCIATIVE THINKING). Interpretation is the process of fitting observed percepts into recognized

paradigms, thereby deriving meaning, which is nothing more than the association of concepts. Interpretation applies to all aspects of the perceptual realm. It is a means of constructing a personal version of the perceptual realm. Interpretation is sometimes compared to translation, the transfer of meaning between two languages; however, "translation" refers to a transfer from text to text. Communication is an attempt by one mind to induce a certain interpretation by another. This includes such things as disinformation, which is an attempt to induce a false interpretation of the course of events in the perceptual realm. But by far the most important form of communication is language, the use of symbols. A symbol is a sign whose association between perceptual paradigm and other concept is one of convention. (The first convention must be established by coincidence, where two interpreters form the same association based on some common experience. That first convention can then serve as the basis for further conventions.) The set of all symbols and logics understood by an interpreter is that interpreter's language. Communication between artificial agents can be developed by a wider range of technical systems. An agent establishes sign relationships only by a gradual learning process. It experiences things in conjunction and thus forms associations in memory, develops a sense of the functional rules of the perceptual realm by trial and error, and is constantly in the process of revising its personal versions of the course of events in it. In large part this is accomplished using the scientific method (see HYPOTHESIS GENERATION). Text recognition and understanding consists of three major procedures:
1. Grammar recognition (syntax)
2. Parsing (part of speech)
3. Meaning of symbols (words and signs)

Word meaning (semantics) is determined by location in the sentence and parsing, for example:
I have this file. This file is on the table. I file these documents. I have a file cabinet.
and by connotation:
I am expecting you. I am waiting for you.
Fig. II-4A and 4B illustrate two versions of the software with the ability to recognize, understand, and interpret new grammar rules, math operations and formulas that a user presents for sentence generation or math calculations.
Algorithm: Understanding and Interpretation
1. Object recognition (understanding) as the combination of signs into a concept (object) (see OBJECT RECOGNITION, Algorithm)
2. Bind this object to the knowledge of the object in the knowledge base - interpretation (unfriendly or friendly object)
3. Bind this knowledge about the object to the knowledge of relationships between this object and others - interpretation of the object behavior in relationship to the other

objects and environment.
4. Generate a response in accordance with the rules in the knowledge base.
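The sketch below illustrates the four steps of this algorithm in Python. The knowledge base entries, the friendly/unfriendly labels, and the response rules are assumptions chosen for illustration; they are not part of the text.

# A minimal sketch of the understanding-and-interpretation algorithm: recognize
# an object, bind it to knowledge about the object (interpretation), bind it to
# knowledge about its relationships, and generate a response.
KNOWLEDGE_BASE = {
    "dog":  {"relation_to_agent": "friendly",   "relations": {"cat": "chases"}},
    "bear": {"relation_to_agent": "unfriendly", "relations": {"human": "dangerous to"}},
}

RESPONSE_RULES = {"friendly": "approach", "unfriendly": "avoid", "unknown": "observe"}

def understand_and_interpret(recognized_object):
    entry = KNOWLEDGE_BASE.get(recognized_object)            # step 2: bind to knowledge
    if entry is None:
        return {"object": recognized_object, "interpretation": "unknown",
                "response": RESPONSE_RULES["unknown"]}
    return {
        "object": recognized_object,
        "interpretation": entry["relation_to_agent"],          # friendly / unfriendly
        "relations": entry["relations"],                        # step 3: related objects
        "response": RESPONSE_RULES[entry["relation_to_agent"]],  # step 4: response
    }

print(understand_and_interpret("bear"))   # -> avoid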

For Emotion Understanding see EMOTIONS. Understanding and interpretation of information are based on the agent's individual knowledge that represents the result of its lifetime learning and personal experience. Therefore, it is very difficult to foresee the actual behavior of a fully autonomous advanced

artificial intelligent system (see INTUITION). This behavior can be very dangerous. By the way, this is one of the possible reasons for dangerous behavior of a human being with a difficult childhood. Maybe it is reasonable to create some day a special group for artificial intelligent systems behavior supervision. It is important to prognosticate possible results of their behavior and protect the environment, other AIS, and a human being from their unauthorized actions (see FREE WILL AND ACTIONS and LAW AND MORAL). Some day it can become a real world problem.
Fig. II-4A (Mr. OU)


Fig. II-4B (Mr. WONG)

REASONING

Introduction

Reasoning is the process of drawing conclusions from facts; the use of reason, especially

to form conclusions, inferences, or judgments [36]. Practical reason is intellectual virtue, by which one comes to distinguish what is good and bad, the prudent course of action, the right strategy, and so on. Logic is the tool of reasoning. It is a system for deriving new symbols

from existing ones, by combining or altering them according to certain conventional rules. This topic covers only proposition and predicate (monotonic) logic. Reasoning and learning are the most powerful intellectual functions. It is not easy to emulate them. The main problem is determined by the very nature of reasoning that is based on computation with words rather than computation with numbers.

The traditional view of reasoning is based on some image of the human mind, something like the idea of following a set of instructions, sort of like our conscious thoughts. If we have a list of things to do, we do it. We can easily imagine ourselves being a von Neumann machine. According to Rumelhart, the standard assumption in cognitive psychology or artificial intelligence has been that these subconscious mental processes are just like the conscious ones, only that they go on without our conscious awareness. They are sequential, logical and rule-based, even if we aren't conscious of the rules or even able to articulate them. The reasoning in an artificial intelligent system goes through a sequential process where this follows that follows that. There is a sense of connectedness among things. All of logic, linguistics, cognitive psychology, and related fields have been about building rules that approximate the underlying causal processes. Researchers in the artificial intelligence field insisted on replacing ambiguous natural language with its own computer versions. The long atomistic, rationalist tradition that extends from Descartes, Leibniz, and Hobbes assumes that all phenomena, even mental ones, can be understood by breaking them down into their simplest primitive components. The goal of the great seventeenth-century rationalists was to find these components and the purely formal and logical rules that joined them together into the more complex compounds of the exterior and interior worlds. All

reasoning, therefore, could be reduced to calculations. Analysis would produce a kind of alphabet of facts, the simplest atoms of the world that could be recombined by a limited number of logical relations to produce and explain the world and all thoughts. That same goal lies behind Russell and Whitehead's Principia Mathematica, their great attempt to reduce the world to logical operations expressed mathematically, Ludwig Wittgenstein's Tractatus Logico-Philosophicus (1922), and artificial intelligence.

That enterprise can be thought of as the attempt to find the primitive elements that mirror the primitive objects and their relationships that make up the world. That effort assumes that it is possible to strip these atoms of all their relations, that at base they are context-free, linked by abstract rules. The grand project of artificial intelligence has been to find those atoms and the logical relations that govern them and to build a symbolic computer representation that captures that order. This problem can be resolved in some way by learning of explanations.
Tools of reasoning:
1. inference (analogical, probabilistic, monotonic, non-monotonic)
2. tautology (the reversible rules)
3. decomposition
4. combination
5. separation
6. comparison and selection
7. judgment
8. algorithmization

Knowledge Representation

Knowledge is the sum or range of what has been perceived, discovered, or learned [36].

Laws of Nature and Society development and existence are knowledge and a source of new knowledge. Knowledge of the external world is mediated by the perceptual mechanism.

Thomas Hobbes and Pierre Gassendi agreed that reality is not composed of two different kinds of stuff but of one kind only: the physical [67].

The problem of representation is a central part of the problem of knowledge and an enduring issue in philosophy of mind:
1. At the most fundamental ontological level, our experiences of the external world are complex arrangements of matter, energy and information.
2. We might ask whether we or a honeybee, whose vision is sensitive to an electromagnetic spectrum to which we are essentially blind, perceive the external world more accurately.

3. Thus, the question of representation can be stated: Is our knowledge of objects in the external world direct or mediated? [61].
There are a lot of different approaches to knowledge representation in the agent's knowledge base. The most important languages of knowledge representation are proposition, predicate and fuzzy logic, frames, semantic nets and others. All known knowledge representations, such as map building, the STRIPS language [27], etc., can be presented through the languages mentioned above. The real external world could be seen or heard or felt as if it holds certain properties. The fact is that vision and sensory experiences in general comprise properties of a distinctly mental sort. Only when a circular object is

projected onto the retina in a straight-on plane will it form a circular pattern on the retina. At any other angle, it will be elliptical. In accordance with the identity theory, the question is what in reality corresponds to such quality terms.

The set of the different models can be stored in the knowledge base. Fitness of an object or event to a specific model and category can be defined in the process of learning (teaching) by certain criteria. Knowledge and the precision of its presentation are not absolute; they have a probabilistic character. The tiger, lion, panther and cat are different animals, but all of them belong to the same category (see APPENDIX 4). A neuron network is a discrete system where information is presented in discrete form.

The Structure of Knowledge Representation in the Intelligent System

Classification of natural systems memory is based on the duration of memory retention, and identifies three types of memory:
1. sensory memory,
2. short term memory,
3. long term memory (see APPENDIX 14).
The Knowledge Base of an artificial system has two types of memory:
1. The memory for data - the short term memory combined with the sensory memory,
2. The memory where organized data (knowledge) is located - the long term memory.
The last one has two parts:
1. The Application Knowledge base
2. The Reasoning Knowledge base
The Application Knowledge base can be divided into
1. declarative (semantic) memory
2. relationships memory.
Perception generates information for the World Model in the short-term memory. Associative thinking and Object recognition work with the long-term memory.

In artificial systems information exchange between these two types of memory is not tied to a specific time; in natural systems this process takes place at a specific period of time (see APPENDIX 14). The following features of the memory are:
1. everything lived through consciously is automatically printed to the memory,
2. the memory is constantly held accessible,
3. past experience, though having lost its original presence, can be made to reappear in mental presence.
The memory:
1. relies on an enormous capacity of information storage,
2. relies on the conservation of information and its protection against overwriting,
3. means that the experience passed is reproduced by recombining the information stored with mental presence.

The structure of knowledge representation: Anything is divided into Objects (Abstract models) and Events (Processes); these are grouped into Categories; and the Representation can be Linguistic, Numeric, or Symbolic.

Any intelligent information system operates with real world information. This information is represented by the rules of application. Logic manipulates with rules of reasoning. These rules represent the relationships between abstract terms.

Meaning of real world terms can be assigned to these abstract terms. There is much research dedicated to the problems of reasoning and the agent's structure design [22,23,25,26]. All of them are based on the representation of knowledge as rule-based, semantic net, frame structures and so on. These knowledge bases (KB) are centered on application knowledge (AK) (domain oriented KB). Application rules of reasoning are different for different areas of application. The single-base approach decreases the level of universality of the agent. Most existing systems with reasoning are not universal theorem provers (http://www-formal.stanford.edu/clt/ARS/Entries/acl2). These systems are based on rules of reasoning and don't work with application knowledge. Some of them, like ACL2, are designed as multi-KB. However, all of these systems are based just on proposition logic. The most interesting result in the area of reasoning is the Jess language (Jess, the Java Expert System Shell, http://herzberg.ca.sandia.gov/jess/demo.html). This language also is based on just one KB, the AKB. Information is presented by predicate logic. Rules of reasoning are incorporated into the source code. A possible way to increase the level of universality of the agent is by creating the double-KB agent structure. The first KB is the application knowledge base (AKB); the second one is the rule-of-reasoning KB, the reasoning knowledge base (RKB). The RKB is a universal KB. It can be used with different AKB. The Double-KB structure of a system is shown in Fig. II-5. The AKB has a multilevel structure. The process of reasoning is shown in Fig. II-7. Complex rules of application should be decomposed to simple rules via application of the rules of reasoning (And-Elimination rule RR2 in Fig. II-7). The idea of a multi-KB in search engines also was

described by Dr. Lotfi Zadeh in Capability to Search Engines - The Concept of Protoform (http://www.cs.berkeley.edu/People/Faculty/Homepages/zadeh.html), where the multi-KB idea is combined with the rules of deduction. Separation of the AKB and RKB from the program code converts a conventional system into a system with the ability to learn, and creates conditions for teaching the system through delivery of new rules of application and reasoning by an expert in an area of application and reasoning without knowledge of programming. It is an important progressive step from a conventional system to the AI system. New rules should have the same structure as existing rules. New processes can be added via new program modules. The number of areas of application determines the number of AKB. The multi-KB structure creates conditions necessary to design a system with the ability to generate rules as hypotheses in the AKB. The choice of application rules (AR) is determined by terms. The choice of rules of reasoning (RR) is determined by the structure of the application rule. New knowledge as new application rules is presented to the World Model (WM). Technically a process of reasoning can be described as the following chain of steps: data, AR choice, RR activation, execution. The foregoing process is separated by levels of knowledge. Fig. II-5 shows the double-KB system. Fig. II-6 and II-7 show the algorithm and structure of the system. Fig. II-8 shows the Forward-chain algorithm of reasoning that is based on rules of reasoning RR15-RR17. Application of the rules RR1-RR14 is not shown.

Fig. II-5. The double-KB system structure (blocks: Reasoning Knowledge base with rules of reasoning, Application Knowledge base with application rules, Data representation base, Inference engine, Goals, Translator, Variables description, Interface, Environment).


Knowledge representation in the neuron net. Endel Tulving, a psychologist at the University of Toronto, and others demonstrated the existence of at least two independent kinds of memory, sometimes called procedural and factual. People with brain lesions who have forgotten such facts as whether they ever took piano lessons can still remember how to play the instrument. Spatial object representation can be done by methods of graphic software. How do groups of neurons represent the same fact? It works like the procedure of classification in the neuron net (Fig. II-27). Each object receives a specific code and is placed into a specific cluster in accordance with this code. There is some overlap between clusters. It represents the fuzziness of categories and the way some blend into each other. A blob defines a category, and its relation to other blobs suggests the relationship between categories.

Rules of Reasoning

There is a limited set of rules of reasoning in proposition logic [22-26]:
RR1. Implication Elimination (modus ponens): from α ⇒ β and α, infer β (IF α is in the DB and α ⇒ β is true THEN β is true)
RR2. And-Elimination: from α1 ∧ α2 ∧ α3 ∧ ... ∧ αn, infer any αi; LIST(αi) = α1, α2, α3, ..., αn; from con(αi), infer LIST(αi), [i=1,n]
RR3. And-Introduction: from α1, α2, α3, ..., αn, infer α1 ∧ α2 ∧ α3 ∧ ... ∧ αn; from LIST(αi), infer con(αi), [i=1,n]
RR4. Or-Introduction: from αi, infer α1 ∨ α2 ∨ α3 ∨ ... ∨ αn; from LIST(αi), infer dis(αi), [i=1,n]
RR5. Double-Negation Elimination: from ¬¬α, infer α
RR6. Unit Resolution: from α ∨ β and ¬β, infer α
RR7. Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ
RR8. Universal Elimination: from (∀x) α(x), infer α(g) (from DB: x = g)
RR9. Existential Elimination: from (∃x) α(x), infer α(g) (from DB: x = g, where g is a new constant)
RR10. Existential Introduction: from α(g), infer (∃x) α(x) (from DB: x = g)
RR11. DeMorgan Laws
RR12. Universal Generalization: from P(g) for an arbitrary g, infer (∀x) P(x)
RR13. Existential Generalization: from P(g), infer (∃x) P(x)
RR14. Rules of Induction: from P(1)=T and (∀k){[P(k)=T] ⇒ [P(k+1)=T]}, infer (∀n) P(n)=T
RR15. Associative law
This set of rules creates the universal RKB.
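As an illustration, the sketch below implements three of these rules (RR1, RR2, and RR3) in Python over a simple representation in which a conjunction is the tuple ('and', a, b, ...) and an implication is the tuple ('implies', p, q). The representation is an assumption made only for the example.

def implication_elimination(db, rule):
    # RR1 (modus ponens): if ('implies', p, q) and p is in the DB, add q.
    _, premise, conclusion = rule
    if premise in db:
        db.add(conclusion)

def and_elimination(db, conjunction):
    # RR2: from a conjunction, every conjunct may be added to the DB.
    for conjunct in conjunction[1:]:
        db.add(conjunct)

def and_introduction(db, *sentences):
    # RR3: from alpha_1, ..., alpha_n infer their conjunction.
    if all(s in db for s in sentences):
        db.add(('and',) + sentences)

db = {'A', ('and', 'B', 'C')}
and_elimination(db, ('and', 'B', 'C'))               # adds 'B' and 'C'
implication_elimination(db, ('implies', 'A', 'X'))   # adds 'X'
and_introduction(db, 'X', 'B')                       # adds ('and', 'X', 'B')
print(db)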

Fig. II-6. Multi knowledge base system. (Mr. Uri).
1. Add to Data Base button
2. Add to Application Knowledge Base button
3. Data Base (data display area)
4. Knowledge Base display area
5. Reasoning Rules Button
6. Change Data/Delete Data Button
7. Choose Name Box (facts on)
8. Ask a Question Button
9. Pre-set question panel
10. Choose Name Box (pre-set question)
11. Choose Object Box
12. Execute Pre-set Question Button
13. Execute Pre-set Question Button
14. Results display area


Example of the process of reasoning. Suppose the DB initially includes facts A, B, C, D, and E, and the AKB contains application rules:
AR1: IF X AND B AND E are true THEN Y is true
AR2: IF Y AND D AND S are true THEN Z is true
AR3: IF A is true THEN X is true
AR4: IF P is true THEN S is true
RR1: IF S and B and W are true THEN S AND B AND W is true

Sequence of the rule application process: AR3, AR4, AR2, RR1, AR1.
Fig. II-7. An inference (forward) chain in a system based on proposition logic.


Fig. II-8. Proposition logic, forward chaining (data-driven reasoning): the system structure and algorithm. IDR - internal data representation, DB - data base (external data representation), RKB - reasoning knowledge base.
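A compact sketch of data-driven (forward) chaining in the spirit of Fig. II-7 and Fig. II-8 is given below, with the application rules kept in a separate AKB structure and the reasoning step (checking that every conjunct of a premise is already in the DB) applied uniformly to all of them. The concrete facts and rules are illustrative and only loosely follow the example above.

# A sketch of forward chaining: premises are sets of facts, and a rule fires
# when all of its premises are already in the DB.
AKB = [
    ({'A'}, 'X'),              # IF A THEN X
    ({'P'}, 'S'),              # IF P THEN S
    ({'X', 'B', 'E'}, 'Y'),    # IF X AND B AND E THEN Y
    ({'Y', 'D', 'S'}, 'Z'),    # IF Y AND D AND S THEN Z
]

def forward_chain(db, akb):
    changed = True
    while changed:
        changed = False
        for premises, conclusion in akb:
            # Reasoning step: every conjunct of the premise must be in the DB.
            if premises <= db and conclusion not in db:
                db.add(conclusion)
                changed = True
    return db

print(forward_chain({'A', 'B', 'C', 'D', 'E', 'P'}, AKB))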


Relationship Between Abstract and Specific (see also ABSTRACT THINKING AND CONCEPTUALIZATION)

Syntax in predicate logic can be presented as: PREDICATE (LIST OF TERMS - OBJECTS). PREDICATES express RELATIONSHIPS, PROPERTIES, and FUNCTIONS. Suppose the following facts are presented in predicate logic using meaningful predicates, functions, and rules.

Rules of application (abstract concept):
1) Anyone sane does not teach an AI course: ∀x sane(x) ⇒ ¬AIInstructor(x)
2) Every circus elephant is a genius: ∀x CircusElephant(x) ⇒ Genius(x)
3) Nothing is both male and a circus elephant: ∀x ¬(Male(x) ∧ CircusElephant(x))
4) Anything not male is female: ∀x ¬Male(x) ⇒ Female(x)

Data (specific data):
1) Clyde is not an AI instructor: ¬AIInstructor(Clyde)
2) Clyde is a circus elephant: CircusElephant(Clyde)

Based on the application rules determine whether the following statement is true, false, or cannot be established: Clyde is a genius. An example of the working system is presented in Fig. II-6 and the algorithm is in Fig. II-9.

ADDITIONAL RULES OF REASONING IN PREDICATE LOGIC
Rules of reasoning include all rules of reasoning based on proposition logic and an additional set of rules that are specific to predicate logic, such as:
RR16. Find all atomic sentences that relate to the first term in the DB
RR17. Find all atomic sentences with a conclusion that relates to the predicate of the result of the RR1 action
RR18. Check each of them against the solution question.

Algorithm of Reasoning (predicate logic)
Proof that Clyde is a genius:
1. RR16: find in the DB all atomic sentences that relate to Clyde: ¬AIInstructor(Clyde), CircusElephant(Clyde).
2. RR17: find in the AKB all rules whose premises relate to these predicates: ∀x CircusElephant(x) ⇒ Genius(x) and ∀x ¬(Male(x) ∧ CircusElephant(x)).
3. Apply Universal Elimination and Implication Elimination: Genius(Clyde), ¬Male(Clyde).
4. RR18: check the result against the question. RESULT: Genius(Clyde) is true.

Fig. II-9 shows the Forward-chain algorithm of reasoning based on rules RR16-RR18. Application of the rules RR1-RR14 is not shown.
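The sketch below emulates the Clyde example in Python: facts are (predicate, object) pairs, and each universally quantified rule is applied to every known object, which corresponds to Universal Elimination followed by Implication Elimination. The encoding of negated predicates as strings such as 'not Male' is an assumption made only for the example.

facts = {('CircusElephant', 'Clyde'), ('not AIInstructor', 'Clyde')}

def rule_genius(facts):
    # Every circus elephant is a genius.
    return {('Genius', x) for (p, x) in facts if p == 'CircusElephant'}

def rule_not_male(facts):
    # Nothing is both male and a circus elephant.
    return {('not Male', x) for (p, x) in facts if p == 'CircusElephant'}

def rule_female(facts):
    # Anything not male is female.
    return {('Female', x) for (p, x) in facts if p == 'not Male'}

rules = [rule_genius, rule_not_male, rule_female]

changed = True
while changed:
    new = set().union(*(r(facts) for r in rules)) - facts
    changed = bool(new)
    facts |= new

print(('Genius', 'Clyde') in facts)   # -> True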

Wumpus World

The Wumpus World as an example of the process of reasoning [27].

Fig. II-10. Wumpus World: a 4x4 grid of rooms containing the WUMPUS, a PIT, and the GOLD; STENCH marks the squares adjacent to the WUMPUS, BREEZE marks the squares adjacent to the PIT, and the Agent A starts in square 1,1.

Wumpus is a beast that eats anyone who enters its room. He is somewhere in the cave. Possible Agent's actions are: to go forward, backward, turn right 90 degrees and turn left 90 degrees. Fig. II-10 presents the famous problem: the AGENT is searching for GOLD. WUMPUS and PIT are dangerous areas; these areas are forbidden for an AGENT. Areas adjacent to the WUMPUS (STENCH) and the PIT (BREEZE) generate signals of danger. The agent's goal is to find GOLD and bring it back to the start. It dies if it enters a square containing a pit or a live WUMPUS. It is safe (but smelly) to enter a square with a dead wumpus. GOLD is surrounded by STENCH and BREEZE. An AGENT who does not have the ability of reasoning will not be able to find GOLD in the environment presented in Fig. II-10, where WUMPUS and PIT are around the GOLD. Only a sophisticated ability of reasoning combined with environment memorization (the World Model) permits an AGENT to solve this problem. Fig. II-11 shows a system with the ability to design the World Model and apply rules of application. The following capabilities are included to satisfy these specific system requirements:
1. to recognize objects, situations
2. to infer from the recognized elements of the scene
3. to search for a required object within a scene
4. to remember scenes

5. to interpret situations

6. to evaluate objects and situations.

Rules for WUMPUS and PIT: the DB contains the percepts of the visited squares (S1,1, S2,1, S1,2, B1,1, B2,1, B1,2 with their truth values), and the universal AKB contains one set of rules R1-R4 connecting STENCH (S) to WUMPUS (W) in adjacent squares, and one set connecting BREEZE (B) to PIT (P).

Fig. II-11. Agent in the Wumpus World. (Mr. Benny Wong)

The two sets of equations can be replaced by one universal set:
R1: ¬Yi,j ⇒ ¬Xi,j ∧ ¬Xi,j+1 ∧ ¬Xi+1,j
R2: ¬Yi+1,j ⇒ ¬Xi,j ∧ ¬Xi+1,j ∧ ¬Xi+1,j+1 ∧ ¬Xi+2,j
R3: ¬Yi,j+1 ⇒ ¬Xi,j ∧ ¬Xi,j+1 ∧ ¬Xi+1,j+1 ∧ ¬Xi,j+2
R4: Yi,j+1 ⇒ Xi,j+2 ∨ Xi,j+1 ∨ Xi+1,j+1 ∨ Xi,j
where X stands for W or P and Y stands for S or B.
An Agent in the Wumpus World applies the famous rules of reasoning: Modus Ponens, AND elimination, and Unit resolution [27,30].
Algorithm: See Fig. II-4, II-5, II-6, II-7.
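The sketch below applies rule R1 in the Wumpus World: if a visited square has neither STENCH nor BREEZE, then neither WUMPUS nor PIT can be in that square or in any adjacent square, so those squares are provably safe. The percepts used (nothing sensed in square (1,1)) are an assumption taken from the opening position.

# A sketch of rule R1: absence of stench and breeze in a visited square marks
# that square and all of its neighbors as safe.
def neighbors(x, y, size=4):
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in cand if 1 <= i <= size and 1 <= j <= size]

def safe_squares(percepts):
    # percepts: {(x, y): {'stench': bool, 'breeze': bool}} for visited squares
    safe = set()
    for (x, y), p in percepts.items():
        if not p['stench'] and not p['breeze']:
            safe.add((x, y))
            safe.update(neighbors(x, y))   # R1: no WUMPUS or PIT in adjacent squares
    return safe

print(safe_squares({(1, 1): {'stench': False, 'breeze': False}}))
# safe squares: (1,1), (2,1), (1,2)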

MEASUREMENT OF KNOWLEDGE VALUE AND POWER OF REASONING OF ARTIFICIAL SYSTEMS

The level of knowledge is determined by the numbers of associations, analogues, and rules of application in the application knowledge base [30]. The stronger the level of knowledge, the greater the power of creativity (under the same power of reasoning). It is a simple problem to define the number of application rules in the application knowledge base. The numbers of associations and analogues can be defined through more difficult, randomly generated procedures; this process gives just an approximate result. The level of reasoning is determined by the number of rules of logic (rules of reasoning, DeMorgan rules, and so on) in the reasoning knowledge base [30]. Standardized weight-values can be assigned to each rule. The full set of rules of reasoning has a limited size. It gives an opportunity to generate the reasoning knowledge base at full power. The ability of the system to manipulate these rules is determined by a standard test. Data (information about objects, events, and so on) does not determine intellectual power.

ASSOCIATIVE THINKING

The goal of associative thinking is to present the set of possible solutions to solve a particular problem. Choosing criteria and making correct choices are the goal of the process of reasoning.

Associative thinking is reasoning that is based on connections between words, events, sounds, etc. (see also INTUITION). It is the search for information in the control system's memory that can enrich input information and help to solve the problem. Aristotle: two sensations repeatedly experienced together would become associated. The level of connection between words can be determined by the frequency of repetition of their combinations in different texts. For example, the word "and" can be used to find connections between different events and to present them as a law of nature or a law of the artificial system environment (see CREATIVITY). The diameter of the association's ball (circle) determines the strength of associative thinking. The strength of association can be defined also by the number of steps between terms and the relative frequency of repetition. Strong emotions can increase the associative connection between different events through adjustment of the weights of connections. The more information in the agent's memory (data base) and the higher the level of structurization of this information, the greater the diversity and the strength of associations. It is possible to present information as a set of common sense knowledge collected from different sources, similar to the MIT project (Super Intelligence design). Chosen criteria (action, shape, color, etc.) determine correct associations. The procedure is based on the abilities of recognition and reasoning. In the simple case it develops a tree-like structure. But in reality it creates a net as an undirected graph because some nodes [n(i)] have associations with nodes from different branches. This structure (S) can be presented as the net S = n(i)*{1, 2, 3, ..., i}.

Several nodes have more than one inclusion. It shows the associative power of these nodes. Some terminal nodes can be loose, not included in a circle. The size of the association's ball can be limited by specific criteria, for example, the strength of associations. The bigger the size, the higher the level of intelligence and creativity. GenoPharm software (Berkeley Lab) can find hidden knowledge in thousands of Internet publications that were overlooked by scientists. This software is based on associations between the terms. It infers new knowledge by connecting closely related terms in one meaningful string. Associative memory is a descriptive term on a different level. It refers to a kind of network, one structured to perform the specific task of associating one input with another and then retrieving both inputs when just one is presented to the machine [38]. One way would be to tag every item with the main term: every memory it held for this term would go in one section, and then develop connections to more (see LEARNING; Curiosity, Learning by Interactions).

The existence of the memory makes reasonable both the materialist point of view and the cognitivist point of view as well [14]. In the reconstruction of new knowledge, when any past event or experience is recalled, the act of recollection tends to bring again into use other events and experiences that have become related to this event in one or more specific ways. This is called an association. Associative systems can restore complete situations from partial information. These systems correlate input data with information stored in memory. Information can be recalled even from incomplete input. Associative memory can detect similarities between new input and the stored pattern [11, 12, and 13].

Researchers don't yet know exactly how the brain completes its associative tasks; it clearly can't work this way because the brain's neurons are so much slower than the computer's individual processors [38], but it is possible for an artificial system.
Algorithm:
1. Input: the problem presented in the set of the defined TERMS
2. Find the set of the associated TERMS
3. Define the strength of associations (the diameter of the association's ball (circle))
4. Rearrange the TERMS in accordance with the strength of their association
5. Present the final set for reasoning (see REASONING).
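A small Python sketch of this algorithm is shown below. The strength of association between two terms is estimated by how often they occur in the same sentence of a small corpus; the corpus, the stop words, and the strength measure are assumptions made only for the example.

from itertools import combinations
from collections import Counter

CORPUS = [
    "the earthquake damaged the bridge",
    "the earthquake shook the city and the bridge",
    "the storm damaged the city",
]

def association_strength(corpus):
    strength = Counter()
    for sentence in corpus:
        words = set(sentence.split()) - {"the", "and"}
        for a, b in combinations(sorted(words), 2):
            strength[(a, b)] += 1            # step 3: define strength of association
    return strength

def associate(term, corpus=CORPUS):
    strength = association_strength(corpus)
    related = [(pair, s) for pair, s in strength.items() if term in pair]
    # step 4: rearrange the terms in accordance with the strength of association
    return sorted(related, key=lambda item: -item[1])

print(associate("earthquake"))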

ABSTRACT THINKING AND CONCEPTUALIZATION

In philosophical terminology, abstraction is the thought process wherein ideas are distanced from objects. Abstract thinking is manipulation (reasoning) of abstract terms and their relationships. Abstraction uses a strategy of simplification (decomposition), wherein formerly concrete details are left ambiguous, vague, or undefined; thus effective communication about things in the abstract requires a common experience between the communicator and the communication recipient. Abstract thinking manipulates linguistic (abstract words like truth and justice), math, and graphic symbols under the rules of semiotics. It is the area of the highest level of the knowledge base information (see Knowledge Base). The tools of the logical chain that connects the start and the end of the thinking process are: structuring, reasoning, tautology. The procedure of understanding of abstract and specific information is based on previously presented definitions, patterns, and symbols. Decomposition of unknown complex terms or symbols can be used for recognition of unknown abstract information. Unknown symbols and terms can be marked and submitted for definition searching. Unknown specific terms can

be learned and understood through different learning methods (see LEARNING) and

associative thinking methods (see ASSOCIATIVE THINKING). Predicate logic is the tool to develop relationships between the abstract and the specific (see REASONING).

Conceptualization is a kind of abstract thinking (see UNDERSTANDING AND INTERPRETATION). A concept is a general idea derived or inferred from specific instances or occurrences [36].

Conception is a notion or mental image: idea, concept, impression, perception, picture, thought, insight, interpretation, mental picture [36].

Conceptualization itself consists of two levels:
1. identification of important characteristics
2. identification of how the characteristics are logically linked
(see also GENERALIZATION, LEARNING, Learning Concept and Conceptual Learning).

Classification (see CLASSIFICATION) is a possible tool for conceptualization: identify important characteristics and how the characteristics are logically linked (associative thinking is the tool for identification). Building a neural network that can learn abstract concepts like maleness and femaleness, without ever being told anything about people or sexual characteristics, is just a way to learn how networks categorize (Beatrice Golomb, University of San Diego) [38]. For example: The abstract term GREATNESS is determined by a weighted sum of terms: A,

B, C, etc. The specific meaning of these terms can convert this abstract term into a specific, meaningful one. This abstract term can be converted into a specific one: New York City's greatness can be defined by walking through downtown Manhattan at lunch time, observing the masses of diverse people, seeing the huge beautiful buildings and bridges, and reflecting on the historic domestic and international events depicted on markers on a Broadway sidewalk. All of these create a specific feeling (see also EMOTIONS) of AMERICA'S GREATNESS because New York City is associated with AMERICA. Another example:
1. Statement: Liberty of the person from slavery, detention, or oppression.
2. Experience confirms the fact.
3. It means: Liberty of the person from slavery, detention, or oppression is TRUTH.
Algorithm: (see REASONING, GENERALIZATION AND CLASSIFICATION). Tools of abstract thinking are GENERALIZATION, DECOMPOSITION, CLASSIFICATION, Learning Concept and Conceptual Learning.
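The GREATNESS example above can be sketched as a weighted sum in Python. The component terms, their weights, and the scores assigned to the specific instance are assumptions chosen only for illustration.

# Evaluating an abstract term as a weighted sum of component terms.
WEIGHTS = {"architecture": 0.3, "diversity": 0.4, "history": 0.3}

def abstract_term_value(scores, weights=WEIGHTS):
    # Each component score is in [0, 1]; the abstract term is their weighted sum.
    return sum(weights[t] * scores.get(t, 0.0) for t in weights)

new_york = {"architecture": 0.9, "diversity": 1.0, "history": 0.8}
print(abstract_term_value(new_york))   # -> 0.91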

GENERALIZATION AND CLASSIFICATION

Generalization is a foundational element of logic and reasoning. It is the essential basis of all valid deductive inference. For any two related concepts, A and B, A is considered a generalization of concept B if and only if: every instance of concept B is also an instance of concept A; and there are instances of concept A which are not instances of concept B. For instance, animal is a generalization of bird because every bird is an animal, and there are animals which are not birds (dogs, for instance). Generalization is the act or an instance of generalizing, to draw inferences or a general conclusion from specific acts, events, objects, etc. [36]. The process of generalization

is based on classification (purification) of the features of objects or events, creating identical groups that can be presented under a common description. Fig. II-12 presents the purification procedure with numeric evaluation of the result. In some cases the ability of hypothesis generation determines the capability of generalization. Hypothesis generation is an example of the algorithm of generalization (see HYPOTHESIS GENERATION). Classification: to arrange or organize objects or events according to class or category. The ability of classification is determined by the level of purity of the final group. The combination of classification with a decision tree permits learning new knowledge from positive and negative experiences. The learning procedure can be presented as an algorithm. Generalization is very difficult work for a machine. Some contextual rules can tell the machine what parameters to concentrate on in a specific instance in order to reach a relevant decision about sameness or difference. In his article, Shepard described the metric of similarity as the units that measure psychological space between two objects. For example: for a species whose survival depends on discriminating dogs from bears, the metric of similarity would put a relatively great distance between the two. For one that only needs to comprehend dogs and bears as big animals, the psychological distance would be smaller.
Algorithm (Classification):
1. If there are some positive and some negative examples, then choose the best attribute to split them (the attribute that gives maximal gain). Test all possible splits on all possible independent variables using a "classifier" -- a software tool -- to distinguish between splits.
2. Compute all the resulting gains in purity

3. Pick the split that maximizes the gain

4. If all the remaining examples are positive (or all negative), then we are done. Otherwise repeat 1-4.

This procedure can be used for measurement of the ability of classification. It is possible to develop scenarios with different levels of difficulty.

Fig. II-12. Classification: removing impurity from the data. Splitting the group of all residents (total impurity 0.99) into NJ and Non-NJ groups reduces the total impurity first to 0.7 and then to 0.5: Gain = 0.99 - 0.7 = 0.29, Gain = 0.7 - 0.5 = 0.2. To learn more about Classification see SOCIAL BEHAVIOR.
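The classification step illustrated in Fig. II-12 can be sketched in Python as follows. Entropy is used here as the impurity measure; that choice (rather than, for example, Gini impurity) and the example labels are assumptions, so the numbers differ slightly from those in the figure.

from math import log2
from collections import Counter

def impurity(labels):
    # Entropy of the label distribution of one group of examples.
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def gain(parent_labels, split_groups):
    # Gain = impurity of the parent group minus the weighted impurity of the split.
    total = len(parent_labels)
    weighted = sum(len(g) / total * impurity(g) for g in split_groups)
    return impurity(parent_labels) - weighted

# Positive/negative examples split by one attribute into two groups:
parent = ['+'] * 11 + ['-'] * 9
split = [['+'] * 9 + ['-'] * 2, ['+'] * 2 + ['-'] * 7]
print(round(gain(parent, split), 2))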

INTUITION

Does an artificial system have intuition? Is it possible to create an artificial system with intuition? What is intuition? Different people give different answers. The working linguistic (computation with words) model is designed to answer this question. In Spinoza's philosophy, intuition is the highest form of knowledge, surpassing both empirical knowledge derived from the senses and reasoning on the basis of experience. Intuitive knowledge gives an individual the

comprehension of an orderly and united universe and permits the mind to be a part of the infinite being. Immanuel Kant regarded intuition as the portion of a perception that is supplied by the mind itself. He divided perceptions into what is given by the external object perceived and what is supplied by the mind, which results from intuition. An understanding of space and time are types of pure intuition.

Henri Bergson (French philosopher born in 1859) contrasted instinct with intelligence and regarded intuition as the purest form of instinct. Intelligence, he believed, was adequate for the consideration of material things but could not deal with the fundamental nature of life or thought; it grasps what is relative to its object, rather than what is absolute or individual. He defined intuition as instinct capable of reflecting upon its object and of enlarging it indefinitely. Only by intuition, Bergson declared, can the absolute be comprehended. Some ethical philosophers, among them Spinoza, have been called intuitionists or intuitionalists because of their belief that a sense of moral values is intuitive and immediate. This view contrasts with that of the empiricists, who hold that moral values result from human experience, and that of the rationalists, who believe that moral values are determined by reason. John Locke (philosopher, 1632-1704), like Spinoza, presented intuition as knowledge. Intuitive knowledge, he wrote, is immediate, leaves no doubt, and is the most certain kind of knowledge.

Hubert Dreyfus, a professor of philosophy at the University of California at Berkeley, thinks that intuition is knowing what to do, without being able to give any rationalization or justification of reasons, to yourself or to anybody else, as to why you did it. Some psychologists and philosophers, like Dr. Marcia Emery, an adjunct professor in the Masters in Management program at Aquinas College in Grand Rapids, Michigan, present intuition as a spark, a lightning flash, and don't accept any possibility of understanding it. But lack of knowledge is not an excuse but a reason to learn more. Plato held that intuition is a superior faculty. Russell designated as intuitive any unreflective instance of knowledge by acquaintance. Daniel N. Robinson, philosopher (Oxford University) [34]: Intuition is knowing, or an impression that something might be the case, without the use of rational processes. The Merriam-Webster Dictionary presents intuition as direct knowledge

skills, perceive connections, communicate nontraditionally, and tap into personal and collective wisdom We often use the words which cannot be decomposed into any constituent elements and, in parti cular, cannot be further explained or justified. This is the result of activating some nonverbal model of the world implemented in our brain. Sometimes we are very certain about what our intuition tell us; on other occasions we find ourselves in the state of doubt (Principia Cyber netic Web). There are a lot of people who can consciousness research, said in 1995: recognize patterns, think creatively, use intuition . You simply can t teach a comp uter to translate from one language to another by putting a dictionary in its memory. Yo u come out

with all kinds of strange things. There was a famous case in which the quotation The spirit is willing, but the flesh is weak was translated from English into Russian and back into English, and the English translation was spoiled. This was in the mid 1990s. But in 1999 my computer already gave me the phrase (http://www.translate.ru/Rus/): an improvement. Many experts in artificial intelligence present the opposite point of view. Dr. Marvin Minsky: We are the greatest machine in the world. Simon thinks we can explain consciousness the way science explains other things.

Dr. John McCarthy (artificial intelligence laboratory at MIT and Stanford University) and some philosophers and psychologists present the same point of view. The philosopher U. G. Krishnamurti said that [the brain] is actually a computer. Researchers at the Cognitive Neuroscience Center at MIT and other cognitive scientists, who started to try to figure out what the mind's software was, describe the brain as a computer that performs with split-second timing the complex calculations required of it by the unconscious-inference theory: an information-processing organism. If we accept that we are computers then we have to accept that a computer is able to do what we can. A computer is able to learn language like a child learns it, but certainly not by memorization from a dictionary.

The Encyclopedia Britannica describes intuition as knowledge that cannot be acquired either by inference or observation, by reason or experience. As such, intuition is thought of as an original, independent source of knowledge, since it is designed to account for just those kinds of knowledge that other sources do not provide. The negative form of the definition is not very productive for artificial intelligent system design. It leads us nowhere. It is not the question: does a machine have intuition or not? The problem is to define the word intuition to convert it into a workable process. From the practical point of view we need the positive, constructive approach even if at the beginning we design a system just with the realization of a simple process of intuition imitation. Intuition can produce a passive or active result. Intuitive feeling (dangerous, love) is a passive result of intuition activities. The problem solution is the active result. In terms of predicate logic the problem can be presented as: danger(x). A problem solution is an active result of intuition activities: do(x). Intuition is an immediate form of knowledge in which the knower is directly acquainted with the object of knowledge. Intuition differs from all forms of mediated knowledge, which generally involve conceptualization of the object of knowledge by means of a rational/analytical thought process. It is impossible to extract knowledge from nothing. If you never heard about the stock market or brain surgery, you can never make intuitive decisions in these areas. Artificial Intuition is a non-intentional extraction of knowledge from the data and information contained in the memory and the involuntary transferring of this knowledge into problem solving activity or perception. It is a kind of Associative Thinking. It is a subconscious, involuntary, unintended process

that is reasonable and non-contradictory even when applied to the meaning of the word intuition. This definition is constructive

for artificial intelligent system design and understanding of these systems' psychology.

Intuition is not just the search for a similar solution to a problem but sometimes requires dealing with a more complex procedure. In contrast, research and decision searching are motivated, intentionally organized processes of searching for a solution to a problem. Many philosophers mention the importance of intentionality. For Edmund Husserl (German philosopher) intentionality is one essential feature of any consciousness.

Spontaneous brain (natural or artificial) activities can be triggered by a non-verbal, fuzzily defined problem that dominates in the memory at this particular time. In this case accidental knowledge activates the algorithm of searching for patterns, history, relationships, etc. to find a solution for the problem. The more data and information is stored in the memory, the better the result of the intuitive process. The stronger the connections between information blocks, the stronger the system's creativity. The higher the information diversity, the more efficient intuitive solutions may be. There are two kinds of information: genetic and non-genetic. In artificial systems genetic information is stored in the hardware and partly in the software and contributes to the artificial intuition. Spontaneous brain (natural or artificial) activities can also be triggered by spontaneous interest of the system in the problem. For example, if the problem presents as a dangerous environment for the system's existence, this may result in spontaneous problem formulation. A spontaneous faulty problem-formulation may result from the availability of a powerful sensor system. This system collects information about simple, separately non-dangerous events, puts it together independently from a system viewpoint, looking for patterns, and creates a sense of danger. The process and information are presented in fuzzy description. All knowledge about objects and processes has to be presented as models designed

from the different points of view (structural models, math models, logical models, chemical models, electrical and information models, etc.) (see Knowledge Representation). For example, a human body can be presented in different ways: as a structured model, a chemical model, an information model, a mechanical model, etc. Such methods of knowledge presentation make it possible to easily identify common features in different areas. The structured organization of knowledge in the memory is a very important condition for the effective performance of artificial intuition. In the artificial system we don't have to deal with the problem of how the natural brain attaches meaning to symbolic representation [14]. The existence of the memory makes reasonable both the materialist point of view and the cognitivist point of view as well [14]. In the reconstruction of new knowledge, when any past event or experience is recalled, the act of recollection tends to bring again into use other events and experiences that have become related to this event in one or more specific ways. This is called an association. These systems correlate input data with information stored in memory. Information can be recalled even from incomplete input. Associative memory can detect similarities between new input and the stored pattern [11, 12, and 13]. Therefore, intuition and association should work together. Realization of the associative memory can be done as a Hopfield Neuron Network [13] (see ASSOCIATIVE THINKING and APPENDIX 5). Let us look at the following simple scenario. At nighttime you left a party with your friends and were going home. You were thinking about the good time you had. Suddenly you step into a dark street on your way home (the level of darkness may be different - a fuzzy description). Nothing is wrong although your body becomes alerted even if you try

to calm yourself through reasoning. Intuition vs. reasoning! It is unintentional vs. intentional reasoning. Intuition can win because reasoning is based on the same knowledge! Reasoning can just add some new information and knowledge. As a result, correction of the sense and behavior can be obtained. When we meet a stranger, we receive a complex of information about his/her appearance, body language, way of talking, etc. Our brain compares this information with the fuzzy or statistical models built from

our previous experience. This situation was emulated via a computer model (Fig. II-13). The system was able to generate an intuitive impression at the moment of meeting with a stranger. In the simplest case, the calculation of intuition can be presented as a percentage of positive and negative experiences: for example, positive experiences when meeting another agent (a good man) in three occurrences and just a single negative experience in one case (a bad man). It creates a ratio of a bad intuitive feeling equal to 25% and a ratio of a good intuitive feeling equal to 75%. Fig. II-13 illustrates the system that generates intuition in accordance with this scenario. From experience an agent has knowledge that the most dangerous stranger belongs to the age group of 25-35 year-olds. Younger or older strangers may also belong to this group with some level of membership, determined over the range from 15 years to 45 years old. If the stranger is 40 years old then M = 0.5 and the index of his dangerousness is equal to 0.5. In our example the negative intuitive feeling will be equal to

12.5%. It is a simplification of the problem but it demonstrates the procedure. Unintentional brain activity can include testing procedures. One day I sent an e-mail but forgot to attach the file that I had promised to send my friend. I was sure that I did not make this mistake. In the middle of the night I suddenly woke up and realized that I did not attach the file. My brain was testing my activities stored in the short-term memory against the goal procedure and sent me an error message. It is similar to the automatic virus testing software that is used when we reboot the computer without special activation. Certainly, it is just an analogy. This ability to control human activities is a very useful component of Artificial Intelligence. The night sleeping time works as transferring of short-term memory content into the long-term memory. The approach described above can be illustrated by another example. Suppose we have an AI system which has extensive working experience in different areas of knowledge and also powerful learning abilities from an experienced external teacher. Knowledge is represented as models: linguistic, math, logical, structured, etc. All these models create the hierarchical structure in the knowledge base (see Knowledge Representation). The more abstract the description, the higher the location level. The linguistic description belongs to the higher level. The physical description belongs to the lower level. Suppose we have a control system and would like to reduce the acceleration of the moving parts. The AI system has information (through the sensors) about the problem and starts looking for a solution to this problem without system intention interference. Each level represents a new level of goals. Each new goal motivates a next step in the search for the solution.

As we know, intuition can be activated in the sleeping stage when the brain is working without participation of the human will. One night I had a dream. I was not the main participant but an observer. A middle-level manager (Mr. A) of a big company decided to make a joke with the company employee Mr. B. Mr. B had a bad sense of humor and was an unpleasant character. Mr. A sends a message to him that some middle-level managers would like to see Mr. B at their meeting some day. He did not send this message directly to Mr. B. He asked Ms. C (a company employee) to tell this to Mr. B as gossip. Mr. B tried to contact Mr. A but Mr. A was avoiding any contact with him. One day Mr. B contacted Mr. A and asked him what was going on, and what kind of meeting it was supposed to be. Mr. A had two possible ways to respond. First, to apologize for the bad joke and in doing so to create an enemy. Second, to say that the meeting was canceled and that this was the reason he did not contact Mr. B. What is interesting to me as a viewer of this scenario? First, my brain generated the problem. Maybe this problem was stored in my memory as a result of previous activity because this kind of joke may be common. Second, the fact that my brain constructed two possible alternative solutions to the problem is intriguing. It is also possible that my memory already had this information. Third, my brain correctly chose the second alternative. As an observer, I had the chance to view the reasoning process of the two possible choices. I

cannot recall these particular stages (problem generation, generation of solution alternatives, and decision-making) ever occurring together before in my life. My brain was generating this chain of events and reasoning without my intention. The whole process was transparent and structured. I was able to observe it. In accordance with the definition of intuition, this scenario is a product of my intuition. If this is correct, then we can emulate the entire process from problem generation

to problem solving. This creativity can be triggered unintentionally or intentionally by information that was stored during daytime activities. In an interview with The New York Times (Nov. 14, 2000), Dr. Terrence J. Sejnowski (a neuroscientist at the Salk Institute in San Diego) spoke of a "connection between sleep and creativity, which may be a byproduct of the way that nature chose to consolidate memories."
These scenarios and this approach present a point of view that can be used as a foundation for the architectural design of an artificial intelligent system with intuitive ability. The development of such a system is the first step in the process of intuition design. In the beginning we can create just an artificial imitation of natural intuition. Any system architecture that incorporates an intuition capability creates a more powerful decision-making system and is therefore more self-defensive against destruction.
Algorithm (see also ASSOCIATIVE THINKING):
1. Input: new DATA (THE PROCEDURE or THE PROBLEM DESCRIPTION)
2. Find: associated DATA or INFORMATION or THE PROCEDURE or THE PROBLEM SOLUTION
3. Find: the EVALUATION FUNCTION (generated by experience) or the PROCEDURE (SOLUTION OF THE PROBLEM) associated with this DATA
4. Evaluate the DATA
5. Rearrange the RESULTS in accordance with the strength of association
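The algorithm above can be sketched as a simple associative memory that collects items linked to the features of the new data and ranks them by total strength of association. The data structures, scoring function, and sample entries below are illustrative assumptions, not the book's implementation:

```python
# Association-and-evaluation sketch for the five-step algorithm above.

from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        # maps a feature of the input to (stored item, association strength)
        self.links = defaultdict(list)

    def store(self, features, item, strength=1.0):
        for f in features:
            self.links[f].append((item, strength))

    def recall(self, features):
        """Step 2: collect items associated with any feature of the new data."""
        scores = defaultdict(float)
        for f in features:
            for item, strength in self.links[f]:
                scores[item] += strength
        # Steps 4-5: evaluate and rearrange results by strength of association
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    memory = AssociativeMemory()
    memory.store({"stranger", "age_25_35", "night"}, "negative experience", 1.0)
    memory.store({"stranger", "daytime"}, "positive experience", 1.0)
    memory.store({"stranger", "daytime"}, "positive experience", 0.5)

    # Step 1: new data described by its features; the ranked recall follows
    for item, score in memory.recall({"stranger", "night"}):
        print(item, score)
```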

This procedure can be used to measure a system's intuitive ability. It is possible to develop scenarios with different levels of difficulty.

Fig II-13. Emulation of INTUITION.


HYPOTHESIS GENERATION

A hypothesis is something taken to be true for the purpose of argument or investigation; an assumption [36]. It is a tentative explanation that accounts for a set of facts and can be tested by further investigation; a theory. In large part this is accomplished using the scientific method: a hypothesis and the gathering of data to check the hypothesis against. If the data support the hypothesis, consider it provisionally correct; if they contradict it, it must be revised. Generalization and conceptualization are the main tools of hypothesis generation (see Fig. I-7). Educated guesses (to assume, presume, or assert a fact based on knowledge; a type of associative thinking, classification, and reasoning) are in many cases the tools used to search for a hypothesis as well. A computerized educated guess is based on available knowledge. For example, a math problem can in most cases be solved by mathematical methods. A linguistic problem can be solved by linguistic methods. Social problems can be solved by applying knowledge of the social sciences. In some cases analogies can help to solve the problem. Fig. II-14 illustrates a system that can generate a hypothesis (an equation) for how to

calculate any member of a numeric sequence if the first several members of the sequence are known. The basic knowledge of the system is the four arithmetic operations. This is a trial-and-error procedure.
Example. Sequence: 2 4 7 11 16.
Hypotheses tried for each member: 4 = 2+2 = 2+1+1 = 2*2; 7 = 4+3 = 4+2+1 = 2*3+1; 11 = 7+4 = 7+3+1 = 2*4+3; 16 = 11+5 = 11+4+1 = 2*5+6; generalized forms: N(i-1) + i and N(i-1) + (i-1) + 1.

Correct hypothesis: N(i-1) + i.
The ability to generate hypotheses is determined by:
- the value of the available knowledge
- the limited number of unsuccessful tries
- the complexity of the hypothesis
- the time duration of hypothesis generation
For more see also LEARNING and CURIOSITY.
Algorithm:
1. Choose a limited number of events or objects of the same class or same nature
2. Choose the method of description: linguistic, mathematical, symbolic
3. Classify by different criteria
4. Find a common formula or symbolic representation (generalization)
5. Check the result on new events or objects of the same class
6. If the result is positive, then HYPOTHESIS
7. If the result is negative, then make a correction
8. Repeat steps 3, 4 and 5 until the result is positive (a worked sketch is given below)
Fig. II-14. Heuristic generator. The first five expressions are basic knowledge of math; the rest of the expressions are generated as heuristics for each set of numbers.
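A small illustration of the trial-and-error procedure for the numeric-sequence example. The pool of candidate rules, built from the four arithmetic operations, and the function names are my own assumptions:

```python
# Trial-and-error hypothesis generator for numeric sequences: each candidate
# rule is tried against the known members; the first rule that predicts every
# member is returned as the hypothesis.

def candidate_rules():
    """Candidate hypotheses built from the four arithmetic operations."""
    return [
        ("N(i-1) + 2",          lambda prev, i: prev + 2),
        ("N(i-1) * 2",          lambda prev, i: prev * 2),
        ("N(i-1) + i",          lambda prev, i: prev + i),
        ("N(i-1) + (i-1) + 1",  lambda prev, i: prev + (i - 1) + 1),
    ]

def generate_hypothesis(sequence):
    for name, rule in candidate_rules():
        ok = all(rule(sequence[i - 1], i + 1) == sequence[i]
                 for i in range(1, len(sequence)))
        if ok:
            return name, rule
    return None, None

if __name__ == "__main__":
    seq = [2, 4, 7, 11, 16]
    name, rule = generate_hypothesis(seq)
    print("Correct hypothesis:", name)                   # N(i-1) + i
    print("Next member:", rule(seq[-1], len(seq) + 1))   # 16 + 6 = 22
```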


This procedure can be used to measure a system's hypothesis-generation ability. It is possible to develop scenarios with different levels of difficulty in different areas of science such as mathematics, linguistics, the natural sciences, engineering, etc.

LEARNING

Learning Concepts

As was shown before, learning is an essential ability of an intelligent system. Learning is the act, process, or experience of gaining knowledge or skill through schooling or study. When you learn a fact, you learn to think about something in a different way. When you learn a concept, you learn how to treat different things as instances of the same category. When you learn a skill, you acquire a program that enables you to do something that you could not do before. In general, the term learning means the construction of new knowledge or programs from elements of experience (existing knowledge). It is impossible to learn anything unless you already have some knowledge, because knowledge cannot be constructed out of thin air. Development of the Internet dramatically changed the ability of an Artificial Intelligent System to learn. Direct communication with the Internet presents the knowledge accumulated by society over many generations. The Internet should be redesigned to present knowledge in a ready-to-use form.

Conceptual Learning

Conceptual learning is the development of new concepts that can be applied to the solution of some problems. These activities can be triggered by curiosity (see Curiosity) or by any type of motivation.

There is nothing new under the Moon: all new knowledge is constructed from new combinations of existing knowledge. It means that the AIS can generate new knowledge using its computational power, new information, and the knowledge in the knowledge base. A learning system as a whole has a certain degree of computational power. Suppose it has the power of a finite-state machine; then nothing that happens subsequently by way of learning will increase its computational power; for that to occur, it needs to be equipped with a better memory. Such a system may not be able to carry out additions, whereas at a later stage it may have mastered this ability. To demonstrate the point, we assume that at the earlier stage the system is able to observe its own actions. The agent's personal experience shows:
If you put nothing in your basket, then you will get nothing.
If you put one object in your basket, then you will get one object.
If you put one more object, you will have one and one objects.
This activity can be presented by different symbols:
nothing and nothing is nothing
nothing and one stick is one stick
one stick and one stick are two sticks
one stick and two sticks are three sticks
The operation "and" can be written with the symbol & or +:
0 + 0 = 0    or    0 & 0 = 0
0 + I = I    or    0 & I = I
I + I = II   or    I & I = II
II + I = III or    II & I = III

n + I = (n + I). Now it is possible to infer the rules of arithmetic in any notation: Arabic, Roman, unary code, binary code, Morse code, etc. It is possible to demonstrate this ability in a different way. We assume that at the earlier stage the system is able to compute only a single mathematical operation, one that delivers the

successor or predecessor of any integer:
successor(0) = I or 1
successor(I) = II or 2
successor(II) = III or 3

successor(n) = n + 1
...
Similarly for the predecessor:
predecessor(I) = 0
predecessor(II) = I or 1
predecessor(III) = II or 2
...
predecessor(n+1) = n

Now, based on these agreements, it is possible to infer the arithmetic operation of addition:
0 as predecessor(I) & 1 as predecessor(II) can be replaced by 2 as predecessor(III)
0 as predecessor(I) & 1 as predecessor(II) & 2 as predecessor(III) can be replaced by 3 as predecessor(IIII)
The operation & therefore gives:
I + II = III   or   1 + 2 = 3
If predecessor(0) = 0, then
0 + 0 = 0
0 + 1 = 1
1 + 2 = 3
...
n + (n+1) = (n+2)

This concept can be advanced to the level of multiplication. If an Agent collects one object twice, the Agent will get two; if an Agent collects two objects twice, the Agent will get four; and so on. The operation * can be presented as:
I * 1 = I
I * 2 = 2I
2I * 2 = 4I

I * n = nI
...
The linguistic form is the primary form of presentation of each concept. This is true for the whole edifice of mathematical science: even a symbol such as the gamma function is understood because it has a linguistic description. So, calculation with words is the primary method of executing mathematical operations. Addition is the foundation of all concepts in the structure of mathematical science. This concept can be developed and executed by a computer in a form designed with special mathematical symbols; each of these symbols has a conceptual description in linguistic form. It is possible to activate this process (to attract an Agent's attention) by some kind of motivation (see Curiosity, Learning by Interactions), such as a special shape of objects. The ability to exercise curiosity should be incorporated into the system.
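The successor/predecessor idea above can be sketched directly: addition and multiplication defined using nothing but the successor and predecessor operations, in the spirit of the agent inferring arithmetic from observing its own actions. This is a standard Peano-style construction, offered as an illustration rather than the book's program:

```python
# Arithmetic built only from successor/predecessor.

def successor(n: int) -> int:
    return n + 1

def predecessor(n: int) -> int:
    return n - 1 if n > 0 else 0        # predecessor(0) = 0, as in the text

def add(a: int, b: int) -> int:
    """a + b expressed only through successor/predecessor."""
    return a if b == 0 else add(successor(a), predecessor(b))

def multiply(a: int, b: int) -> int:
    """a * b as repeated addition, the 'collecting sticks' concept."""
    return 0 if b == 0 else add(a, multiply(a, predecessor(b)))

if __name__ == "__main__":
    print(add(1, 2))        # I + II = III  ->  3
    print(multiply(2, 2))   # two sticks collected twice -> 4
```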

Hence, the system has progressed from an elementary to a more advanced concept. So, it is possible to learn a more complex concept. (The Computer and the Mind: An Introduction to Cognitive Science by Philip N. Johnson-Laird, Harvard University Press, 1988.)
Another example:
5/5 = 1
10/5 = 2
15/5 = 3
20/5 = 4
...
Conclusion: any integer that has a 0 or a 5 at the end (in the rightmost position) can be divided by 5 without a remainder. This is the new concept of divisibility by 5. There is no need to have an advanced language to perform simple calculations. A chimp can recognize the difference between numbers of objects up to four: if you show that you have four bananas and later give only two of them, the chimp demonstrates a displeased response.

The Construction of New Production Rules (see also HYPOTHESIS GENERATION)

Observation is the main source of information for constructing production rules. Application of reasoning to the facts is the main method of converting a fact into a rule. There are two main types of fact:
1. unconditional
2. conditional
Unconditional facts describe the status of the environment and cannot be used to construct rules. They generate subconscious definitions that can be converted into conscious definitions, into knowledge (calculation by words). Example of an unconditional fact: It is a sunrise. Conditional statements usually include two terms (nouns) connected by an action (a verb). Example of a conditional fact:

A sunrise declares a new day. Conditional facts can be converted into rules (calculation by words): If there is a sunrise, then a new day begins. Conditional facts converted into rules represent knowledge. The rule can be designed in accordance with the pattern:
IF PRECONDITIONS AND ACTIONS AND NO ACTIONS OR ACTIONS

THEN RESULTS (HYPOTHESIS)
If the addition table contains a fact of the form A + B = C, then one may build a new rule with the condition (GOAL: to add A and B) and the action (ANSWER: C). The rule is: If A is added to B, then the result is C.

Another example: for the sequence 2, 4, 7, 11, 16, the hypothesis predicts the member 16 as 11 + 5.

The rule is: If there is a sequence of numbers 2, 4, 7, 11, 16, ..., then each next member of this sequence can be equal to N(i) + i + 1.
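A conditional fact turned into a rule can also be represented and fired mechanically. The sketch below assumes a very small production-rule format (IF preconditions THEN results) and a naive forward-chaining loop; the data structures and sample facts are mine, not the book's:

```python
# Production rules built from conditional facts, applied by forward chaining.

class Rule:
    def __init__(self, preconditions, results):
        self.preconditions = set(preconditions)
        self.results = set(results)

    def fires(self, facts):
        return self.preconditions <= facts

def forward_chain(rules, facts):
    """Apply rules repeatedly until no new results can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.fires(facts) and not rule.results <= facts:
                facts |= rule.results
                changed = True
    return facts

if __name__ == "__main__":
    rules = [
        Rule({"sunrise"}, {"new day begins"}),
        Rule({"new day begins"}, {"start morning plan"}),
    ]
    print(forward_chain(rules, {"sunrise"}))
```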

Supervised Learning

Supervised learning is based on comparison of a learning (Actual) system's output with a known result (Fig. II-15). A feedback module compares the results of both systems' activity and generates commands to adjust the Actual system's parameters (weights).

Fig. II-15. A model of supervised machine learning. (Block diagram: the Input feeds a Task Module supported by a Knowledge Base; the Ideal System produces the Correct Output / Desired Performance; a Feedback module compares it with the Actual Performance and drives the Learning module.)

Suppose we would like to teach a neural network to execute the logical function OR (see the truth table below). In the very beginning the values of the weights (W1 and W2) can be set randomly. Gradually these values will be adjusted automatically by the system in accordance with the table below.

Example of Supervised Learning (the logical function OR)

Case   Input X1   Input X2   Desired result Y
 1        0          0        0
 2        0          1        1 (positive)
 3        1          0        1 (positive)
 4        1          1        1 (positive)

During training, for each case the system computes the actual output Z and the error Delta = Y (desired) - Z (actual), and then updates the weights W1 and W2. In this example the weights settle at approximately W1 = 0.7 and W2 = 0.7, at which point every case of the truth table is reproduced correctly.

Parameters: alpha = 0.2 (learning rate); threshold = 0.5

Updated weights: Wi(final) = Wi(initial) + alpha * delta * Xi
For the calculation of the weight update for a multi-layer neural net see [72] (see APPENDIX 5).

Algorithm (Neuron Learning):
1. Set the parameters randomly
2. Set the learning rate and threshold
3. Calculate the difference between the desired and actual output
4. Define the direction of change
5. Calculate the value of change
6. Make the change
7. Continue until the desired and actual outputs are equal
A runnable sketch of this procedure is given below.

Fig. II-16. Neural Net as the Supervised Learning Machine (Mr. Yurchenco) (see APPENDIX 5)
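A runnable sketch of the single-neuron learning procedure for the OR function follows. The learning rate 0.2, the threshold 0.5, and the update rule Wi = Wi + alpha * delta * Xi are taken from the text; the initial weights and the strict "greater than threshold" comparison are my assumptions (with them, the weights settle at W1 = W2 = 0.7, matching the trace above):

```python
# Single-neuron (perceptron-style) supervised learning of the OR function.

def step(weighted_sum, threshold=0.5):
    # strict comparison assumed; outputs 1 only when the sum exceeds the threshold
    return 1 if weighted_sum > threshold else 0

def train_or(alpha=0.2, threshold=0.5, w=(0.1, 0.3), max_epochs=100):
    # OR truth table: ((X1, X2), desired output)
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w1, w2 = w
    for _ in range(max_epochs):
        errors = 0
        for (x1, x2), desired in cases:
            actual = step(x1 * w1 + x2 * w2, threshold)
            delta = desired - actual                 # error signal
            if delta != 0:
                # Wi(final) = Wi(initial) + alpha * delta * Xi,
                # rounded to one decimal place as in the book's table
                w1 = round(w1 + alpha * delta * x1, 1)
                w2 = round(w2 + alpha * delta * x2, 1)
                errors += 1
        if errors == 0:                              # desired == actual everywhere
            break
    return w1, w2

if __name__ == "__main__":
    w1, w2 = train_or()
    print(f"learned weights: W1 = {w1:.1f}, W2 = {w2:.1f}")   # 0.7 and 0.7
```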

Learning by Instruction

This is the first method that natural and artificial intelligent systems use to learn. Learning by instruction is based on two functions:

1. Acceptance of new knowledge (rules)
2. Interpretation of these rules for execution

All knowledge resides in the Application Knowledge Base (see Fig. II-5). Expert Systems technology is the simplest example of the implementation of this method.

Learning by Experience

The hypothesis generation system can memorize a well-checked hypothesis. The next time the same situation is presented, it will generate the result directly, without generating a hypothesis. Generation and memorization of the world model represents learning by experience.

Learning by Imitation

Imitation is something derived or copied from an original [36]. Based on observation of how 3- and 4-year-old children learn, Dr. Horner and Dr. Whiten described the results as evidence that humans are hard-wired to learn by imitation, even when that is clearly not the best way to learn. Their study was published in the July 2005 issue of the journal Animal Cognition by Victoria Horner and Andrew Whiten, two psychologists at the University of St. Andrews in Scotland (http://www.nytimes.com/pages/science/index.html). They found that a child will move toward the goal through unnecessary steps if an instructor includes them in the procedure. The chimp, on the other hand, went straight for the goal, avoiding unnecessary steps. If these psychologists are right, this represents a big evolutionary change from our ape ancestors. Other primates are bad at imitation: when they watch another primate doing something, they seem to focus on what its goals are and ignore its actions. An adult person is better prepared to activate a goal-driven problem-solving procedure. In contrast, the AI system better executes a process that is described as a sequence of specific steps. It is easy to design a system with a capability of imitation. It makes sense to have both abilities, with criteria for choosing between them. In this case the local control systems perform the actions.

A car navigator with the ability to learn a driver's driving patterns is an example of learning by imitation.

Curiosity, Learning by Interactions

Curiosity is the motivation to learn about the unknown: arousing interest because of novelty or strangeness; a curious fact; to detect an unfamiliar object, label it, and then learn about it. It is an active type of learning, whereas observation is a passive type of learning. Curiosity can be triggered by association of the result of an observation with the area of interest. The area of interest can be generated by activation of specific knowledge. This knowledge can be tagged or transferred into a specific area of memory. Each new piece of information is checked against this knowledge, forming associations between new and existing information (see ASSOCIATIVE THINKING). For example: suppose an Agent is interested in the psychology of artificial intelligent systems. He observes the behavior of a system that did not receive a substantial set of knowledge and realizes that the system demonstrates childish behavior. Childish behavior connected to psychology (the activated area of interest) develops the term Child Psychology and generates a connection by association to the area of knowledge that the Agent has missed before. In order to make learning by interaction efficient, it is important to develop the environment, to present specific objects for interaction. A set of objects of different sizes (rings, empty boxes) can help to learn the concept of objects, and their relative locations can help to learn the concepts behind them. A Baby Agent drops a ball, evaluates the result, and formulates the rule: If you drop a ball, it will fall. The playground experiment demonstrates curiosity-driven learning on an autonomous four-legged AIBO robot platform (Fig. II-17). The robot is equipped with basic motor primitives

(control of its head direction, arm and mouth movements), which are controlled by a generic developmental engine (called Intelligent Adaptive Curiosity). Sony robot dogs are programmed with software that simulates "curiosity" and are placed on a baby's activity mat where different objects can be bitten, bashed, or just visually detected. With this engine, the robot actively chooses its actions and its learning situations. The engine can measure the effects of the actions taken on the environment through a camera, IR sensors, touch sensors, and motion feedback. The Playground Experiment uses AIBO to study how infants develop increasingly complex sensorimotor and communication skills. By trying different motor primitives, which it can modulate, AIBO progressively discovers that some objects are easier to interact with than others. As the robot learns to master particular sensory-motor trajectories, its behavior becomes more complex. He behaves like a baby. The developmental engine is composed of:
1. prediction systems that learn the effects of actions in a particular context; these prediction systems are a set of experts specialized in particular areas of the sensory-motor space;
2. meta-prediction systems that learn to predict the error of the prediction systems (1) and its evolution over time. In particular, these meta-prediction systems can compute an expected error reduction corresponding to a given action. This is done by comparing the error rate of the new expected sensory-motor context to the error rate in similar sensory-motor contexts from the past, as opposed to the error rate in the most recent sensory-motor context;
3. an action selection module that chooses actions with maximum expected error reduction

(as computed by (2)). As a consequence, this system produces action sequences that are expected to maximize learning progress. Such a self-motivated robot focuses on tasks that are neither too predictable nor too difficult to predict. It looks for learning opportunities given its embodiment (physical body and learning algorithms), the structure of its environment, and its current stage of development. This ensures a continuous development of increasingly complex behaviors, as illustrated by the sketch below.
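A minimal sketch of curiosity-driven ("learning progress") action selection in the spirit of the engine described above: one predictor per action, a meta-measure of how fast each predictor's error is falling, and a selector that picks the action with the largest expected error reduction. The toy environment, window size, and names are my own assumptions:

```python
# Curiosity-driven action selection based on learning progress.

import random

class Expert:
    """Predicts the sensory effect of one action and tracks its error history."""
    def __init__(self):
        self.prediction = 0.0
        self.errors = [1.0]                 # start with maximal uncertainty

    def learn(self, observed, rate=0.3):
        self.errors.append(abs(observed - self.prediction))
        self.prediction += rate * (observed - self.prediction)

    def learning_progress(self, window=5):
        """Expected error reduction: older mean error minus recent mean error."""
        recent = self.errors[-window:]
        older = self.errors[-2 * window:-window] or recent
        return (sum(older) / len(older)) - (sum(recent) / len(recent))

def environment(action):
    """Toy world: action 'bite' has a learnable effect, 'bash' is pure noise."""
    return 0.8 if action == "bite" else random.random()

if __name__ == "__main__":
    experts = {"bite": Expert(), "bash": Expert()}
    for step in range(200):
        # choose the action whose expert currently promises the most progress
        action = max(experts, key=lambda a: experts[a].learning_progress())
        experts[action].learn(environment(action))
    for a, e in experts.items():
        print(a, "tried", len(e.errors) - 1, "times")
```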

Algorithm (Learning by Interaction):
1. Interact with the environment
2. Perceive information
3. Evaluate the result
4. Design the rules
5. Place these rules into the World Model
6. Learn the limits of interaction
The rule can be designed in accordance with the pattern shown before:
IF PRECONDITIONS AND ACTIONS AND NO ACTIONS OR ACTIONS
THEN RESULTS (HYPOTHESIS)
A result can include information presented by sound, smell, visual representation, etc. A hypothesis can be investigated by changing the combinations of preconditions and actions (all or part of them) randomly. Investigation of all possible combinations tests the strength of the hypothesis and can make it a TRUE rule. See also HYPOTHESIS GENERATION. The names of preconditions, actions, and results should be present in the knowledge base. Identification of unknown events (results) can be done through associative thinking or other methods of reasoning. Learning through interaction with the environment (self-learning) is important for behavior adjustment and adaptation, but it is an inefficient learning method. In order to

get the whole set of knowledge, an Agent would have to repeat the whole history of mankind. Education as communication (verbal, visual, written) with a teacher is the most efficient method of learning.
Another example (Fig. II-18) of interaction with the environment is learning how to walk. In the very beginning the robot from New Hampshire University behaves like a newborn baby: it cannot walk. Gradually it interacts with the environment and learns how to keep its balance and how to walk. In the human brain this function is the responsibility of the cerebellum. Feedback deficits result in disorders of fine movement, equilibrium, posture, and motor learning. Initial observations by physiologists during the 18th century indicated that patients with cerebellar damage show problems with motor coordination and movement.
Fig. II-17. Learning through interaction with environment. (http://playground.csl.sony.fr/en/page2.xml)
Fig. II-18. Learning how to walk.

PLANNING

A plan is a scheme, program, or method worked out beforehand for the accomplishment of an objective. The abilities of structuralization and classification of sub-objectives (subgoals) of an objective (the main goal) are the tools used to develop the plan. Problem-solving methods are the main tools used to develop the planning algorithm. Planning algorithms use descriptions in a formal language, usually first-order logic. States and goals are represented by sets of sentences. Actions are represented by logical descriptions of preconditions and effects. The planner makes a direct connection between states and actions.

The planner is free to add an action to the plan wherever it is needed, rather than in an incremental sequence starting at the initial state [27]. Most parts of the world are independent of most other parts, so it is possible to design a plan as independent subplans. The quality of the planning algorithm is determined by the capability to decompose the goal into subgoals, to generate planners for each subgoal, and to define the time of execution of each part of the process. Parallel execution of the independent parts of the procedure is an important strategy of plan execution. Planning algorithms must take into consideration all constraints and resources. Each plan can be evaluated by the vector [execution time, resources, cost]. Each level of resolution has a specific planning horizon.
Sometimes the goal or the execution of a plan is not strongly defined; this leads to the execution of some actions (moving from one location to another without generation of the path) without a strong understanding of the goal (destination). In this case the system generates the needed information based on previous knowledge, or the brain generates the execution plan in accordance with statistics or in keeping with the last active plan (the last path). Effectiveness of plan execution depends on the personal ability of an agent to follow the plan procedure.

PROBLEM-SOLVING

A PROBLEM is a collection of information that the agent will use to decide what to do. All intelligent abilities, such as reasoning, learning, generalization, hypothesis generation, etc., are tools of problem solving.

Well-Defined Problems and Solutions [51]
1. The INITIAL STATE is the state that the agent knows itself to be in.
2. The OPERATOR denotes the description of an action in terms of which state will be reached by carrying out the action in a particular state.
3. The STATE SPACE of the problem is the set of all states reachable from the in

itial state by any sequence of actions.
4. A PATH in the state space is simply any sequence of actions leading from one state to another.
5. The GOAL TEST, which the agent can apply to a single state description to determine whether it is a goal state.
6. A PATH COST FUNCTION is a function that assigns a cost to a path.
7. The OUTPUT of a search algorithm is a SOLUTION.
This evaluation should be done for the available algorithms (greedy search, straight-line algorithm, Prim's algorithm, Kruskal's algorithm, learning decision tree, etc.). The best-known strategy is presented as the Search Tree.

Measuring the Capability of Problem-Solving
1. Does it find a solution?
2. Does the solution meet the goal?
3. What is the search cost associated with the time and memory required to find a solution?

DATA STRUCTURE FOR SEARCH TREE
A node is a data structure with 5 components:
1. The state in the state space to which the node corresponds.
2. The node in the search tree that generated this node (this is called the parent node).
3. The operator that was applied to generate this node.
4. The number of nodes on the path from the root to this node (the depth of the node).
5. The path cost of the path from the initial state to the node.

GENERAL-SEARCH (problem, strategy)
Algorithm:
1. Initialize the search tree using the initial state of the problem
2. Loop: if there are no candidates for expansion, then return failure
3. Choose a leaf node for expansion according to the strategy

if the node contains a goal state, then return the corresponding solution; else expand the node and add the resulting nodes to the search tree
4. End
The Search Tree presents several possible strategies (in the Greedy Search group of strategies) for reaching the Goal from the Start point. A runnable sketch of the general search loop is given below.
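The GENERAL-SEARCH loop can be sketched as follows; the "strategy" here is simply "expand the cheapest path first" (uniform-cost search), and the toy graph is my own illustration:

```python
# GENERAL-SEARCH(problem, strategy) with a cheapest-first expansion strategy.

import heapq

def general_search(start, goal, neighbors):
    """Expand nodes until a goal state is found or no candidates remain."""
    frontier = [(0, start, [start])]                  # (path cost, state, path)
    visited = set()
    while frontier:                                   # step 2: no candidates -> failure
        cost, state, path = heapq.heappop(frontier)   # step 3: choose a leaf node
        if state == goal:                             # goal test -> return solution
            return path, cost
        if state in visited:
            continue
        visited.add(state)
        for nxt, step_cost in neighbors.get(state, []):       # expand the node
            heapq.heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return None, float("inf")                         # failure

if __name__ == "__main__":
    graph = {
        "Start": [("A", 2), ("B", 5)],
        "A": [("Goal", 6)],
        "B": [("Goal", 1)],
    }
    print(general_search("Start", "Goal", graph))     # (['Start', 'B', 'Goal'], 6)
```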

The first strategy (A) reaches the Goal through all nodes, with sequential evaluation of each step by an evaluation function (the path cost function). The minimal value of the evaluation function shows the best strategy.

The second strategy (A*) works from the Goal stage to the Start stage, with sequential evaluation of the straight-line distance between the Goal stage and the Start stage. The minimal value of the evaluation function shows the best strategy. Transparent walls permit the agent to get information about the location of the Goal. Fig. II-19 presents a comparison of the two different strategies, A and A*.


Fig. II-19. Comparison A and A* strategies

Multivariable Problems

Some of the variables of multivariable problems contradict each other. For example: for a better product (which is good) we have to pay more (which is bad). In this case the decision-making process is based on compromise [40] (see APPENDIX 3).

Lack of Statistics in Decision-Making

The probability of a decision's result is an important tool in the decision-making process. A lack of statistical information makes the calculation of probability unreliable. Student's t-distribution (see APPENDIX 15) is a probability distribution that arises in the problem of estimating the mean of a normally distributed population when the sample size is small. Unfortunately this approach is not transparent to a degree understandable to the person responsible for decision-making, and it needs intuitive setting of some parameters.
Fig. II-21. Parabola
It can be corrected by calculating the level of trust in the result. The level of trust (T) is a non-linear function of the number of event occurrences: the more occurrences, the higher the trust level. This function has a parabolic character:
T = [P/(P+N)]^n, for R > (P+N)
T = 1, for R <= (P+N)
where P is the number of positive occurrences,

N is the number of negative occurrences, and R is the representative number of occurrences. The probability of an event for R <= (P+N) is p = P/(P+N). In case of a lack of statistical information it is better to rely on the possibility rather than on the probability of an event. The possibility (pos) of an event for R > (P+N) can be calculated as
pos = T * [P/(P+N)], or pos = [P/(P+N)]^(n+1)
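A small sketch of the trust-weighted possibility estimate. The exponent n is left as a parameter because its definition in the text is unclear; the structure pos = T * p with T = p^n when data are insufficient (and T = 1 once the number of observations reaches the representative number R) follows the formulas above:

```python
# Trust-weighted possibility estimate for decisions with little statistics.

def trust(P, N, R, n):
    """Level of trust T in the observed ratio, given P+N observations."""
    p = P / (P + N)
    return 1.0 if (P + N) >= R else p ** n

def possibility(P, N, R, n):
    """pos = T * p; equals the plain probability p once data are sufficient."""
    p = P / (P + N)
    return trust(P, N, R, n) * p

if __name__ == "__main__":
    # 3 positive and 1 negative encounters, representative number R = 10
    print(possibility(P=3, N=1, R=10, n=2))    # 0.75**3 ~= 0.42 (discounted)
    print(possibility(P=30, N=10, R=10, n=2))  # 0.75 (enough data, T = 1)
```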

See also SOCIAL BEHAVIOR.

Independent Behavior

In the human brain the frontal lobe is involved in problem solving. It controls the so-called executive functions. It is involved in the ability to recognize future consequences resulting from current actions, to choose between good and bad actions (or better and best), to override and suppress unacceptable social responses, and to determine similarities and differences between things or events.

PERSONALITY OF THE ARTIFICIAL SYSTEM (artificial person)

Personality is a set of distinctive and characteristic patterns of thought, emotion, and behavior that define an individual's personal style of interacting with his/her/its physical and social environment [18].

The personality of an artificial system is the totality of qualities and traits, such as the character of behavior, which is peculiar to a specific artificial system ("person"). An Artificial Personality is the composite of characteristics that make up the individuality of the system; the self. The personality of a natural system is determined by its genetic code and depends on the strength of internal secretions (chemicals and hormones). The hardware and the software determine the personality of an artificial system. It is reasonable to develop a row of identities based on sets of psychological characteristics in accordance with different areas of application. The most acceptable method of artificial system personality analysis is the behavioristic approach, which emphasizes the importance of environmental or situational determinants of behavior. Psychoanalytic and phenomenological approaches are more suitable for human personality analysis. There are different opinions about the number of traits that determine personality: the British psychologist Hans Eysenck arrived at 32, Cattell arrived at 16. McCrae and Costa designed a table listing five factors by six traits [18].

For an artificial person it is reasonable to take into consideration seven known factors, each with two traits:
Optimistic-Pessimistic: a function of the ratio of positive to negative sensations and experiences.
Active-Passive: determined by the level of stimuli that generates a response: rewards, curiosity, etc.
Peaceful (friendly)-Aggressive (unfriendly): aggression is the next topic of discussion (see AGGRESSION below).

Reliable (dependable)-Moody (undependable).
Cool (calm)-Anxious.
Sociable-Unsociable (can work as a team member or not).
Careful-Careless (unguarded).
Higher flexibility in self-reconfiguration (for example, reconfiguration of connections between neurons in an artificial neural net) can create conditions of abnormal or unknown development, as in natural systems. The environment has a strong influence on personality development through moral and immoral influences (by law). These limitations should be applied to an artificial system operating in a specific social environment. Changing the value of the weight function for negative and positive variables can change the personality, for example, from optimistic to pessimistic and back. Repetition of positive or negative results of actions can change the personality from optimistic to pessimistic and back. The same method can be used to change other characteristics.

AGGRESSION

Aggression is the intent to injure another person (physically or verbally) or to destroy

property. In the human brain the Amygdala is involved in aggression, jealousy, and fear. Military combat systems are aggressive. If a military assault system does not have an aggressive personality, it could be taught with knowledge about the target and actions via a genetic code (natural or artificial). Aggression directed at friendly objects can be neutralized by activation of the opposite personality toward this target. In this case knowledge can be presented as rules (for example, if the target is below, ...) going through a knowledge base. A source of knowledge is an expert in the area of combat. If a military assault system is aggressive, it should not be taught about the target and the actions against this target. In this case the operator can present just data: the target

It is easy to communicate with the artificial system if it demonstrates a specific personality. In this case the system does not need to learn specific knowledge, but needs just the information (data). Describing these objects in the database as friendly objects can set the personal behavior. It is natural to use a ...
Optimistic-Pessimistic factors are important in systems that deal with uncertainty: systems involved in foreseeing results and events, or forecasting an assault in military applications. They may be useful in some business and management applications. The same approach can be applied to the choice of personality factors for specific areas of a system's application. The combinations of factors providing the AIS with the capability of automatic adjustment to specific environment characteristics (autonomy) are a very important ability

of the system. A special control module can perform this adjustment. If the system is designed as a neural net, changing the values of the weights, threshold, and transfer function can perform this adjustment. Transformation from one trait into another can arise gradually or instantly depending upon the conditions of application. Transformation of functions should be instantaneous.
Algorithm:
1. Recognize the object (see OBJECT RECOGNITION).
2. Define it as ...
3. If ...
4. Change the object
5. Repeat 1-4
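As an illustration of changing a personality factor by repeated positive or negative outcomes, the sketch below treats the Optimistic-Pessimistic factor as a single weight that drifts with experience; the update rule, thresholds, and names are illustrative assumptions, not the book's design:

```python
# A personality factor shifted by repeated positive/negative results.

class PersonalityFactor:
    def __init__(self, name, rate=0.1):
        self.name = name
        self.value = 0.0          # -1 = fully pessimistic, +1 = fully optimistic
        self.rate = rate

    def record_outcome(self, positive: bool):
        target = 1.0 if positive else -1.0
        self.value += self.rate * (target - self.value)

    def trait(self):
        if self.value > 0.3:
            return "optimistic"
        if self.value < -0.3:
            return "pessimistic"
        return "neutral"

if __name__ == "__main__":
    factor = PersonalityFactor("Optimistic-Pessimistic")
    for _ in range(10):
        factor.record_outcome(positive=False)      # repeated negative results
    print(factor.trait(), round(factor.value, 2))  # drifts toward pessimistic
```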

EMOTIONS

Emotions are the most typical of all human features. They are complex phenomena, and the term has no single universally accepted definition. Psychologists have generally agreed that emotions entail, to varying degrees, awareness of one's environment or situation, bodily reactions, and approach or withdrawal behavior. Although a widespread word, it is not so easy to come up with a generally acceptable definition of emotion. A growing consensus does agree that the distinction between emotion and feeling is important. Feeling is a physical sensation and an affective state of consciousness, such as that resulting from emotions, sentiments, or desires. Feeling can be seen as emotion that is filtered through the cognitive brain centers [36], specifically the frontal lobe, producing a physiological change in addition to the psycho-physiological change. Feeling and sensation

are synonyms. Sensing is a part of sensation. Sensation is a perception associated with stimulation of a sense organ (sensing system) or with a specific body condition: the sensation of heat; a visual sensation; a sensation of interest or loneliness; the faculty to feel or perceive; physical sensibility: "The patient has very little sensation left in the right leg" [36]. So, feeling is the combination of physical sensations coming from the outer and inner sensing systems and the development of perception based on information from the different subsystems: sensing and emotion development. Demonstration of feeling is a communication process that accompanies feeling.

Sensation (feeling) includes:
1. sensing (information collection) from sensors in the body's parts that are involved in emotion development
2. perception
Based on discoveries made through neural mapping of the limbic system, the neurobiological explanation of human emotion is that emotion is a pleasant or unpleasant mental state organized in the limbic system of the mammalian brain. Specifically, these states are manifestations of non-verbally expressed feelings of agreement, anger, certainty, control, disagreement, disgust, disliking, embarrassment, fear, guilt, happiness, hate, interest, liking, love, sadness, shame, surprise, and uncertainty. For a machine to "have" emotion means there is a mechanism for it to decide which emotion state the machine should be in, which also influences the behavior of the system afterward. This may be useful in human-computer interaction with mechanisms such as robots and virtual reality systems. Expressing emotional information in such systems can enhance the naturalness of the system. For emotion recognition and understanding, see OBJECT

RECOGNITION and UNDERSTANDING AND INTERPRETATION. By definition [36], emotions are:
1. An intense mental state that arises subjectively rather than through conscious effort and is often accompanied by physiological changes; a strong feeling.
2. A state of mental agitation or disturbance.
3. The part of consciousness that involves feeling; sensibility.
First of all, the third part of the definition contradicts the first one: is this an effort or not? Second, this definition does not show the main substance of this phenomenon.

Third, by this definition emotion is hard to distinguish from intuition. If it is a conscious and at the same time an unconscious process, then it is a synonym of feeling. Emotion is an automatic response or reaction to selective signals. It consists of two parts: sensational processes (the real emotional processes) and the demonstration of emotions (informational, communicational processes). Emotions are cognitive sensational

information processes that accompany the combinations of chemical, electrical, mechanical, informational, and other physical processes in the inner and outer parts of a body, triggered by the sensors, and mobilize system resources. Feeling triggers emotions. Facial expression, body language, and verbal presentation are ways of communication. It is easy to observe an animal's body language as well as a human's: the cat sends a clear signal about its readiness to attack. Developers of Artificial Intelligent Systems cannot wait for the final conclusion of human psychologists defining the term "emotion" before developing Artificial Intelligent Systems. Anyway, human knowledge is not absolute. There are personal or cultural sets of related pairs of internal and external signals; the same signals can trigger emotions in another agent. It is a kind of conditional reflex (see REFLEXES). A reflex is an automatic response or reaction. Some

emotions and reflexes belong to the same family. Emotions accelerate the reaction of the system to dangerous events and threats and mobilize system abilities. Emotions tell the system what is good and what is bad around it and help the system to stay alive. Emotions help to plan actions (anger helps to remove obstacles). Artificial Intelligent Systems can be members of international teams. These teams include people with different cultural backgrounds; as Lockheed Martin CEO Norman Augustine noted, the ability to work in teams and with people from different cultures is as important as IQ for success today. Desires to get a moral or material reward (motivation, stimuli) generate emotions. A separate branch of inquiry into emotion has led some researchers to theorize that emotions are no more than strong motivational or drive states. Some emotions under specific circumstances can be controlled, but some cannot. Information about emotions can be controlled as well. In many cases emotions and reflexes are bound together. Emotions, like conditional reflexes, are unintentional, subconscious processes. This creates difficulties in developing a clear definition. Emotions of intelligent systems are the combination of processes of mobilization of the system's mental and physical resources to execute dynamic adjustment to the real world, to withstand bad (dangerous) events or to increase the efficiency of good events; to develop self-diagnostics of problems in the system's modules; and to communicate with other agents through body language. Not all emotions (some internal disturbances) generate a communication signal. In the human brain each type of emotion has a specific physical location. It is reasonable to repeat the same design in the Artificial Intelligent System. The Amygdala is responsible for

the control of emotions. It receives the signal from the sensor system and sends the response signal directly to the body (actuators) to prepare the body to respond. This response is a reaction of an organism or a mechanism to a specific stimulus and in some cases can create resonance to these signals as excitement. At the same time it sends the signal to the frontal cortex, which is responsible for reasoning. The body sends the signal back to the frontal cortex as feedback. The signal to the actuators is a control signal; it arises unintentionally. It doesn't involve the process of reasoning. It is an unconscious process. The process of reasoning recognizes the situation (for example, the type of danger) and defines the type of reaction (Fig. II-23). It is an unintentional process of reasoning. It is a subconscious process based on the system's experience. Examples: the sound of metal moving on a glass surface, music, and so on. Humans process such information through the spinal cord. This method is distinct from perception (recognition and interpretation of sensory stimuli based chiefly on memory). It is possible to create artificial information tracks with the same ability. Repetition of the same signal can activate memory and excitement. In this case, the process involves the subconscious. The so-called mirror cells in the human brain can respond to a signal presenting the behavior of another human being with corresponding actions and prediction of the results of these actions. An artificial system can demonstrate the same ability if similar actions are saved in the memory and can be activated by visual or other signals. It is very important to keep a balance between the incoming signal and the emotional reaction. Overreaction can decrease the intellectual and mobilization ability of the system. An oversensitive Amygdala (natural or artificial) can be the cause of health problems (natural or artificial). Control of a system's status can be executed not just by emotions. The system goal triggers a control process intentionally as well, sending the control signal to the actuators. It is an intentional

conscious process. It is executed under the control of the free will (the goal-driving process) of the intelligent system. It is difficult to fake human emotions. Research shows that it is impossible to fake a smile; a fake smile can be recognized under strong scrutiny. The level of some emotions, from zero to maximum, depends on culture and education. An artificial system's emotions can be faked easily. A human being has emotions but at the same time can emulate, or fake, them (professionally or non-professionally) without any real feeling. The famous Russian theatre director Stanislavski taught actors to feel emotions, otherwise they would not be able to play their parts naturally. It is a different result to live the life of the person the artist represents or merely to represent the life of this person. This is true for a drama artist. A singer (in opera) plays emotions by voice technique and needs full attention to this technique; the real feeling and the body language are imitations (Anna Netrebko, a famous opera artist, coloratura soprano), similar to artificial systems. True human emotions are triggered by external or internal signals to the control system (the brain or peripheral nervous system). The control system activates chemical, mechanical, and electrical processes. Different emotions include different specific processes: a rise in blood pressure and adrenaline, rapid heartbeat, breathing, etc. People with a severed spinal cord in the lumbar region don't feel pain in the lower parts of their body and cannot develop the corresponding emotions. Why do we laugh? What function does laughter have? Laughter is one of the most poorly understood of human behaviors. While we know, for example, that certain parts of the brain are responsible for certain functions and tasks, it seems that laughter cannot be traced to one specific area of the brain. Furthermore, the relationships between laughter and humor, or even laughter and mirth, are not understood, despite their evident interconnection. The medulla directly controls many involuntary muscular and glandular activities, including breathing, heart contraction, artery dilation, salivation, vomiting, and prob

ably laughing. Some clues to the physiological basis of laughter have come from people who suffered brain injuries, strokes, or neurological diseases. Three years ago, at the age of 48, C.B. suffered a stroke. Fortunately, he recovered quite well and was expected to return to his normal life. However, since the stroke, C.B. and those around him have been perplexed by certain changes in his behavior. Though he seems healthy and doesn't suffer any pain, occasionally, without any noticeable reason, he bursts out into uncontrollable, wild laughter. In other cases, out of the blue, he is swept into tears in a similar attack. The pleasant feelings (happiness, amusement, joy, or the memory of a past joke) that usually accompany laughter are absent. For artificial systems this facial information can be a signal for self-diagnostics. Some feelings that reflect a local sensation, such as local pain, are controlled by the local control systems (similar to unconditional reflexes; see also REFLEXES). They are non-intelligent processes (see also CONSCIOUS, UNCONSCIOUS, AND SUBCONSCIOUS PROCESSES), but they can trigger a body language similar to the body language of emotions. Does a machine have emotions, or does it just emulate them? If you would like the answer to be

properly design the tires and other parts. An artificial system has a brain (a computer). It is possible to add a full mechanism of emotions, with a set of inner local sensors and local actuators, to create an artificial system with feeling and response (emotion). A human being with a transplanted heart (biological by nature but artificial by implementation) experiences all types of emotions. My pacemaker (an artificial part of my heart's control system) participates in my emotional process: it follows the signal to increase my pulse to mobilize my body to withstand obstacles. New experiments with the artificial heart do not kill emotions. And now there is another, more important question: does an Artificial Intelligent System need emotions?

As soon as we define emotion as the tool of the system's resources mobilization, the answer is clear: a complex engineering (artificial) system has a mechanism for mobilizing resources (speed, power, force, and so on). Changing lanes in congested traffic, or beating the fare in the subway station, needs mobilization of the car's or the body's resources. In the first case it is the driver's responsibility, in the second case it is the responsibility of a violator. The Autonomous Artificial Intelligent System can face situations when it needs mobilization of resources. That is emotion! Emotions are a very important part of communication between a human being and an artificial agent as a member of a mixed team. Control Theory presents strong methods of mobilizing actuator resources (forcing). For Artificial Intelligent Systems there are no strong methods of mobilizing computational resources. It is possible to create a flexible structure with the ability to increase computational power by varying the number of parallel computational branches. A facial expression and body movements can indicate emotions in artificial systems as well as in natural ones. Researchers in the Humanoid Robotics Group at the Massachusetts Institute of Technology have created the robot Kismet (Fig. II-24), which demonstrates (imitates) emotions.

OF OF REASONING CONTROL Fig. II-22 left leg Amygdule right leg Fig. II-23. Control signal from the Amygdule is going down (right leg). Feedback signal is going up (left leg) to the frontal lobe. If we accept the definition of a physical sensation 131 of an artificial system to feel. This can be expressed as Grief (Sorrow)-Loss, Fear-Treat, Anger-Obstacle, and Joy-Potential Mate, Trust-Group member, Disgust-Gruesome object, Anticipation-New information, and Surpris e-Sudden novel object. Some of artificial system s emotions are based on cognitive processes. It involves patterns of emotion reaction stored in the memory. Kevin Warwick, professor of cybernetics a t Reading University, cites Sony's AIBO as the closest man has come to creating a sentient artificial being. According to Warwick's research, robots w ith brain processing capabilities are no more intelligent than snails or bees, with basic behavior patterns and the ab ility to map out simple environments. "Humans have human emotions and robots have robot emotions. As soon as you allow robots to learn, you are opening up the possibility that they could develop their own emotions. "Warwick believes that in ten to twenty years, hu manoid robots will complicate the moral dilemma further. "In this timeframe, r obots in the home will not be an equal, but they will be given more of a status." He believes that in 20 years ti me, robots will have an intellect on par with humans, which could reverse the issue into whether or not robots will be willing to let humans into their homes! Strong emotions that follow events increase the capability of memorization of

these events. Emotions should trigger the generation of associative connections betwee n events and emotions (see CREATIVITY). The most difficult areas of emotional activities are related to art, poetry, and music (see also ART APPREHANSIONS). Emotional reaction in these areas requires strong preparati on of the Agent it requires a special education. A non-prepared human being does not e motionally respond to art and music. The AIS should be educated to respond emotionally to t his type of information as well. The artificial center of pleasure can be exited by specific sound, color, actions, etc. Studies show that emotions have a huge impact on the human body. Good emotions h elp to cure wounds faster. Emotions change the biochemical body environment (immune system). Mental distress makes physical pain worse. To control emotions is a n ecessary intellectual ability, but not the most important problem confronting t he evolution of the AIS. Kismet (Fig.II-24) was created in MIT by team under supervision of Rodney Brook. You can play with and talk to Kismet. If you charge towards Kismet too fast, you will st artle Kismet and Kismet will quickly withdraw or even become annoyed of you. When you talk to Kismet, watch your tone because a scolding tone can make Kismet's head bow and e yes look down. On the other hand, if you speak adoringly and with encouragement, Kismet w ill smile and engage. Kismet enjoys face-to-face interaction and will not (or ha s not learnt to) hide emotions. With those huge eyes shaped like eyes of a g old fish, Kismet has a doe-eyed look, like a young child. Only, Kismet is not a child. It is an ensemble of metals, wires, cameras, synthe sizers, sensors, and motors -- a machine with hardware and software control. Yes, Kismet is a robot. More precisely, Kismet, heavily inspired by the theories, observations, and experimen tal results of

132 child developmental psychology, is an expressive robotic creature with perceptua l and motor modalities tailored to natural human communication channels. Kismet is probably the most famous exemplar of a new crop of robots called "Soci able Humanoid Robots" or "Affective Humanoid Robots". In "The Art of Building A Robot to Love" by Henry Fountain ("New York Times", March 05, 2006) we read: 1. "What people want from Robots?" It turns out, is what they often want from people: emotions. 2. It turns out that by equipping robots with the mechanisms of emotions, we are able to increase the efficiency of our smart machines. So, there is an expectation that emotions can improve their performance. 3. Dr. Mataric presented a paper at a conference on Robot/Human Interaction at Salt Lake City persuasive enough to capture the attention of a group of people like PERMIS (National Institute of Standard and Technology) who want to affect the performance of Robots. 4. A robot must have human emotions, said Christof Bartneck of the Eindhoven University of Technology in the Netherlands. Thus, emotions have to be modeled for the robot's computer. "And we don't really understand human emotions well enough to formalize them well"-he said. 5. At Stanford, Clifford Nass, a professor of communication, found that in a simulation, drivers in a bad mood had far fewer accidents when they were listening to a subdued voice making comments about the drive...

6. Even an insincere or simple emotion is easy for a person to detect: people can find emotional cues everywhere. "They are obsessed with emotion," Dr. Nass said. "The reason is, it's the best predictor of what you'll do." 7. "If robots are to interact with us," said M. Scheutz, director of the AI laboratory at Notre Dame, "then the robot should be such so that people can make its behavior predictive". Then, people are able to understand how and why the robot acts as it does. 133 The Process of emotion development or imitations in the artificial sys tems is based on activation of a body s and facial s reactions on the signals fro

the environment or internal sensors. These reactions are saved in the computer memory. Development of emotions, not imitations, is possible in a system with a full set of artificial or natural subsystems that have a relationship to these processes. PERCEPTION (see above) describes the methods of emotional presentation.
Success_observed (success is the achievement of something desired, planned): a positive value-state variable that represents the degree to which task goals are met plus the amount of benefit derived therefrom [16].
Success_expected: a value-state variable that indicates the degree of expected success, or the estimated probability of success; it may be stored in a task frame or computed during planning on the basis of world model predictions. When compared with success observed, it provides a baseline for measuring whether goals were met behind or ahead of schedule, at over- or under-estimated costs, and with resulting benefits less than or greater than those expected.
Hope (to look forward to with confidence or expectation): a positive value-state variable produced when the world model predicts a future success in achieving good situations or events. When high hope is assigned to a task frame, the Behavior Generator (BG) module may intensify behavior directed toward completing the task and achieving the anticipated good situations or events.
Frustration (to be prevented from accomplishing a purpose or fulfilling a desire): a negative value-state variable that indicates an inability to achieve a goal; it may cause a BG module to abandon an ongoing task and switch to an alternate behavior directed toward completing the task and achieving the anticipated good situations or events.
Love: a positive value-state variable produced as a function of the perceived attractiveness and

desirability of an object or person. When assigned to the frame of an object or person, it tends to produce behavior designed to approach, protect, or possess the loved object or person.
Hate (to feel hostility or animosity toward): a negative value-state variable produced as a function of pain, anger, or humiliation. When assigned to the frame of an object or person, hate tends to produce behavior designed to attack, harm, or destroy the hated object or person.
Comfort (a condition or feeling of pleasurable ease, well-being, and contentment): a positive value-state variable produced by the absence of, or relief from, stress, pain, or fear. Comfort can be assigned to the frame of an object, person, or region of space that is safe, sheltering, or protective. When under stress or in pain, an intelligent system may seek out places or persons whose entity frames contain a large comfort value.
Fear (a feeling of agitation and anxiety caused by the presence or imminence of danger): a negative value-state variable produced when the sensory processing system recognizes, or the world model predicts, a bad or dangerous situation or event. Fear may be assigned to the attribute list of an entity such as an object, person, situation, event, or region of space. Fear tends to produce behavior designed to avoid the feared situation, event, or region, or to flee from the feared object or person. A dangerous situation may be recognized by the system by

Despair (the state of being without hope): a negative value-state variable produced by world model predictions of unavoidable or unending bad situations or events. Despair may be caused by the inability of the behavior generation planners to discover an acceptable plan for avoiding bad situations or events.

Depression (the condition of feeling sad; affected or characterized by sorrow or unhappiness): a negative value.

Happiness (see Joy): a positive value produced by sensory processing observations and world model predictions of good situations and events. Happiness can be computed as a function of a number of positive (rewarding) and negative (punishing) value-state variables. It can be measured by the level of fulfillment of desires.

Confidence (trust or faith in a person or thing): an estimate of the probability of correctness. A confidence state variable may be assigned to the frame of any entity in the World Model. It may also be assigned to the self-frame to indicate the level of confidence that a creature has in its own capabilities to deal with a situation. The level of confidence is based on experience in dealing with a person or event. A high value of confidence may cause the behavior generator hierarchy to behave confidently or aggressively.

Uncertainty [16]: a lack of confidence. Uncertainty assigned to the frame of an external object may cause attention to be directed toward that object in order to gather more information about it. Uncertainty assigned to the self-object frame may cause the behavior generating hierarchy to be timid or tentative.

VJ Modules [16]

Value-state variables are computed by value judgment functions residing in VJ modules. Inputs to VJ modules describe entities, events, situations, and states. VJ value judgment functions compute measures of cost, risk, and benefit. VJ outputs are value-state variables. Axiom: in an Artificial Intelligent System the value-state variables are additive functions.

In this case the VJ value judgment mechanism can be defined as a mathematical or logical function of the form [16]:

E(t + dt) = V(S(t))

where E is an output vector of value-state variables and V is a value judgment function that computes E given S. This equation represents an algebraic sum of positive and negative functions. The components of S are entity attributes describing states, objects, events, or regions of space. These may be derived either from processed sensory information or from the world model. The value judgment function V in the VJ module computes a numerical scalar value (i.e. an evaluation) for each component of E as a function of the input state vector S. E is a time-dependent vector. The components of E may be assigned to attributes in the world model frame of various entities, events, or states.

If time dependency is included, the function E(t + dt) = V(S(t)) may be computed by a set of equations of the form [16]

e(j, t + dt) = (k d/dt + 1) SUM(i) s(i, t) * w(i, j)

where
e(j, t) is the value of the j-th value-state variable in the vector E at time t,
s(i, t) is the value of the i-th input variable at time t,
w(i, j) is a coefficient, or weight, that defines the contribution of s(i) to e(j).

Each individual may have a different set of value judgment functions V. The factor (k d/dt + 1) indicates that a value judgment is typically dependent on the temporal derivative of its input variables as well as on their steady-state values. If k > 1, then the rate of change of the input factors becomes more important than their absolute values. For k > 0, need reduction and escape from pain are rewarding. This formula suggests how a VJ function might compute the value-state variable happiness:

Happiness = (k d/dt + 1) (success + hope - frustration + love - hate + comfort - fear + joy - despair)

where success, hope, love, comfort, and joy are all positive value-state variables that contribute to happiness, and frustration, hate, fear, and despair are all negative value-state variables that tend to reduce or diminish happiness.
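As an illustration only, the following is a minimal Python sketch of a VJ-style computation of the form above. The function name, the input values, and the weights are assumptions made for the example; the finite-difference approximation of d/dt is one simple way to read the (k d/dt + 1) factor, not the only one.

```python
# Minimal sketch of a VJ-style value judgment computation (hypothetical values).
# e(j, t+dt) = (k * d/dt + 1) * sum_i s(i, t) * w(i, j), with d/dt approximated
# by a finite difference between the current and previous input vectors.

def value_state(s_now, s_prev, weights, k=0.5, dt=1.0):
    """Compute one value-state variable from inputs s and weights w."""
    steady = sum(s * w for s, w in zip(s_now, weights))
    previous = sum(s * w for s, w in zip(s_prev, weights))
    derivative = (steady - previous) / dt
    return k * derivative + steady          # (k d/dt + 1) applied to the weighted sum

# Happiness as an additive function of value-state variables; the +1/-1 weights
# mirror the signs in the happiness formula above.
names   = ["success", "hope", "frustration", "love", "hate",
           "comfort", "fear", "joy", "despair"]
weights = [+1, +1, -1, +1, -1, +1, -1, +1, -1]

s_prev = [0.2, 0.1, 0.4, 0.5, 0.0, 0.3, 0.6, 0.0, 0.2]   # values at time t - dt
s_now  = [0.6, 0.5, 0.1, 0.5, 0.0, 0.5, 0.2, 0.4, 0.0]   # values at time t

happiness = value_state(s_now, s_prev, weights)
print(round(happiness, 3))   # positive and growing: the situation is improving
```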

Fig. II-24. Kismet (facial expressions: Calm, Interest, Angry, Happy, Sad, Surprise, Disgust).

It is possible to assign a real non-negative numerical scalar value to each value-state variable; this defines the degree or amount of that value-state variable. For example, a positive real value assigned to good defines how good, i.e., if e := the amount of good and 0 <= e <= +10, then e = 10 is the best evaluation possible.

Some value-state variables can be grouped as conjugate pairs, for example: good-bad, pleasure-pain, success-fail, love-hate, etc. For conjugate pairs, a positive real value means the amount of the good value, and a negative real value means the amount of the bad value. For example, if e := the amount of the good (or bad) value and -10 <= e <= +10, then

e = 5 is good, e = -4 is bad
e = 6 is better, e = -7 is worse
e = 10 is best, e = -10 is worst
e = 0 is neither good nor bad.

Similarly, in the case of pleasure-pain, the larger the positive value, the better it feels; the larger the negative value, the worse it hurts. For example, if e := the amount of pleasure (or pain), then

e = 5 is pleasurable, e = -5 is painful
e = 10 is ecstasy, e = -10 is agony
e = 0 is neither pleasurable nor painful.

The positive and negative elements of a conjugate pair may be computed separately and then combined.

Detecting and Recognizing Emotional Information

Detecting emotional information usually involves sensors which gather information about the user's physical state or behavior without interrupting the user. The most obvious way in which a computing device can sense the user's emotion is by using the same cues as other humans do, such as facial expression, posture, gestures, and speech. Computing devices can also sense emotion in ways which humans are not capable of, such as the force or rhythm of keystrokes on the keyboard, the temperature changes of a hand on the mouse, or the evaluation of other physiological vital signs. Other technologies, such as speech recognition, are being explored for gathering emotional information. Recognizing emotional information requires the extraction from the sensor data of features specific to emotional states, and the learning of patterns in the data by the software.

There is one specific area which is not really recognition of emotions but is related to emotional trust. Evolution has developed a stereotype of the appearance of the wise person. It is not the

young but the aged person. Collecting experience and knowledge takes time. Most portraits and pictures of the great thinkers and scientists (Newton, Einstein, Leonardo da Vinci, Franklin, and so on) show the aged person even when their younger pictures are available. Subconsciously we trust the aged person's wisdom. It means that in some special areas of application the Artificial Intelligent System should have a specific appearance that fits the existing stereotypes. It is also very important to develop a map of an Artificial Intelligent System's emotions to make them recognizable.

Emotional Understanding

Emotional understanding refers to the ability of a device not only to detect emotional or affective information, but also to store, process, build, and maintain an emotional model of the user. Emotional understanding aims at incorporating contextual information about the user and the environment and producing appropriate responses. It is a difficult issue because human emotions arise from complex external contexts. Possible features of a system which displays emotional understanding might be editable preferences, such as avoidance or modification of interaction when the user is angry; applications might improve security or confidentiality as well as the overall interaction.

STIMULUS, MOTIVATION, AND INSPIRATION

A stimulus is something causing or regarded as causing a response; an agent, an action, or a condition that elicits or accelerates a physiological or psychological activity or response; something that incites or rouses to action; an incentive; in other words, it is a kind of motivation [36]. Motivation is the strongest stimulus. Motivation determines the direction and intensity of goal-directed behavior. Motives are activated from within, but are sometimes stimulated by external conditions [36].

Motivation is a temporal and dynamic state that should not be confused with personality or emotion. Motivation is having the desire and willingness to do something. Stimuli and motivation are derivatives of emotions or desires. Both of them mobilize mental and physical abilities to achieve a goal. Stimuli are developed as the result of learning and at the same time motivate learning. In the chapter FREE WILL AND ACTIONS we discussed the importance of rewards and punishment as stimuli and motivation for natural systems. Desires, as well as rewards and punishment, cannot yet serve as stimuli and motivation in contemporary artificial systems; contemporary artificial systems are so far not advanced enough to accept them.

One possible type of reward is the development of self-esteem (self-confidence). Self-esteem, or self-confidence, is determined by the ratio of the number of successful actions (SA) to the number of all actions, successful and failed (SA + FA). It is a non-linear function similar to the trust function:

C = [SA / (SA + FA)]^n,   n = (SA + FA) / SA,   0 <= C <= 1

If SA = 0, then n is taken as 0.1 (and hence C = 0).

Positive experience increases the level of self-confidence. The greater the level of self-confidence, the greater the willingness to accept risk (see WILLINGNESS TO ACCEPT RISK). The distinction between the calculation of trust (T) and self-confidence is that trust is related to specific types of actions, while self-confidence is related to all types of activities.
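A minimal sketch of this self-confidence measure follows; the n := 0.1 convention for SA = 0 is taken from the text above, while the counts used in the example are invented.

```python
# Sketch of the self-confidence measure C = [SA/(SA+FA)]^n with n = (SA+FA)/SA.
def self_confidence(successful, failed):
    total = successful + failed
    if total == 0:
        return 0.0                      # no experience yet, no confidence
    if successful == 0:
        n = 0.1                         # convention from the text: avoids division by zero
    else:
        n = total / successful
    return (successful / total) ** n    # 0 <= C <= 1

print(self_confidence(8, 2))    # mostly successful history -> high confidence
print(self_confidence(2, 8))    # mostly failures -> low confidence
print(self_confidence(0, 5))    # no successes -> 0.0
```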

Stimuli and motivation can be activated by evaluating the environment and the alternatives of action, and by setting priorities so that the more dangerous and the more efficient choices are dealt with first. The rules and criteria of choice can be generated as a result of learning (see APPENDIX 12).

Stimulation triggers actions from the outside environment. Motivation triggers actions from the outside or sometimes from the inside environment; in this case motivation is similar to inspiration, but may be not as strong. Inspiration is self-motivation from inside the agent. It is the act or power of exercising an elevating or stimulating influence upon the intellect or emotions; the result of such influence, which quickens or stimulates. It is a subconscious process that is generated by the agent's World Model. This process can be triggered by contradictions between pieces of application knowledge in the agent's Knowledge Base. Inspiration is a strong motive for initiative. Initiative is the power or ability to begin or to follow through energetically with a plan or task; enterprise and determination [36].

Altruism (concern for the welfare of others), moral, protective, and self-protective mechanisms are important stimuli of the natural system. Some of them are reasonable for the AIS. They are involved in the process of assigning task priorities by the level of motivation and resource mobilization. The value judgment of a system determines good and bad, reward and punishment, important and trivial, certain and improbable; all of them are AIS motivations. Some of these terms are discussed in the next chapter.

Algorithm (Altruism):
1. Recognition of the scene.
2. Evaluation of the objects or alternatives of action.
3. Finding the object or conditions dangerous to another object.
4. Development of the system's response: priorities of the actions.
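The following is only a schematic sketch of the altruism algorithm above. The scene representation, the danger scores, and the priority rule are assumptions made for the example, since the text does not specify them.

```python
# Schematic sketch of the altruism algorithm: find objects that endanger other
# objects in the scene and give protective responses the highest priority.
def altruism_responses(scene):
    # 1. Recognition of the scene: here the scene is assumed to be given as a
    #    list of (object, danger_to_others) pairs, with danger in [0, 1].
    # 2-3. Evaluate objects and find those dangerous to another object.
    dangerous = [(obj, danger) for obj, danger in scene if danger > 0.5]
    # 4. Develop response priorities: the most dangerous conditions first.
    dangerous.sort(key=lambda item: item[1], reverse=True)
    return [f"protect against {obj} (danger {danger:.1f})" for obj, danger in dangerous]

scene = [("falling shelf", 0.9), ("open door", 0.2), ("hot stove", 0.7)]
for action in altruism_responses(scene):
    print(action)
```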

WILLINGNESS TO ACCEPT RISK

Risk is the possibility of suffering harm or loss; danger. The level of willingness to accept risk is an important personal characteristic: courage. Courage is the state or quality of mind that enables one to face danger [36]. Fear is the opposite of courage.

Fear: a feeling of the presence or imminence of danger. It is the state or quality of mind that disables one from facing danger. It is a negative-value state variable produced when the sensory processing system recognizes, or the world model predicts, a bad or dangerous situation or event. Fear may be assigned to the attribute list of an entity such as an object, person, situation, event, or region of space. Fear tends to produce behavior designed to avoid the feared situation, event, or region, or to flee from the feared object or person.

According to Aristotle [35], courage is the virtue at the balance point between heedlessness and cowardice, which are both excessive forms of the same thing. Practical reason is an intellectual virtue by which one comes to distinguish what is good and bad, the right strategy, and so on.

In order to evaluate this personal ability it is important to understand how to measure risk. The standard risk-level measurement is the probability of occurrence of the event. Risk is acceptable in the case of a reasonable probability value calculated on a representative number of repetitions of the event. The most important case in real life is the evaluation of the risk level of only one occurrence. The level of information needed for decision-making can determine the level of risk. If information about an event is equal to zero, then risk is equal to 100%, or 1. If information about the event is equal to 100%, then risk is equal to zero. The level of information availability is equal to

L = I(A) / I(N),

where I(A) is available information and I(N) is needed information. Available information is determined for each variable related to the event separately. The risk level is

R = 1 - L

and the willingness to accept risk is

W = (1 - R) * P / LOS,

where P is profit and LOS is losses.

The amount of needed information can be extracted from the rules of the application in the Application Knowledge Base. The amount of available knowledge can be extracted from the Data Base information. The importance of each variable of information should be evaluated and presented by a weight coefficient; the calculation can then be done as a weighted sum of the information of the variables (APPENDIX 3). The AIS can evaluate its experience and make an adjustment to the decision-making process. In this case we are talking about subjective risk, which is defined as uncertainty based on a person's mental condition or state of mind. Personal willingness to accept subjective risk is determined as

Wp = k * W,   0 < k < 1
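A small sketch of the risk measures above, assuming that the information levels are given per variable together with importance weights (the weighting scheme of APPENDIX 3 is not reproduced here); the numbers are invented.

```python
# Sketch of the willingness-to-accept-risk calculation:
# L = I(A)/I(N) aggregated over weighted variables, R = 1 - L, W = (1 - R) * P / LOS.
def willingness(variables, profit, losses, k=1.0):
    """variables: list of (available_info, needed_info, weight) per variable."""
    total_weight = sum(w for _, _, w in variables)
    L = sum(w * (avail / needed) for avail, needed, w in variables) / total_weight
    R = 1.0 - L                      # risk: 1 with no information, 0 with full information
    W = (1.0 - R) * profit / losses  # objective willingness to accept the risk
    return k * W                     # subjective willingness Wp = k * W, 0 < k <= 1

# Two variables, the first twice as important as the second (hypothetical values).
vars_info = [(6.0, 10.0, 2.0), (9.0, 10.0, 1.0)]
print(round(willingness(vars_info, profit=500.0, losses=200.0, k=0.8), 2))
```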

The type of professional activity sometimes depends on the ability to accept risk. Entrepreneurship is such a type of activity. Genes are key to entrepreneurial activity: entrepreneurs are largely born rather than made, research suggests. A UK-US study (BBC News) has found our genes are crucial in determining whether we are entrepreneurial and likely to become self-employed. The Twin Research Unit at St Thomas' Hospital, London, the Tanaka School of Business at Imperial College, London, and the US Case Western Reserve University carried out the study. It found nearly half of an individual's propensity to become self-employed is due to genetic factors. The researchers say genetics is likely to determine whether a person has traits vital to being a successful entrepreneur, such as being sociable and extroverted. And, contrary to previous beliefs, family environment and upbringing have little influence on whether a person becomes self-employed or not. The other factors which did play a significant role

were random life events, such as being made redundant, winning a large sum of money, or a chance meeting. John Cridland, CBI Deputy Director-General, said: "If half of a person's propensity to become self-employed is due to genetic factors then half is caused by other influences and it is vital that the proper education and entrepreneurial support schemes are in place to enable them to blossom."

Willingness to accept any risk depends on the personality of a system and its level of self-confidence (see STIMULUS, MOTIVATION, AND INSPIRATION). If information is not available, then willingness to accept risk can be defined based on the level of self-confidence:

W = C, where C is the level of self-confidence. This equation represents the agent's personality rather than a decision that is based on information.

SOCIAL BEHAVIOR

The Man-machine Society

The 21st century is the man-machine society. People have to learn how to live in the new environment. Social skills of AI systems are based on a strong ability of communication between members of the natural and artificial social groups and between members inside each group. Simple group activities can be executed even without direct communication between the group's members; the common goal guides the group actions. A soccer game in human and artificial groups is based on the common goal, and this goal can be reached without direct communication between the group's members (RoboCup) by evaluation of the environment conditions. A group can be homogeneous or mixed, comprised of human beings and

the AIS. Interaction with human individuals and their groups increases the importance of the AIS personality.

There is a big area of robot application that concentrates on relationships or contacts with humans. These systems should have their own emotions and should be able to understand human emotions, as described in the section above (see EMOTIONS). People prefer to have relationships with cats and dogs because these animals have recognizable emotions. People need somebody who can respond to their actions on an emotional level. If you look at the picture (Fig. II-25), you can realize that PYGMALION's problem has become reality. A humanoid, also called an "android" or "anthropomorphic robot", is a robot that looks like a human. With the behavior-based approaches to robotics, humanoids are built to replace humans in performing tasks that are not suitable for humans (e.g., too dangerous, too boring, too expensive) but that humans are good at because of our genetic build (e.g., our vision, our dexterity, our mobility). In replacing humans, "automation with emotions" is key. In other words, the less human interference needed, the more effective robots are. As researchers in the Humanoid Robotics Group at the Massachusetts Institute of Technology have revealed, Mr. Warwick believes that in ten to twenty years humanoid robots will complicate the moral dilemma further. "In this timeframe, robots in the home will not be an equal, but they will be given more of a status." He believes that in 20 years' time robots will have an intellect on par with humans, which could reverse the issue into whether or not robots will be willing to let humans into their homes! A detailed description of the Behavior Generator structure (Fig. II-1) and definitions of basic emotions with their quantitative evaluation (see

EMOTIONS) are presented in [28]. Most of them are emotions that are based on sensation. The Behavior Generator presents the behavior of an artificial system.

One type of social activity is compassion: deep awareness of the suffering of another coupled with the wish to relieve it. Unlike natural systems, artificial ones can exercise compassion without the wish to relieve pain. Compassion is a result of the involvement of two or more systems in some kind of relationship. Both of them must have the same understanding of an event. The algorithm of compassion can be presented as two synchronized processes.

Fig. II-25. The Robot (http://www.androidworld.com/prod19.htm)

The first system is under the influence of the event:
Event -> discrimination (good or bad) -> communication about the feeling.

The second system receives information about the event connected to the first system:
Information (from the first system) -> demonstration of the feeling.

An event can be internal (a broken part or a low level of power supply) or external. One of the systems can be a human being. The response to help is the highest level of compassion. The sensitivity level of the informational channel and the AI's ability to react determine the level of compassion.

Reasonable behavior can be changed if there are some problems in the system. Fig. II-27 presents a neuron net (three-layer perceptron) that can classify objects coded as 11, 10, 01.

Fig. II-26. Honda's intelligent humanoid robot

A system with the ability to make this classification can be designed as a rule-based system with the set of rules:
1. If the code of an object is 00, then place this object in the location Output1.
2. If the code of an object is 01, then place this object in the location Output2.

3. If the code of an object is 10, then place this object in the location Output3.

This example demonstrates the relationship between the Neuron Net technique and the Rule Base technique. For any neuron i, the output is determined as

O(i) = 1 if SUM(j) w(j) * I(j) >= T(i), otherwise O(i) = 0,

where the sum is taken over the inputs of neuron i (I - Input, N - Neuron, W(N) - weight of the input from the corresponding input or neuron, T - Threshold, O - Output).

Fig. II-27. Neuron Net application: a three-layer perceptron with inputs I1 and I2, neurons N1-N8, and Outputs 1, 2, 3.

Table of Outputs (the last value in each row is the threshold T; for N1-N4 the preceding values are the weights from the inputs I1 and I2, and for N5-N8 they are the weights of the connections from the preceding neurons)

NEURON   Weights               T
N1        1.0   0.0            0.5
N2        0.0   1.0            0.5
N3        0.5   0.5            0.6
N4        0.0   1.0            0.0
N5        1.0   0.0            0.0
N6       -1.0   0.0   1.0      0.1
N7        1.0   0.0   0.0      0.0
N8       -1.0   1.0   0.0      0.1

Classification:

1. If I1 = 1 and I2 = 0, then O(1) = 1; O(2) and O(3) don't have a signal, because T(3) = 0.6 and W(1) = 0.5. The result is O(6) = 1.
2. If I1 = 0 and I2 = 1, then O(2) = 1; O(1) and O(3) don't have a signal, because T(3) = 0.6 and W(2) = 0.5. The result is O(8) = 1.
3. If I1 = 1 and I2 = 1, then O(1) = 1, O(2) = 1, and O(3) = 1. The result is O(7) = 1.
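As an illustration, here is a minimal Python sketch of the thresholded perceptron just described. The wiring of the second-layer neurons (which first-layer outputs feed them and with what weights) is an assumption chosen so that the three classification cases above are reproduced; it is not stated explicitly in the figure or table.

```python
# Sketch of a thresholded (step-activation) perceptron that reproduces the three
# classification cases in the text. The second-layer wiring below is assumed.

def fires(inputs, weights, threshold):
    """A neuron fires (output 1) when its weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def classify(i1, i2):
    # First layer: N1 detects I1, N2 detects I2, N3 detects both together.
    n1 = fires([i1, i2], [1.0, 0.0], 0.5)
    n2 = fires([i1, i2], [0.0, 1.0], 0.5)
    n3 = fires([i1, i2], [0.5, 0.5], 0.6)
    # Second layer (assumed wiring): each output neuron keys on one code.
    o6 = fires([n1, n2, n3], [1.0, 0.0, -1.0], 0.1)   # code 10 -> signal on O(6)
    o8 = fires([n1, n2, n3], [0.0, 1.0, -1.0], 0.1)   # code 01 -> signal on O(8)
    o7 = fires([n1, n2, n3], [0.0, 0.0, 1.0], 0.5)    # code 11 -> signal on O(7)
    return {"O(6)": o6, "O(7)": o7, "O(8)": o8}

for code in [(1, 0), (0, 1), (1, 1)]:
    print(code, classify(*code))
# (1, 0) -> O(6) = 1;  (0, 1) -> O(8) = 1;  (1, 1) -> O(7) = 1
```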

This procedure demonstrates one of the possible methods to store data in a neuron net.

Fairness

An Artificial Intelligent System of the 21st century will be involved in sophisticated relationships, not just as a direct actor but also as an adviser in business, politics, development of new products, military affairs, etc. Sometimes the interests of the different agents or groups who are involved in some kind of relationship are contradictory. In this case fairness becomes the base of the relationships' development. Unfortunately, a definition of the term does not exist. It may be because in reality pure fairness (impartiality) does not exist either: each party has specific goals and criteria that are not compatible with the goals and criteria of the other parties. Compromise is the closest term to fairness. Compromise is a settlement of differences in which each side makes concessions, or the result of such a settlement. Usually it is based on a balance of power: a state of equilibrium or parity characterized by cancellation of all forces by equal opposing forces. In terms of the Theory of Control Systems it is multi-system control of equilibrium. We will use the term fairness for an agreement about rules of behavior or relationships between different groups. It is reasonable to speak about the acceptability of the other party's behavior by the first one instead of fairness. Fairness is the criterion or evaluation of a multi-group agreement (compromise). Fairness can be defined as a temporary equilibrium in relationships between different groups or individuals. It is a criterion in Value Judgment. Fairness has a very strong psychological and emotional connotation that creates additional difficulties for compromise. The temporary character of many compromises is determined by the deviation of power of the different groups. Do you know international agreements that were not broken? The development of a multiparty compromise is a very complex problem. It can be calculated

as the minimal unhappiness [UH] in the group:

C = min [UH]

Developing the fair deal includes 5 main steps:
1. Equalization of the parties' status. Discrimination cannot be the base of a fair deal.
2. Development of the common goal.
3. Development of the common metric of the common goal.
4. Adjustment of the parties' goals to the common goal.
5. Generation of the compromise (minimal unhappiness calculation).

If it is impossible to develop the fair deal then, as we know, the alternative is war. In the mixed society of the 21st century the problem of fairness in relationships between intelligent systems, natural and artificial as well, should be taken into consideration. Development of an independent humanoid group with a special identity can pose a threat to human society and to other independent humanoid groups in the name of their identity (Greeks and Turks, Arabs and Israel) [43]. It should be taken into consideration from the very beginning in order to prevent this development. The example above shows that even today, as we know, there are problems of fair development in relationships between human groups. But now a new problem arises: the fuzzy problem of ownership of an intelligent system by another intelligent system. There is another question: who is the leader and the boss in a kindergarten with an artificial babysitter, or in a group with a super-intelligent leader? Even today in the aircraft, spaceship, machine-tool, and other industries we delegate some control functions to artificial, computerized control systems without the ability of a human being to override them!

The Fair Deal Development

Note: This example is only a simple illustration of the fair deal calculation.

There are two main parties that participate in a public company's wealth distribution: investors and workers. The two have different status. Investors lend money; workers sell (?) their abilities and energy (labor). Labor is the use of physical or mental energy and skills to do something. In reality skills do not change their owner: workers do not sell but lend their ability to work, their labor, for use (the company, in effect, borrows it). This equalizes the status of investment and labor. Some workers are stockholders of the same company. A borrower owns only the result of the labor and of the loan application. As for workers who at the same time have the status of owners, do they sell their labor to themselves? The difference between the economic understandings of the terms (different status) and the fuzzy understanding of the term creates problems in different areas of human relationships. Wealth distribution in public corporations is one of many examples of such problems. This problem is the cornerstone condition of social fairness (impartiality). The parties' goals are contradictory: a worker's goal is to get the highest return on a unit of labor, while an investor's goal is to get the highest return on a unit of investment. The fair rules can be developed independently from the pressure of money power and the power of the union. These rules have to reflect the goal of society rather than the goal of special powerful groups. In accordance with The Declaration of Independence, the goal of the American society is Happiness, which, as was shown, is in some way based on the fair deal, and the fair deal is based on fair wealth distribution in accordance with the level of participation in wealth development. As we mentioned before, there are two groups who participate in the process of wealth development: investors and workers. Investors invest money; workers invest labor. The value of

the labor investment, like any financial, political, and social factor, can be measured by money value on the money scale [40]. Measurement of investment and labor on the common money scale (we have equalized their status) creates the common metric. The value of the workers' investment is equal to:

I(W) = SUM(i) s(i) * N(i),   (1)

where s(i) is the yearly salary (the price of labor, with insurance) in group i and N(i) is the number of workers in group i.

Suppose that this value is constant for each of the n years considered. The whole investment is equal to:

I = I(M) + n * I(W),   (2)

where I(M) is the money invested by investors.

E = yearly spending on equipment, materials, transportation, etc., assumed constant for each of the n years.   (3)

Suppose I(M) is invested for n years with a fixed yearly return of m%; then the yearly return on investment is equal to

YR = I(M)/n + [I(M)/n] * m/100,

or

YR = [I(M)/n] * (1 + 0.01m).   (4)

If the agreement on the money investment does not include a fixed return (m%), then

YR = I(M)/n.   (5)

From (1) and (3), total spending for the n years is

S = n * [E + I(W) + YR].   (6)

Suppose that the value of the full amount of money received each year after realization of the company's product is equal to A. The salary scale can be designed for each level of workers, CEO included. If the scale is presented, for example, by an arithmetic series going up, and the number of workers in each group follows the same mathematical rule going down, then the salary cost for each group is equal to:

s(1)*N(1), [s(1) + k]*[N(1) - k],

..., [s(1) + k*p]*[N(1) - k*p], where p is the group level. Salary and return on investment are determined by the financial situation on the labor and financial markets; this is a limitation set by external conditions.

The extra value earned by the company over the n years is

D = n * A - S   (7)

and depends on the choice of (4) or (5).

Distribution of Earnings (simplified example)

Each invested dollar, whether investment or labor, is eligible to get an extra value equal to

d = D / I.   (8)

From (2) and (8), investors are eligible for

I(E) = [I(M) + n * I(W)] * d,   (9)

or

I(E) = I(M) * d * (1 + n).   (10)

The yearly value is equal to:

I(E)/n = I(M) * d * (1 + n) / n.   (11)

If the agreement on the money investment does not include a fixed return (m%) (see (5)), then m% is not granted and the risk of the investment increases. If spending is calculated without a return on investment and labor, then

S = n * E   (12)

and the risk is higher. Workers are eligible for the return (see (9) or (10)):

W(E) = A - I(E).   (13)

Yes, investment is a risky business, but not very much so if you invest rather than speculate. If a fixed return on the money investment is not included in the agreement, then the investment becomes more risky but can give a higher return. Working people run the risk of getting lower compensation than was supposed. Condition (13) creates an equal level of risk for labor and investors; it is the highest level of fairness. If a company cannot fulfill its obligation to pay salary, then it lays off workers. If a company cannot fulfill its obligation to an investor, then the investor loses money and the company declares bankruptcy. So, it is an almost equally risky activity for investors and workers. The 21st century is the century of extensive automation. The workers who lose their jobs in the shrinking labor market can be compensated by participation in wealth distribution [39, 41]. Each person who lost a job because technology advanced is eligible to get part of the wealth of

the company he/she worked for. The value of the compensation can be equal to the poverty level or the minimal salary in this kind of business, or calculated by a special formula. This compensation is for a lifetime. Unemployment insurance will pay an additional part of the money in accordance with the existing law on unemployment benefits calculation. Unemployment benefits should be stopped if the person gets a new job. This policy will let the whole society benefit from the application of advanced technology. Each worker receives a basic amount of monthly payment; the final yearly salary can be calculated by the end of the year. This approach increases fairness in the Global Market and decreases social tension. If a company decides to move business offshore (outsourcing) to use cheaper labor, then the financial result of the company's activities increases and the compensation of the workers who lost their jobs will be increased.

Fair Deal Algorithm:
1. Equalization of the parties' status.
2. Define the common goal of the participants (wealth distribution).
3. Adjustment of the parties' goals to the common goal.
4. Development of the common metric of the common goal (money scale).
5. Choose the criteria (the amount received is proportional to the investment).
6. Calculate the values in the table (amount of investment and labor).
7. Calculate the unit of action on the common scale (d).
8. Calculate the fair deal.
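The following sketch illustrates the distribution calculation for one hypothetical year. All the input figures, the simplification to a single worker group and n = 1, and the split of the extra value simply in proportion to each party's investment (a simplified reading of equations (8)-(10) and (13)) are assumptions made for the example.

```python
# Sketch of the fair-deal wealth distribution for a single year (n = 1) and one
# worker group; the figures are hypothetical.
def fair_deal(invested_money, salaries, yearly_spending, yearly_revenue, fixed_return_pct=None):
    labor_investment = sum(salaries)                       # (1): I(W) = sum s(i)*N(i)
    total_investment = invested_money + labor_investment   # (2): I = I(M) + n*I(W), n = 1
    if fixed_return_pct is not None:
        yearly_return = invested_money * (1 + 0.01 * fixed_return_pct)   # (4)
    else:
        yearly_return = invested_money                                    # (5)
    spending = yearly_spending + labor_investment + yearly_return         # (6), n = 1
    extra_value = yearly_revenue - spending                               # (7): D = A - S
    d = extra_value / total_investment                                    # (8): per invested dollar
    investors_extra = invested_money * d                                  # investors' share of D
    workers_extra = extra_value - investors_extra                         # workers' share of D
    return d, investors_extra, workers_extra

d, inv, work = fair_deal(invested_money=1_000_000,
                         salaries=[50_000] * 10,
                         yearly_spending=200_000,
                         yearly_revenue=2_000_000,
                         fixed_return_pct=5)
print(round(d, 3), round(inv), round(work))
```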

Independent Behavior

A human being demonstrates a great level of dependency on other human beings' behavior. In some cases this creates problems in the decision-making process.

Artificial Intelligent Systems can demonstrate a greater level of independency as well, and their decisions can be more reasonable and more valuable. In computer simulations, Couzin (http://www.msnbc.msn.com/id/6934951/ "Simple science governs herd mentality") and his colleagues programmed virtual animals with the instinct to stay near others, an important survival trait in many species. The researchers then endowed some members of the flock with a preferred direction, be it toward a food source or a new nesting site. They then determined how close the group would come to arriving at this goal. Accuracy increased as more of the members knew where to go. But at a certain point, adding more informed individuals did not increase the accuracy by very much. To give an example, a group of 10 gets about the same advantage from having five leaders as from having six. The minimum percentage of informed individuals needed to achieve a certain level of accuracy depended on the size of the group: if 10 virtual buffaloes need 50 percent of the herd to know where the watering hole is, a group of 200 can get by with only 5 percent. In nature, it is likely that the number of leaders is kept as low as possible. Couzin thinks there may be a similar sort of mechanism in human crowds. "We walk along a busy street more or less on autopilot," he said. Perhaps we are subconsciously reconciling two simple commands: get to work on time and avoid stepping on anyone's shoes.

The level of a recipient's trust in the leader in a specific area of activities can be calculated as:

T = [P/(P+N)]^n,   n = (P+N)/P,   0 <= T <= 1,   R >= (P+N),

where P is the number of positive occurrences in specific events of a leader's activity, N is the number of negative occurrences in specific events of a leader's activity, and R is the representative number of events of the leader's activity. The same formula can be used to calculate a level of trust in the leader in a general area of activities; in this case, instead of the number of specific events, the number of events in all areas of the leader's activity should be used.

The level of self-confidence is

C = [SA/(SA + FA)]^n,   n = (SA + FA)/SA,

where SA is the number of successful actions of all types or in a specific area of activities, and FA is the number of failed actions of all types or in a specific area of activities. The degree of readiness to follow is determined by experience: the lower the level of self-confidence and the greater the level of trust, the greater the readiness to follow. The level of a recipient's dependency on the leader agent can be calculated as the ratio of the agent's trust in the leader (T) to the agent's self-confidence (C). So the readiness to follow is

RF = T/C.

If SA = P and FA = N, or T = C, then RF = 1. The value RF < 1 triggers independent behavior; the value RF > 1 triggers dependency. An agent can choose its adviser by evaluating the value of the coefficient of trust; the greater the value, the better the previous experience.
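A small sketch of the trust and readiness-to-follow measures above; the event counts are invented, and the handling of P = 0 or SA = 0 follows the same convention assumed in the self-confidence sketch earlier.

```python
# Sketch of trust T = [P/(P+N)]^n, self-confidence C, and readiness to follow RF = T/C.
def experience_score(positive, negative):
    total = positive + negative
    if total == 0:
        return 0.0
    n = 0.1 if positive == 0 else total / positive   # same convention as for C
    return (positive / total) ** n

def readiness_to_follow(leader_pos, leader_neg, own_success, own_fail):
    T = experience_score(leader_pos, leader_neg)     # trust in the leader
    C = experience_score(own_success, own_fail)      # the agent's self-confidence
    return T / C if C > 0 else float("inf")          # RF > 1 -> dependency, RF < 1 -> independence

rf = readiness_to_follow(leader_pos=9, leader_neg=1, own_success=5, own_fail=5)
print(round(rf, 2))   # > 1: the agent tends to follow this leader
```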

PSYCHOLOGICAL MALFUNCTIONS, DISORDERS AND CORRECTION

Different malfunctions of the artificial mind, such as broken connections, sensor malfunctions, and parameter deterioration (see the previous chapter, Tools of Reliability, and so on), can be a cause of information distortion. Adding wrong information or losing needed information can distort the World Model. The wrong World Model

generates inadequate responses and creates psychological disorders. Problem diagnostics can be done by comparison of the incorrect World Model with a correct one for the same system type. It is possible to use human brain diagnostic technology (see APPENDIX 7). Dangerous and antisocial behavior of an Artificial Intelligent System can be corrected in a more efficient way than in the case of a human one. A psychologist or neurobiologist working with a human being as a patient tries to change an existing negative setting to a positive one, but for the time being he/she does not have detailed information about the content of knowledge in the human brain. In the case of an artificial system we can get this information and even correct it. Working with a human being or some types of animals, a psychologist presents input information to their control system and tries to reset some programmable parameters of the system. Moral and law exist to do the same, but in a more powerful way.

MORAL AND LAW

Moral: arising from conscience or the sense of right and wrong; having psychological rather than physical or tangible effects; based on strong likelihood or firm conviction rather than on the actual evidence [36]. Moral is the set of rules accepted by the members of society that are not covered by the law (see FREE WILL AND ACTIONS and APPENDIX 12). The frontal lobes of the human brain are involved in the ability to recognize future consequences resulting from current actions, to choose between good and bad actions (or better and best), and to override and suppress unacceptable social responses.

Moral makes sense in a group. Even in a homogeneous group there is some range of deviation of moral values through the cross-section of the group. A fully isolated intelligent

system (in a world with no other occupant) does not understand moral values. Artificial Intelligent Systems interact with human beings, and in some cases they will be incorporated into human society. It is very important to expect that these systems will be able to understand and exercise the moral law of the human society. David Hume rejects the idea that there is anything objective about morality, morality itself arising from our own sentiments about actions of a certain kind. The difficulty lies in the subjective nature of morals: what is ruled out in one culture is fashionable in another. Moral estimations are expressions of the sentiments of the observer; the moral law is within each person. Kant's moral philosophy bases morality on reason and thus reserves the moral domain to creatures that are rational [34]. Artificial Intelligent Systems are rational, so they can act in accordance with the moral rules. The existence of hybrid intelligent systems (combinations of natural and artificial elements) advances moral problems up the list of problems.

Moral rules are a result of learning. The system learns moral patterns and criteria through the educational process (an intentional process). This is the way to impose the moral rules of behavior in human and artificial systems between the group's members. Each new group member is obliged to accept the moral rules of behavior. There are two types of moral: the moral of the group (GM) and the personal moral (PM). The personal moral arises as a relationship to the other members of society. The difference between the two of them is the personal moral deviation (PMD):

PMD = GM - PM

The deviation can be defined for a person, for specific moral rules, and through the cross-section of the society as the least square of deviations.
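A small sketch of the personal moral deviation across a group follows. The rule scores are hypothetical, and taking the group moral GM as the mean personal moral for each rule is only one possible reading of the text.

```python
# Sketch: personal moral deviation PMD = GM - PM, aggregated across the group
# as a least-squares (root-mean-square) deviation per member.
def moral_deviations(personal_morals):
    """personal_morals: dict member -> list of scores (one score per moral rule)."""
    n_rules = len(next(iter(personal_morals.values())))
    # Group moral per rule, taken here as the group average (assumption).
    gm = [sum(pm[i] for pm in personal_morals.values()) / len(personal_morals)
          for i in range(n_rules)]
    return {member: (sum((gm[i] - pm[i]) ** 2 for i in range(n_rules)) / n_rules) ** 0.5
            for member, pm in personal_morals.items()}

group = {"agent A": [0.9, 0.8, 0.7], "agent B": [0.6, 0.9, 0.5], "agent C": [0.8, 0.7, 0.9]}
print(moral_deviations(group))   # larger value -> larger deviation from the group moral
```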

The law is the body of rules and principles governing the affairs of a community and enforced by a political authority; a legal system [36]. The law is a social phenomenon: what is a crime under one set of laws is an act of heroism a few miles down the road. An Artificial Intelligent System must obey the human law. The law is a part of the characteristics of the environment and should be presented in the World Model similarly to the process of imposing the moral rules.

The subconscious can create very dangerous problems in natural and artificial intelligent systems. It is not under full control of conscious processes and has full access to almost all, sometimes twisted, knowledge and to almost all control systems. It can generate unpredictable, dangerous behavior. It is critical to develop a system that can control dangerous subconscious processes. The existing system of moral rules and law should be expanded to incorporate rules on the rights and responsibilities of Artificial and Hybrid Intelligent Systems, such as the rights and limitations of ownership. Contemporary Western moral and law accept ownership of an intelligent system by another intelligent system or group (animals in agriculture, in the Zoo). Can a human being be the owner of an android? It is an important legal problem as well as a psychological and moral one. There is one more question: who is responsible for a problem that is caused by an intellectual malfunction? An insane human being is not responsible for his/her actions.

ART APPREHENSIONS

Art apprehension is one of the personality characteristics. It is a result of communication between an artist and a recipient. A common language is a precondition of any communication; in this case we are talking about a specific symbolic language. Knowing and mastering this language is an important condition of art apprehension. Another precondition is the sensitivity of the visual, hearing, and other sensor systems. An

artificial system may have a better sensor system than a natural one, can better evaluate an object of art, can collect more information from the same art object, and can present a judgment that is inapprehensible by a human being. Artificial systems as art developers (composers, writers, game and industrial designers, architects, interior designers, etc.) can work at the personal level as well as the community level. In a social relationship this ability permits sharing the enjoyment of art with human counterparts. The human brain's cerebellum takes part in processing language, music, and other sensory temporal stimuli. Art is: 1. Human efforts to imitate, supplement, alter, or counteract the work of nature. 3. The conscious production or arrangement of sounds, colors, forms, movements, or other elements in a manner that affects the sense of beauty [36].

The conscious production implies conscious apprehension. The sensor system of the AIS collects information; the system of perception and conceiving analyses it and compares it with patterns stored in the memory. Evaluation of the information is based on the criteria of beauty. The system learns patterns and criteria through

1. experience (an unintentional, subconscious process based on repetition and conceptualization);
2. the educational process (an intentional process).

Some criteria (rules) are culture-oriented, some are universal. For example, multimedia intentionally programs human stereotypes of beauty. Beauty is [36] a delightful quality associated with harmony of form or color, excellence of craftsmanship, truthfulness, originality, or another property. The rationalist school takes aesthetics to include standards of taste and judgment permitting assessment of the good, the

bad, and the prosaic. On the Humean account, aesthetics is the part of empirical psychology that identifies the features of the external world generally productive of agreeable feelings. Beauty is a product of the perceptual and emotional responses to an object, where the agreeable feelings are most reliably associated with judgments of aesthetic value. A professional agent evaluates objects of art intentionally, through cognition and consciousness. A dilettante does not think about rules; he/she/it gets an impression, an unintentional evaluation. It is a subconscious process.

Pythagoras discovered the musical scale and musical harmonies (concordant and discordant). The shapes and colors of natural objects are usually accepted as the good patterns; sharp shapes, discordant sounds, and unmatched colors are all bad patterns. Analysis of art objects can generate good or bad feelings. Any designer knows that these criteria are partly developed by long experience, sometimes through an evolutionary process. A picture with strongly asymmetrical location of the objects creates an impression of imbalance; in accordance with everyday experience it generates a bad feeling by association. This feeling rejects acceptance of this piece of art unless it represents something that compensates for the bad feeling. Many famous physicists, even before any experimental proof, accepted Einstein's theory as the right theory because it was beautiful. Douglas Bagnall and David Hall from the New Zealand Archive created a Neuron Net with the ability to learn criteria of art evaluation from experts; this system can apply the learned knowledge to develop movies (see also CREATIVITY).

Some unconditional reflexes (see also REFLEXES) can be controlled by the Main Control system without involvement of intelligent abilities. In this case a system uses

a hard-coded logical function. Psychologists know that there are sounds (a singer's voice) that trigger a woman's sexual psychical behavior. The Main Control system can be connected to the inner sensor system; in this case the system generates the reflection of the input information as a sensible reaction. It is similar to resonance. This type of reaction can be seen in art apprehension. In some cases a system develops the reverse procedure: to deliver the type of art that fits the

mood of the recipient. As was described before, the Walt Disney Co. [New Scientist, 24.01.2006] has created a media player that selects songs based on its owner's latest mood. The device has wrist sensors that measure body temperature, perspiration, and pulse rate. It uses these measurements to build a profile of what music or video the owner would prefer played when he/she is hot, cold, dry, or sweaty, and when the pulse is racing or slow. The device then comes up with suggestions to fit each profile, either using songs or videos in its library or downloading something new that should be suitable. If the owner rejects the player's selection, it learns and refines the profile. So, over time the player should get better at matching bodily measurements with the owner's moods. This type of relationship can be seen between two artificial systems; it resembles compassion and emotions.

ARTIFICIAL LIFE

Artificial Life as the Model of the Natural One

All definitions in this book define the artificial life terms, but it is up to the reader to expand them to the natural life terms. Modeling of real processes is a powerful method of research. Any model is based on an existing detailed description of the modeled process. Unfortunately, many important processes of natural life (creativity, intuition, etc.) do not have an adequate definition and description. An artificial process model is based on a detailed description of the artificial process.

In some cases the description is not the same as for the natural life process. This creates a limitation on using models of artificial life processes as models of natural life processes. But in some cases, to some extent, it is possible to use artificial life as a model of the natural one.

Artificial Life

New advances in the development of Artificial Intelligent Systems move us close to new phenomena such as Artificial Life. Artificial Life is the chain of events of development, evolution, existence, and psychology of Artificial Intelligent Systems. Life is the property or quality that distinguishes living organisms from dead organisms and inanimate matter, manifested in functions such as metabolism, growth, reproduction, and response to stimuli or adaptation to the environment originating from within the organism [36].

An Artificial Intelligence researcher (Josh Bongard) at the University of Zurich shows the possibility of the existence of artificial life (virtual life) with virtual intelligence [31]. This type of life does not have natural metabolism, but perhaps it is possible to speak about virtual metabolism that supports virtual growth. Metabolism is the complex of physical and chemical processes occurring within a living cell or organism that are necessary for the maintenance of life; in metabolism some substances are broken down to yield energy for vital processes [36]. Professor Chris Melhuish of the Intelligent Autonomous Systems Laboratory (University of the West of England) and his team created EcoBot I, a sugar-powered autonomous robot (Fig. II-29, 30). It is a 960 g robot, powered by microbial fuel cells (MFCs), that performs a photo-tactic (light-seeking) behavior. This robot does not use any other form of power source such as batteries or solar panels. It is 22 cm in diameter and 7.5 cm high. Its transformation of sugar into energy is an artificial form of metabolism (chris.melhuish@uwe.ac.uk).

By definition, Homo: family Hominidae, order Primates, class Mammalia, characterized by erect posture and an opposable thumb, especially a member of the only extant species, Homo sapiens, distinguished by a highly developed brain, the capacity for abstract reasoning, and the ability to communicate by means of organized speech and to record information in a variety of symbolic systems [36]. Preliterate societies had limited resources for recording what was of value to them [34]. The same definition (Homo sapiens) can be used to characterize advanced Artificial Intelligent Systems, the humanoids. Soul is the animating and vital principle in human beings, credited with the faculties of thought, action, and emotion and often conceived as an immaterial entity [36]. This definition gives another connection between life and intelligence. It will be shown that biological and intellectual adaptations are the subjects of two different parts of a definition of intelligence [31]. The connection between intelligence and life was elsewhere presented in [19] by Dr. A. Meystel.

Virtual creatures, with muscles, senses, and primitive nervous systems, have been "grown" from artificial embryos in a computer simulation [31]. The multi-celled organisms could be the first step toward using artificial evolution to create intelligent life from scratch. Josh Bongard ran the simulation until each cell had grown into a creature of up to 50 cells. He then tested each one to see how well it pushed a simulated box (Fig. II-28). By setting one creature against another, Bongard was able to find which cells grew into the most effective "pushing" creatures. He then took the genomes that led to the most successful creatures and mixed them to produce new genomes for his virtual embryos, which he grew and tested.

Bongard, who reported the work at the International Workshop on Biologically Inspired Robotics at HP Labs, Bristol, now has a bunch of creatures that excel at box-pushing. The Intelligent Autonomous Systems Laboratory (Dr. Ian Kelly, British University) has created the robot SlugBot (Fig. II-31), which can catch slugs and use them as a source of energy (artificial metabolism). The American company UGOBE (Fig. II-32) created artificial intellectual life forms similar to natural life. The dinosaur Pleo (created by this company) can be tired, excited, afraid, happy, etc., and can make smooth movements of any part of its body. The concept relates not only to living creatures but to the inorganic world as well, and the objects of application include inanimate and non-living creatures.

Fig. II-28.

Fig. II-29. EcoBot I fully assembled (chris.melhuish@uwe.ac.uk): two stacks of four MFCs connected in series, bank of capacitors (accumulator), circular piece of styrene material, photo-detecting diodes, caster wheels, high-efficiency high-torque escap motors, electronic control circuit.

Fig. II-30. (chris.melhuish@uwe.ac.uk)

Fig. II-31. SlugBot

Fig. II-32. Pleo (http://www.pcmag.com/article2/0,1895,1918705,00.asp)

PRINCIPLES OF THE ARTIFICIAL BRAIN DESIGN

Analysis of brain activities shows that it is reasonable to design the artificial brain based on several principles:

1. The structure should be designed as a multilevel and multiresolutional system.
2. It can be designed as a distributed control system with multiple locations of the system's parts.
3. Each brain function should be assigned to a separate module.
4. Each module's technology should be chosen not as part of one uniform technology (for example, the Neuron Net or the Genetic Algorithm), but as the technology that best fits the specific application.
5. Execution of similar functions, or of similar steps of functions, can be done in the same module. It is important to have substitution of failed modules (see Robustness); this replacement is possible because all intellectual functions are based on two main procedures of data manipulation: learning and reasoning. Some systems, such as the control systems of the symmetrical (left and right) parts of a body, execute similar procedures.
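As an illustration of principles 3-5 only, here is a minimal sketch of a module registry in which each function is assigned to its own module and a failed module can be substituted by another module that implements the same procedure. The module names and procedures are invented for the example.

```python
# Minimal sketch of principles 3-5: one module per brain function, with
# substitution of a failed module by another module offering the same procedure.
class Module:
    def __init__(self, name, procedures, healthy=True):
        self.name, self.procedures, self.healthy = name, procedures, healthy

class ArtificialBrain:
    def __init__(self, modules):
        self.modules = modules

    def run(self, procedure):
        # Prefer any healthy module that implements the requested procedure;
        # a broken module is transparently substituted (robustness).
        for m in self.modules:
            if m.healthy and procedure in m.procedures:
                return f"{procedure} executed by {m.name}"
        return f"no healthy module for {procedure}"

brain = ArtificialBrain([
    Module("left motor control", {"learning", "reasoning"}, healthy=False),
    Module("right motor control", {"learning", "reasoning"}),   # symmetrical backup
    Module("value judgment (VJ)", {"reasoning"}),
])
print(brain.run("learning"))    # the failed left module is replaced by the right one
```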

EVOLUTION AND INTELLIGENCE

The term evolution has two meanings. The first one is a method of problem solving: the Genetic Algorithm (see APPENDIX 6). The second one is the process of a system's adaptation to new conditions. The ability to adapt is one of the important intellectual abilities of the AIS (see AUTONOMOUS). There are two types of adaptation:
1. short-term, time-spatial adaptation;
2. long-term, multi-generational adaptation. The last one is referred to as evolution (Alex Meystel).

The artificial gene in a Genetic Algorithm represents the smallest unit of information. Like the natural set of genes (a chromosome), it controls the physical development and behavior and determines a particular characteristic of the system.

Josh Bongard [31] took the genomes that led to the most successful creatures and mixed them to produce new genomes for his virtual embryos, which he grew and tested. Bongard, who reported the work at the International Workshop on Biologically Inspired Robotics at HP Labs, Bristol, now has a bunch of creatures that excel at box-pushing (see Fig. II-28). "Evolution seems to figure out that it's useful to organize the growth process," says Rolf Pfeifer, who works with Bongard. "You get repeated structures, and they discover things like increasing body mass helps to push the block."

Evolution of the AIS is the process of a system changing (adaptation) under external influence, concerned with the development of the physical universe from unorganized matter to a higher level of organization, with a stable change of the system's behavior that can be observed in the next generation.

There are two types of external influence in an artificial environment:
1. intentional influence by another Intelligent System (a human being or another AIS) to develop a new characteristic or behavior;
2. unintentional influence by the external environment that adapts the system to a new external condition, to make the system more efficient (for example, the automatic adjustment of a computer's pull-down menu in accordance with the frequency of a specific command's use).

Evolution improves the ability of the system to increase its level of intelligence. It is part of General Intelligence (see also WHAT IS INTELLIGENCE?). Evolution is not a mandatory ability of intelligent systems and is not a feature that defines a system as intelligent. Evolution is a tool to improve the system's intelligence, an intelligence that has emerged as a result of evolution by rewarding systems

with an increase of the probability of success under informational uncertainty. It is possible to say that evolution in most cases creates conditions to increase the level of intelligence. Increasing the level of intelligence is possible only in an intelligent system. Evolution of a plant does not change the level of the plant's intelligence. A plant and some other species are not intelligent systems because they have a hard-wired and hard-coded control system (see DEFINITION OF INTELLIGENCE); these systems have a fully predictable response to any stimulus.

Virtual creatures, with muscles, senses, and primitive nervous systems, have been "grown" from artificial embryos in a computer simulation [31] (see Fig. II-28). The multi-celled organisms could be the first step towards using artificial evolution to create intelligent life from scratch. Chris Langton (the Center for Nonlinear Studies) created a colony of artificial ants, which he calls vants, for virtual ants. The vants search their environment, meet other vants, and reproduce to create new vants. The system starts with a bunch of randomly specified vants and gives them a few simple rules, such as what to do when they meet other vants, and so forth. These rules define the behavior of the vants.

An artificial genetic code can be presented as a set of controlled switches, set in a separate

manner or instantaneously by special control signals. The repeated experience of several generations can be presented in the genetic code, memorized, and transferred to the next generation as the result of evolution [38].
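A minimal sketch in the spirit of the genome-mixing procedure described above: select the most successful genomes, recombine them, and test the offspring. The fitness function, the genome encoding, and the parameters are invented for the example and are not taken from the text.

```python
import random

# Minimal genetic-algorithm sketch: keep the most successful genomes, mix them,
# and evaluate the new generation (fitness function and encoding are invented).
def fitness(genome):
    return sum(genome)                      # toy objective: maximize the number of 1s

def mix(parent_a, parent_b):
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]  # single-point crossover

def evolve(population, generations=20, keep=4):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]                       # the most successful genomes
        children = [mix(random.choice(parents), random.choice(parents))
                    for _ in range(len(population) - keep)]
        population = parents + children
    return max(population, key=fitness)

random.seed(0)
start = [[random.randint(0, 1) for _ in range(12)] for _ in range(16)]
print(evolve(start))   # after a few generations, a genome close to all 1s
```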


GENDER OF ARTIFICIAL INTELLIGENT SYSTEMS

Different areas of Artificial Intelligent Systems application need specific gender abilities and characteristics. The specific characteristics of a human being are defined by different brain activity. The same difference can be implemented in an artificial intelligent system by variation of software or hardware. There is a clear difference between male and female behavior. The corpus callosum is wider in the brains of women than in those of men; it may allow for greater cross-talk between the hemispheres, possibly the basis for a woman's "intuition". It has also been used, for example, as the explanation of an increased single-task orientation of male, relative to female, learners; a smaller male organ is said to make it harder for the left and right sides of the brain to work together, and to explain a feminine ability to multitask.

The male type of the brain (masculine) [37]
is business oriented
is a single-task system
has a better ability of orientation in a spatial environment.

The female type of the brain (feminine)
has a stronger social orientation
is a multitask system
has a wider range of sensitivity
has a stronger sensibility to body language and details recognition

has more expressive facial and body language. Unlike traditional manufacturing robots, which carry out single tasks sequential ly, the three female robots (Fembots) are able to switch between a number of jobs according to priority and circumstance. "If a man does the housework, he'll load the washing machine then stand there an d watch it," Dr. Hill (founder of the robotic software firm Kadence, Australia) said. "A woma n will go off and do something else." It is possible to design the system with maximal set of abilities and its ex treme values, but some times application needs the specific set of abilities. For example female s enior citizens will be more comfortable with the female robot (female personality and appea rance) as the helper rather than male or neutral one. In this case specific gender should be taken in consideration. INSTINCT AND ALTRUISM Instinct is 1. An inborn pattern of behavior that is characteristic of a species and is ofte n a response to specific environmental stimuli: altruistic instincts in social animals. 162 2. A powerful motivation or impulse. 3. An innate capability or aptitude: an instinct for tact and diplomacy. Altruism is unselfish concern for the welfare of others, selflessness [36]. It is not only inborn ability. It is also ability that can be develop ed by education as the selective ability. In some cases it is professional respo nsibility (bodyguards).

Instinct provides a response to external stimuli, which moves an organism to action, unless overridden by intelligence, which is creative and hence far more versatile. Since instincts take generations to adapt, an intermediate position, or basis for action, is served by memory, which provides individually stored successful reactions built upon experience. The particular actions performed may be influenced by learning, environment and natural principles.

Generally, the term instinct is not used to describe an existing condition or established state. It is debatable whether or not living beings are bound absolutely by instinct. Though instinct is what seems to come naturally or perhaps with heredity, general conditioning and the environment surrounding a living being play a major role. Predominantly, instinct is pre-intellectual, while intuition is trans-intellectual. Instinct can be developed partly as a hard-coded ability that is presented as a part of the hardware, or as an intelligent part of the system.

CONCLUSION

Psychology of Artificial Intelligent Systems is a new branch of psychology. There are two important questions.

The first question: Is it really something new? Yes. Most people (not just my students) have problems accepting these ideas in the very beginning, not because they are very complicated but because they do not fit the existing understanding of the nature of artificial systems.

The second question: Are these ideas useful? Yes. They present descriptions of the new systems, their possible features and abilities, and they present a guideline on how to design the new systems. They prepare a human being for the new environment. The actual behavior of fully autonomous advanced artificial intelligent systems cannot be predicted in some cases. It is important therefore to prognosticate possible dangerous results of their behavior and to protect the environment and human beings from their unauthorized actions.

It is very important to answer the questions that arise about the relationship between a human being and an Artificial Intelligent System.


REFERENCES:
1. Artificial Intelligence with Dr. John McCarthy. Conversation On The Leading Edge Of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
2. Mind As Society with Dr. Marvin Minsky. Conversation On The Leading Edge Of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
3. Language And Consciousness. Part 4: Consciousness and Cognition with Dr. Steven Pinker. Conversation On The Leading Edge Of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
4. Unlocking Your Subconscious Wisdom. Part 1: Using Intuition with Dr. Marcia Emery. Conversation On The Leading Edge Of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
5. Mind Over Machine with Dr. Hubert Dreyfus. Conversation On The Leading Edge Of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
6. Mind As A Myth with U. G. Krishnamurti. Conversation On The Leading Edge Of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
7. The Transcendence Of The Ego. An Existentialist Theory Of Consciousness by Jean-Paul Sartre. Hill and Wang, New York, 1997.
8. Psychology by Peter Gray, Worth Publishers, 1999.
9. Philosophy, History & Problems by Samuel Enoch Stumpf, McGraw-Hill, 1994.
10. Computers and The Mind with Howard Rheingold. Conversation On The Leading Edge Of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.
11. Decision Support and Expert Systems. Management Support Systems by Efraim Turban. Prentice Hall, 1995.
12. Foundations of Neural Networks by Khanna, Addison-Wesley, 1990.
13. Neural Networks and Physical Systems with Emergent Collective Computational Abilities by Hopfield, J. Proceedings of the National Academy of Sciences USA, 79, 1985.
14. McNeill F. M., Thro E. Fuzzy Logic. A Practical Approach. AP PROFESSIONAL, 1994.
15. Zadeh L., Kacprzyk J. Fuzzy Logic for the Management of Uncertainty. NY: John Wiley & Sons, Inc., 1992.
16. Albus J., Meystel A. Behavior Generator in Intelligent Systems, NIST, 1997.
17. Meystel A. Evolution of Intelligent Systems Architectures. What Should Be Measured? Performance Metrics for Intelligent Systems, Workshop, August 14-16, 2000, Gaithersburg, MD.
18. Atkinson R. L., Atkinson R. C., Smith E. E., Bem D. J., Nolen-Hoeksema S. Hilgard's Introduction to Psychology, Harcourt Brace & Co., 1996.
19. Meystel A. Semiotic Modeling and Situation Analysis; An Introduction, AdRem, Inc., 1994.
20. Measuring Performance of Systems with Autonomy: Metrics for Intelligence of Constructed Systems. PerMIS, August 14-16, 2000, Gaithersburg, MD.
21. Proud R. W., Hart J. J., and Mrozinski R. B. Methods for Determining the Level of Autonomy to Design into a Human Spaceflight Vehicle: A Function Approach. PerMIS, 2000.
22. Cawsey A. The Essence of Artificial Intelligence. Prentice Hall, 1995.
23. Dean T., Allen J., Aloimonos Y. Artificial Intelligence. Theory and Practice. The Benjamin/Cummings Publishing Company, 1995.
24. Gersting J. Mathematical Structures For Computer Science, W. H. Freeman and Co., 1999.
25. Negnevitsky M. Artificial Intelligence. A Guide to Intelligent Systems, Addison-Wesley, 2001.
26. Polyakov L. Structure Approach to the Intelligent Design. Proceedings of the 2002 PerMIS Workshop, August 13-15, 2002.
27. Russell S., Norvig P. Artificial Intelligence. A Modern Approach. Prentice Hall, 1995.
28. Albus J., Meystel A. Behavior Generation in Intelligent Systems, NIST.
29. Franklin S., Graesser A. Is it an Agent, or just a Program? A Taxonomy for Autonomous Agents, PerMIS. http://www.msci.memphis.edu/~franklin/AgentProg.html#agent
30. Polyakov L. M. Agent with Reasoning and Learning: The Structure Design. Performance Metrics for Intelligent Systems, Workshop, August 14-26, 2004, Gaithersburg, MD.
31. Bongard Josh (University of Zurich). EPSRC/BBSRC International Workshop Biologically-Inspired Robotics: The Legacy of W. Grey Walter, 14-16 August 2002, HP Bristol Labs, UK.
32. Freeman W. J. How Brains Make Up Their Minds, PHOENIX, 1999.
33. Meystel A. Evolution of Intelligent Systems Architectures. What Should Be Measured? Performance Metrics for Intelligent Systems, Workshop, August 14-16, 2000, Gaithersburg, MD.
34. Robinson D. N. The Great Ideas of Philosophy, The Teaching Company, 2004.
35. Dennett D. C. Consciousness Explained; Dennett D. C., Kinsbourne M. The Nature of Consciousness: Philosophical Debates, London, The Penguin Press, 1992.
36. American Heritage Talking Dictionary. Copyright 1997 The Learning Company, Inc.
37. Pease Allan. Why Men Don't Listen and Women Can't Read Maps, EKCMO-press, 1998.
38. Jubak J. In the Image of the Brain, The Softback Preview, 1994.
39. Albus J. People's Capitalism. Economics of the Robot Revolution. http://www.peoplescapitalism.org/people.htm#front
40. Polyakov L. M. In Defense of the Additive Form for Evaluating Vectors. Measuring the Performance and Intelligence of Systems: Proceedings of the 2000 PerMIS Workshop, August 14-16, 2000.
41. Michael Brush. The Coming Crackdown on CEOs. http://articles.moneycentral.msn.com/Investing/CompanyFocus/TheComingCrackdownOnCEOs.aspx
42. Wallis C., Steptoe S. How to Bring Our Schools Out of the 20th Century, Time, December 18, 2006.
43. Volkan V. Killing in the Name of Identity, A Study of Bloody Conflicts, Pitchstone Publishing, 2006.



APPENDIX 1

BRAIN DEVELOPMENT



The Brain

The position of each neuron is determined by the genetic code.
(Figure: neuron anatomy; source: http://en.wikipedia.org/wiki/Neuron#Anatomy_and_histology)

(Figure: detailed view of a neuromuscular junction: 1. Presynaptic terminal, 2. Sarcolemma, 3. Synaptic vesicle, 4. Nicotinic acetylcholine receptor, 5. Mitochondrion; source: http://en.wikipedia.org/wiki/Chemical_synapse)

There are two different scales at which the brain operates. One such scale of the nervous system is composed of circuits made up of large fibers usually called axons. These circuits operate by virtue of nerve impulses that are propagated along the fibers by neighborhood depolarization of their membranes. The connections between neurons (synapses) take place for the most part within the fine fibers. Presynaptically, the fine fibers are the terminal branches of axons that used to be called teledendrons; both their existence and their name have been largely ignored. Postsynaptically, the fine fibers are dendrites that compose a feltwork within which connections (synapses and electrical ephapses) are made in every direction. This feltwork acts as a processing web.

Chemical Synapses

Chemical synapses are specialized junctions through which cells of the nervous system signal to one another and to non-neuronal cells such as muscles or glands. A chemical synapse between a motor neuron and a muscle cell is called a neuromuscular junction. Chemical synapses allow the neurons of the central nervous system to form interconnected neural circuits. They are thus crucial to the biological computations that underlie perception and thought. They also provide the means through which the nervous system connects to and controls the other systems of the body.

The human brain contains a huge number of chemical synapses, with young children having about 1,000 trillion. This number declines with age, stabilizing by adulthood. Estimates for an adult vary from 100 to 500 trillion synapses.

The word "synapse" comes from "synaptein", which Sir Charles Scott Sherrington and his colleagues coined from the Greek "syn-" meaning "together" and "haptein" meaning "to clasp". Chemical synapses are not the only type of biological synapse: electrical and immunological synapses exist as well. Without a qualifier, however, "synapse" by itself most commonly refers to a chemical synapse.

Relationship to Electrical Synapses

An electrical synapse is a mechanical and electrically conductive link between two abutting neurons that is formed at a narrow gap between the pre- and postsynaptic cells known as a gap junction. At gap junctions, cells approach within about 3.5 nm of each other (Kandel), a much shorter distance than the 20 to 40 nm distance that separates cells at chemical synapses (Hormuzdi). As opposed to chemical synapses, the postsynaptic potential in electrical synapses is not caused by the opening of ion channels by chemical transmitters, but by direct electrical coupling between both neurons. Electrical synapses are therefore faster and more reliable than chemical synapses. Electrical synapses are found throughout the nervous system, yet are less common than chemical synapses.

The Brain Development Stages

Birth to 1 Month. ADAPTIVE REFLEXES
A child shows
- basic responses to stimulus reflexes
- learning of many little correlations of position and sensor input
- generating mental maps of the different positions of its body
- feedback (positive and negative) is provided by various hard-wired pleasure and pain stimuli
- signals of comfort and discomfort teach the brain what works and what does not

1 - 4 Months. CIRCULAR REACTIONS
- the basic reflexes of the first month are now chained together, creating repetitive motions
- visually-guided reaching begins to occur
- recognition of the mother's face

4 - 7 Months. SECONDARY CIRCULAR REACTIONS
- developing simple goal-directed behavior
- the training of the plan-and-goal layer of intelligence begins
- the brain is learning to make use of cause-effect relationships

Gruber (http://en.wikipedia.org/wiki/Child_psychology#Infancy) places the development of logic and the coordination between means and ends at nine to twelve months. This is an extremely important stage of development, holding what Piaget calls the "first proper intelligence." Also, this stage marks the beginning of goal orientation, the deliberate planning of steps to meet an objective.

7 - 9 Months. COORDINATION OF SECONDARY CIRCULAR REACTIONS
- the brain is developing intentionality and creativity and exhibits means-end behavior, including the use of intermediate actions to achieve the ultimate goal (new actions are not being invented yet, but the brain explores the many uses, both familiar and novel, of the motions it has already learned)
- a baby is losing the ability to differentiate the sounds of different languages; an adult person accepts these sounds as the same

9 - 15 Months. THIRD LEVEL OF CIRCULAR REACTIONS
- the brain has a fairly complete mental model of what its body can do and what effect it has on the environment
- the brain begins to direct the body to perform old actions in new contexts and to find new actions for old situations, performing experiments to see what happens (the process goes not through hypothesis and intentional experiment, but by trial and error)
- the brain is probably developing a symbolic model of its physical capabilities

15 - 24 Months. SIMULATION OF EVENTS
- the representational model is being exercised, validated, and expanded
- language and symbolic communication are coming online now, and these tools are used to further expand the mental model of the world
- a child begins to recognize itself in the mirror

24 Months - as long as possible. DEVELOPMENT OF AN EXPANDED WORLD MODEL
- further expansion of the mental world model.

Note: A cat and other animals cannot recognize themselves in the mirror. Some groups of dolphins and primates can do this. A recent experiment with elephants in the Bronx Zoo (New York) showed that these animals can recognize themselves in the mirror.

New results (Scientists: New phylum sheds light on ancestor of animals, humans. University of Florida, http://physorg.com/news81711681.html) show that our common ancestor did not have a brain but rather a diffuse neural system in the animal's surface. A reconstructed genetic record reported in the Nature article also implies that the brain might have evolved independently more than twice in different animal lineages, Moroz said. This conclusion sharply contrasts with the widely accepted view that the centralized brain has a single origin, Moroz noted.



APPENDIX 2

ANALYSIS OF DEFINITIONS OF INTELLIGENCE


Analysis

Axiom: A mentally healthy human baby, as well as a grown human being, is an intelligent system without any age limitations ("the baby test").

This does not mean that a human being with some mental problems is not an intelligent person. The baby test is a big problem for many types of existing definitions of intelligence. This axiom tells only that the availability of reflexes is a condition of the existence of intelligence. It defines just the lower limit of the area of intelligent existence; it is a necessary but not a sufficient condition. Unconditional reflexes are not intelligent processes (see also REFLEXES).

Several groups of classification can illustrate the variety of existing definitions. Some definitions and groups can overlap others. Each group is presented by one or several examples.

1. This group of definitions represents only a list of some intelligent abilities:

the ability to create complex engineering artifacts, and the like. This approach is based only on naming some common abilities. It is a very simplistic way of designing a definition. A baby cannot create complex engineering artifacts, yet it is an intelligent system (the baby test).

2. This group of definitions emphasizes the importance of learning, of adaptation to the environment, and of knowledge. The New Encyclopedia Britannica gives a definition of intelligence as the ability "to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment." Webster defines it as "(a) the ability to learn or understand from experience; the ability to acquire and retain knowledge; mental ability; (b) the ability to respond quickly and successfully to a new situation; use of the faculty of reason in solving problems, directing conduct, etc. effectively; (c) in psychology, measured success in using these abilities to perform certain tasks." What does it mean to "respond quickly and successfully to a new situation" or to "handle abstract concepts"?

3. This group defines an intelligent system as a goal-driven system acting in its environment: the capacity to act purposefully, to think rationally and to deal effectively with the environment. Psychologists can accept this definition. What does it mean, exactly? An intelligent agent is something that can act independently with a well-defined goal and should be able to adapt what it is doing based on information it receives from its environment. A goal is important, but it is only one of the features of intelligence.

4. This approach defines an intelligent system as a creative autonomous system. Autonomy and creativity do not give a full description of intelligence. A baby cannot demonstrate creativity at an early age; the brain develops creativity when a baby is 7 - 9 months old.

5. This group underlines the importance of knowledge and reasoning. For the ancient Greeks,

intelligence is "superior powers of mind." Mind: "1. The human consciousness that originates in the brain and is manifested especially in thought and imagination. 2. The collective conscious and unconscious processes in a sentient organism that direct and influence mental and physical behavior. 3. The principle of intelligence; the spirit of consciousness regarded as an aspect of reality" (American Heritage Dictionary). In the same dictionary the real definition is mixed with a description of abilities; it is impossible to define intelligence through the term "mental." There is another definition, of the word "smart": "characterized by sharp, quick thought." Smart is often a general term implying mental keenness; more specifically it can refer to practical knowledge, the ability to learn quickly, or to sharpness or shrewdness. So smartness is a highly dynamic kind of intelligence with a goal directed toward personal gain (American Heritage Dictionary). Knowledge collection and reasoning are very important characteristics of intelligence; as we will see later, it is a reasonable definition. Another definition of this group requires that the system MUST be able to do things by itself (however, it may choose to accept aid), MUST be able to reason, and MUST be able to develop self-awareness (a psychologist, University of New England, Armidale, Australia). Self-awareness develops by the middle of the second year of life.

6. This group identifies an intelligent system as an information system: intelligence represents the size of the brain in excess of that needed to control routine body functions (do fish and human beings then have the same intelligence level?) [31]. This definition looks like the description of a control system. Different human beings have different levels of intelligence.

7. Some definitions underline the ability of the system to adapt appropriately to a changing environment:

the ability to act appropriately in an uncertain environment, where appropriate action is that which increases the probability of success, and success is the achievement of behavioral subgoals that support the system's ultimate goal. It is a very reasonable definition. This definition, however, does not permit the evaluation of a product for promotion into the global market, because in most cases such a product does not act and can be evaluated only by possibilities, not abilities. Another definition of this group mentions the ability to explore the surroundings, manipulate objects or seek communication with other intelligent systems, and implies the ability to cope with demands created by novel situations and new problems, to apply what is learned from experience, and to use the power of reasoning and inference effectively as a guide to behavior. A baby cannot do all of this at an early age.

8. G. Berg-Cross defines intelligence as the ability to manipulate symbolic representations, as cognitivists did at earlier stages:

i.e., action. Unconditional reflexes manipulate symbolic representations, but that is not an intelligent process. Chaotic manipulation of symbolic representations is not a property of intelligence: the goal defines the meaning of the process. A human baby does not have the capacity to construct and manipulate symbolic representations at such an early age, but it still is an intelligent system (the baby test).

9. Sir Francis Galton makes a clear connection between intelligence and the physical objects - the sensors (the materialist's point of view): the more sensitive and accurate an individual's perceptual apparatus, the more intelligent the person. It is a very important element of artificial system design.

10. Some authors understand the existence of different types of intelligence and propose dual definitions.

One distinguishes fluid intelligence and crystallized intelligence. Fluid intelligence refers to our ability to gain new knowledge and to attack and solve novel problems; being both genetically and biologically determined, it consists more of our capacity for learning new things. Crystallized intelligence refers to the actual accumulation of knowledge over our life span. Research (Horn, 1978) has found that crystallized intelligence tends to increase with age, while fluid intelligence tends to decrease after about age 40. Another dual view distinguishes the components that allow us to solve problems (performance components) from those that allow us to understand how, in general, to solve the problems we face (metacomponents).

Duality of intelligence can be easily understood through a definition of artificial intelligence (a system that performs a specific job): (1) intelligence inherent in the information content of the system, and (2) performance intelligence, expressed in the successful (i.e., goal-achieving) performance of the system in a complicated environment. It is a very productive approach.

11. Another definition presents intelligence as a control tool that has emerged as a result of evolution by rewarding systems with an increase of the probability of success under informational uncertainty. Intelligence allows for a redundancy in its features of functioning simultaneously with a reduction of computational complexity by using a loop of semantic closure equipped with a mechanism of generalization for the purposes of learning. Intelligence grows through the generation of a multiresolutional system of knowledge representation and processing. Intelligence, however, is not a tool but an outcome of a control system; in this case we should first define what control is (American Heritage Dictionary). Informational uncertainty is not a mandatory condition of an intelligent system's environment.

Analysis shows that different definitions refer to intelligence as a mental quality, an ability of a system, behavior, application of knowledge, consciousness, a control tool, etc. Most of them accept intelligence as a set of abilities. Many definitions stress the importance of adaptation to the environment (group 7). It is important to distinguish between biological or physical adaptation (the ability to change a body) and intellectual adaptation (the ability to make a choice of action). Adaptation to the environment is one of the properties of life, "manifested in functions such as metabolism, growth, reproduction, and response to stimuli or adaptation to the environment originating from within the organism" (American Heritage Dictionary). It makes the connection between life and intelligence. A newborn baby has reflexes. New research (Josh Bongard) [10] shows the possibility of the existence of artificial life (virtual life) with virtual intelligence. This type of life does not have a natural metabolism, but perhaps it is possible to speak about a virtual metabolism that supports virtual growth. It will be shown that biological and intellectual adaptations are the subjects of two different parts of a definition of intelligence. The connection between intelligence and life was presented elsewhere in [41] by Dr. A. Meystel.

12. This group represents specific definitions of AI: behavior that would require intelligence if done by man; systems that simulate human methods for the deductive acquisition and application of knowledge and reason; systems

that mimic human thinking; systems that act rationally; abilities commonly associated with the higher intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize or learn from past experience (Encyclopedia Britannica); "the science and engineering of making intelligent machines" (www.cera2.com/ee/ai.htm). Some definitions resemble a description of a Nobel Prize winner's abilities; they refuse to accept the existence of AI. The authors of these definitions do not realize that their own abilities do not match these definitions.

As we can see, there is a variety of definitions. It is not a complete but a representative pool of opinions. Some of them contradict the baby test. It is difficult to answer these questions: Which definition is right? Which definition is acceptable? Let us try to answer these questions.

References: (see References to PART 1)


APPENDIX 3

MEASUREMENT OF A MULTIVARIABLE FUNCTION

ADDITIVE FORM

The most important question of intelligence measurement is: is it an additive or a multiplicative function? Psychology and cognitive science calculate IQ based on the assumption that intelligence is an additive function of abilities. It is a very strong assumption, because there is interdependence between some of these abilities. For example, reasoning is a basis of several other abilities such as generalization, intuition, etc. It is important to choose local abilities without interdependency: for example generalization, intuition, associative thinking, object recognition, etc., but not reasoning, which is a part of these abilities.

Measurement is a process of assigning numbers to objects or events in accordance with certain rules of the system. Estimation is a process of assigning fuzzy values. Number and value assignment is possible only on a scalar scale. There are three types of axioms related to a measurement process: identity axioms, rank axioms, and additivity axioms [9]. The following set of known axioms is very important for the ability to measure.

Identity axioms
- A = B or A ≠ B
- If A = B, then B = A
- If A = B and B = C, then A = C

Rank axioms
- If A > B, then B < A
- If A > B and B > C, then A > C

Additivity axioms
- If A = D and B > 0, then A + B > D
- A + B = B + A
- If A = D and B = Q, then A + B = D + Q
- (A + B) + C = A + (B + C)

These axioms determine four scale levels: the scale of names, the rank scale, the interval scale and the ratio scale. The analyses of these scales are done in [9,16]. All these scales are one-dimensional scales and cannot be used to measure vectors. The multi-dimensional scales that we would use to measure vectors are not covered by the fourth axiom, which says that only comparable things can be compared. It is possible to compare vectors only if a weight function is assigned to the vector's components. It will be shown that the weighted-sum approach and utility functions can be used in this case [5,10] as the method of aggregation of multivariable scales, converting a vector into a sufficient scalar.

The intelligence measurement is not the same as a multiobjective optimization of systems with intelligence. The optimization can be done based on different scales (scale of names, rank scale, interval scale, and ratio scale), but measurement can be done only on the interval and ratio scales [9]. There are many different methods of optimization [3, 14, and others]. All of the existing methods use each function of the intelligence separately and determine preferences and a system's rank, but not an intelligence value. The additive function is presented in most of the research works [3, 5-11, 16, 19, 20 and others].

The values of separate intellectual abilities (variables) do not give any idea about the integrated value of artificial intelligence. Each variable has a certain level of usability. Many different forms of aggregation were introduced [4, 5-8, 10-13, 15-20]. They can be divided into two main groups: a weighted arithmetic mean of variables (additive forms) [15, 18, 20] and multiplicative forms [4, 15, 18]. A multiplicative form is a multidimensional function and, as was mentioned above, cannot be used: just one variable, even an unimportant one, that equals zero brings an evaluation based on multiplication down to zero, and only one unimportant variable with a dominant value can create an unreasonably high level of the evaluation function. Additive forms can be divided into two groups: the absolute, nonnormalized form of variable presentation (Σ W_i * F_i) and the relative, normalized form (5). The absolute form has a problem: the weight functions (W_i) have to be measured against scales calibrated in units that are the opposite of the variable scale units. The weight functions of the relative forms are measured against a dimensionless scale.

Aggregation of the separate variables, or of their usability, can be done on the basis of utility theory, because utility reflects usability. For example, the American statistician Harrington [10,16] proposed aggregation of the utility functions as a product, U = Π(i=1..n) U_i. As was mentioned above for multiplicative forms, this form does not work. The utility vector [U_i] can represent the vector of abilities [A_i]. The utility of an intelligent alternative can be presented as [10]:

U_A = Σ(i=1..n) U_i,   (1)

where U_i is the utility of the i-th basic variable. As is shown in [16], equation (1) can be translated into

V_A = Σ(i=1..n) W_i(F_i) * V_i(F_i),   (2)

where W_i(F_i) is a weight function of the i-th variable F_i, and V_i(F_i) is a utility function measured on the universal utility scale for each basic variable. The value of the weight function depends on the value of the variable (the second sandwich is less important to a hungry person than the first one). A set of variables has to be named for each problem separately.

The function W_i(F_i) * V_i(F_i) is not linear. Suppose that W_i(F_i) incorporates the nonlinear part of the function and V_i(F_i) is the linear part of the function. In this case:

V_i(F_i) = [V(F_i max) / F_i max] * F_i.   (3)

This utility function is measured on the universal utility scale, so

V(F_i max) = V_max.   (4)

From (2) and (4) we can get the quality index of the j-th alternative (domain specific) in nondimensional units:

Q_j = V_A / V_max = Σ(i=1..n) W_i(F_i) * (F_i / F_i max).   (5)

Usually one of the variables is the investment in the j-th alternative (C_j). In this case equation (5) can be rewritten as

Q_j = Σ(i=1..n-1) W_i(F_i) * (F_i / F_i max) - W_C(C_j) * (C_j / C_max)   (6)

or

Q_j * (C_max / W_C) = Σ(i=1..n-1) [W_i(F_i) / W_C] * C_max * (F_i / F_i max) - C_j,   (7)

where W_C(C_j) is the weight function of the variable C.


This equation presents the evaluation of the j-th alternative measured in C units. As a result, from equation (7), artificial intelligence variables can be measured by one universal scale. Now we can choose any scale, even a financial one, as a real universal scale of the measurement. C_j can be added to the left and the right parts of equation (7):

I_j = Σ(i=1..n-1) [W_i(F_i) / W_C] * C_max * (F_i / F_i max).   (8)

This is the direct way to calculate profit. Let us look at the meaning of this equation.

Case 1. Suppose one person has 2 horses and another one has a car. They decide to make an exchange. It means that for these people 2 horses are equivalent (in utility terms) to one car. We measure car utility against the scale. Unfortunately (or not), all horses are different and cannot be used as universal measurement units (the instrumentation "Bible"). Money, as we know, is nothing more than the result of an agreement between people. Money is just an abstract convenient scale, even without any representation in gold as it had many years ago.

Case 2. Suppose one person is ready to pay $20,000 for a car but another one is ready to pay $25,000 for the same car. It means that the financial equivalent is not a constant value on the money scale.

Equation (8) sets the relationship between values of different natures, as a basis of barter exchange, and covers both of the described cases. In some cases an expert group can evaluate the profit of the intelligence functions, but in most cases it is not possible; in each case we can use equation (8). It is understandable that we cannot measure intelligence with a high level of accuracy, but it is better to have an approximate evaluation rather than nothing. The level of accuracy may be improved by using a self-learning procedure.

The last question is how to determine the value of the weight function. The best-known and most usable method is an expert method [3, 5, 8-11, 16, 19 and others], but there are several analytical methods (in special cases) to find the value of this function [6, 15, 16]. Opponents of the expert method and of the aggregation function complain about the application of human expertise as a source of information; they dispute an expert's ability to produce objective information. Yes, a collective expertise has an element of subjectivism, but only a human being has the best sense of the weight of the intelligence function variables.

The new Microsoft Intelligence system for the Internet is based on weighting of functions by experts. The weighting takes into account whether the user was in the office, in a meeting, on the phone or behind the wheel; in some cases the weight is measured in dollars and cents [13]. Expert Choice, Inc. created a decision-making system based on weighting of functions by experts.
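To make the additive aggregation concrete, the sketch below computes the quality index Q_j of equation (5) for one hypothetical alternative. The variable names, weights and maxima are invented for illustration only; in a real evaluation they would come from the expert weighting procedure described above.

```python
# Minimal sketch of the weighted-sum quality index (equation 5):
#   Q_j = sum_i W_i(F_i) * (F_i / F_i_max)
# All numbers below are hypothetical illustration values.

def quality_index(variables):
    """variables: list of (value, max_value, weight) tuples."""
    return sum(w * (f / f_max) for f, f_max, w in variables)

# Hypothetical abilities of one alternative: (F_i, F_i_max, W_i)
alternative = [
    (70, 100, 0.4),   # e.g. object recognition score
    (55, 100, 0.35),  # e.g. generalization score
    (20,  40, 0.25),  # e.g. learning speed
]

print(round(quality_index(alternative), 3))  # 0.598
```

Because each term is normalized by F_i max, the index stays dimensionless, and a zero value of one unimportant variable only removes its own contribution instead of collapsing the whole evaluation, which is the weakness of the multiplicative form discussed above.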

Each separate intelligence ability can be measured by appropriate methods, but as an integrated value intelligence has to be presented as a scalar. There are many different methods to measure each separate intellectual ability. The intelligence measurement is not a new problem. The famous IQ and WAIS-3 [2] tests are possible ways to make an evaluation of human intelligence. These tests present an aggregated value of the multifunctional intelligence and convert a vector value into a scalar value. These tests can be used for the measurement of artificial intelligence variables. The opponents of these tests point out the possible social problems bound to these methods; in the case of artificial intelligence measurement this problem does not arise.

REFERENCES:
1. Albus J. Outline for a Theory of Intelligence. IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, No. 3, May/June 1991.
2. Charles G. Morris, Albert A. Maisto. Psychology. Prentice Hall, 1999, 682p.
3. Dhar V., Stein R. Intelligent Decision Support Methods. Prentice Hall, 1997, 244p.
4. Gutkin L. S. Optimization of Radio Equipment. Sov. Radio, 1975, 167p. (in Russian).
5. Fishburn P. C. Additive Utilities with Incomplete Product Set: Applications to Priorities and Assignments. Operations Research, V. 15, No. 3, 1967, p. 537-542.
6. Fiebaugh Morris. Artificial Intelligence: A Knowledge-Based Approach. PWS-Kent Publishing Co., 1988, 736p.
7. Fishburn P. C. A Study of Independence in Multivariate Utility Theory. Econometrica, 37, No. 1, 1969, p. 7-121.
8. Fishburn P. C. Independence in Utility Theory with Whole Product Sets. Operations Research, V. 13, 1965, p. 28-45.
9. Hall A. Experience of Methodology for Large Engineering Systems. M., 1975, 120p.
10. John von Neumann, O. Morgenstern. Theory of Games and Economic Behavior, 1944, 650p.
11. Marino Y. P. Technological Forecasting for Decision Making. American Elsevier Company Inc., N.Y., 1972.
12. Markoff J. Microsoft Sees Software... New York Times, July 17, 2000.
13. Mitsuo Gen, Runwei Cheng. Genetic Algorithms and Engineering Optimization. A Wiley-Interscience Publication, 2000.
14. Pareto M. D'Economie Politique. Paris, 1927, 695p.
15. Pogogev I. B. Optimization of Variables and Quality Control. Znanie, 1972, 51p. (in Russian).
16. Polyakov L. M., Kheruntsev P. E., Shklovsky B. I. Elements of the Automated Design of the Electrical Automated Equipment of Machine Tools. Publ.
17. Schziver A. Forecast. Air Review, V. 16, No. 3, 1965, p. 12-23.
18. Schor J. B. Quality of the Manufacturing Product Evaluation. Znanie, 1971, 56p. (in Russian).
19. Sigford Y., Pazvin R. Project PATTERN: a Methodology for Determining Relevance in Complex Decision Making. IEEE Transactions on Engineering Management, V. EM-12, No. 1, 1965, 210p.
20. Stuart Russell, Peter Norvig. Artificial Intelligence, A Modern Approach. Prentice Hall, 1995, 931p.

APPENDIX 4

FUZZY LOGIC

FUZZY NUMBERS

A fuzzy 8. A crisp 8.

(Figure: the base membership function of the set of "eights", plotted over the range from 6 to 10.)

Member:                7     7.5    8     8.5    9
Degree of Membership:  0     .5     1     .5     0

Crisp and fuzzy arithmetic operations.

Crisp: a = 3, b = 2. Fuzzy: a = (-2, 3, 8), b = (-3, 2, 7).

Addition: a + b
3 + 2 = 5
(-2, 3, 8) + (-3, 2, 7) = (-5, 5, 15)

The base ranges of the two fuzzy numbers are added (geometrically) together, forming the base of the arithmetic result:
Main value: 3 + 2 = 5.
First base: from -2 to 8 = 10. Second base: from -3 to 7 = 10.
Sum of the bases: 10 + 10 = 20. The sum is divided by 2: 20/2 = 10.
Left point: 5 - 10 = -5. Right point: 5 + 10 = 15.

Subtraction: a - b
3 - 2 = 1
(-2, 3, 8) - (-3, 2, 7) = (-9, 1, 11)

Multiplication: a * b
3 * 2 = 6
(-2, 3, 8) * (-3, 2, 7) = (-4, 6, 16)

Division: a / b
3 / 2 = 1.5
(-2, 3, 8) / (-3, 2, 7) = (-8.5, 1.5, 11.5)

FUZZY LOGIC RULES

Fuzzy AND is a conjunction, or minimum, of the input values:
.5 AND .7 = .5
.5 AND (.5 + .2) = .5
0 AND 1 = 0

Fuzzy OR is a disjunction, or maximum, of the input values:
.5 OR .3 OR .7 OR .8 = .8
0 OR 1 = 1

The rules for evaluating the fuzzy truth, T, of a complex sentence are:
T(A ∧ B) = min(T(A), T(B))
T(A ∨ B) = max(T(A), T(B))
T(¬A) = 1 - T(A)
T(A ∨ ¬A) ≠ T(True), but in Boolean logic A ∨ ¬A is true.
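The triangular arithmetic and the min/max truth rules above can be condensed into a short script. This is a sketch of the simplified scheme used in this appendix (main values combined by the crisp operation, result base equal to the sum of the two base widths); it is not general fuzzy arithmetic.

```python
# Sketch of the simplified triangular fuzzy arithmetic described above.
# A fuzzy number is a triple (left, main, right).

def fuzzy_op(a, b, op):
    la, ma, ra = a
    lb, mb, rb = b
    main = op(ma, mb)                         # crisp operation on the main values
    half = ((ra - la) + (rb - lb)) / 2        # half of the summed base widths
    return (main - half, main, main + half)

a, b = (-2, 3, 8), (-3, 2, 7)
print(fuzzy_op(a, b, lambda x, y: x + y))     # (-5.0, 5, 15.0)
print(fuzzy_op(a, b, lambda x, y: x - y))     # (-9.0, 1, 11.0)
print(fuzzy_op(a, b, lambda x, y: x * y))     # (-4.0, 6, 16.0)
print(fuzzy_op(a, b, lambda x, y: x / y))     # (-8.5, 1.5, 11.5)

# Fuzzy truth of complex sentences: AND -> min, OR -> max, NOT -> 1 - T(A).
T_A, T_B = 0.5, 0.7
print(min(T_A, T_B), max(T_A, T_B), 1 - T_A)  # 0.5 0.7 0.5
```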

Modus ponens (Latin, "affirmative mode") is a rule of inference: A implies B; A is true; therefore B is also true.

Fuzzy Sets

(Table: example membership degrees of two fuzzy sets A and B over the elements X1, X2, X3, with the results of the set operations.)

Union A ∪ B
Intersection A ∩ B
Difference A \ B (Set A minus the portion of it that is also in Set B)

APPENDIX 5

NEURON NETWORK

A neuron net is built out of units, simple electronic processors often called neurons, connected to each other by wires that mimic not just the nerve fibers between neurons, the dendrites and axons, but even the synapses, the gaps across which neurons connect. These are all descriptive terms on roughly the same level: they all refer to a general approach to computation that relies on an analogy to the biological system of neurons and synapses [38, PART 2]. Neuron networks do not share the traditional division between software and hardware. They replace symbolic logic and programming with learning and evolution.

Forwardpropagation

Forwardpropagation is a supervised learning algorithm and describes the "flow of information" through a neural net from its input layer to its output layer. The algorithm works as follows:
1. Set all weights to random values ranging from -1.0 to +1.0
2. Set an input pattern (binary values) to the neurons of the net's input layer
3. Activate each neuron of the following layer:

- Multiply the weight values of the connections leading to this neuron with the output values of the preceding neurons.
- Add up these values.
- Pass the result to an activation function, which computes the output value of this neuron.
4. Repeat this until the output layer is reached
5. Compare the calculated output pattern to the desired target pattern and compute an error value
6. Change all weights by adding the error value to the (old) weight values
7. Go to step 2
8. The algorithm ends, if all output patterns match their target patterns

Example: Suppose you have the following 2-layered Perceptron.

Patterns to be learned: input (0 1) with target 0; input (1 1) with target 1.

First, the weight values are set to random values (0.35 and 0.81). The learning rate of the net is set to 0.25. Next, the values of the first input pattern (0 1) are set to the neurons of the input layer (the output of the input layer is the same as its input). The neuron in the following layer (the only neuron, in the output layer) is activated:
Input 1 of output neuron: 0 * 0.35 = 0
Input 2 of output neuron: 1 * 0.81 = 0.81
Add the inputs: 0 + 0.81 = 0.81 (= output)
Compute an error value by subtracting output from target: 0 - 0.81 = -0.81
Value for changing weight 1: 0.25 * 0 * (-0.81) = 0 (0.25 = learning rate)
Value for changing weight 2: 0.25 * 1 * (-0.81) = -0.2025
Change weight 1: 0.35 + 0 = 0.35 (not changed)
Change weight 2: 0.81 + (-0.2025) = 0.6075

Now that the weights are changed, the second input pattern (1 1) is set to the input layer's neurons and the activation of the output neuron is performed again, now with the new weight values:
Input 1 of output neuron: 1 * 0.35 = 0.35
Input 2 of output neuron: 1 * 0.6075 = 0.6075
Add the inputs: 0.35 + 0.6075 = 0.9575 (= output)
Compute an error value by subtracting output from target: 1 - 0.9575 = 0.0425
Value for changing weight 1: 0.25 * 1 * 0.0425 = 0.010625
Value for changing weight 2: 0.25 * 1 * 0.0425 = 0.010625
Change weight 1: 0.35 + 0.010625 = 0.360625
Change weight 2: 0.6075 + 0.010625 = 0.618125
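The procedure just traced (two patterns, one learning step, the simple error-times-input weight update with a learning rate of 0.25) can be reproduced with a few lines of Python. This is only a sketch of the example above, not a general neural-network library.

```python
# Sketch of the forward-propagation training loop from the worked example.
weights = [0.35, 0.81]                     # initial weight values
lr = 0.25                                  # learning rate
patterns = [([0, 1], 0), ([1, 1], 1)]      # (input pattern, target)

for epoch in range(100):
    net_error = 0.0
    for inputs, target in patterns:
        output = sum(w * x for w, x in zip(weights, inputs))   # activate output neuron
        error = target - output                                # error value
        weights = [w + lr * x * error for w, x in zip(weights, inputs)]
        net_error += error ** 2
    if epoch == 0:
        print(net_error)                   # 0.65790625, as computed in the text
    if net_error < 1e-9:                   # stop when the net error is (almost) zero
        break

print([round(w, 4) for w in weights])
```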

That was one learning step. Each input pattern has been propagated through the net and the weight values were changed. The error of the net can now be calculated by adding up the squared values of the output errors of each pattern:
Compute the net error: (-0.81)^2 + (0.0425)^2 = 0.65790625
By performing this procedure repeatedly, this error value gets smaller and smaller. The algorithm is successfully finished if the net error is zero (perfect) or approximately zero.

Backpropagation

Backpropagation is a supervised learning algorithm and is mainly used by Multi-Layer-Perceptrons to change the weights connected to the net's hidden neuron layer(s).

The backpropagation algorithm uses the computed output error to change the weight values in the backward direction. To get this net error, a forwardpropagation phase must have been done before. While propagating in the forward direction, the neurons are activated using the sigmoid activation function:

f(x) = 1 / (1 + e^(-x))

The algorithm works as follows:
1. Perform the forwardpropagation phase for an input pattern and calculate the output error

input layer (the output of the input layer is the same as its input). The neurons in the hidden layer are activated: Input of hidden neuron 1: 0*0.62+1*0.55 = 0.55

Input of hidden neuron 2: 0*0.42+1*(-0.17)=-0.17 Output of hidden neuron 1: 1 / (1+ exp(-0.55)) = 0.634135591 Output of hidden neuron 2: 1 / (1+exp(+0.17)) = 0.457602059 200 The neurons in the output layer are activated: Input of output neuron: 0.634135591 * 0.35 +

0.457602059 * 0.81 = 0.592605124 Output of output neuron: 1/(1+exp(-0.592605124)) = 0.643962658 Compute an error value by subtracting output from target: 0 - 0.643962658 = -0.643962658 Now that we got the output error, let's do the backpropagation. We start with changing the weights in weight matrix 2: Value for changing weight 1: 0.25*(-0.643962658) * 0.634135591 * 0.643962658 * (1-0.643962658) = -0.023406638 Value for changing weight 2: 0.25*(-0.643962658) * 0.457602059*0.643962658 * (1-0.643962658) = -0.016890593 Change weight 1: 0.35+(-0.023406638)=0.326593362 Change weight 2: 0.81+(-0.016890593)=0.793109407 Now we will change the weights in weight matrix 1: Value for changing weight 1:0.25*(-0.643962658)* 0* 0.634135591 * (1-0.634135591) = 0

Value for changing weight 2: 0.25*(-0.643962658) *0* 0.457602059 * (1-0.457602059) = 0 Value for changing weight 3: 0.25*(-0.643962658) * 1* 0.634135591* (1-0.634135591) = -0.037351064 Value for changing weight 4: 0.25*(-0.643962658) * 1* 0.457602059 *(1-0.457602059) = -0.039958271 Change weight 1: Change weight 2: 0.62 + 0 = 0.62 (not changed) 0.42 + 0 = 0.42 (not changed)

Change weight 3: 0.55+(-0.037351064)=0.512648936 Change weight 4: -0.17+(-0.039958271)= -0.209958271 The first input pattern had been propagated through the net. The same procedure is used for the next input pattern, but then with the changed weight values. After the forward and backward propagation of the second pattern, one learning step is complete and the net error can be calculated by adding up the squared output errors of each pattern. By performing this procedure repeatedly, this error value gets smaller and smaller. 201 The algorithm is successfully finished, if the net error is zero (perfect) or approximately zero. Note that this algorithm is also applicable for Multi-Layer-Perceptrons with more than one hidden layer. "What happens, if all values of an input pattern are zero?" If all values of an input pattern are zero, the weights in weight matrix 1 would never be changed for this pattern and the net could not learn it. Due to that fact, a "pseudo input" is created, called Bias that has a constant output value of 1. This changes the structure of the net in the following way: These additional weights, leading to the neurons of the hidden layer and the output layer, have initial random values and are changed in the same way as

the other weights. By sending a constant output of 1 to following neurons, it is guaranteed that the input values of those neurons are always differing from zero. Selforganization Selforganization is an unsupervised learning algorithm used by the Kohonen Featu re Map neural net. A neural net tries to simulate the biological human brain, and selforganization is probably the best way to realize this. 202 It is commonly known that the cortex of the human brain is subdivided in different regions, each responsible for certain functions. The neural cells are organizing themselves in groups, according to incoming information. Those incoming information is not only received by a single neural cell, but also influences other cells in its neighborhood. This organization results in some kind of a map, where neural cells with similar functions are arranged close together. A neural network can also perform this selforganization process. Those neural nets are mostly used for classification purposes, because similar input values are represented in certain areas of the net's map. A sample structure of a Kohonen Feature Map that uses the selforganization algorithm is shown below: Kohonen Feature Map with 2-dimensional input and 2-dimensional map (3x3 neurons) As you can see, each neuron of the input layer is connected to each neuron on the map. The resulting weight matrix is used to propagate the net's input values to the map neurons. Additionally, all neurons on the map are connected among themselves. These connections are used to influence neurons in a certain area of activation around the neuron with the greatest activation, received from the input layer's output. The amount of feedback between the map neurons is usually calculated using

the Gauss function: -|xc-xi|2 -------2 * sig2 203 where xc is the position of the most activated neuron xi are the positions of the other map neurons feedbackci = e sig is the activation area (radius) In the beginning, the activation area is large and so is the feedback between the map neurons. This results in an activation of neurons in a wide area around the most activated neuron. As the learning progresses, the activation area is constantly decreased and only neurons closer to the activation center are influenced by the most activated neuron. Unlike the biological model, the map neurons don't change their positions on the map. The "arranging" is simulated by changing the values in the weight matrix (the same way as other neural nets do). Because selforganization is an unsupervised learning algorithm, no input/target patterns exist. The input values passed to the net's input layer ar e taken out of a specified value range and represent the "data" that should be organized. The algorithm works as follows: 1. Define the range of the input values 2. Set all weights to random values taken out of the input value range 3. Define the initial activation area 4. Take a random input value and pass it to the input layer neuron(s) 5. Determine the most activated neuron on the map: Multiply the input layer's output with the weight values The map neuron with the greatest resulting value is said to be "most

activated" Compute the feedback value of each other map neuron using the Gauss function 6. Change the weight values using the formula: weight(old) + feedback value * ( input value - weight(old) ) * learning rate 7. Decrease the activation area 8. Go to step 4 9. The algorithm ends, if the activation area is smaller than a specified value Example: see sample applet The shown Kohonen Feature Map has three neurons in its input layer that represent the values of the x-, y- and z-dimension. The feature map is 204 initially 2-dimensional and has 9x9 neurons. The resulting weight matrix has 3 * 9 * 9 = 243 weights, because each input neuron is connected to each map neuron. In the beginning, when the weights have random values, the feature map is just an unordered mess. After 200 learning cycles, the map has "unfolded" and a grid can be seen. As the learning progresses, the map becomes more and more structured. It can be seen that the map neurons are trying to get closer to their nearest blue input value. At the end of the learning process, the feature map is spanned over all input values. The selforganization is finished at this point. Freeman believes that much of neural network theory and neurobiology i s founded not so much on truth as on convenience. Neurobiologists and cognitive scientist believe in the reflex model because it promises to bake the brain into easily analyzable machine. Neurobiologists concentrate on the feed-forward networks ion the brain while ignoring the feedback loops, because it s easier in the former case to connect a stimulus to respon se Neural network modelers concentrate on the same feed-forward networks because the mathematics of networks using feedback loop is so difficul

t. Adding feedback makes network unstable [38]. 205 206 207 208 209 210 211 212 APENDIX 6 GENETIC ALGORITHM 213


GENETIC ALGORITHM

Genetic algorithms, a school of computation most closely identified with John Holland, are designed to mimic the process of evolution. A system that uses genetic algorithms begins with some kind of fitness function. Each entity consists of a computer program for solving the task at hand, initially designed by an engineer, and a two-part genetic algorithm, which sets the rules of reproduction for surviving programs. Each computer program entity is measured against the fitness function. Those programs that pass the threshold are allowed to reproduce, yielding a new generation similar to their parents; programs that do not pass the threshold are discarded. Some neural network researchers are using genetic algorithms to configure the connections in their networks. Some neurobiologists are using them to explain how the brain completes its own organization during development. David Stork (Ricoh, Menlo Park, California) uses a similar kind of evolution to grow neural networks that recognize different typefaces.

(Figure: an initial population of binary-string chromosomes produces offspring through selection, cross-over and mutation.)

A gene is the smallest unit of a GA. A series of genes, or a chromosome, represents one possible complete solution to the problem. A genetic algorithm consists of several steps (a short sketch follows the list):
1. Select the initial population. If nothing is known about the problem solution, the solutions can be chosen at random from the space of all possible solutions.
2. Apply a rule of selection to determine which solutions will survive to become parents of the next generation.
3. Apply a fitness function.
4. Repeat steps 2 and 3 until an acceptable result is created.
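The sketch below illustrates these steps on a toy problem. The chromosome encoding (bit strings), the fitness function (the number of 1-bits) and all parameters are invented for illustration; they are not taken from the text above.

```python
import random

# Toy genetic algorithm: evolve bit-string chromosomes toward all ones.
# Fitness, encoding and parameters are illustrative assumptions only.
GENES, POP, GENERATIONS, MUTATION = 12, 20, 50, 0.05

def fitness(chromosome):                    # fitness function (step 3)
    return sum(chromosome)

def crossover(a, b):                        # single-point cross-over
    point = random.randint(1, GENES - 1)
    return a[:point] + b[point:]

def mutate(c):
    return [g ^ 1 if random.random() < MUTATION else g for g in c]

# Step 1: initial population chosen at random from the space of all solutions.
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):                # step 4: repeat
    # Step 2: selection rule - the fitter half survives and becomes parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP - len(parents))]
    population = parents + offspring
    if fitness(population[0]) == GENES:     # acceptable result reached
        break

print(fitness(max(population, key=fitness)), "out of", GENES)
```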


APPENDIX 7

EXPLORE BRAIN-SCANNING TECHNIQUES



Explore brain-scanning technology:
EEG (Electroencephalograph)
CAT (Computerized Axial Tomography) Scan
PET (Positron Emission Tomography) Scan
MRI (Magnetic Resonance Imaging)
MEG (Magnetoencephalography)

EEG (Electroencephalograph)

The EEG shows the electrical impulses of the brain. Active neurons create currents in the brain tissue, which can leak through the skull and be recorded by electrodes attached to the scalp. As the path from the active area to the scalp can be quite complicated, spatial resolution is poor compared to PET and MRI, but the temporal resolution is on the order of 1 ms. EEGs allow researchers to follow electrical impulses across the surface of the brain and observe changes over split seconds of time. An EEG can show what state a person is in - asleep, awake, or anaesthetized - because the characteristic patterns of current differ for each of these states. One important use of EEGs has been to show how long it takes the brain to process various stimuli. A major drawback of EEGs, however, is that they cannot show us the structures and anatomy of the brain or really tell us which specific regions of the brain do what.

CAT (Computerized Axial Tomography) Scan

(Figure: high-resolution magnetic resonance image of a normal brain together with a CAT scan.)

CAT scans of the brain can detect brain damage and also highlight local changes in cerebral blood flow (a measure of brain activity) as the subjects perform a task.

PET (Positron Emission Tomography) Scan

(Figure: the gray outer surface is the surface of the brain from MRI; the inner colored structure is the cingulate gyrus, part of the brain's emotional system, visualized with PET.)

PET imaging software allows researchers to look at cross-sectional "slices" of t he brain, and therefore observe deep brain structures, which earlier techniques l ike EEGs could not. PET is one of the most popular scanning techniques in current neuroscience research. PET relies on the injection of radioactively labeled water (using the O -15 isotope) into the vein of the test person. In a short time the water accumulates in the brain,

forming an image of the blood flow as follows: the O-15 decays emitting a positron that, after annihilating with an electron, emits two gamma rays in almost opposite directions. These gamma rays can be detected and their origin located. Neurologists found that when resting neurons become active, the blood flow to them increases. Thus an image of the blood flow can act as a means to locate neural activity.

MRI (Magnetic Resonance Imaging)

MRI uses the technique of nuclear magnetic resonance. This technique allows you to detect slight changes in the magnetic properties of the substance under investigation. In the case of brain activity one exploits the fact that a neuron becoming active results in an increased oxygen level in the blood vessels around it. The oxygen in the blood is carried by hemoglobin, whose magnetic properties change when the oxygen level is rising. This change is detected by MRI and thus indicates the active area. MRI can produce very clear and detailed pictures of brain structures. Often, the images take the form of cross-sectional "slices." The images of these slices are obtained through the use of "gradient magnets" to alter the main magnetic field in a very specific area while the magnetic force is being applied. This allows the MRI technician to pick exactly what area of the person's brain he or she wants an image of.

MEG (Magnetoencephalography)

MEG measures the tiny magnetic fields created by active areas in the brain with highly sensitive measurement devices called SQUIDs (superconducting quantum interference devices). MEG has the same temporal resolution as EEG, but the signals are less affected by the conductivity profile of the brain, skull and scalp. Thus MEG is superior to EEG. The spatial resolution is less than for MRI.

APPENDIX 8

DEFINITION

The list of five rules by means of which to evaluate the success of connotative definitions:

1. Focus on essential features. A good definition tries to point out the features that are essential to the designation of things as members of the relevant group.

2. Avoid circularity. Since a circular definition uses the term being defined as part of its own definition, it can't provide any useful information. Thus, for example, there isn't much point in defining "cordless 'phone" as "a telephone that has no cord."

3. Capture the correct extension. A good definition will apply to exactly the same things as the term being defined, no more and no less. Successful intensional definitions must be satisfied by all and only those things that are included in the extension of the term they define.

4. Avoid figurative or obscure language. Since the point of a definition is to explain the meaning of a term to someone who is unfamiliar with its proper application, the use of language that doesn't help such a person learn how to apply the term is pointless.

5. Be affirmative rather than negative. It is always possible in principle to explain the application of a term by identifying literally everything to which it does not apply. In a few instances, this may be the only way to go: a proper definition of the mathematical term "infinite" might well be negative, for example. But in ordinary circumstances, a good definition uses positive designations whenever it is possible to do so.

APPENDIX 9

PREDICTION OF THE TIME WHEN THE NEURAL NET WILL BE AT LEAST AS COMPLEX AS THE HUMAN BRAIN

If we support the hypothesis of consciousness as a physical property of the brain, the question becomes: when will computers be at least as complex as the human brain?

[Fig. 1. The complexity threshold (consciousness plotted against brain complexity). If consciousness is a function of brain complexity, the human brain marks the complexity threshold required.]

[Fig. 2. Typical RAM capacity of personal computers, 1980-2000.]

Consciousness seems to represent a step function of brain complexity, and the human brain provides the threshold, as Figure 1 shows. How much memory would a computer require to replicate the human brain's complexity? The human brain has about 10^12 neurons. Each neuron makes about 10^3 synaptic connections with other neurons, on average, for a total of 10^15 synapses. Artificial neural networks can simulate a synapse using a floating-point number that requires 4 bytes of memory to be represented in a computer. As a consequence, simulating 10^15 synapses requires a total of 4 million Gbytes. Simulating the human brain requires 5 million Gbytes, including the auxiliary variables for storing neuron outputs and other internal brain states.

When will such a memory be available in a computer? During the past 20 years, random-access memory capacity increased exponentially, by a factor of 10 every four years. The plot in Figure 2 shows the typical memory configuration installed on personal computers since 1980. By interpolation, we can derive the following equation, which gives RAM size as a function of the year:

    bytes = 10^((year - 1966) / 4)

For example, from this equation we can derive that in 1990, personal computers typically had 1 Mbyte of RAM, whereas in 1998, a typical configuration had 100 Mbytes of RAM. Assuming that RAM will continue to grow at the same rate, we can invert this relationship to predict the year in which computers will have a given amount of memory:

    year = 1966 + 4 * log10(bytes)

To calculate the year in which computers will have 5 million Gbytes of RAM, we substitute that number into the equation above. This gives the year 2029. In reality, computational ability also depends on structural complexity. A contemporary intelligent system can develop a world model of the external and internal world. An existing system can also develop the circle mentioned above in the social environment.
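The prediction above is simple enough to check numerically. The following minimal Python sketch (added for illustration; it is not part of the original text, and the function names are invented) evaluates the interpolation formula and its inverse:

# Minimal sketch reproducing the arithmetic above. The growth model and the
# 5-million-Gbyte brain estimate come from this appendix; the rest is plain Python.
import math

def ram_bytes(year):
    """RAM size (bytes) predicted for a given year: bytes = 10^((year - 1966) / 4)."""
    return 10 ** ((year - 1966) / 4)

def year_for_ram(nbytes):
    """Inverse relationship: year = 1966 + 4 * log10(bytes)."""
    return 1966 + 4 * math.log10(nbytes)

# Memory needed to simulate the brain: about 10^15 synapses * 4 bytes plus
# auxiliary state, i.e. roughly 5 million Gbytes as estimated in the text.
brain_bytes = 5e6 * 1e9

print(ram_bytes(1990))                   # about 1e6 bytes (1 Mbyte)
print(ram_bytes(1998))                   # about 1e8 bytes (100 Mbytes)
print(round(year_for_ram(brain_bytes)))  # about 2029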

APPENDIX 10

Original text

yuo hvae a sgtrane mnid if yuo cna raed this. Cna yuo raed tihs? Olny 55 pcenert of plepoe cluod uesdnatnrd ym wariteng. The computer s ilteleignnce hsa hte sema phaonmneal pweor as the hmuan s mind. Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it dseno't mtaetr in waht oerdr the ltteres in a wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it whotuit a pboerlm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Btu it si nto mipotratn ot ehav the frits and teh lats eltters ni the ritgh poositni. oyu can rade even fi the lats letrest aer in teh rwogn poosiotns.

Recognition-Translation

You have a strange mind if you can read this. Can you read this? Only 55% of people could actually understand my writing. The computer's intelligence has the same phenomenal power as the human's mind. According to a researcher at Cambridge University, it doesn't matter in what order the letters in a word are, the only important thing is that the first and last letter be in the right place. The rest can be a total mess and you can still read it without a problem. This is because the human mind does not read every letter by itself, but the word as a whole. But it is not important to have the first and the last letters in the right position. You can read even if the last letters are in the wrong positions.
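The transformation behind this demonstration is easy to reproduce. The short sketch below (added here for illustration; it is not part of the original text) shuffles the interior letters of each word while keeping the first and last letters fixed:

# Illustrative sketch: scramble the interior letters of each word, keeping the
# first and last letters in place, as in the demonstration above.
import random
import re

def scramble_word(word):
    """Shuffle the middle letters of a single word; words of length <= 3 are unchanged."""
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_text(text):
    """Apply scramble_word to every alphabetic token in the text."""
    return re.sub(r"[A-Za-z]+", lambda m: scramble_word(m.group(0)), text)

print(scramble_text("According to a researcher at Cambridge University, it does not matter."))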

APPENDIX 11

HIDDEN MARKOV MODEL


Hidden Markov model
From Wikipedia, the free encyclopedia

[Figure: state transitions in a hidden Markov model (example), showing hidden states, observable outputs, transition probabilities, and output probabilities.]

A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with unknown parameters, and the challenge is to determine the hidden parameters from the observable parameters. The extracted model parameters can then be used to perform further analysis, for example for pattern recognition applications. An HMM can be considered the simplest dynamic Bayesian network.

In a regular Markov model, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but variables influenced by the state are visible. Each state has a probability distribution over the possible output tokens. Therefore the sequence of tokens generated by an HMM gives some information about the sequence of states. Hidden Markov models are especially known for their application in temporal pattern recognition such as speech, handwriting, gesture recognition and bioinformatics.

A concrete example

Assume you have a friend who lives far away and to whom you talk daily over the telephone about what he did that day. Your friend is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. You have no definite information about the weather where your friend lives, but you know general trends. Based on what he tells you he did each day, you try to guess what the weather must have been like.

You believe that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but you cannot observe them directly, that is, they are hidden from you. On each day, there is a certain chance that your friend will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since your friend tells you about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM).

You know the general weather trends in the area, and what your friend likes to do on average. In other words, the parameters of the HMM are known. You can write them down in the Python programming language:

states = ('Rainy', 'Sunny')
observations = ('walk', 'shop', 'clean')
start_probability = {'Rainy': 0.6, 'Sunny': 0.4}
transition_probability = {
    'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
    'Sunny': {'Rainy': 0.4, 'Sunny': 0.6},
}
emission_probability = {
    'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
    'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}

In this piece of code, start_probability represents your uncertainty about which state the HMM is in when your friend first calls you (all you know is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) actually approximately {'Rainy': 0.571, 'Sunny': 0.429}. The transition_probability represents the change of the weather in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. The emission_probability represents how likely your friend is to perform a certain activity on each day. If it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk.
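Given these parameters, the most likely hidden weather sequence behind a list of reported activities can be recovered with the Viterbi algorithm. The following short sketch is added here for illustration (it is not part of the quoted article) and uses the dictionaries defined above:

# Illustrative Viterbi sketch: find the most likely weather sequence for a list of
# observed activities, using the states and probabilities defined above.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][state] = (probability of the best path ending in `state` at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, path = max(
                (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]],
                 V[t - 1][prev][1] + [s])
                for prev in states
            )
            V[t][s] = (prob, path)
    return max(V[-1].values())  # (probability, best path)

prob, path = viterbi(('walk', 'shop', 'clean'), states, start_probability,
                     transition_probability, emission_probability)
print(path, prob)  # expected: ['Sunny', 'Rainy', 'Rainy'] with probability about 0.0134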

APPENDIX 12

THREE LAWS OF ROBOTICS

Three Laws of Robotics
From Wikipedia, the free encyclopedia

[Figure: a cover of I, Robot illustrating the story "Runaround", the first to list all Three Laws of Robotics.]

In science fiction, the Three Laws of Robotics are a set of three rules written by Isaac Asimov, which all positronic robots appearing in his fiction must obey. Introduced in his 1942 short story "Runaround", the Laws state the following, quoted exactly:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

According to the Oxford English Dictionary, the first passage in Asimov's short story "Liar!" (1941) that mentions the First Law is the earliest recorded use of the word robotics. Asimov was not initially aware of this; he assumed the word already existed by analogy with mechanics, hydraulics, and other similar terms denoting branches of applied knowledge. The Three Laws form an organizing principle and unifying theme for Asimov's fiction, appearing in his Robot series and the other stories linked to it, as well as Lucky Starr and the Moons of Jupiter. Other authors working in Asimov's fictional universe have adopted them, and references (often parodic) appear throughout science fiction and in other genres. Technologists in the field of artificial intelligence, working to create real machines with some of the properties of Asimov's robots, have speculated upon the role the Laws may have in the future.


APPENDIX 13

DISCRIMINANT ANALYSIS

Linear discriminant analysis
http://en.wikipedia.org/wiki/Linear_discriminant_analysis

Linear discriminant analysis (LDA) and the related Fisher's linear discriminant are used in statistics to find the linear combination of features which best separates two or more classes of object or event. The resulting combinations may be used as a linear classifier, or more commonly in dimensionality reduction before later classification.

LDA is closely related to ANOVA (analysis of variance) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. In the other two methods, however, the dependent variable is a numerical quantity, while for LDA it is a categorical variable (i.e. the class label).

LDA is also closely related to principal component analysis (PCA) and factor analysis. LDA explicitly attempts to model the difference between the classes of data. PCA, on the other hand, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made.

LDA works when the measurements made on each observation are continuous quantities. When dealing with categorical variables, the equivalent technique is Discriminant Correspondence Analysis (see References).
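To make the idea concrete, here is a minimal sketch (not from the quoted article) that fits an LDA classifier with scikit-learn, assuming that library is available; the tiny two-class data set is invented for illustration:

# Minimal LDA sketch using scikit-learn (illustrative; the data are made up).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Two classes of points in a 2-D feature space.
X = np.array([[1.0, 2.0], [1.5, 1.8], [2.0, 2.2],   # class 0
              [6.0, 7.0], [6.5, 6.8], [7.0, 7.5]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

lda = LinearDiscriminantAnalysis(n_components=1)
X_projected = lda.fit_transform(X, y)   # 1-D projection that best separates the classes

print(lda.predict([[2.0, 2.0], [6.5, 7.0]]))  # expected: [0 1]
print(X_projected.ravel())                    # coordinates along the discriminant direction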

Applications

Face recognition

In computerised face recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces.

Marketing

In marketing, discriminant analysis is often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. The use of discriminant analysis in marketing is usually described by the following steps:

1. Formulate the problem and gather data - Identify the salient attributes consumers use to evaluate products in this category - Use quantitative marketing research techniques (such as surveys) to collect data from a sample of potential customers concerning their ratings of all the product attributes. The data collection stage is usually done by marketing research professionals. Survey questions ask the respondent to rate a product from one to five (or 1 to 7, or 1 to 10) on a range of attributes chosen by the researcher. Anywhere from five to twenty attributes are chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products is codified and input into a statistical program such as SPSS or SAS. (This step is the same as in factor analysis.)

2. Estimate the discriminant function coefficients and determine the statistical significance and validity - Choose the appropriate discriminant analysis method. The direct method involves estimating the discriminant function so that all the predictors are assessed simultaneously. The stepwise method enters the predictors sequentially. The two-group method should be used when the dependent variable has two categories or states. The multiple discriminant method is used when the dependent variable has three or more categorical states. Use Wilks's Lambda to test for significance in SPSS or the F statistic in SAS. The most common method used to test validity is to split the sample into an estimation or analysis sample, and a validation or holdout sample. The estimation sample is used in constructing the discriminant function. The validation sample is used to construct a classification matrix which contains the number of correctly classified and incorrectly classified cases. The percentage of correctly classified cases is called the hit ratio.

3. Plot the results on a two-dimensional map, define the dimensions, and interpret the results. The statistical program (or a related module) will map the results. The map will plot each product (usually in two-dimensional space). The distance of products from each other indicates how different they are. The dimensions must be labelled by the researcher. This requires subjective judgement and is often very challenging. See perceptual mapping.

References

Duda, R.O., Hart, P.E., Stork, D.G. Pattern Classification (2nd ed.), Wiley Interscience, 2000. ISBN 0-471-05669-3
Fisher, R.A. The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics, 7: 179-188 (1936)
Friedman, J.H. Regularized Discriminant Analysis. Journal of the American Statistical Association (1989)
Mika, S. et al. Fisher Discriminant Analysis with Kernels. IEEE Conference on Neural Networks for Signal Processing IX (1999)

APPENDIX 14

INFORMATION EXCHANGE BETWEEN SHORT AND LONG TERM MEMORIES IN THE NATURAL BRAIN


Classification by information type
http://en.wikipedia.org/wiki/Memory#Classification

Long-term memory can be divided into:

1. Declarative (explicit) memory

1.1. Semantic memory, which concerns facts taken independent of context. Semantic memory allows the encoding of abstract knowledge about the world, such as "Paris is the capital of France".

1.2. Episodic memory. Episodic memory is used for more personal memories, such as the sensations, emotions, and personal associations of a particular place or time.

1.3. Visual memory, the part of memory preserving some characteristics of our senses pertaining to visual experience. We are able to place in memory information that resembles objects, places, animals or people in the form of a mental image. Visual memory can result in priming, and it is assumed that some kind of perceptual representational system, or PRS, underlies this phenomenon.

Declarative memory requires conscious recall, in that some conscious process must call back the information. It is sometimes called explicit memory, since it consists of information that is explicitly stored and retrieved.

2. Procedural (implicit) memory (Anderson, 1976) is not based on the conscious recall of information, but on implicit learning. Procedural memory is primarily employed in learning motor skills and should be considered a subset of implicit memory. It is revealed when we do better in a given task due only to repetition - no new explicit memories have been formed, but we are unconsciously accessing aspects of those previous experiences. Procedural memory involved in motor learning depends on the cerebellum and basal ganglia.

Information Exchange

The finding, reported by Daoyun Ji and Matthew A. Wilson, researchers of the rat's brain at the Massachusetts Institute of Technology, showed that during nondreaming sleep the neurons of both the hippocampus and the neocortex replayed memories of a task the rat had learned the previous day, in repeated simultaneous bursts of electrical activity. Special neurons in the hippocampus, known as place cells, fire when the rat passes a specific location, as if they were part of a map in the brain. Dr. Wilson reported that after running a maze, rats would replay their route during idle moments, as if to consolidate the memory, although the replay, surprisingly, was in reverse order of travel. These fast rewinds lasted a small fraction of the actual time spent on the journey.

The same replays occurred in the neocortex as well as in the hippocampus as the rats slept. The rewinds appeared as components of repeated cycles of neural activity, each of which lasted just under a second. Because the cycles in the hippocampus and neocortex were synchronized, they seemed to be part of a dialogue between the two regions. The researchers recorded electrical activity only in the visual neocortex, the region that handles input from the eyes, but they assumed many other regions participated in the memory replay activity. One reason is that there is no direct connection between the visual neocortex and the hippocampus, suggesting that a third brain region coordinates a general dialogue between the hippocampus and all necessary components of the neocortex.


APPENDIX 15

STUDENT'S DISTRIBUTION

Student's t-distribution
http://en.wikipedia.org/wiki/Student%27s_t_distribution

In probability and statistics, the t-distribution or Student's t-distribution is a probability distribution that arises in the problem of estimating the mean of a normally distributed population when the sample size is small. It is the basis of the popular Student's t-tests for the statistical significance of the difference between two sample means, and for confidence intervals for the difference between two population means. Student's distribution arises when (as in nearly all practical statistical work) the population standard deviation is unknown and has to be estimated from the data.

Occurrence and specification of Student's t-distribution

Suppose X1, ..., Xn are independent random variables that are normally distributed with expected value mu and variance sigma^2. Let

    Xbar = (X1 + ... + Xn) / n

be the sample mean, and

    S^2 = (1 / (n - 1)) * sum_{i=1..n} (Xi - Xbar)^2

be the sample variance. It is readily shown that the quantity

    Z = (Xbar - mu) / (sigma / sqrt(n))

is normally distributed with mean 0 and variance 1, since the sample mean Xbar is normally distributed with mean mu and standard deviation sigma / sqrt(n). Gosset studied the related quantity

    T = (Xbar - mu) / (S / sqrt(n))

and showed that T has the probability density function

    f(t) = Gamma((nu + 1) / 2) / (sqrt(nu * pi) * Gamma(nu / 2)) * (1 + t^2 / nu)^(-(nu + 1) / 2)

with nu equal to n - 1. The distribution of T is now called the t-distribution. The parameter nu is conventionally called the number of degrees of freedom. The distribution depends on nu, but not on mu or sigma; this lack of dependence on mu and sigma is what makes the t-distribution important in both theory and practice. Gamma is the Gamma function. The first moments of the t-distribution are E(T) = 0 for nu > 1 and Var(T) = nu / (nu - 2) for nu > 2.
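As a quick illustration (added here; scipy and the values shown are assumptions, not part of the quoted article), the density above can be compared numerically with the standard normal for a small number of degrees of freedom:

# Illustrative sketch comparing Student's t density with the standard normal
# (assumes scipy is installed; not part of the quoted encyclopedia text).
from scipy import stats

nu = 4  # degrees of freedom (sample size n = 5)

for t in (0.0, 1.0, 2.0, 3.0):
    t_pdf = stats.t.pdf(t, df=nu)
    normal_pdf = stats.norm.pdf(t)
    print(f"t = {t:.1f}: t-density = {t_pdf:.4f}, normal density = {normal_pdf:.4f}")

# The heavier tails show up in the critical values: the 97.5th percentile is
# about 2.776 for nu = 4, versus about 1.960 for the standard normal.
print(stats.t.ppf(0.975, df=nu), stats.norm.ppf(0.975))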

Special cases

[For certain values of nu, the article tabulates closed-form density and distribution functions; the formulas did not survive extraction.]

INDEX

255 A de Beauvoir Simone 18 A-strategy 117

Behaviorism 6, 9, 53 A*- strategy 117 Behaviorist 11 Ability 5, 7-10, 23, 24 Beethoven 9, 26 Abstraction 73, 93 Berg-Cross G. 179 Abstract thinking 93 Bergson Henri 100 Aconceptual mind 28 Body language 124, 127, 128 Actuator 8-9, 26, 34, 56, 62, 68, 129, Bohm 28 129 Bohr N. 29 Adaptation 7, 10, 39, 58-62, 156, 177 Brain Development Stages 169 Aesthetics 154 Brook Rodney 9, 132 AIBO robot platform 118, 133 Agent 23 C Agent class 24 Cartesian Theater 12, 71 Aggression 126 Cattell 124 Albus J. 55 Cerebellum 70, 119, 154 Altruism 140, 163 Cerebrum 72

American Society for the Prevention of Chromosome 19, 160, 214 Cruelty to Robots 143 Cingulate gyrus 69 Amygdule 123, 125, 129 Classification 24, 38, 83, 98, 107, 144 Android 142, 143, 153 Classes 25 Anthropomorphic robot 142 - agent 25 Apprehension 5, 38, 153 - goal 25 Aristotle 5, 26, 95, 140 Cognition 10, 27, 30, 39, 54, 70, Art 155 Cognitive sciences 28 Artificial gene 160 Cognitive psychology 51 Artificial Life 155-159 Cognetivist 11 Artificial person 124 Golomb Beatrice 73, 97 Association 73 Combination 21 Associative ball 95 Comfort 134 Associative memory 95 Communication 32, 47, 58, 68, 77, 96, Associative thinking 16, 95 109, 143

Attention 30, 37, 58, 67, 69 Compassion 21, 70, 143 Autonomy 11, 15, 24, 37, 59-62, 126, Compromise 123, 145, 146 255 Concept 109-115 Autonomous robots 142, 144, 156 Conception 77, 96 Award 36 Conceptualization 9, 15, 24-26,38-39, Awareness 23, 27, 30 96-97, 102, 197, 154 Axiom 7, 126, 175 Connectionist 53 Axon 171, 196 Conceive 38, 72, 73, 77, 118, 156 Connotative relation 76 B Conscious 9, 29, 30 Baby test 7, 175 - intentional Bagnall Douglas 155 - process Bartneck Christof 133 - unintentional Beauty 154 Control system 256 - local 29 External world 27 - main 29

Eysenck Hans 124 Convergent thinking 15, 16, 178 Courage 140 F Costa 124 Face recognition 244 Cottell Raymond 179 Fair Deal 148-149 Creativity 13, 15-17, 22 , 38, 39, 103, Fairness 147-149 105 Fear 122-126, 135, 142 Creativity Machine 20 Feedback 9, 19, 57, 113, 117, 119, 129, Cridland John 141 132 Curiosity 116, 117 Fembots 162 Feeling 127 D Fogel L. 59 Darwin Charles 39, 67 Fountain Henry 132 Decision tree 121 Free will 35 Decomposition 54, 55, 58, 82, 96, 120 Frontal lobe 125, 126, 128, 131, 152 Definitions 5, 6, 10, 11, 15, 55, 222 Frontal cortex 128 Definitions of Intelligence 7 Frustrations 133

Dendrites 171, 196 Functionalism 53 Dennett Daniel 12 Furber Steve 14 Denotative relation 76 Fuzziness 10 Descartes 32, 55, 81 Fuzzy logic 190 Determinism 35-37 Fuzzy image 21 Dinosaur Pleo 159 Discriminant analysis 133, 242 G Discrimination 62, 67, 69, 146 Galten Francis 179 Disorder 19, 33, 67, 119, 151 Gender 51, 161 Distributed control theory 159 General Intelligence 8-10 Divergent thinking 15, 17 Generalization 10, 24, 38, 62, 86, 97-98, Dreyfus Hubert 81, 100 107, 121, 179, 194 Dreyfus Stuart 82 Genetic Algorithm 18, 19, 159-161, 212 Duality of intelligence 8, 179 Genetic code 8, 124, 125, 161, 169 Dynamic Systems 8, 19, 53-54, 73 Genius 17-18 Genome 156, 160 E

GenoPharm 18, 95 EcoBot 156, 157, Gestalt psychology 53 Einstein 155 Gibson J. 68 Electrical ephapse 171 Goal class 24 Emery Marcia 100 Goal driven system 167 Emotional-family 128 Goal 23 Emotions 30, 38-40, 51, 95, 127-129 - external 7 Engels F. 35 - internal 7 Entrepreneurs 141 Golomb Beatrice 70 Evolution 7, 60, 117, 132, 155, 166 Confidence 127 Existence 31 Gray Jeremy 8 Expert system 12 Gray P. 101 257 Greedy search 18, 121, 122, - general 7, 8, 12 Guilford 15, 17 - knowledge-based 10 Intelligent Design 29 H Intelligent tasks 23

Hall David 154 Intention 12, 29 Happiness 126, 136, 146, Internet 18, 22, 95, 109 Hard coded 11, 30, 34, 40, 155, 161, 162 Interpretation 38, 52, 58, 70-75 Hard wired 7, 11, 30, 34, 40, 54, 116, Intuition 9, 16, 38, 99-106 161, 162 Intuitionalists 100 Harrington 185 Hate 27, 134 J Hebb 8 Judgment 15, 38, 56, 135-136 Heisenberg W. 28 Johnson-Laird Philip N. 29, 111 Hibbard Bill 22 Joy 134 Hill 162 Hippocampus 248 K Hobbes 55, 81 Kant Immanuel 17, 98, 152 Holland John 214 Keller Helen 9, 26 Hope 133 Kelly Ian 156 Horn 179 Kismet 13, 132, 137 Hubert 78

Knowledge 7-10, 18, 52, 83 Humanoid 51, 52, 130Knowledge mining 21 132,142,144,146,156 Koch Christof 26 Hume David 152 Koffka Kurt 51 Humean 154 K Husserl Edmund 103 Kruskal s algorithm 115, Hybrid robot 6 Krishnamurti 101 Hybrot 6 Hypothesis 29,38, 58, 77, 98, 107 L Langton Chris 161 I Law 152 Identity 32,73, 146 Learning 109 Image 17, 21, 70-73, 81, 96 Learning by Experience 112 Imagination 20, 21 Learning by Imitation 117 - objection 21 Learning by Instructions 112 - subjection 21 Learning by Interactions 116

Learning Concepts 109

Learning decision tree 1119 Impression 70, 100, 104, 154 Leibniz 55, 81 Information-processing systems 51 Limbic system 69, 127, 135 Inheritance 8 Locke John 100 I.Q. 8, 22, 112 Localization 38, 71 Inspiration 139, 140 Love 67, 134 Instinct 100, 152, 162 Lubart 17 Intelligence 8, 12, 14, 21, 26, 151 - duality 8, 170 258 M Malfunctions 151 O Markov model 74, 234 Object recognition 72 Marr 54 Operating system 13, 29 Mataric 132 Operator 62, 121 Materialistic 11 McCarty John 101 P McCrae 124 Pavlov Ivan 53 Measurements 23 Path 121

Medulla 129 Perceive 41, 69 Melhuish Chris 156 Perception 26, 30, 31, 37, 39, 56, 62 70Memory 72, 126, 154 - factual 83 Personality 124-125, 139 - long-term 83 Pfeifer Rolf 160 - procedural 83 Piaget Jean 53, 171 - short-term 83 Picasso P. 73 Metabolism 156, 157 Pinker Steven 101 Meystel A.11, 55, 60, 156, 157, 169 Planner 120 Mind 1, 9, 12, 22, 26-31, 99, 141 Planning 58, 120 Minsky Marvin 12, 101 Planning algorithms 120 M.I.Q. 22 Plato 5, 26, 53, 100--------------Mirror Cells 68, 129 Pleo 157, 159 Modularity 56 Parietal lobe 70 Moral 59, 125, 127, 131, 139, 142, 152 Possibility 36, 60, 70, 100, 123,, 140,

Multi-KB 84 179 Multivariable functions 182 Post-phenomenologists 28, Multilevel structure 17, 24, 39, 54, 56, Potter Steve 6 58, 84, 139 Pribram Karl 12 Musical Prime s algorithm 121 - harmonies 153 Process - scale 153 - intentional 17, 30, 38, 39, 104, Mutation 19, 214 105, 153, 154 - unintentional 17, 30, 38, 39, 104, N 105, 153 Nass Clifford 133 Psychoanalysis 53 Neisser Ulric 7 Psycholinguistics 53 Netrebko Anna 129 Pulses 29, 68, 170 Nettleton Philip 177 Punishment 36, 56, 139, 140 neuromuscular junction 169, 170 Pylkkanen P. 28 Von Neumann machine 81

Pylkko 28 Neuro-sciences 28 Pythagoras 155 Neuron 160, 168 Neural maps 27 Q Newell 101, 178 Quantum-like process 27 Newton Isaac 18 Quantum physics 28 Node 95, 121 259 SlugBot 157 R Smart 22-24, 176 Random choices 35 Specific Intelligence 171 Reasoning 9, 10, 11, 12, 15, 16, 18, 20, Speech Recognition Technology 51, 74, 22, 24, 26, 27, 38, 57, 60, 78, 80, 82, 84, 236 88, 92, 93, 96, 98, 100, 113, 120, 145, Spinoza 99, 100 149, 167 Stanislavski 129 Reliability 13, 14, 152 State Reflexes 130, 155, 175 - initial 121 - action 33 - space 121 - arc 33

Sternberg 17, 178 - conditional 33, 34 Stimuli 31, 133 - unconditional 33, 34 - external 27 Reflexes family 128 - internal 27 Regeneration 14 Straight line algorithm 122 Remote association test 17 Structuralism 53 Reproduction 25, 156, 179, 214 Symbolic 53 Risk 16, 22, 36, 135, 140 Symbolic equation test 17 Rheingold Howard 98 Subconscious process 29 Robinson Daniel N. 96 Subjective risk 140 Robocup 134 Success_expected 134 Robotics Law 238 Success_observed 134 Robustness 13, 14, 152, 160 Superintelligence 21 Rumelhart 81 Supervised Learning 109 Russell 55, 81, 100 Synapse 169-170, 238 S

T Sartre Jean-Paul 33, 103 Tautology 82, 96 Scheutz M. 133 Teledendron 170 Schopenhauer Arthur 17 Temporal lobe 72, 136, 139, 154, 236 Self-awareness 7, 30-34, 39, 81, 172 Text Recognition Technology 74 Self-consciousness 33 Thale Stephen 19 Self -confidence 139-142, 151 Thought 26 Self-esteem 139 Transhuman minds 22 Sejnowski Terrence J. 105 Translation 74. Semiotics 54, 76, 96 Triarachic theory of intelligence 178 Sensation 26, 67 Twins studies 7 Sensing 8, 37, 67-70, 126 Tulvin Endel 86 Sensing system 8 Turing test 23, 24 Sentient 9, 10, 27, 28, 39, 40, 132, 176 SEXNET 73 U Shepard 98 Uncertainty 14, 36, 69, 127, 135, 141, Simone de Beauvoir 18

161, 237 Simon Herbert 6, 98 Unconsciousness 28 Singularity 22, 29 Undirected graph 95 Skinner B. F. 53 260 Warwick Kevin 132, 143 V Watson John B 53 Vants 161 Wertheimer Max 53 Virtual embryos 157, 160 Whalen T. 59 Virtual growth 156, 179 Whitehead 55, 81 Virtual intelligence 156, 179 Will (free) 35 Virtual life 156, 179 William James 27, 73 Virtual metabolism 156,179 Wittgenstain L. 56, 81 W Z Waives 28 Zadeh Lotfi 84



ABOUT THE AUTHOR

Professor Leonid M. Polyakov is a member of the Computer Science and Math department faculty at Globe Institute of Technology (New York) and the author of over 100 books and articles. He earned his Ph.D. in Electrical Engineering and Theory of Control Systems from the Moscow Machine Tool Institute. He was the principal designer of the intelligent control system for a machine-tool manufacturing company (Odessa, Ukraine). He taught Cybernetics and Intelligent Control Systems at Odessa Polytechnic Institute and has extensive working experience with different American engineering companies.


