-Building on Turing's ideas, von Neumann developed the idea of designing a program that would instruct the Turing machine to reproduce itself. This was the idea of the stored program: the computer was controlled by a program stored in its internal memory, so that it did not have to be reprogrammed for each new task. This was the first time it was conceived that a computer could prepare and execute its own programs.
The neuronal model
-W. McCulloch and the logician W. Pitts showed in 1943 that the operations of a nerve cell and its connections with other nerve cells (a neural network) could be modeled in terms of logic. Nerves could be thought of as logical statements, and the all-or-none property of nerve firing could be compared with the operation of the propositional calculus, in which a statement is either true or false. Just as one proposition can imply another, the firing of one neuron leads to the firing of another. The conclusion was that anything that can be described exhaustively and unambiguously is realizable by an appropriate finite neural network.
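The correspondence between all-or-none nerve firing and true/false propositions can be made concrete with a minimal sketch of a threshold unit. The weights and thresholds below are illustrative assumptions, not parameters from the 1943 paper:

```python
# Minimal sketch of a McCulloch-Pitts-style threshold unit: the neuron
# "fires" (outputs 1) exactly when the weighted sum of its all-or-none
# inputs reaches the threshold, mirroring a true/false proposition.

def mcp_neuron(inputs, weights, threshold):
    """All-or-none unit: fires iff the weighted input sum meets the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Basic logical operations realized as single units (illustrative parameters).
AND = lambda x, y: mcp_neuron([x, y], [1, 1], threshold=2)
OR  = lambda x, y: mcp_neuron([x, y], [1, 1], threshold=1)
NOT = lambda x:    mcp_neuron([x], [-1], threshold=0)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
print(NOT(0), NOT(1))        # 1 0
```

Chaining such units gives the finite networks the text refers to: one unit's firing serves as another's input, just as one proposition implies another.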
-Thanks to this idea, the notion of the Turing machine took shape in two directions: toward a nervous system composed of innumerable all-or-none neurons (on or off), and toward a computer that could carry out any unambiguously described process. While Turing had shown the in-principle possibility of computing machines of great power, McCulloch and Pitts demonstrated that one such machine, the human brain, could be thought of as operating by the principles of logic, and hence as a powerful computer.
-McCulloch held that the fundamental problems of epistemology can be formulated and solved only in light of the central nervous system, and he tied his claims about thought too strongly to what was known about the nervous system in his time. He has been criticized on the grounds that his analogy between logic and the brain is too direct, and that analogies should really be sought at a higher level (McCarthy).
-One of his advances was his support for research on the specific properties of individual nerve cells, which helped in understanding some of the most important aspects of the nervous system. Moreover, his ideas about the nature and connections of nerve cells are now being revived in computer science.
The cybernetic synthesis
-During his work on servomechanisms in the 1930s and 1940s, N. Wiener began to think about the nature of feedback and of self-correcting and self-regulating systems, whether mechanical or human. He collaborated with V. Bush, a pioneer in the development of analog computers, and was interested in the work of McCulloch and Pitts. He went further than all his contemporaries, however, in his conviction that these varied scientific and technological developments cohered.
-Wiener conceived of these new developments as constituting a new science, centered on issues of control and communication. For him the engineering problems of control and of communication were inseparable, since they centered not on the techniques of electrical engineering but on the notion of the message, whether transmitted by electrical, mechanical, or nervous means.
-With his collaborators Rosenblueth and Bigelow, Wiener introduced the idea that it is legitimate to speak of machines that exhibit feedback as being motivated by goals, as calculating the differences between their goals and their actual performance, and as working to reduce those differences.
-They also developed a new notion of the central nervous system. It is no longer a self-contained organ that receives inputs from the senses and discharges onto the muscles. On the contrary, some of its most distinctive characteristics can only be explained as circular processes that emerge from the nervous system into the muscles and re-enter the nervous system through the sense organs, whether proprioceptors or the special sense organs. This marked a new step in the study of the branch of neurophysiology concerned with the nervous system as an integrated whole. These ideas are analogous to Lashley's critiques of behaviorism.
-In 1948 he integrated his ideas about the nervous system, the electronic computer, and the operation of other machines into the new science of cybernetics, defined as the entire field of the theory of control and communication, whether in the machine or in the animal. This synthesis, however, was no more than a pioneering example, and it won more adherents in the USSR than within cognitive science.
Information theory
-C. Shannon, an electrical engineer, is credited with the development of information theory. In the late 1930s he saw that the principles of logic (in terms of true and false propositions) could be used to describe the two states (on and off) of electromechanical relay switches, so that electrical circuits like those of computers could embody fundamental operations of thought.
-Together with W. Weaver, Shannon developed the key notion of information theory: information can be thought of as entirely divorced from specific content, as simply a decision between two equally plausible alternatives. The basic unit of information is the bit (binary digit), the amount of information required to select one message from two equally probable alternatives.
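The definition of the bit generalizes directly: selecting one message from N equally likely alternatives requires log2(N) bits, i.e., that many successive binary decisions. A minimal sketch of this standard formula:

```python
import math

def bits_required(n_alternatives: int) -> float:
    """Information (in bits) needed to pick one of n equally likely messages."""
    return math.log2(n_alternatives)

print(bits_required(2))  # 1.0 -> one binary decision
print(bits_required(8))  # 3.0 -> three successive binary decisions
```

Note that nothing in the computation refers to what the messages are about, which is exactly the content-independence the theory exploits.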
-Wiener's [Shannon's??] insight made it possible to think of information independently of any particular transmission device, allowing a focus on the efficacy of any communication of messages by any mechanism. In this way, one could consider cognitive processes independently of any particular embodiment, an opportunity that would be seized by psychologists seeking to describe the mechanisms underlying the processing of any kind of information.
-Recently, cognitive scientists have questioned the idea that all information can be treated equivalently, ignoring questions of content.
Neurophysiological syndromes
-During the wars much was learned about aphasia (language deficit), agnosia (difficulty in recognition), and other forms of mental pathology that arose as a consequence of brain damage.
-One of the discoveries was that there were similarities in these pathologies that cut across cultural and linguistic boundaries, an indication that cognitive capacities are organized in the nervous system with more regularity than purely environmental explanations of mental processes would allow.
-Moreover, the patterns of breakdown could not be explained in terms of a disruption between stimulus and response, but rather in terms of a hierarchy of altered behavioral responses.
-The profiles of abilities and disabilities that emerge with brain damage provided fruitful suggestions about how the human mind may be organized in normal individuals.
Catalytic encounters and influential writings
-There were numerous meetings among those interested in questions of cognition (of which the Hixon Symposium was only one, though of special importance for the ideas voiced there about the brain-computer connection and the challenge to behaviorism), and a significant number of publications (such as Ashby's Design for a Brain, Jakobson's writings on linguistics, Hebb's on neuropsychology, Bateson's on anthropology, and the works of Bartlett, Lévi-Strauss, Luria, Piaget, and Vygotsky) that helped to promote a new interdisciplinary science of the mind.
-To a large extent all these developments took place outside the established fields of study, as extracurricular activities from the perspective of the dominant lines (behaviorist psychology, structural linguistics, functionalist social anthropology, the neuropsychology of animal learning). More dramatic events were needed for these developments to acquire a central place.
[Working hypothesis: the preponderant role of institutional factors in the constitution of cognitive science (over and above the critiques of behaviorism).]
-Certain books served to spread the ideas then in vogue. In Plans and the Structure of Behavior (1960), Miller, Pribram, and Galanter criticized behaviorist psychology's notion of the reflex arc and adopted a cybernetic approach to behavior. A few years later, textbooks of cognitive psychology began to appear, the most influential being Ulric Neisser's Cognitive Psychology (1967) (though it criticized several important points of the computational metaphor of the mind). In The Sciences of the Artificial (1969), Simon provided a philosophical account of his approach: both the computer and the human mind are physical entities that process, transform, elaborate, and manipulate symbols of various kinds. In 1972, Newell and Simon published Human Problem Solving, in which they describe their problem-solving programs. In 1964, Fodor and Katz edited The Structure of Language, a collection of articles on the new linguistics. Computers and Thought, edited in 1963 by Feigenbaum and Feldman, and Semantic Information Processing, edited in 1968 by Minsky, presented the new advances in artificial intelligence. In anthropology, Tyler's Cognitive Anthropology (1969) stood out.
-The prevailing intellectual climate was that a revolution was under way similar to the revolution in seventeenth-century physics: it was believed that there were many discoveries to be made, that the appropriate method for making them was at hand, that a new mathematics, a new ontology, and a new view of scientific method were required, and that obsolete intellectual and institutional habits had to be fought.
The Sloan initiative
-In the early 1970s the private Alfred P. Sloan Foundation stimulated a program in the neurosciences, a set of disciplines that explore the nervous system, including neuropsychology, neurophysiology, neuroanatomy, and neurochemistry.
-The foundation was interested in financing another similar project, and in 1976 cognitive scientists succeeded in convincing it to fund a project in this discipline with twenty million dollars.
-The Sloan Foundation's initiative had a catalytic effect on the development of the field of cognitive science. The journal Cognitive Science was founded, its first issue appearing in 1977, and in 1979 a society of the same name was established. Programs, courses, journals, and the rest of the scholarly paraphernalia of cognitive science proliferated, including popular books such as Hunt's The Universe Within (1982) and this very work.
-This declaration of the birth of a disciplinary field was invigorating, but it guaranteed neither internal consensus nor appreciable scientific progress. There were tensions over what the field was, who understood it, who threatened it, and in what direction it should progress.
-One symptom of this lack of consensus was the Sloan Foundation's State of the Art Report. The report stated that the field's reason for being was a shared research objective: to discover the representational and computational capacities of the mind and their structural and functional representation in the brain. It also presented an outline of the disciplinary interrelations already forged (philosophy-psychology, philosophy-linguistics, psychology-linguistics, psychology-AI, psychology-anthropology, psychology-neuroscience, linguistics-AI, linguistics-anthropology, linguistics-neuroscience, anthropology-neuroscience) and of those that remained to be forged (philosophy-AI, philosophy-anthropology, philosophy-neuroscience, AI-anthropology). The community received the report negatively, possibly because each scientist read it from the standpoint of his own discipline. It is thus not surprising that neuroscientists are less enthusiastic about the representational level than psychologists, linguists, and computer scientists.
Computers
-The computer serves, in the first place, as a model of human thought: if a machine can reason, have goals, revise its behavior, transform information, and so on, then human beings deserve to be characterized in the same way.
-In the second place, the computer serves as a working tool.
-Some critics hold that the computer is just one more mistaken mechanical model proposed to explain human cognition. They consider it an error to view active organisms as information-processing systems. The fact that behavior can be simulated does not mean that a correct description of behavior has been achieved.
-Engagement with the computational model differs across the disciplines that constitute cognitive science. That engagement is central in artificial intelligence, and accepted with some reservations in linguistics and psychology. In anthropology and neuroscience the question is more problematic, insofar as some consider brain-level explanations sufficient and others cultural ones. In philosophy attitudes vary, from unabashed enthusiasm to virulent skepticism.
Less emphasis on affect, context, culture, and history
-Although the exclusion of affective, contextual, cultural, and historical factors from the explanation of behavior is not an explicit thesis of cognitivist orthodoxy, in practice these factors are clearly set aside, even by cognitive anthropologists.
-This exclusion may be a matter of practicality: cognitive science cannot pretend to include every factor, it would be unable to construct a complete explanation of action, and individual factors can yield a fairly good explanation on their own.
-Some critics of cognitivism hold that these factors cannot be explained by science, because they belong to inherently humanistic or aesthetic dimensions, yet they play a central role in human experience.
-Other critics likewise acknowledge that these factors play a central role, but believe they are susceptible of scientific explanation: cognitive science must incorporate them.
Belief in interdisciplinary studies
-There is agreement that there is no single cognitive science. In reality, there are researchers from different disciplines who believe they can interact productively to achieve a more powerful understanding than they could attain working from a single discipline.
-Some skeptics hold that no progress is made by combining disciplines, and that it is better for each to keep to its own place.
-Another criticism is that it is not clear which disciplines will ultimately contribute to cognitive science, so much time may be lost in fruitless collaborations.
-At best, for these critics, there should be cooperation among disciplines, but never a total fusion.
Rootedness in classical philosophical problems
-Classical philosophical problems are a key ingredient of cognitive science, although not all cognitive scientists see it that way.
-Only by exploring the history of philosophy can cognitive scientists be shown that they are dealing with issues that were addressed earlier by philosophers. Cognitive scientists may object that the philosophical problems were badly formulated, or were never solved, and that philosophers today have no role to play in cognitive science. Even philosophers themselves may think so. Nevertheless, it remains worthwhile to review philosophical conceptions of human knowledge.
…of the idea that individuals have a limited capacity for taking in and storing information. Moreover, instead of speaking of structural limits in a static way, Cherry and Broadbent sought to determine precisely what happens to this information from the moment it is received. Broadbent thus became the first modern psychologist to describe cognitive functioning with a flow chart.
Jerome Bruner's strategic approach
-In collaboration with Jacqueline Goodnow and George Austin, Bruner published A Study of Thinking in 1956, which grew out of the Cognition Project that Bruner directed at Harvard. The topic of the book was classification, categorization, or concept formation/acquisition. The problem was how a person, faced with a set of elements, comes to group them into categories.
-Bruner's experiments consisted in indicating a given category to the subject and showing him objects so that he could say whether or not they belonged to the category.
-Although this type of experiment resembled earlier ones, it differed in that Bruner did not conceive of subjects as mere reactors to stimuli, but as active, constructive problem solvers. He tried to analyze the informational properties of long sequences of actions, called strategies (successive scanning, conservative focusing, focus gambling). Bruner found that the best way to explain individual acts is in terms of these general patterns or strategies, rather than in terms of particular responses to particular stimuli.
Later stages of information processing
-During the 1960s some details of information processing were worked out, building on Broadbent's work. The studies of Moray and Treisman stand out.
-Neisser proposed a more complex informational account. He argued that a subject understands a signal by synthesizing an internal representation that is associated with the signal.
-Subsequently, Broadbent revised his unidirectional model of attention, replacing it with a two-way model.
-Sperling conducted experiments to show how much information an individual can take in at one time through vision.
-Sternberg studied the fine details of information processing, even arriving at time measurements.
A model of memory
-Atkinson and Shiffrin proposed an influential model according to which memory comprises three stores: one in which stimuli are immediately registered within the appropriate sensory system; a short-term store in which incoming information decays and disappears rapidly; and a long-term store, to which information from the short-term store may be transferred.
-Allport criticized the idea that informational input is serial (bits entering one after another at a single point), arguing instead that it is parallel (there are multiple entries at multiple points).
-Shannon held that contextual information affects the processing of sensory information, so a bottom-up or outside-in model that treats bits of information independently of their meaning and context of presentation will not do justice to human cognition.
-The stages of the Atkinson and Shiffrin model begin to blur when one examines the question more closely: short-term memory cannot be so easily separated from intermediate memory, pre-attentive processes blend with sensory buffers, and so on.
Top-down approaches
Molar units of analysis
-Studies by Bransford show that subjects tend not to remember the exact wording of the sentences they encounter, but they do remember the sentences' meanings. This demonstrates that individuals use inferential and integrative approaches when memorizing fragments of language, which casts doubt on experiments that focus on memory for nonsense syllables or meaningless phrases. Subjects use different organizing schemas that determine how the same sensory information is interpreted. Moreover, subjects infer certain things from the sentences they hear, and many of their responses are based on those inferences, not on the literal content of the sentences.
-Rumelhart has postulated the existence of a grammar for stories: a set of underlying assumptions about how the plot of a story should unfold. This grammar affects the way subjects interpret stories: they tend to remember more easily those stories that respect the grammar, and tend to forget or normalize stories that do not.
-These new approaches did not dethrone bottom-up approaches, but researchers began to appreciate that subjects do not carry out tasks as blank slates. An alternative approach thus emerged, centered on how the organism, with structures already prepared for stimulation, manipulates and reorders the information it encounters.
-These new approaches are ecological and top-down, and they even came to influence the more conservative studies of memory (for example, Bower, Craik, Lockhart).
-This new perspective represents an advance, because it does not merely acknowledge the existence of top-down processes, but describes their details.
Synthesis and general approaches
-The classical information-processing models were blind with respect to content: it was assumed that any kind of information was processed in the same way. These assumptions came to be questioned, however. Shepard and Metzler sought to show that, in a task in which subjects compare geometric figures to determine whether they are the same, they do so by mentally rotating them, which seems to show that the figures are represented not propositionally but as images (Kosslyn). From the fact that computers transmit information in a single symbol system, it does not follow that humans are likewise restricted to one.
-One of the problems facing cognitive psychology is the partition of the field into many specialties and subspecialties. John Anderson proposed an ambitious attempt at unification, a general model of the architecture of cognition called ACT (Adaptive Control of Thought). Its central notion is that of the production system: when a node in the network is sufficiently activated, an action or production results. The system includes several forms of memory (working memory, …
-Some want to simulate human thought processes exactly, while others are content with any program that leads to intelligent outcomes. Some hold a weak view of the thinking-machine metaphor, for which the design of intelligent programs is only a means of testing theories about how humans might carry out cognitive operations, while others hold a strong view, for which an appropriately programmed computer really is a mind, in the sense that it can literally be said to understand and to have other cognitive states.
-There is a tension between generalists and experts, reflecting the debate between modularist and central-processing perspectives in psychology: for the generalists there are programs, or families of programs, that can be applied to any kind of problem; for the experts, programs must contain more detailed knowledge about a specific domain and be restricted to that domain.
-For some, artificial intelligence has scientific importance (many hold that it replaces epistemology); for others it is not a science but a form of applied engineering, without the theoretical foundation of a scientific discipline.
Historical background
-In 1938 Shannon showed that the circuits found in an electronic machine could be expressed in terms of a Boolean equation: the true-false system paralleled the on-off states of switches, or the open and closed states of a circuit. Any operation that can be described in a finite set of steps can be carried out by such circuits.
-In 1936 Turing proposed his idea that an explicitly stated computational task could be executed by a machine possessing an appropriate finite set of instructions. He began to think about the relation between human thought and artificial thought, which led to the famous Turing test.
-Bush began to build machines capable of solving differential equations.
-Around 1943 McCulloch and Pitts developed their ideas on neural networks, especially the idea that anything that can be expressed exhaustively and unambiguously can be realized by an appropriate finite network of neurons. The brain could be viewed as a machine in a more precise way than before, and could be thought of as a Turing machine.
-Wiener was drawing together the currents of cybernetics, a new interdisciplinary field investigating feedback mechanisms in organic matter and in automata.
-Von Neumann is usually credited with the idea of the stored program, whereby the operations of the computer can be controlled by means of a program, or set of instructions, stored in the computer's internal memory, making it unnecessary to reprogram the computer for each new task. He also explored the analogies and differences between computers and the brain.
The summer of 1956 at Dartmouth
-In the summer of 1956 a group of young mathematicians and logicians met at Dartmouth College around the conjecture that every aspect of intelligence can in principle be described so precisely that a machine could simulate it. The capacities of computers to carry out the following cognitive tasks were presented: playing chess (Alex Bernstein), playing checkers (Arthur Samuel), proving logical theorems (Newell and Simon), modeling neural networks (Nathan Rochester), and proving Euclidean theorems (Minsky).
-The purposes of the meeting were not fulfilled, but the ideas proposed by the previous generation (Wiener, von Neumann, McCulloch, Turing) could now be carried out by designing machines and writing programs about which the earlier generation had only speculated.
Programs for problems: Allen Newell and Herbert Simon
A. Logic Theorist
-Their first program, Logic Theorist (LT), could prove theorems from Russell and Whitehead's Principia.
-To develop it, they faced the difficulty of writing programs directly in the language of the computer: they needed a higher-level language, simpler for the human programmer, that could be translated automatically into machine language. For this they designed information-processing languages (IPLs), or list-processing languages.
-List processing was a technique that solved the problem of allocating storage in a limited computer memory, and it allowed programmers to create data structures for storing information in an accessible form, similar to human thought processes.
-The program contains the basic rules of operation: a list of axioms and previously proved theorems. The program is then given a new logical expression and instructed to discover a proof. It tries every operation available to it in order to find a proof. If it finds one, the proof is printed on a long strip of paper; if not, the program declares that it cannot solve the problem and halts.
B. General Problem Solver
-Newell and Simon's most ambitious project was the development of a General Problem Solver (GPS), a program whose methods could in principle be used to solve all kinds of problems.
-The project did not aim to design a program that would simply solve problems, but one that would do so by imitating the processes followed by normal humans in solving them. To that end they collected protocols recording the introspections and remarks of subjects engaged in problem solving.
-In means-ends analysis, one first establishes the desired form of the solution to the problem, and then compares one's current place in the solution process with the final desired goal. If the two coincide, the problem has been solved. If they do not, the solver clarifies the difference and seeks methods to reduce it. There is a table associating the system's goals with operators that may be useful for achieving them. On computing the difference between the current state and the goal, the system selects an operator associated with that difference and tests whether the operator is applicable to the current situation. If it is applicable, and produces a result closer to the desired state, it is repeated. If it is inapplicable, the system generates a sub-goal, whose aim is to reduce the difference between the current situation and a situation in which the operator can be applied. This process is repeated until the goal is achieved, or until it is shown that it cannot be achieved with the given information or with the operators available to the program.
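The means-ends loop described above can be sketched in miniature. This is a toy illustration, not GPS itself: states are sets of facts, the "difference" is the set of unmet goal facts, and a hypothetical operator table associates each difference with an operator and its preconditions:

```python
# Toy means-ends analysis: reduce the difference between the current state
# and the goal by applying operators, recursing on an operator's unmet
# preconditions as sub-goals (illustrative operators, not Newell & Simon's).

OPERATORS = {
    "have_cake":        {"pre": {"have_ingredients"}, "add": {"have_cake"}},
    "have_ingredients": {"pre": set(),                "add": {"have_ingredients"}},
}

def solve(state, goal, plan):
    diff = goal - state                 # compare current state with the goal
    if not diff:                        # no difference: problem solved
        return plan
    fact = sorted(diff)[0]
    op = OPERATORS[fact]                # operator associated with this difference
    if op["pre"] - state:               # operator inapplicable: set up a sub-goal
        plan = solve(state, state | op["pre"], plan)
        state = state | op["pre"]
    plan.append(fact)                   # apply the operator
    return solve(state | op["add"], goal, plan)

print(solve(set(), {"have_cake"}, []))  # ['have_ingredients', 'have_cake']
```

The recursion on unmet preconditions is the sub-goal mechanism of the text: the system first reduces the difference between the current situation and one where the operator applies, then applies it.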
-The GPS project was abandoned, because its generality was not as broad as its creators had intended, and because the field of AI moved in other directions.
-Minsky did not contribute very actively to the AI literature, and no single line of work is associated with him, but he trained an active group of students.
-T. G. Evans developed a program in the late 1960s that solved visual analogies. If shown two figures F and G standing in an analogy relation A, the program was able to select from a group of figures another pair, F′ and G′, that also stood in relation A. The program describes F and G as figures and characterizes the difference between the descriptions (in terms such as "inside," "on top of," etc.); it then applies the identified difference as a transformation rule to F′, to arrive at a pattern that has the same description as one of the candidate patterns (G′).
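The describe-difference-transform scheme can be sketched with a toy version. The figure descriptions, relation names, and candidate set below are invented for illustration; Evans's program worked on actual geometric diagrams:

```python
# Toy A:B :: C:? analogy solver: describe each figure (a shape plus the
# relation of a dot to it), take the A->B difference as a transformation
# rule, and pick the candidate matching that rule applied to C.

def analogy(a, b, c, candidates):
    """Apply the relation change observed from a to b onto c."""
    target = {"shape": c["shape"],
              "rel": b["rel"] if a["rel"] != b["rel"] else c["rel"]}
    return next(x for x in candidates if x == target)

A = {"shape": "square",   "rel": "inside"}   # dot inside a square
B = {"shape": "square",   "rel": "above"}    # dot moved above the square
C = {"shape": "triangle", "rel": "inside"}   # dot inside a triangle
opts = [{"shape": "triangle", "rel": "above"},
        {"shape": "circle",   "rel": "inside"}]

print(analogy(A, B, C, opts))  # {'shape': 'triangle', 'rel': 'above'}
```

As in the text, the "intelligence" lies entirely in comparing descriptions and reapplying their difference, not in any understanding of the figures themselves.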
-Daniel Bobrow designed STUDENT, capable of solving the kind of algebra problems found in high-school mathematics books. The program assumed that each sentence of the problem was an equation. It was given knowledge about certain words to help it locate the equation, analyzing the syntax of the sentences by means of those known meanings. STUDENT exhibits at once the powers and the limitations of the programs of that era: programmers could design machines capable of acting intelligently, but the procedures those machines used were very different from the ones employed by ordinary humans. Whereas the computers could be ignorant of the domain to which the equations applied (their knowledge being purely syntactic), humans would generally draw on their knowledge of that domain to solve the problem.
Lists and logic: John McCarthy
-One of McCarthy's greatest achievements was the design of LISP (list processing), the programming language that became the most widely used in AI. LISP is concerned with the representation and manipulation of lists, of items in lists, and of lists of lists. LISP's power derives from the fact that it is a recursive language, capable of describing and manipulating structures and sets of structures.
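The recursive list-manipulation style that gives LISP its power can be suggested in Python (a sketch of the idea only; real LISP syntax and primitives differ):

```python
# Recursive processing of nested lists, in the spirit of LISP's car/cdr
# style: the function is defined in terms of itself on a structurally
# smaller list, so arbitrarily deep "lists of lists" need no special case.

def flatten(lst):
    """Collapse arbitrarily nested lists into one flat list."""
    if not lst:
        return []
    head, *tail = lst              # head ~ car, tail ~ cdr
    if isinstance(head, list):
        return flatten(head) + flatten(tail)
    return [head] + flatten(tail)

print(flatten([1, [2, [3, 4]], [5]]))  # [1, 2, 3, 4, 5]
```

The same recursive pattern handles a flat list, a list of lists, or lists nested to any depth, which is exactly the generality the text attributes to a recursive language.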
-McCarthy also had strong ideas about the aims of AI and the ways of achieving them. For him, the way to make intelligent machines is to use a rigorous formal approach in which the acts that produce intelligence are reduced to a set of logical relations or axioms that can be expressed precisely in mathematical terms. McCarthy's system rested on faith in the consistency of a belief system and on the view that all knowledge can be thought of in purely logical terms. If this approach were adopted, it would be possible to use theorem-proving techniques that do not depend on the details of particular domains. McCarthy was a defender of these extremely general points of view. He worked on the design of a non-conventional modification of standard logic to model common-sense reasoning, and his colleague Hayes tried to formulate the thought processes of common-sense physics in logical terms.
Other Milestones in Programming
-Feigenbaum designed DENDRAL, a program capable of determining, from the enormous
quantity of spectrograph data, which organic compound was being analyzed. The program
formulated hypotheses about the molecular structure of the compound, and then tested those
hypotheses through further predictions. The final output was a list of possible molecular
compounds, ranked in decreasing order of plausibility. Its results compared favorably with
those of expert chemists. Unlike earlier programs, it used a great deal of stored
knowledge about chemistry and did not emulate the ways in which human beings would solve
the problem.
-For their part, Colby and Weizenbaum developed programs capable of holding a dialogue.
Both programs could, for a time, deceive a person. Nevertheless, anyone who knew the
design of the program, or who could infer that a machine was involved, could unmask it.
This is because, as in the case of STUDENT, the programs do not understand the words used,
but are designed to respond in a certain way to certain words.
The Phenomenal SHRDLU
-Terry Winograd developed a program called SHRDLU, an expert that genuinely understands
language, although it works in a very limited domain.
-Winograd designed for his program a world of simple blocks that could be stacked and
arranged in various ways. His program was sophisticated enough in its linguistic knowledge
to carry out a complex set of instructions. Moreover, SHRDLU gives evidence that those
instructions are in fact understood (one such sign is that the program asks for
clarification when the instructions are ambiguous).
-The simulated world was small, and the number of actions the program could carry out and
the number of questions it could answer were very limited. Nevertheless, within its
particular universe the program behaved plausibly, perceiving distinctions and carrying
out orders, suggesting that it understood what was said to it.
-SHRDLU was more sophisticated than its predecessors because it used a set of linguistic
specialists. It also possessed belief systems, knowledge about problem solving, and
specialists that detect whether an utterance is a question, a command, or a comment.
-According to Dennett, one of SHRDLU's major contributions is its exploration of the
demands imposed on any system that follows instructions, plans changes in the world, and
keeps a record of them, even though it is not exactly analogous to a human.
-SHRDLU also had limitations. It had no semantic information with which to distinguish the
meanings of words like and, the, and but. Moreover, it could not learn to perform its
tasks better. A later, similar program called HACKER, designed by Sussman, showed that
such learning was possible.
Crucial Issues
-The need for expert systems: while Newell and Simon led the search for programs that
could deal with any kind of problem, toward the end of the 1960s the limitations of those
general programs became more evident. Feigenbaum held that his teachers were working on
toy problems, not real-world problems. The development of SHRDLU was what finally exposed
the limitations of the generalist program, and the need for systems possessing a large
amount of specialized knowledge.
-Procedural versus declarative representation: some favored a declarative representation
(knowledge encoded as a set of stored facts or declarations) and others a procedural
representation (knowledge encoded as a set of actions or procedures to be carried out).
seems to believe that, to do cognitive studies, one needs only two levels of explanation: the
level of intentionality (a plain English discussion of the organism's wishes, beliefs, and so
on) and a neurophysiological explanation of what the brain does in realizing these
intentional states. He finds no need for a level of symbolic representation, which dominates
work throughout the cognitive sciences: there is no representational level in which
computers and humans are similar. However, Searle's positive assertions about
intentionality have little force.
Critics and Defenders: The Debate Continues
-Weizenbaum raises ethical questions about whether A.I. should be allowed to impinge on
territory that has hitherto been restricted to human beings.
-Another line of criticism suggests that efforts to simulate human intelligence are perfectly
valid, but that most of the A.I. community has heretofore used superficial models which do
not approach the core of human thought processes. Scientists ought to spend their time
analyzing competent human beings themselves, and then program computers to carry out
intelligent operations.
-The practice of A.I. entails deep philosophical issues which cannot be ignored or
minimized.
-There are limits to what can be explained by current A.I. methods, and even whole areas of
study may lie outside of artificial intelligence, at least now and perhaps permanently.
-There are increasingly close ties being forged between experimental cognitive psychology
and artificial intelligence. Psychologists can benefit from the careful simulations made by
A.I. researchers, and can put their own typically informal models to rigorous tests; A.I.
scientists can determine whether their hypothesized models of human behavior are actually
realized by the subjects about whom they have been speculating.
-Parts of psychology and parts of computer science will simply merge into a single
discipline, or they will form the central core of a newly forged cognitive science. There
may well be areas of computer science, as well as of psychology, that do not take part in
this merger. Nonetheless, the issue of the actual degree of similarity between humans and
computers cannot be permanently ignored.
-Hence, Chomsky's theory is not a mere reorganization of the data into a new kind of
library catalogue, nor another speculative philosophy about the nature of Man and
Language, but rather a rigorous explication of our intuitions about our language in terms of
an overt axiom system, the theorems derivable from it, explicit results which may be
compared with new data and other intuitions, all based plainly on an overt theory of the
internal structure of languages.
-One of Chomsky's assumptions was that the syntax of language could be examined
independently of other aspects of language. He even conceived syntax as the core of
language, as the capacity unique to the human species to combine and recombine verbal
symbols in certain specifiable orders, in order to create a potentially infinite number of
grammatically acceptable sentences. Syntax is the primary, basic, or deep level of the
language, with both semantics (meaning) and phonology (sound structure) being
constructed upon a syntactic core. Chomsky also considered language to be an abstraction,
a capacity that can merely be glimpsed in impure form in an individual's actual output.
-Another assumption was that the discipline of linguistics could proceed independently of
other areas of the cognitive sciences. He challenged the widespread belief in general
powers of the mind, coming to think of the mind as a series of relatively independent
mental organs or modules which follow their own rules. Language is an autonomous organ,
so linguistics is an autonomous discipline. Superimposed on this modularity was a
commitment to mentalism, to the existence of abstract structures in the mind which make
knowledge possible. There was as well a swing to nativism, the belief that much of our
knowledge is universal and inborn: the individual is born with a strong penchant to learn
language, and the possible forms of the language which one can learn are sharply limited by
one's species membership with its peculiar genetic inheritance (evidence for this is that,
despite the difficulty of the task and the lack of explicit tutelage or sufficiently rich
stimuli, children learn language rapidly).
-These working assumptions about linguistic autonomy worked out propitiously and made
linguistics a rapidly developing area of science. But whether these assumptions can
ultimately be sustained constitutes a problem that has yet to be resolved.
-Chomsky criticized behaviorism, which attempted to explain linguistic behavior in terms
of the same stimulus-response chains and laws of reinforcement as other behavior, because
it ignores the intricate structural properties of language and its creative aspect.
Subsequent Changes
-In the Standard Theory, there are no longer initial kernel sentences. Instead, one now
starts with the base grammar, which generates an initial phrase marker, the deep structure,
major operations being performed upon this deep structure. There is a transformational
component which converts the initial deep structure into other structures, the final of which
is the surface structure. Deep-structure relations are interpreted by a semantic component:
thus, the information necessary for semantic analysis must be represented in the deep
structure. Phonological interpretation occurs on the surface structure string.
-The Standard Theory was a more ambitious theory, attempting in part to accomplish for
semantics what had been modeled for syntax alone in Syntactic Structures. It also proved
system, or module, which has the potential to develop in a small and delimited range of
ways.
-Despite undeniable shifts in emphasis and strategy, the centrality of syntax, the belief in a
transformational component, and the view of semantics as an interpretation of basic
syntactic relations have endured.
Reactions in Other Cognitive Sciences
-George Miller became a convert to Chomskian linguistics and helped to turn the
psychology of language into a testing ground for Chomsky's transformational claims, trying
to demonstrate the "psychological reality" of transformations. This effort was not
particularly successful, but important methods of psycholinguistic research were worked
out in the process.
-Chomsky's ideas and definitions clash with established views in psychology: his formal
methods have met with suspicion, his ideas about language as a separate realm with
opposition, and his belief in innate ideas with outright skepticism. His particular
notions and biases have thus far had only modest impact on mainstream psychology.
-Chomsky had enormous influence in the psychology of language, or psycholinguistics. In
the study of syntactic capacities, models for analysis were generally supplied by Chomsky.
At times these models have been used as a means of characterizing the data collected; at
other times, the data have been used to test the "psychological reality" of models. However,
when Chomsky's models were applied, the results have not been consistent with those
models, at least in any straightforward way. Sometimes he has discounted empirical
research in psycholinguistics, with the disclaimer that his theories have to do with
idealized competence, and not with the facts of individual performance.
-Jerrold Katz and Jerry Fodor introduced Chomsky's model to philosophy. They developed
a model of semantics which became incorporated into the "standard version" of
transformational grammar. Philosophers have reacted coolly to Chomsky's promotion of
seemingly discredited rationalist notions and to his enthusiasm for innate ideas. His ready
use of terms like rules, structures, systems, with (apparent) disregard for the nontrivial
technical problems involved in such concepts, and his facile reinterpretation of leading
philosophical figures of the past have proven difficult for most philosophers to swallow.
Also Chomsky's lack of interest in semantics has troubled many philosophers, who find in
the work of semanticist Richard Montague some of the same formal elegance others have
admired in Chomsky's syntactic discussions.
-Several of Chomsky's main ideas are not readily implemented in computational formats.
For example, there is no guarantee in principle that one can parse sentences using
transformational grammatical approaches. Moreover, A.I. is very much oriented toward
practical problems of designing programs that understand sentences or stories, and
Chomsky's syntax-centered framework is not suited for the main issues of understanding
discourse. Accordingly, computer scientists like Roger Schank have been publicly hostile to
the theory, taking the position that semantics and pragmatics are central in language and
that syntax is relatively unimportant. Schank has also attacked the modular notion. For his
part, Chomsky has been rather critical of research in artificial intelligence, finding it mostly
unmotivated and ad hoc.
fieldwork, however, was that it left a great deal of discretion in the hands of a single
investigator or a small cadre of fieldworkers.
-The need for more objective methods which could be employed by a single investigative
team gave rise in the 1960s to the field of ethnoscience.
-While it seemed for a while that the new empirical procedures might place anthropology
on a firmer scientific footing, there has recently been a disaffection with these methods.
There has been at least a partial return to the view that anthropology ought to re-embrace
the holistic methods of the in-depth case study, and perhaps align itself more with the
humanities and less with the sciences.
Edward Tylor's Empiricist Anthropology
-Edward Tylor undertook in his book a rationalist assault on the divine inspiration of
religious beliefs. According to his revisionist perspective, human culture and religions were
products of a natural, law-governed evolution of human mental capacities.
-Tylor was declaring that human capacities are not simply part of one's birthright: they are
rather derived from one's membership in a group and presumably could be changed, if the
individuals were reared in a different group or if the group itself altered its practices or its
values.
-According to his own scheme, humanity could be arrayed along a linear track, ranging
from savagery, to barbarism, to civilization. He believed in psychic unity, however, and
held that all peoples were capable of making this progression. Further, even those
individuals at the height of civilization were not bereft of earlier traces. Conversely, Tylor
also held that even the most irrational customs are products of a reasoning capacity like our
own.
-Tylor also made important methodological contributions. Noteworthy was his statistical
method of adhesion, whereby he attempted to determine which customs or practices hang
together, by preparing massive lists of the practices carried out in various cultures, and
noting which tended to occur at the same time.
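Tylor's statistical method of adhesion amounts to a co-occurrence count over cultures. A minimal sketch, with an invented miniature dataset (though exogamy, teknonymy, and mother-in-law avoidance are among the customs Tylor actually tabulated):

```python
# Illustrative sketch of Tylor's "adhesion" tabulation: count how often each
# pair of customs co-occurs across a set of cultures. The cultures and their
# custom lists below are hypothetical toy data.
from itertools import combinations
from collections import Counter

cultures = {
    "culture A": {"exogamy", "mother-in-law avoidance"},
    "culture B": {"exogamy", "mother-in-law avoidance", "teknonymy"},
    "culture C": {"teknonymy"},
}

adhesions = Counter()
for customs in cultures.values():
    for pair in combinations(sorted(customs), 2):  # sorted: stable pair keys
        adhesions[pair] += 1

print(adhesions[("exogamy", "mother-in-law avoidance")])  # 2
```

Pairs with counts much higher than chance would suggest that the two practices "hang together."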
James Frazer's Speculative Anthropology
-Frazer traced a connecting thread from the pagan ceremonies of the past to the practices of
Christianity and other modern religions. He described early forms of magic where one
could control another individual simply by gaining possession of some vestige of that
individual. These totemic practices anticipated the rise of religion where individuals gave
up the belief that they themselves could control events, and instead posited nonhuman
higher powers which govern the world. And finally, Frazer described the highest stage of
development, that of science, where man once again began to manipulate nature, but this
time sought to uncover and test the relevant physical laws. On this view, early men and
contemporary primitives were seen as relatively irrational, though perhaps possessing the
same potentials as modern man.
-The tradition that Frazer represented eventually yielded to a less grandiose, more empirical
approach. There was a large-scale expedition to the Torres Straits in the South Pacific. The
interest was kept in primitive mentality, but there was a shift in method, trying to take
systematic measurements of psychological characteristics in the field. Anthropologists did
not focus on "higher" cognitive functions, but they probed abilities to make discriminations
in various sensory modalities, to appreciate illusions, and to name colors. Nor were the
results particularly decisive with respect to the controversy about primitive mentality. There
were some provocative findings: for example, a hint that the language available to
individuals might influence the way in which they see or group colors; documentation of
the Papuan's keen powers of observation; a suggestion that the perception of spatial
relations may also be culturally conditioned; and the documentation of capacious memories
for family genealogies.
The American Version
-Faced with the conflicting claims of the physicist -who sought objective explanations of
color- and the explorer -who sought to capture the atmosphere of exotic cultures- Boas
strove to reconcile these perspectives. He concluded that validity must be granted both to
the scientific view of the outsider and to the subjective view of the particular individual or
culture. He brought this lesson to the larger arena of anthropology, where he undertook a
long-term study of Indian societies in the Pacific Northwest. In addition, he began to train
nearly all of the next generation of anthropologists.
-He avoided strong theoretical statements, preferring to adopt a more inductive approach.
-He opposed the notion of the linear evolution of culture. Boas felt that each culture was
best studied in terms of its own practices, needs, and pressures, rather than in relation to
some other culture which represented a more or less advanced mode of organization.
-Boas also emphasized the importance of language and of linguistics for all of
anthropological study. He developed methods for the careful notating of languages. He also
underlined the important role of language in all of human activity, though he expressed
skepticism that a culture could be restricted by the form of its particular language. He saw
thought as influencing language rather than vice versa.
-The difference between primitive peoples and ourselves is that whereas the categories used
by the primitive have developed in a crude and unreflective manner, contemporary literate
populations have been able to systematize knowledge, in the manner of the rational
scientist. This difference has emerged not because each individual in our society thinks in a
more logical manner but rather because various philosophically oriented materials have
become worked out more systematically over the generations and are now available to the
general population. Primitive and modern individuals possess essentially the same
cognitive potential.
-His most vocal critics in the next generation were those who, unlike Boas, had a strong
theoretical position to defend.
-Leslie White and Marvin Harris, devotees of evolutionism who were sympathetic to
Marxism, portrayed Boas as one who refused to take a stand on the relationship between
one culture and another, and who, in his passion for data about particular individuals and
groups, neglected the material and technological basis of human activities.
-A. R. Radcliffe-Brown stressed the importance in anthropology of an undergirding theory;
he promoted a Durkheimian approach, in which the needs for group solidarity exert a
decisive impact on kinship structures and on the actions and beliefs of individuals.
Radcliffe-Brown also saw cultures as part of a social system, as organisms which evolve
toward increasing diversity and complexity.
-The functionalist approach of Malinowski evinced little interest in mental phenomena or in
historical factors. According to Malinowski, the anthropologist should search for the
various goals that a particular custom, material object, idea, or belief serves within a
society. His biologically and psychologically oriented explanations never captured the field.
-Dan Sperber thought that it is from the work of Chomsky, Fodor, and others of the
transformationalist school that the anthropologist must now seek models. Here the lessons
turn out to be largely negative. Sperber points out that most human beliefs are not purely
propositional but are rather semipropositional. It is risky to apply to such amorphous belief
systems the rigid classificatory grid of the syntactician or the phonologist. Instead, one
needs to study the processes whereby rich penumbras of meaning are evoked. Sperber's
positive contribution inheres in his characterization of symbolic processes: rather than
being induced or constructed from experience, the symbolic mechanisms are part of the
innate mental equipment which makes experience possible. Anthropology is the discipline
that has access to the fullest range of beliefs, practices, and symbolic systems; hence, it is in
a privileged position to lay bare the operation of those human symbolic mechanisms that
supplement the pure computational aspects involved in language, mathematics, and
ordinary classification.
-The most telling line of criticism against Lévi-Strauss questions whether polymorphous
human behavioral patterns and beliefs can lend themselves to the kind of systematic,
rule-bound, and "closed" analysis that has proved appropriate for certain aspects of
linguistic structures. Geertz criticizes Lévi-Strauss's mechanistic approach, his
ignorance of the
particular historical conditions that spawn a given myth or social organization, the
minimization of affective and emotional factors, the loss of the specific individual with his
or her own needs, motivations, goals, and wishes. Geertz also questions the wisdom of
construing symbolic products as the output of internal cognitive mechanisms: according to
his more public view of mind, myths, rituals, beliefs, and the like are a socially generated
form of symbolization.
Ethnoscience
-Ethnoscience (or componential analysis, or ethnosemantics, or cognitive anthropology)
comprises the organized study of the thought systems of individuals in other cultures and
sometimes in our own. The background of systematic linguistics spawned the initial pair of
publications consciously styled in the ethnoscientific mode, like analyses of kinship
terminology. Drawing on the model of a linguist's grammar, ethnoscientists search for the
ways in which knowledge of a culture's rules is reflected in the behavior of natives, and
especially in their speech.
Componential analysis
-The method of componential analysis starts, for example, by taking a set of kinship terms.
Next, the analyst defines these terms with respect to genealogical relations. All terms are
defined through primitive forms and some simple operators. The third stage entails a number
of observations obtained from the definitions. Now comes the crucial stage: the analyst
hypothesizes that three dimensions will be sufficient to define all the terms: the sex of
the relative, generation, and lineality. In the next step the terms are redefined as points
in a three-dimensional space.
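The final stages can be sketched concretely. The terms, dimension values, and function below are illustrative (not drawn from any particular published analysis): each kinship term becomes a point in the three-dimensional space of sex, generation, and lineality, and the formal account then predicts back the term for any coordinates:

```python
# Hypothetical componential analysis of a few English kinship terms.
# Dimensions: sex of relative ("M"/"F"), generation relative to ego
# (+2, +1, -1), and lineality ("lineal" vs. "collateral").
KINSHIP = {
    ("M", +2, "lineal"):     "grandfather",
    ("F", +2, "lineal"):     "grandmother",
    ("M", +1, "lineal"):     "father",
    ("F", +1, "lineal"):     "mother",
    ("M", +1, "collateral"): "uncle",
    ("F", +1, "collateral"): "aunt",
    ("M", -1, "lineal"):     "son",
    ("F", -1, "lineal"):     "daughter",
    ("M", -1, "collateral"): "nephew",
    ("F", -1, "collateral"): "niece",
}

def term_for(sex, generation, lineality):
    """Predict back the kinship term for a point in the feature space."""
    return KINSHIP.get((sex, generation, lineality))

print(term_for("F", +1, "collateral"))  # aunt
```

In the spirit of the "formal account" described below, the table of primitives plus the lookup rule regenerates the data at hand, which is what makes the terminology "understandable."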
-A "formal account" of a collection of empirical data has been given when there have been
specified 1) a set of primitive elements, and 2) a set of rules for operating on these, such
that by the application of the latter to the former, the elements of a "model" are generated. A
formal account is thus an apparatus for predicting back the data at hand, thereby making
them "understandable".
-One test of the adequacy of this account is that it does not do violence to my own feel, as
informant, for the structure of what is described. This is the subjective test of adequacy. An
equally important test is that it provide an alien with the knowledge he needs to use my
kinship terminology in a way I will accept as corresponding with the way I use it. This is
the objective test of adequacy.
Critiques of Ethnoscience: From Within
-When one turns to domains like color, botany, and disease, it turns out to be more complex
to elicit the relevant terms and delineate the domain, let alone to ferret out the relevant
dimensions that may systematize the domain in a defensible and desirable way. Even when
the terms and dimensions have been delineated, the way in which to arrange them becomes
a subject of considerable controversy.
-Still thornier questions arise when one wants to determine whether a componential
analysis is appropriate, or which of a number of competing analyses is most accurate. The
"psychological reality" of an analysis (is it in the heads of all informants, of trained
and reflective informants, or only in the head of the analyst?) turns out to be one of the
most complex questions.
-There are also problems with homonyms and metaphors, where the same words might
have different meanings, or where different words might have the same meanings. There is
the problem of connotation, where words may have the same objective meaning but
connote different affective values.
-Other critics have argued that componential analysis is inherently circular, since one must
begin by assuming the very relationship among terms whose relationship should actually be
fixed only at the conclusion of the investigation.
-Some commentators have focused on the enormous problems of translating terms from a
foreign language into a familiar tongue and assuming that the same kinds of analysis can be
applied to the translations.
Critiques of Ethnoscience: Outside the Ranks
-Clifford Geertz points out that one has to pay attention to the logic exhibited in actual life,
not to some kind of abstracted set of symbolic elements whose validity for the inhabitants is
questionable at best. In the same line, Gary Witherspoon argues that many aspects of
importance are simply not marked in the language.
-Ethnoscience reflects an atomistic conception of language, without any sense of how
words function in a social context, the kinds of actions in which they are embedded, and the
ways in which they interact with and influence one another.
-Tyler held that the view of language as showing the cognitive systems of individuals
creates problems in understanding how the purely formal system of elements and rules
relates to something other than itself. Both create dualistic systems which oppose formal
linguistic competence to empirical components. Language is not merely a means of
representing ideas but equally a means for expressing wishes, feelings, and emotions and,
above all, a way of getting things done in the world. The independence of semantics from
pragmatics must be rejected.
-Componential analysis rapidly becomes dysfunctional when applied to slippery areas, like
emotions or diseases, where the line around the domain is not announced in advance and
where an individual's (or a group's) idiosyncratic interpretive system comes more readily to
the fore.
-Stephen Murray discerns two different reasons for the evanescence of classical
ethnoscience. The first stems from a promise that was not fulfilled: during a study of
drinking in Chiapas, investigators could not find objective procedures to systematize the
collected information. The second reason is that ethnoscience never cohered into a single
integrated perspective but was, at best, a loose confederation.
Psychological Forays
-Investigators of perceptual capacities found that on certain items, such as the Müller-Lyer
illusion, European and American samples proved more susceptible to illusions. These
seemed to show that the experience of groups conditioned the way their members
perceived.
-An early wave of studies of reasoning capacities was sympathetic to the notion that people
outside the West performed far more poorly on tests of abstraction, conceptualization, and
classification. Some important methodological adjustments were made, and it emerged that,
when familiar materials were used, or when requested behavior was explained to or
modeled for the subjects, many of the documented differences between individuals from
the two cultures now evaporated. The fundamental operations of thought are the same
everywhere, and it is the uses to which these processes are put that differ dramatically
across cultures. Superimposed upon this basic continuity is the advent of certain abilities to
reason without the usual contextual supports, or to carry through certain complex chains of
reasoning, which seem to develop chiefly among individuals exposed to years of
Western-style secular schooling.
-There was a widespread suspicion among anthropologists that the differences between
conceptual systems in remote cultures were vast and that these might well reflect variations
in the structure or the contents of language. However, in a line of study which continues to
exert wide influence in several cognitive sciences, Eleanor Rosch strongly challenged the
Whorfian line: she demonstrated that, even in cultures with few color terms, individuals
still sort, classify, and otherwise deal with the color spectrum in roughly similar ways.
-These lines of work have helped swing the pendulum of anthropological analysis back to
the pole of universalism. Individuals perceive and classify in relatively similar ways, and
the ways in which they classify reflect the operation of deep principles of mind which
cannot easily be dislocated.
-One productive way in which the anthropological community can solve the tension
between universalism and relativism is to inform its studies with promising concepts or
methods from cognitive science. There are some investigations that cherish the individual
details of particular groups in their home context: they pointedly spurn premature
generalizations or excessive reliance on arbitrary sorting tasks. Still, when proper caution is
taken, forms of thinking in remote settings do lend themselves to comparison with the kinds
of thought process exhibited and the kinds of measure used in traditional Western-schooled
settings.
-While most energy in recent years has been devoted to the increasingly fine-grained
analysis of particular domains, some investigators remain interested in the general
conundrums of how culture is possible, how it is constituted, and how it is acquired. Part of
the interest unfolds in an evolutionary framework. Moving to a briefer time frame, other
investigators have raised the question of how children in a society "learn culture," using a
computational metaphor to account for that learning.
-The success of the cognitive scientific approach to anthropology will hinge on whether the
rigor of componential analysis (or some other computationally inspired approach) can be
wedded to the broad issues that have traditionally attracted scholars to the study of exotic
cultures.
-In a sense, it is useful to think of anthropology as representing a kind of "upper bound" for
cognitive science. Anthropology clearly deals with issues representing very large bodies
(such as entire cultures) and spanning a quite wide scope (such as the relationship between
a culture's linguistic practices and its thought patterns).
-It may turn out, however, that cognitive scientific methods are only partially successful in
dealing with such a broad assignment or can only be usefully brought to bear upon the most
constrained (and possibly the least interesting) domains. If, in the last analysis,
anthropology proves to lie largely outside of the mainstream of cognitive science, this will
be an important (if somewhat disappointing) finding. And it may signal the even less happy
outcome that large areas of psychology, philosophy, and linguistics may also fall outside of
cognitive science, at least as currently practiced.
-The question of how to study the mind remains hotly debated. In the last several years a
moderate middle ground between structuralism and hermeneutics seems to be emerging.
According to this tack, anthropology remains the field where careful case studies are
indispensable and where keen attention to particularities remains of enduring importance.
At the same time, there is no reason why these studies cannot be informed by the most
salient and useful cognitive concepts and analytic frameworks. Cognitive science can
contribute to anthropology, without enveloping it.
-In the last analysis, all that can be attained by any individual in any culture is restricted by
the particular species to which one belongs, and, more specifically, by the nervous system
that one possesses by virtue of one's humanity. For this reason, anthropologists of every
stripe savored discoveries about the human as an organism. Such insights from the areas of
biology and neuroscience will not in themselves answer questions about culture.
Neuroscience serves as a kind of "lower bound" to cognitive science and thus is maximally
distant from anthropology. But, in due course, findings from the study of the human
nervous system may well illuminate how an individual becomes able in such short order to
assimilate and to transmit to others the practices of the culture in which he or she happens
to live.
-Lashley was calling into question localization, the belief that specific behavior resides in
specific neural locations. At the same time, if less explicitly, he was also posing difficulties
for reductionism, the scientific program that seeks to explain behavior entirely in terms of
neural (or other lower-order) principles.
Equipotentiality and Engrams
-Lashley was attracted to the principal ideas of Gestalt psychology: perhaps the brain works
as an integrated unit, responding as an organized totality to complex patterns of stimulation.
-He spoke of equipotentiality, the capacity of any part of a functional area to carry out a
particular behavior. Impairment in performance is due not to the site of the injury, but rather
to the amount of tissue destroyed. All of the cells of the brain are constantly active and are
participating, by a sort of algebraic summation, in every activity. There are no special cells
reserved for special memories. He pondered also the property of plasticity, the potential for
remaining areas of the nervous system to take over when a specific region has been
damaged.
-Lashley concluded that we would never find the engram, the discrete representation in the
nervous system of specific ideas, concepts, or behaviors. During learning, information
comes to be represented widely within large regions of the brain, if not throughout the brain
as a whole. Whether the cells can be mobilized to carry out an impaired function depends
upon the percentage of them still remaining after brain injury, the degree to which the
pattern of behavior has been mastered beforehand, and the strength of motivation of the
animal.
-Lashley helped to cast doubt on the reflex arc, the bond whereby each response is
triggered by a specific stimulus, which had been the principal neural model of behavior in
higher (as well as lower) organisms.
Lashley's Iconoclasm
-Lashley demonstrated that many sequences of behavior exhibit long planning units that
unfold too quickly to be altered or corrected on the fly. In his view, it was
necessary to reconceptualize current associationist models of the nervous system to allow
for effects that can be manifest for a significant period after initial stimulation. No simple
stimulus-response bonds can explain this behavior: one needs a model of the nervous
system which is hierarchically arranged and features feedback and feed-forward
mechanisms.
-Lashley criticized the comparison between the brain and the computer. The neuron, like
a switch or valve, either does or does not complete a circuit. But at that point the similarity
ends. The switch in the digital computer is constant in its effect, and its effect is large in
proportion to the total output of the machine. The effect produced by the neuron varies with
its recovery from the refractory phase and with its metabolic state. The number of neurons
involved in any action runs into millions so that the influence of any one is negligible. Any
cell in the system can be dispensed with. The brain is an analogical machine, not digital.
-Lashley described behavior that eluded current mechanistic models and strongly implied
more abstract and hierarchically organized forms of representation. He did not call for
explanations on the mentalistic level, but his work, and his talk of "plans" and "structures",
cleared the way for Simon's belief in a symbol system, Piaget's call for mental operations,
Miller's TOTE system, and Chomsky's resorting to rules and representations.
-The nervous system turns out to be far more specific, far less equipotential than Lashley
had contended. His belief that the brain works in a Gestalt-like fashion would find few
adherents today.
-It has been held by many scientists, especially neuroscientists, that the optimal way to
account for human behavior and thoughts is in terms of the structure and functioning of the
human nervous system. To some investigators, this neuroanatomical account can
complement accounts proffered in psychological or behavioral language; but for others,
neuroanatomical accounts may eventually render unnecessary accounts in terms of
representations, or symbols, or other psychological argot. In the view of this latter
reductionist group, cognitive science emerges as, at best, a temporary account of mental
activity destined to vanish once an account in terms of synapses can be attained. The debate
about the possibility and the desirability of reductionism lurks in the background in any
account of neuroscientific work.
How Specific Is Neural Functioning?
Evidence for Localization
-Toward the end of the nineteenth century, the work of Fritsch, Hitzig, Ferrier, and Broca
made the pendulum begin to swing from Flourens's holism toward Gall's localizationism.
It was a heyday for scholars of a localizationist persuasion. With increasingly sophisticated
methods for testing animals, claims were made for specificity in each region of the cortex.
As additional case studies of brain-injured patients accumulated, claims about the
astonishing specificity of certain cognitive deficits were forthcoming.
The Resurgence of Holism
-Against Broca, Pierre Marie showed that the third frontal convolution plays no special role
in the function of language, that each of Broca's patients had far more extensive lesions
than Broca had reported, and that the range of accompanying deficits had not been
documented with sufficient precision. Within a few years, a variety of neurologists had endorsed his
claims that cognitive functions are not highly localized in the nervous system. They
adduced evidence that the same kinds of deficits could be obtained from individuals with
lesions in a wide variety of areas; and conversely, patients with similar anatomical lesions
often exhibited contrasting sets of deficits or even at times no deficits at all. They spoke of
the plasticity of the nervous system, the capacity of uninjured areas to take over from
injured areas, and the loss of abstract thinking and other functions as a consequence of the
size, rather than the site, of lesion.
-Holists were far more sympathetic to the notion that behavior could not be explained
satisfactorily in terms of neural circuitry. As they saw it, there was a continued need for
explanation on the psychological level. There was a correlation between skepticism about
localization and skepticism about reductionism: not a logically necessary association, to be
sure, but a meeting of two ideas in the minds of many scientists.
-Localization of symptoms did not signify localization of function. A specific section of the
brain may be necessary for a specific function, but there may be other sections that are also
necessary for that function.
Evaluating the Evidence
-By the end of the 1940s, many investigators were seeking some rapprochement between
the rabid holists and the extreme localizers. Claims about highly specific syndromes
following highly specific lesions could not be maintained; the variation across patterns, and
across clinics, was simply too great. Besides, any number of lines of investigation undercut
the extreme holist position.
Donald Hebb's Bold Synthesis
-Hebb argued that behavioral patterns, such as visual perceptions, are built up gradually
over long periods of time through the connection of particular sets of cells, which he called
cell assemblies. To this extent, behaviors or percepts can indeed be localized in specific
regions. However, with time, more complex behaviors come to be formed out of sets of cell
assemblies, which he called phase sequences. These phase sequences are less localized, and
involve much larger sets of cells drawn from disparate sections of the nervous system. A
phase sequence inevitably involves some equipotentiality. Finally, by the time the organism
has reached maturity and is capable of performing the most complex forms of behavior, it is
difficult to attribute any behavior to a discrete set of neurons in a delineated region of the
brain. It would be an oversimplification to see the developmental course as proceeding
from localization to holism; for, in other respects, the sequence is exactly the reverse. A
beneficial effect of Hebb's work was to point up these various complexities and competing
tendencies, making it less plausible for anyone to adopt a rigid localizing or an inflexible
holist position.
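Hebb's proposal that co-active cells strengthen their connections is often summarized as the learning rule dw = lr * pre * post. The following is a minimal modern sketch of that rule, not anything Hebb himself wrote; the function name, learning rate, and toy activity vectors are all illustrative choices.

```python
# Illustrative sketch of Hebb's rule ("cells that fire together wire together").
# weights[i][j] is the strength of the connection from presynaptic unit j
# to postsynaptic unit i; all names and parameters here are hypothetical.

def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen each connection in proportion to the product of
    pre- and postsynaptic activity: dw = lr * pre * post."""
    return [
        [w + lr * x * y for w, x in zip(row, pre)]
        for row, y in zip(weights, post)
    ]

# Two presynaptic and two postsynaptic units, all connections starting at zero.
w = [[0.0, 0.0], [0.0, 0.0]]
pre = [1, 0]   # only the first input unit is active
post = [1, 0]  # only the first output unit is active
w = hebbian_update(w, pre, post)
# Only the connection between the two co-active units grows;
# every other weight stays at zero.
```

Repeated co-activation would keep strengthening the same connection, which is the sense in which a "cell assembly" gradually consolidates out of correlated activity.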
The Hixon Symposium Revisited
-During the Hixon Symposium, the major neurologists, neurophysiologists, and
neuropsychologists debated with one another the tenability of the localization position.
Hubel and Wiesel's Decisive Demonstrations
-In the late 1950s, David Hubel and Torsten Wiesel began to record with microelectrodes
impulses from single cells in the cortex of cats and other animals. They demonstrated
beyond any doubt that specific cells in the visual cortex respond to specific forms of
information in the environment. They also demonstrated the critical role played by certain
early experiences in the development of the nervous system. For those in sympathy with
specificity and localization of function, the last few decades have yielded much confirming
evidence.
The Molar Perspective
Sperry on Split Brains
-Because they were suffering from intractable epilepsy, a small group of patients were
subjected to a surgical intervention in which the two halves of the brain were disconnected
from one another. Sperry devised methods for testing separately the two halves of the brain.
He documented important differences in the functioning of two hemispheres. Sperry
reinforced the impression that the left hemisphere is dominant for language and other
conceptual and classificatory functions, while the right hemisphere assumes a dominant
role for spatial functions and for other fine-grained forms of discrimination. However, the
Sperry team was also able to establish some holistic findings: the right hemisphere of
right-handed persons was capable of far more linguistic functioning than had hitherto been
thought. Moreover, the younger patients had been at the time of their operation, the more
likely it was that they would reveal well-developed capacities in both hemispheres.
Gradients of Plasticity and Hierarchy of Functions
-Sperry's results suggest that the degree of plasticity depends on the earliness of the injury.
Other factors also influence the degree of plasticity: for example, younger individuals who
are left-handed or who have sustained some brain injury early in life also exhibit more
plasticity than those who exhibit contrasting traits.
-With development, different nervous centers gain dominance, and the hierarchy among
behavioral functions alters. For instance, in young children, sensory regions are dominant;
but in older individuals, the association cortexes and the "planning regions" of the frontal
lobes become ascendant. According to Luria's studies, no function is carried out fully by a
specific region, but nor is it the case, as Lashley implied, that all regions figure equally in a
specific function. Rather, several anatomical regions may figure in the performance of a
particular behavior, but each of them makes a characteristic and irreplaceable contribution.
-While, in general, it is preferable to sustain an injury to the brain early rather than late in
life, and to exploit the plasticity of that developmental stage, early is not always better.
There are at least three caveats. First of all, sometimes an early injury manifests no
apparent deficits at the time but produces severe deficits later in life. Second, when another
area of the brain takes over, the "rescuer" may well sacrifice the potential for carrying out
its own preordained functions. Finally, even when another area of the brain assumes a
function, it may not do so in an optimal way.
-Other considerations militate against a purely "plastic" perspective. Work in experimental
psychology documents that organisms are "prepared" to master certain behaviors and
"counterprepared" to learn other ones.
The Neural Base of Cognition: Studies of Two Systems
-It is now conceded that, at least at the level of sensory processing, the nervous system is
specifically constructed to respond to certain kinds of information in certain kinds of ways.
There is also evidence for "neural commitment" at much more molar levels of
representation, even extending to the two cerebral hemispheres. On the other hand,
impressive evidence continues to accumulate documenting the resilience and plasticity in
the nervous system, particularly during the early phases of development. As a tentative
conclusion, then, it seems that some localization is accepted by all, but that important
islands of plasticity remain within this general framework.
-Studies have reverted to a more circumscribed terrain. Neuroscientists are devoting the
bulk of their time to the careful study of specific systems in specific organisms.
Eric Kandel Bridges a Gap
-Recently Eric Kandel has succeeded in bridging the gap between the functioning of the
individual nerve cell and the behavior of organisms. He focused on a snail, whose nervous
system can be readily described and which is also capable of simple forms of learning.
Kandel has shown that these elementary aspects of learning are not diffusely distributed in
the brain but rather can be localized in the activity of specific parts of neuronal networks.
Learning results from an alteration in the synaptic connections between cells; rather than
necessarily entailing new synaptic connections, learning and memory customarily come
about as a consequence of alteration in the relative strength of already existing contacts.
The potentialities for many behaviors of which an organism is capable are built into the
basic scaffolding of the brain and are to that extent under genetic and developmental
control. Environmental factors and learning bring out these latent capabilities by altering
the effectiveness of the pre-existing pathways thereby leading to the expression of new
patterns of behavior.
The Song of Birds
-Fernando Nottebohm has studied the songs of birds. He showed that various deprivations
exert predictable influences on the course of song development. Canaries, for example,
require auditory feedback for normal development. They can, however, go on to produce a
well-structured song even in the absence of hearing the vocalizations of other members of
their species: their own songs suffice. In the chaffinch, however, both auditory feedback of
one's own song and exposure to the songs of other birds are needed if the chaffinch is to
produce a full normal song. Bird song is one of the few instances of brain lateralization
among infrahuman animals. The aphasic canary can recover its prior songs because the
homologous pathways of the right hemisphere have the potential of being exploited.
-The work of investigators like Nottebohm and Kandel is based on the premise that much
can be learned at this point through the careful study of a single system in a single
organism.
-The two research efforts proceed on somewhat different assumptions. Kandel hopes that
by studying habituation and conditioning, he will eventually illuminate processes known to
occur in a wide range of organisms, including humans. The Nottebohm line of research, on
the other hand, studies bird song as a behavior that clearly exists only in birds, and any
generalizations that may be validly extended from bird song to other systems in other
organisms will only emerge after careful study of these systems on their own terms. There
is the same tension between a modular and a general point of view.
-Hubel and Wiesel hold the belief that each system may work in its own way. Each region of
the central nervous system has its own special problems that require different solutions. For
the major aspects of the brain's operation, no master solution is likely.
-The work of Kandel (and to a lesser extent of Nottebohm) raises afresh the issue of
reductionism. It seems to some observers that an account of the classical psychological
phenomenon of habituation in terms of neurochemical reactions is an important step on the
road to the absorption of cognition by the neurosciences. Yet most scientists of a cognitive
persuasion feel that such accounts, while informative, will still prove tangential to their
ultimate interests.
Pribram's Holographic Hypothesis
-Pribram argued that the brain is better analogized to a holographic process. Holography is a
system of photography in which a three-dimensional image of an object can be reproduced
(with the appearance of the third dimension preserved) by means of light-wave patterns
recorded on a photographic plate or film. A hologram is the plate or film with the recorded
pattern: information about any point in the original image is distributed throughout the
hologram, thus making it resistant to damage. According to Pribram's holographic view, all
parts of the brain are capable of participating in all forms of representation, though
admittedly certain regions play a more important role in some functions, and other regions
are more dominant for other functions. Just as many holograms can be superimposed upon
one another, so can infinite images be stacked inside our brains. Since the hologram records
the same wave front over its surface, repeating it over and over, if only some of a shattered
hologram is left, it will still suffice to reconstruct the entire image. What interests brain
theorists about the hologram is this quality of a distributed memory: every piece of the
hologram says a little bit about every part of the scene, but no piece is essential.
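The distributed-memory property Pribram appeals to can be illustrated with a toy associative memory (a modern, Hopfield-style sketch of my own, not a model Pribram built): store a pattern by spreading pairwise correlations across an entire weight matrix, destroy a block of the weights, and show that the whole pattern is still recoverable from the surviving pieces.

```python
# Toy illustration of distributed storage: every weight encodes only a
# pairwise correlation, yet the full pattern survives damage to the store.
# All names and the specific pattern are illustrative choices.

def store(pattern):
    """Outer-product storage: weights[i][j] = pattern[i] * pattern[j]."""
    n = len(pattern)
    return [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
            for i in range(n)]

def recall(weights, probe):
    """Each unit takes the sign of its total weighted input."""
    sums = [sum(w * p for w, p in zip(row, probe)) for row in weights]
    return [1 if s >= 0 else -1 for s in sums]

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
W = store(pattern)

# "Shatter the hologram": zero out a quarter of the weight matrix.
for i in range(4):
    for j in range(4):
        W[i][j] = 0

recovered = recall(W, pattern)
# Despite the damage, the surviving distributed correlations still
# reconstruct the stored pattern in full.
```

No single weight is essential here, which is the analogue of the claim that every piece of the hologram says a little about every part of the scene.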
Three Historical Moments
-We might single out three moments in the age-old debate about the degree of localization of
representation in the human nervous system.
-The first moment involved prescientific speculation. When Descartes located the soul in the
pineal gland, and when Gall spoke about the representation of amativeness and of criminality
in different lobes of the brain, each was announcing claims without benefit of experimental
evidence.
-A significant step forward took place when it was possible to examine the effects of
injuries to discrete areas of the nervous system. When Fritsch and Hitzig lesioned specific
sites in the nervous system of dogs, when Broca and Wernicke looked at the effects of
strokes in the human cerebral cortex, they were able to substantiate correlations between
regions of the brain and forms of behavior.
-Finally, when Hubel and Wiesel recorded from discrete cells in the visual cortex of the cat,
it became possible to ascertain with great specificity the function of particular units and the
circumstances under which they would function (or fail to function) normally.
-The thrust of the new research lines has had two effects: first, to render localization a
more plausible general orientation; second, to direct the attention of active scientists to the
operation of specific systems, rather than to continued debate on broad conceptual issues.
And yet it is far too early to claim that the pendulum has stopped swinging, or that the
pivotal questions motivating neuroscience have been answered. The voices in favor of mass
action and plasticity have not been stilled. Moreover, within the specific areas of higher
cognitive functioning there remain debates about even the most basic issues. And even
if localization seems (on the whole) to be more tenable than holism, it is now apparent
that reductionism is a separate issue.
Will Neuroscience Devour Cognitive Science?
-Researchers in the neurosciences stand out from their cognitive-scientific peers because
they most closely partake of the model of the successful sciences of physics and biology,
because they can most readily state their questions unambiguously and monitor progress
toward their solutions.
-They are not all that different. Fundamental debate continues on many of the central
questions. While rapidly growing, neuroscience is a young field and is still in the process of
defining, rather than resolving, many principal issues.
-There are those (perhaps a majority) within the neurosciences who would maintain that
cognitive scientific concepts and concerns are not relevant to a biologically oriented
science. So long as the science remains relatively immature, it may be necessary, for the
time being, to carry out psychological experiments or to engage in computer simulations of
behavior. But once the appropriate neuroscientific studies have been carried out,
explanations that feature behaviors, thoughts, actions, schemas, or other molar or
representational concepts should become superfluous.