
Course 1 – Introduction to Neuropsychology

Video bibliography

All the blue passages in the text are clickable links (which, God forbid, can even be read
as extra material)

https://www.youtube.com/watch?v=HKJlSY0DBBA – Understanding Derrida, deconstruction and "Of
Grammatology"

https://www.youtube.com/watch?v=1Yxg2_6_YLs – Then & Now – Introduction to Baudrillard

https://www.youtube.com/watch?v=bf9J35yzM3E – Cuck Philosophy – What did Baudrillard think about The Matrix?

https://www.youtube.com/watch?v=pv6QHxkBFzY – Ted Altschuler – The great brain debate

https://www.youtube.com/watch?v=EFNpcIfvOnA&t – Daniel David – On Free Will. Free Decisions: Do They Exist?

https://www.youtube.com/watch?v=5Yj3nGv0kn8 – Nancy Kanwisher – A neural portrait of the human mind

https://www.youtube.com/watch?v=uhRhtFFhNzQ – David Chalmers – How do you explain consciousness?

https://www.youtube.com/watch?v=PeZ-U0pj9LI – Thomas Insel – Toward a new understanding of psychopathology


[Slide diagram – lecture themes:]

– localizationism vs. holism / equipotentialism
– dualism vs. materialism; monism vs. idealism
– the pineal gland
– MYTHS: phrenology
– CUI PRODEST? Who holds how much political power, in which era, and how does that decide
"scientific truth"?
– from neurology, to neuropsychology, to cognitive psychology, to cognitive neuroscience
– neurobullshit (conflicts of interest and gratuitous references to neuroimaging that is
irrelevant to the claims being made)
– the (in)accessibility of the brain: the skull, the blood–brain barrier, the CSF, religion,
politics, epistemology, finance
Bibliography 1
David J. Linden – Mintea ca întâmplare. Cum ne-a oferit evoluția creierului
iubirea, memoria, visele și pe Dumnezeu [The Accidental Mind: How Brain
Evolution Has Given Us Love, Memory, Dreams, and God], p. 7-10;
"A big brain, just like a big government, may not be able to do
simple things in a simple way"

Donald O. Hebb

Prologue

The Brain, Explained

The best part of being a neuroscience researcher is that, on certain occasions, you can appear to have the
ability to read people's minds. At parties, for example. Chardonnay glass in hand, the host gives you one of
those introductions in which she feels obliged to mention your occupation: "This is David. He's a neuroscience
researcher." At that moment, many people are wise enough simply to turn their backs on you and go pour
themselves a whisky on the rocks. Of those who stay, about half will surely pause, roll their eyes skyward and
raise their eyebrows, getting ready to speak. "You're about to ask me whether it's true that people use only
10% of their brain, aren't you?" They nod mutely, eyes wide. An astonishing episode of "mind reading".

Once you get past the "10% of the brain" thing (which, I should mention, has no basis in reality), it becomes
clear that many people have a deep curiosity about how the brain works. Truly fundamental and difficult
questions come up right away:

"Does classical music really help my baby's brain development?"

"Is there a biological reason why the events in dreams are so bizarre?"

"Are the brains of homosexuals physically different from the brains of heterosexuals?"

"Why can't I tickle myself?"

All of these are important questions. For some of them the best scientific answer is fairly clear, while for others
it is somewhat elusive. It is fun to talk with people who are not neuroscientists about this kind of thing, because
they are not afraid to ask you the hard questions and put you on the spot. Often, as the conversation winds down,
people ask: "Can you recommend a good book about the brain and behavior for non-specialists?" That puts me on the
spot again. There are some books, such as Joe LeDoux's Synaptic Self, which is great on the science but hard going
if you don't have a degree in biology or psychology. Others, such as Oliver Sacks's The Man Who Mistook His Wife
For A Hat: And Other Clinical Tales or Phantoms in the Brain by V.S. Ramachandran and Sandra Blakeslee, which tell
illuminating and fascinating stories based on real neurological cases, do not really convey a comprehensive
understanding of how the brain works and largely ignore molecules and cells. There are books that do talk about the
brain's molecules and cells, but many of them are deadly boring - you start to feel your soul leaving your body
before you finish reading the first page. Moreover, many books about the brain, and even more educational
television programs on the same theme, perpetuate a fundamental misunderstanding about neural function. They
present the brain as a superbly crafted, optimized device, the absolute pinnacle of design. You have probably seen
it too: a human brain, dramatically lit from one side, the camera circling above it like a helicopter filming
Stonehenge, and a modulated baritone voice praising the brain's elegant design in tones full of reverence. Pure
nonsense. The brain is not elegantly designed at all: it is a cobbled-together jumble which, astonishingly and in
spite of its shortcomings, manages to carry out a set of impressive functions. But while its overall functioning is
impressive, its design is not. More important, the convoluted, inefficient and bizarre plan of the brain and of its
constituent parts is fundamental to our human experience. The particular texture of our feelings, perceptions and
actions derives, in large part, from the fact that the brain is not an optimized, generic problem-solving machine,
but rather a strange agglomeration of ad-hoc solutions accumulated over millions of years of evolutionary history.

So here is what I will try to do. I will be your guide through this strange and often illogical world of neural
function, with the special task of showing you the most unusual and counterintuitive aspects of brain and neural
design and explaining how they shape our lives. In particular, I will try to convince you that the limitations of the
brain's convoluted, evolved design have ultimately led to many extraordinary and unique characteristics: our long
childhoods, our extensive memory capacity (the substrate on which experience builds our individuality), our search
for long-term love relationships, our need to invent compelling stories and, ultimately, the universal cultural
impulse to create religious explanations. Along the way, I will briefly review the biological fundamentals needed to
understand the things that I suspect interest you most about the brain and behavior. You know, the cool stuff:
emotion, illusion, memory, dreams, love and sex and, of course, strange stories about twins. Then I will do my best
to answer the big questions, and to be honest when the answers are not at hand or are incomplete.

Max Delbrück, a pioneer of molecular genetics, said: "Imagine that your audience has zero knowledge but infinite
intelligence." That seems fair to me, so that is what I will try to do as well.

So, let's begin.

Bibliography 2
Matthew Cobb – The Idea of the Brain. A History, p. 12-13;
Introduction

In 1665 the Danish anatomist Nicolaus Steno addressed a small
group of thinkers gathered together at Issy, on the southern
outskirts of Paris. This informal meeting was one of the origins
of the French Académie des Sciences; it was also the moment
that the modern approach to understanding the brain was set
out. In his lecture, Steno boldly argued that if we want to
understand what the brain does and how it does it, rather than
simply describing its component parts, we should view it as a
machine and take it apart to see how it works.

This was a revolutionary idea, and for over 350 years we have been
following Steno’s suggestion – peering inside dead brains,
removing bits from living ones, recording the electrical activity of
nerve cells (neurons) and, most recently, altering neuronal
function with the most astonishing consequences. Although most
neuroscientists have never heard of Steno, his vision has dominated centuries of brain science and lies at the root
of our remarkable progress in understanding this most extraordinary organ.

We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse
memory into a good one and even use a surge of electricity to change how people perceive faces. We are
drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species
we can change the brain’s very structure at will, altering the animal’s behaviour as a result. Some of the most
profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a
robotic arm with the power of their mind.

We cannot do everything: at least for the moment, we cannot artificially create a precise sensory experience in a
human brain (hallucinogenic drugs do this in an uncontrolled way), although it appears that we have the exquisite
degree of control required to perform such an experiment in a mouse. Two groups of scientists recently trained
mice to lick at a water bottle when the animals saw a set of stripes, while machines recorded how a small number
of cells in the visual centres of the mice’s brains responded to the image. The scientists then used complex
optogenetic technology to artificially recreate that pattern of neuronal activity in the relevant brain cells. When this
occurred, the animal responded as though it had seen the stripes, even though it was in complete darkness. One
explanation is that, for the mouse, the pattern of neuronal activity was the same thing as seeing. More clever
experimentation is needed to resolve this, but we stand on the brink of understanding how patterns of activity in
networks of neurons create perception.

This book tells the story of centuries of discovery, showing how brilliant minds, some of them now forgotten, first
identified that the brain is the organ that produces thought and then began to show what it might be doing. It
describes the extraordinary discoveries that have been made as we have attempted to understand what the brain
does, and delights in the ingenious experiments that have produced these insights.

But there is a significant flaw in this tale of astonishing progress, one that is rarely acknowledged in the many books
that claim to explain how the brain works. Despite a solid bedrock of understanding, we have no clear
comprehension about how billions, or millions, or thousands, or even tens of neurons work together to
produce the brain’s activity.

We know in general terms what is going on – brains interact with the world, and with the rest of our bodies,
representing stimuli using both innate and acquired neural networks. Brains predict how those stimuli might change
in order to be ready to respond, and as part of the body they organise its action. This is all achieved by neurons and
their complex interconnections, including the many chemical signals in which they bathe. No matter how much it
might go against your deepest feelings, there is no disembodied person floating in your head looking at this activity
– it is all just neurons, their connectivity and the chemicals that swill about those networks.

However, when it comes to really understanding what happens in a brain at the level of neuronal networks and their
component cells, or to being able to predict what will happen when the activity of a particular network is altered,
we are still at the very beginning. We might be able to artificially induce visual perception in the brain of a mouse
by copying a very precise pattern of neuronal activity, but we do not fully understand how and why visual perception
produces that pattern of activity in the first place.

A key clue to explaining how we have made such amazing progress and yet have still barely scratched the
surface of the astonishing organ in our heads is to be found in Steno’s suggestion that we should treat the
brain as a machine. ‘Machine’ has meant very different things over the centuries, and each of those meanings
has had consequences for how we view the brain. In Steno’s time the only kinds of machine that existed were
based on either hydraulic power or clockwork. The insights these machines could provide about the structure and
function of the brain soon proved limited, and no one now looks at the brain this way. With the discovery that nerves
respond to electrical stimulation, in the nineteenth century the brain was seen first as some kind of telegraph
network and then, following the identification of neurons and synapses, as a telephone exchange, allowing for
flexible organisation and output (this metaphor is still occasionally used in research articles).

Since the 1950s our ideas have been dominated by concepts that surged into biology from computing –
feedback loops, information, codes and computation. But although many of the functions we have identified in
the brain generally involve some kind of computation, there are only a few fully understood examples, and some of
the most brilliant and influential theoretical intuitions about how nervous systems might ‘compute’ have turned
out to be wrong. Above all, as the mid-twentieth-century scientists who first drew the parallel between brain and
computer soon realised, the brain is not digital. Even the simplest animal brain is not a computer like anything
we have built, nor one we can yet envisage. The brain is not a computer, but it is more like a computer than it is
like a clock, and by thinking about the parallels between a computer and a brain we can gain insight into what is
going on inside both our heads and those of animals. Exploring these ideas about the brain – the kinds of machine
we have imagined brains to be – makes it clear that, although we are still far from fully understanding the brain, the
ways in which we think about it are much richer than in the past, not simply because of the amazing facts we
have discovered, but above all because of how we interpret them.

These changes have an important implication. Over the centuries, each layer of technological metaphor has
added something to our understanding, enabling us to carry out new experiments and reinterpret old findings. But
by holding tightly to metaphors, we end up limiting what and how we can think. A number of scientists are now
realising that, by viewing the brain as a computer that passively responds to inputs and processes data, we forget
that it is an active organ, part of a body that is intervening in the world and which has an evolutionary past that has
shaped its structure and function. We are missing out key parts of its activity. In other words, metaphors shape
our ideas in ways that are not always helpful. The tantalising implication of the link between technology and brain
science is that tomorrow our ideas will be altered yet again by the appearance of new and as yet unforeseen
technological developments. As that new insight emerges, we will reinterpret our current certainties, discard some
mistaken assumptions and develop new theories and ways of understanding. When scientists realise that how they
think – including the questions they can ask and the experiments they can imagine – is partly framed and limited by
technological metaphors, they often get excited at the prospect of the future and want to know what the Next Big
Thing will be and how they can apply it to their research. If I had the slightest idea, I would be very rich.

The history of how we have understood the brain contains recurring themes and arguments, some of which
still provoke intense debate today. One example is the perpetual dispute over the extent to which functions are
localised in specific areas of the brain. That idea goes back thousands of years, and there have been repeated
claims up to today that bits of the brain appear to be responsible for a specific thing, such as the feeling in your
hand, or your ability to understand syntax or to exert self-control. These kinds of claims have often soon been
nuanced by the revelation that other parts of the brain may influence or supplement this activity, and that the brain
region in question is also involved in other processes. Repeatedly, localisation has not exactly been overturned, but
it has become far fuzzier than originally thought. The reason is simple. Brains, unlike any machine, have not been
designed. They are organs that have evolved for over five hundred million years, so there is little or no reason
to expect they truly function like the machines we create. This implies that although Steno’s starting point –
treating the brain as a machine – has been incredibly productive, it will never produce a satisfying and full
description of how brains work. Understanding how past thinkers have struggled to understand brain function is
part of framing what we need to be doing now, in order to reach that goal. Our current ignorance should not be
viewed as a sign of defeat but as a challenge, a way of focusing attention and resources on what needs to be
discovered and on how to develop a programme of research for finding the answers. That is the subject of the
final, speculative part of this book, which deals with the future. Some readers will find this section provocative, but
that is my intention – to provoke reflection about what the brain is, what it does and how it does it, and above all to
encourage thinking about how we can take the next step, even in the absence of new technological metaphors. It
is one of the reasons this book is more than a history, and it highlights why the four most important words in
science are ‘We do not know’.
Bibliography 3
Mihai Ioan Botez – Neuropsihologie clinică și neurologia comportamentului
[Clinical Neuropsychology and Behavioral Neurology], p. 3-5;
A historical and critical introduction to the problem of
cerebral localization

Bruno Cardu

General remarks and a brief history

Neuropsychology, in the broad sense, is the examination of the link between psychological activity and the
corresponding cerebral condition. It studies how changes at the level of the brain affect behavior: how, for
example, ablation of the prefrontal regions in humans alters a person's intellectual capacities or character; how
ablation of the infero-temporal regions in monkeys influences visual perception and memory; how extensive
ablation of the cerebral cortex in rats alters the learning of a maze with several dead ends.

The answer to these questions involves a twofold quantitative approach:

1. on the one hand, there is the scientific approach of psychology, which aims at the objective description and
understanding of functions such as perception, memory, intelligence and language, and,

2. on the other hand, there is the body of scientific knowledge about the brain, described by macroscopic and
microscopic neuroanatomy (cytoarchitectonics, nerve tracts), neurophysiology, neurochemistry and clinical
neurology.

In general, a distinction is made within neuropsychology between clinical neuropsychology, experimental
neuropsychology, behavioral neurology and cognitive neuropsychology.

1. Clinical neuropsychology has as its object the measurement and analysis, in humans, of the changes in
intellectual, perceptual and memory capacities, and of the personality changes that occur following a cerebral
lesion such as a stroke, an intoxication, an ablation or the intracerebral spread of a tumor. The clinical
neuropsychologist establishes the diagnosis of cerebral lesions, taking into account both the psychological and the
neurological aspect. On the psychological level, he will use standardized tests that allow a patient's psychological
performance to be evaluated and placed within the quantitative scale of the test. On the neurological level, his
task is to establish the patient's behavior and, in collaboration with the neurologist and the neuroradiologist, to
establish the localization of the cerebral lesion, its extent and its repercussions on the brain as a whole.
Research in clinical neuropsychology often uses statistical comparisons between large groups of patients and normal
controls.

2. Experimental neuropsychology answers the need for rigor and for fundamental research in the study of the
brain–behavior relationship. Research in experimental neuropsychology is often (though not always) carried out
without immediate concern for application, and we owe it important contributions to the knowledge of the cerebral
mechanisms of behavior, such as motivation, the phenomenon of self-stimulation, the role of the corpus callosum,
etc. This includes, for example, the experiments on sectioning the corpus callosum in cats and monkeys, which
changed the way the problem of the corpus callosum is approached in humans.

3. Behavioral neurology is a distinct discipline, which concentrates on the in-depth analysis of individual cases
as a source of generalizable data. Instead of evaluating a behavior by relating it to a pre-established quantitative
scale, it studies individual cases by setting up test situations that make it possible to differentiate abnormal
deviations from normal functioning.

4. Cognitive neuropsychology (cognitive neuroscience), very widespread and influential at present, has as its
research goal the understanding of the mechanisms of normal psychology, starting from the behavioral changes induced
by cerebral lesions in humans. It is not interested in the localization of the lesion as such, nor in the cerebral
localization of psychological functions; rather, it tries above all to understand how, starting from the
psychological disintegration produced by the lesion, normal psychological organization can be defined.

The interested reader will find an illustration of this current of thought in the works of McCarthy and Warrington
and of Seron. The modern conception of the brain as the engine of psychological activity is a relatively recent
achievement, which, in its current form, dates only from the end of the 18th century. Franz Joseph Gall's phrenology
occupies an important place here. Gall was a valuable anatomist and an adherent of the psychology of mental
"faculties". He proposed the existence of separate faculties such as intelligence, memory and perception, thus being
the first theoretician to claim that these psychic functions might have precise localizations in certain regions of
the brain. He proposed that the areas of the brain involved in these functions exerted a pressure on the
corresponding portion of the skull, giving rise to cranial protuberances called "bumps". In his view, it would have
been enough to examine them on a person's skull in order to draw up a psychological inventory of that person's
mental faculties. This conception, however fanciful it may seem today, represents the first important attempt to
link psychological functions to areas of the brain. Thus was born the theory of cerebral localization. Gall's
doctrine was, however, intensely contested by his contemporaries. One of his greatest adversaries was the French
physiologist Pierre Flourens, the first to study systematically the effect of cerebral lesions on behavior in
animals. Following complex experimental studies, he differentiated six units of the nervous system - the cerebral
hemispheres, the cerebellum, the quadrigeminal bodies, the medulla oblongata, the spinal cord and the nerves. To the
cerebral hemispheres corresponded will, judgment, memories and perception; but all these functions constituted for
him an essentially single faculty, not precisely localizable. Pierre Flourens was the precursor of the modern
antilocalizationists.

Since the Gall–Flourens polemic, the evolution of the theory of cerebral localization has resembled a spiral with
several levels. At each level, the localizationist and antilocalizationist dialectic is taken up again, a generation
or two later, with new arguments drawn from new discoveries. For example, the thesis advanced by Paul Broca in 1861,
according to which articulate language is localized in the foot of the third frontal convolution of the left
hemisphere, constituted the first serious localization of a psychological function in a determinate part of the
brain. In this introduction we will not recount the history of the discoveries that might confirm the
localizationist or the antilocalizationist theses. That history can be found in a large number of specialized works.
What we propose to do is to describe the workings of the two opposing conceptions and to weigh the arguments in
favor of each. There is no one-to-one correspondence between a theoretical conception in psychology and a
localizationist or holistic (non-localizationist) conception of the brain. Until the end of the 19th century, the
associationist doctrine was dominant both among localizationists (Broca, Wernicke, Dejerine, etc.) and among holists
or non-localizationists (Hughlings Jackson, Henry Head, von Monakow and Mourgue). Later, a second holistic
conception, inspired by Gestalt psychology, was argued for in neuropsychology by Kurt Goldstein and Karl Lashley.
Bibliography 4

Bryan Kolb; Ian Q. Whishaw – Fundamentals of Human Neuropsychology, Fifth
Edition, p. 1-3
The Development of Neuropsychology

The term neuropsychology in its English version originated
quite recently, in part because it represented a new approach
to studying the brain. According to Daryl Bruce, it was first
used by Canadian physician William Osler in his early-
twentieth-century textbook, which was a standard medical
reference of the time. It later appeared as a subtitle to
Canadian psychologist Donald O. Hebb’s 1949 treatise on
brain function, The Organization of Behavior: A
Neuropsychological Theory. Although Hebb neither defined
nor used the word in the text itself, he probably intended it
to represent a multidisciplinary focus of scientists who
believed that an understanding of human brain function was
central to understanding human behavior. By 1957, the term
had become a recognized designation for a subfield of the
neurosciences. Heinrich Klüver, an American investigator
into the neural basis of vision, wrote in the preface to his
Behavior Mechanisms in Monkeys that the book would be of
interest to neuropsychologists and others. (Klüver had not
used the term in the 1933 preface to the same book.) In 1960,
it appeared in the title of a widely read collection of writings
by American psychologist Karl S. Lashley—The Neuropsychology of Lashley—most of which described rat and
monkey studies directed toward understanding memory, perception, and motor behavior. Again, neuropsychology
was neither used nor defined in the text. To the extent that they did use the term, however, these writers, who
specialized in the study of basic brain function in animals, were recognizing the emergence of a subdiscipline of
investigators who specialized in human research and would find the animal research relevant to understanding
human brain function. Today, we define neuropsychology as the study of the relation between human brain
function and behavior. Although neuropsychology draws information from many disciplines—for example
anatomy, biology, biophysics, ethology, pharmacology, physiology, physiological psychology, and
philosophy — its central focus is the development of a science of human behavior based on the function of the
human brain. As such, it is distinct from neurology, which is the diagnosis of nervous system injury by physicians
who are specialists in nervous system diseases, from neuroscience, which is the study of the molecular basis of
nervous system function by scientists who mainly use nonhuman animals, and from psychology, which is the study
of behavior more generally. Neuropsychology is strongly influenced by two traditional foci of experimental and
theoretical investigations into brain function: the brain hypothesis, the idea that the brain is the source of behavior;
and the neuron hypothesis, the idea that the unit of brain structure and function is the neuron. This chapter traces
the development of these two ideas. We will see that, although the science is new, its major ideas are not.

THE BRAIN HYPOTHESIS

People knew what the brain looked like long before they had any idea of what it did. Very early in human history,
hunters must have noticed that all animals have a brain and that the brains of different animals, including humans,
although varying greatly in size, look quite similar. Within the past 2000 years, anatomists began producing
drawings of the brain and naming some of its distinctive parts without knowing what function the brain or its parts
performed. We will begin this chapter with a description of the brain and some of its major parts and will then
consider some major insights into the functions of the brain.

What Is the Brain?

Brain is an Old English word for the tissue that is found within the skull. Figure 1.1 shows a typical human brain as
oriented in the skull of an upright human. The brain has two relatively symmetrical halves called hemispheres, one
on the left side of the body and one on the right. Just as your body is symmetrical, having two arms and two legs,
so is the brain. If you make your right hand into a fist and hold it up with the thumb pointing toward the front, the
fist can represent the position of the brain’s left hemisphere within the skull. Taken as a whole, the basic plan of the
brain is that of a tube filled with fluid, called cerebrospinal fluid (CSF). Parts of the covering of the tube have bulged
outward and folded, forming the more complicated looking surface structures that initially catch the eye.

The most conspicuous outer feature of the brain consists of a crinkled tissue that has expanded from the front of
the tube to such an extent that it folds over and covers much of the rest of the brain. This outer layer is known as
the cerebral cortex (usually referred to as just the cortex). The word cortex, which means “bark” in Latin, is aptly
chosen both because the cortex’s folded appearance resembles the bark of a tree and because its tissue covers most
of the rest of the brain, just as bark covers a tree. The folds of the cortex are called gyri, and the creases between
them are called sulci (gyrus is Greek for “circle” and sulcus is Greek for “trench”). Some large sulci are called fissures,
such as the longitudinal fissure that divides the two hemispheres and the lateral fissure that divides each
hemisphere into halves (in our fist analogy, the lateral fissure is the crease separating the thumb from the other
fingers). The cortex of each hemisphere is divided into four lobes, named after the skull bones beneath which they
lie.

The temporal lobe is located at approximately the same place as the thumb on
your upraised fist. The lobe lying immediately above the temporal lobe is called
the frontal lobe because it is located at the front of the brain. The parietal lobe
is located behind the frontal lobe, and the occipital lobe constitutes the area at
the back of each hemisphere.

The cerebral cortex comprises most of the forebrain, so named because it
develops from the front part of the tube that makes up the embryo’s primitive
brain. The remaining “tube” underlying the cortex is referred to as the
brainstem.

The brainstem is in turn connected to the spinal cord, which descends down the
back in the vertebral column. To visualize the relations between these parts of
the brain, again imagine your upraised fist: the folded fingers represent the
cortex, the hand represents the brainstem, and the arm represents the spinal
cord. This three-part division of the brain is conceptually useful evolutionarily,
anatomically, and functionally. Evolutionarily, animals with only spinal cords preceded those with brainstems,
which preceded those with forebrains. Likewise, in prenatal development, the spinal cord forms before the
brainstem, which forms before the forebrain. Functionally, the forebrain mediates cognitive functions; the
brainstem mediates regulatory functions such as eating, drinking, and moving.

THE NEURON HYPOTHESIS


After the development of the brain hypothesis, that the brain is responsible for all behavior, the second major
influence on modern neuropsychology was the development of the neuron hypothesis, the idea that the nervous
system is composed of discrete, autonomous units, or neurons, that can interact but are not physically connected.
In this section, we will first provide a brief description of the cells of the nervous system, and then we will describe
how the neuron hypothesis led to a number of ideas that are central to neuropsychology.
Nervous System Cells

The nervous system is composed of two basic kinds of cells, neurons and glia (a name that comes from the Greek
word for “glue”). The neurons are the functional units that enable us to receive information, process it, and produce
actions. The glia help the neurons out, holding them together (some do act as glue) and providing other supporting
functions. In the human nervous system, there are about 100 billion neurons and perhaps 10 times as many glial
cells. (No, no one has counted them all. Scientists have estimated the total number by counting the cells in a small
sample of brain tissue and then multiplying by the brain’s volume.)
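
The extrapolation the authors describe is simple arithmetic. The sketch below (Python, with an assumed, purely
illustrative sample density; only the ~1,250 cm³ brain volume figure comes from the excerpts in this packet) shows
how counting a small tissue sample scales up to an estimate on the order of 100 billion neurons.

```python
# Illustrative sketch of the sampling-and-scaling estimate described above.
# The sample density is an assumption chosen for illustration, not a figure
# reported by Kolb & Whishaw.

SAMPLE_VOLUME_MM3 = 1.0       # volume of the counted tissue sample (1 mm^3)
NEURONS_IN_SAMPLE = 80_000    # assumed neuron count found in that sample
BRAIN_VOLUME_CM3 = 1_250      # approximate human brain volume (see the Beaumont excerpt)
MM3_PER_CM3 = 1_000           # unit conversion

density_per_mm3 = NEURONS_IN_SAMPLE / SAMPLE_VOLUME_MM3
estimated_total = density_per_mm3 * BRAIN_VOLUME_CM3 * MM3_PER_CM3

print(f"Estimated neurons: {estimated_total:.2e}")  # ~1.0e+11, i.e. on the order of 100 billion
```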

Figure 1.8 shows the three basic parts of a
neuron. The neuron’s core region is called the cell
body. Most of a neuron’s branching extensions
are called dendrites (Latin for “branch”), but the
main “root” is called the axon (Greek for “axle”).
Neurons have only one axon, but most have
many dendrites. Some small neurons have so
many dendrites that they look like garden
hedges. The dendrites and axon of the neuron
are extensions of the cell body, and their main
purpose is to extend the surface area of the cell.
The dendrites of a cell can be a number of
millimeters long, but the axon can extend as long
as a meter, as do those in the pyramidal tract that
extend from the cortex to the spinal cord. In the
giraffe, these same axons are a number of meters
long. Understanding how billions of cells, many
with long, complex extensions, produce behavior
is a formidable task, even with the use of the
powerful instrumentation available today. Just
imagine what the first anatomists with their
crude microscopes thought when they first
began to make out some of the brain’s structural
details. But insights into the cellular organization
did follow. Through the development of new,
more powerful microscopes and techniques for
selectively staining tissue, good descriptions of
neurons emerged. By applying new electronic
inventions to the study of neurons, researchers
began to understand how axons conduct
information. By studying how neurons interact
and by applying a growing body of knowledge
from chemistry, they discovered how neurons
communicate and how learning takes place.

The Neuron
The earliest anatomists who tried to examine the
substructure of the nervous system found a
gelatinous white substance, almost a goo.
Eventually it was discovered that, if brain tissue were placed in alcohol or
formaldehyde, water would be drawn out of the tissue, making it firm. Then, if
the tissue were cut into thin sections, many different structures could be seen. Early
theories described nerves as hollow, fluid containing tubes; however, when the first
cellular anatomist, Anton van Leeuwenhoek (1632–1723), examined nerves with a
primitive microscope, he found no such thing. He did mention the presence of
“globules,” which may have been cell bodies. As microscopes improved, the various
parts of the nerve came into ever sharper focus, eventually leading Theodor
Schwann, in 1839, to enunciate the theory that cells are the basic structural units of
the nervous system, just as they are for the rest of the body.

An exciting development in visualizing cells was the introduction of staining, which
allows different parts of the nervous system to be distinguished. Various dyes used
for staining cloth in the German clothing industry were applied to thinly cut tissue
with various results: some selectively stained the cell body, some stained the
nucleus, and some stained the axons. The most amazing cell stain came from the
application of photographic chemicals to nervous system tissue. Italian anatomist
Camillo Golgi (1843–1926) in 1875 impregnated tissue with silver nitrate (one of the
substances responsible for forming the images in black-and-white photographs)
and found that a few cells in their entirety—cell body, dendrites, and axons—
became encrusted with silver. This technique allowed the entire neuron and all its
processes to be visualized for the first time. Golgi never described how he had been
led to his remarkable discovery. Microscopic examination revealed that the brain
was nothing like an amorphous jelly; rather, it had an enormously intricate
substructure with components arranged in complex clusters, each interconnected
with many other clusters.

How did this complex organ work? Was it a net of physically interconnected fibers
or a collection of discrete and separate units? If it were an interconnected net, then
changes in one part should, by diffusion, produce changes in every other part. Because it would be difficult for a
structure thus organized to localize function, a netlike structure would favor a holistic, or “mind,” type of brain
function and psychology.

Alternatively, a structure of discrete units functioning autonomously would favor a psychology characterized
by localization of function. In 1883, Golgi suggested that axons, the longest fibers coming out of the cell body, are
interconnected, forming an axonic net. Golgi claimed to have seen connections between cells, and so he did not
think that brain functions were localized. This position was opposed by Spanish anatomist Santiago Ramón y Cajal
(1852–1934), on the basis of the results of studies in which he used Golgi’s own silver-staining technique. Cajal
examined the brains of chicks at various ages and produced beautiful drawings of neurons at different stages of
growth. He was able to see a neuron develop from a simple cell body with few extensions to a highly complex cell
with many extensions. He never saw connections from cell to cell. Golgi and Cajal jointly received the Nobel Prize
in 1906; each in his acceptance speech argued his position on the organization of neurons, Golgi supporting the
nerve net and Cajal supporting the idea of separate cells.

Information conduction

We have mentioned early views that suggested a hydraulic flow of liquid through nerves into muscles (reminiscent
of the way that filling and emptying changes the shape and hardness of a balloon). Such theories have been called
balloonist theories. Descartes espoused the balloonist hypothesis, arguing that a fluid from the ventricles flows
through nerves into muscles to make them move. English physician Francis Glisson (1597–1677) in 1677 made a direct
test of the balloon hypothesis by immersing a man’s arm in water and measuring the change in the water level when
the muscles of the arm were contracted. Because the water level did not rise, Glisson concluded that no fluid
entered the muscle (bringing no concomitant change in density). Johan Swammerdam (1637–1680) in Holland
reached the same conclusion from similar experiments on frogs, but his manuscript lay unpublished for 100 years.
(We have asked students in our classes if the water will rise when an immersed muscle is contracted. Many predict
that it will.)

The impetus to adopt a theory of electrical conduction in neurons came from an English scientist, Stephen Gray
(1666–1736), who in 1731 attracted considerable attention by demonstrating that the human body could conduct
electricity. He showed that, when a rod containing static electricity was brought close to the feet of a boy suspended
by a rope, a grass-leaf electroscope (a thin strip of conducting material) placed near the boy’s nose would be
attracted to the boy’s nose. Shortly after, Italian physicist Luigi Galvani (1737–1798) demonstrated that electrical
stimulation of a frog’s nerve could cause muscle contraction. The idea for this experiment came from his
observation that frogs’ legs hanging on a metal wire in a market twitched during an electrical storm. In 1886, Julius
Bernstein (1839–1917) developed the theory that the membrane of a nerve is polarized (has a positive charge on
one side and a negative charge on the other) and that an electric potential can be propagated along the membrane
by the movements of ions across the membrane. Many of the details of this ionic conduction were worked out by
English physiologists Alan Hodgkin (1914–1988) and Andrew Huxley (1917–2012), who received the Nobel Prize in
physiology in 1963.

As successive findings refuted the hydraulic models of conduction and brought more dynamic electrical models into
favor, hydraulic theories of behavior were also critically reassessed. For example, Viennese psychiatrist Sigmund
Freud (1856–1939) had originally envisioned the biological basis of his theory of behavior, with its three levels
of id, ego, and superego, as being a hydraulic mechanism of some sort. Although conceptually useful for a
time, it had no effect on concepts of brain function, because there was no evidence of the brain functioning
as a hydraulic system.

Connections Between Neurons as the Basis of Learning

Even though neurons are independent structures, they must influence one another. Charles Scott Sherrington
(1857–1952), an English physiologist, examined how nerves connect to muscles and first suggested how
the connection is made. He applied an unpleasant stimulation to a dog’s paw, measured how long it took the
dog to withdraw its foot, and compared that rate with the speed at which messages were known to travel along
axons. According to Sherrington’s calculations, the dog took 5 milliseconds too long to respond. Sherrington
theorized that neurons are connected by junctions, which he called synapses (from the Greek word for “clasp”),
and that additional time is required for the message to get across the junction. The results of later electron
microscopic studies were to confirm that synapses do not quite touch the cells with which they synapse. The general
assumption that developed in response to this discovery was that a synapse releases chemicals to influence the
adjacent cell. In 1949, on the basis of this principle, Donald Hebb proposed a learning theory stating that, when
individual cells are activated at the same time, they grow connecting synapses or strengthen existing ones
and thus become a functional unit. He proposed that new or strengthened connections, sometimes called
Hebb or plastic synapses, are the structural bases of memory. Just how synapses are formed and change is a
vibrant area of research today.
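
Hebb's proposal, that cells active at the same time strengthen the synapse between them, is often summarized as a
simple correlational update rule. Below is a minimal, purely illustrative Python sketch of that idea; the learning
rate and the two-neuron activity patterns are assumptions made for the example, not part of Hebb's 1949 text.

```python
# Minimal sketch of a Hebbian update: the weight grows only when the
# presynaptic and postsynaptic neurons are active together.

eta = 0.1          # assumed learning rate (illustrative)
w = 0.0            # synaptic weight between a presynaptic and a postsynaptic neuron

# activity of the two cells over a few time steps (1 = firing, 0 = silent)
pre_activity  = [1, 0, 1, 1, 0]
post_activity = [1, 0, 0, 1, 1]

for pre, post in zip(pre_activity, post_activity):
    w += eta * pre * post   # strengthened only on co-activation (time steps 1 and 4 here)

print(f"final weight: {w:.2f}")   # 0.20 after two co-activations
```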

Modern developments

Given the nineteenth-century advances in knowledge about brain structure and function—the brain and neuron
hypotheses, the concept of the special nature of cortical function, and the concepts of localization of function and
of disconnection—why was the science of neuropsychology not established by 1900 rather than after
1949, when the word neuropsychology first appeared? There are several possible reasons. In the
1920s, some scientists still rejected the classical approach of Broca, Wernicke, and others, arguing that
their attempts to correlate behavior with anatomical sites were little more sophisticated than the attempts
of the phrenologists. Then two world wars disrupted the progress of science in many countries. In
addition, psychologists, who traced their origins to philosophy rather than to biology, were not interested
in physiological and anatomical approaches, directing their attention instead to behaviorism, psychophysics, and the psychoanalytical
movement.

A number of modern developments have contributed to the emergence of neuropsychology as a distinct scientific
discipline: neurosurgery; psychometrics (the science of measuring human mental abilities) and statistical
analysis; and technological advances, particularly those that allow a living brain to be imaged.

Bibliography 5
Mark Solms; Michael Saling – A Moment of
Transition. Two Neuroscientific Articles by
Sigmund Freud – p.vii-xv;
Foreword
Mortimer Ostow

World War II, and especially the Holocaust that accompanied it,
drove the centre of ferment of psychoanalysis from Central Europe,
where it had originated, to England and the United States—the
English-speaking world. As a result it became necessary to provide
accurate English translations of the foundation works of
psychoanalysis, Freud's essays and books, as well as the
contributions of his disciples. These translations have been
undertaken by a series of dedicated and gifted individuals, for
whose efforts we are grateful. Of course the papers first translated
were those that bore directly upon clinical practice and its
supporting theory. Only latterly have the works of primarily
historical interest been presented to the English-speaking public.

Nevertheless, as Solms and Saling note, many of Freud's neurological works remain untranslated, so that
English readers are deprived of the opportunity to see Freud functioning as a neurologist and then turning his
interest to psychopathology, on the way to the evolution of psychoanalysis. Solms has undertaken to provide a
series of English translations of Freud's hitherto untranslated neurological papers, the first two of which are
presented in this volume. As the reader will observe, the work of translation has been carried out with an unusual
degree of meticulousness, faithfully rendering even complex thoughts into clear, lucid, unambiguous, and easily
readable English. The translator and his associate, Michael Saling, have provided also a thoroughgoing discussion
of the origin of these pre-analytical articles, what they reveal to us about the scientific climate in which Freud
worked, how his work and ideas related to those of his contemporaries, and how they can be seen as steps on the
road to the discipline of psychoanalysis. They also confront various challenges that have been posed to
psychoanalysis in recent decades, that are based upon claims that Freud was limited by the archaic and dated
views of central nervous system function that prevailed in his time. The authors demonstrate decisively Freud's
departure from the conventional wisdom and the development of his original ideas. We are treated to a view of the
thinking of the most distinguished neuropsychiatrist of history as he disappointingly comes to grips with the fact
that neurological studies offer no window onto the discipline of psychopathology. On the contrary, Freud
observed, psychological considerations must be taken into consideration in anatomical and physiological
theorizing. For example, in the 'Aphasie' article, on the basis of clinical observation of aphasic speech disorders, he
rejects the localization theories of aphasia, the concept that various 'speech centres' each controls a specific aspect
of speech, proposing instead a 'speech field', a broad area in which the operations subserving speech are carried on.
Lesions in specific regions impair those operations in somewhat specific ways because they impinge upon fibres
passing through these regions, but one is not justified to infer that these regions are discrete speech centres in the
sense that they are concerned with the psychological formulation or control of speech. In this respect, Freud
anticipated important developments in twentieth-century aphasiology and is considered by the editors of this
volume to be one of the founding fathers of modern neuropsychology.

The 'Gehirn' (Brain) article demonstrates cogently Freud's dynamic mode of thinking, so that even his description
of the gross neuroanatomy of the brain takes the form of an exploration rather than a colourless exposition. I refer
to Freud as a neuropsychiatrist deliberately, because until recent decades, neurology and psychiatry were
practised by the same individual, the assumption being that knowledge of the one would facilitate the practice
of the other, and that the knowledge of brain anatomy and physiology would facilitate the practice of both.
For similar reasons, obstetrics and gynaecology are frequently practised together to this day.

Freud was a neuropsychiatrist. So was Meynert, and so was Charcot. The psychoanalytical reader will recognize
the name of Paul Flechsig, whose neuroanatomical investigations Freud quotes admiringly, as the psychiatrist who
cared for Daniel Paul Schreber. In practice, the two disciplines are approached independently. In the second half of
the nineteenth century, their interests coincided primarily in the case of dementia paralytica and of focal lesions of
the brain, traumatic, vascular, or neoplastic. In our day, psychopharmacological theory has been promising to
promote a fruitful exchange between neurophysiology and neurochemistry on the one hand, and psychiatric theory
on the other, but at this time, psychopharmacology remains an empirical discipline.

Sigmund Freud, like the other scholarly neuropsychiatrists of his time, seems to have been beguiled by the
hope that neuroanatomy could illuminate psychical function. He was disappointed. In the two essays in this
book I was able to find only two examples of a claimed relation between these areas. Freud, following Meynert,
calls the fibres linking parts of the cerebral cortex to each other, association fibres, 'because they serve the
association of ideas' (ibid.). Despite these exceptions, Freud seems to be giving up the hope of correlating brain
structure with mental life. He says clearly that he does not know how to relate brain function to psychical
function. We see him here cutting his psychology loose from neuroanatomy and neurophysiology.
Nevertheless, in his 1891 essay on aphasia, he comes out clearly for psychophysical parallelism (see Jones, 1953, p.
403). Evidently the transition from the search for a material basis for psychological and psychopathological
phenomena to the determination to formulate a theory of mental events in their own terms was not a smooth one.
In the Cocaine Papers (Freud, 1884-87) which were written just before these two pre-analytical articles, we find
splendid clinical descriptions of the effects of cocaine and suggestions about its clinical utility, but almost no
attempt to contrive any kind of psychological description or theory. In the same encyclopedia in which these two
pre-analytical papers were published, Freud also published an article on hysteria (1888), in which he makes no
attempt to relate its symptoms to brain structure or function, although he writes that:

“Hysteria is based wholly and entirely on physiological modifications of the nervous system and its essence
should be expressed in a formula which took account of the conditions of excitability in the different parts of
the nervous system”

In 1886, Freud announced that he was preparing a paper on 'Some points for a comparative study of organic and
hysterical motor paralyses' (1893c) (Quelques considérations pour une étude comparative des paralysies motrices
organiques et hystériques). Presumably it was written soon after the cocaine papers and almost simultaneously with
the two papers in this volume. Here he proposes perhaps the first description of what has come to be called
'structure' in psychoanalytical theory—namely, an enduring complex of psychical function, in this case, the
conception of a thing.

Meanwhile, in 1891, Waldeyer gave final form to the neurone theory. The word 'neurone' appears in Freud's 1893
paper, followed by a parenthetical definition, 'cellulo-fibrillary neural unit'. With the concept of the neurone, Freud
saw an opportunity to revive his quest to base his psychology on knowledge of the structure of the nervous
system. He forsook gross anatomy for a feature of microscopic anatomy. But he realized intermittently that
he was not building upon true anatomy and physiology, but only on a fairly gross and simplistic schematic
hypothesis.

“Anyone, however, who is engaged scientifically in the construction of hypotheses will only begin to take his
theory seriously if they can be fitted into our knowledge from more than one direction and if the arbitrariness of
a constructio ad hoc can be mitigated in relation to them.” (1895)

Freud was evidently trying to overcome his own doubts. It is not surprising therefore to learn that during the
year 1895 he alternately worked feverishly on the Project and abandoned it. Focusing on the usefulness of the
'energy' concept, Freud, in the 'Project', ignored almost completely the complex gross structure of the nervous
system, the knowledge of which he demonstrates and communicates so skilfully in Gehirn. He differentiates
between sensory and motor nerves, between the superficial and deeper layers of the cortex, and among the various
sensory reception areas (ibid., p. 315). Otherwise he treats the brain as a homogeneous structure, composed of
a collection of small objects, which he calls neurones, all of which have the same properties. He does not allow
for variation among them in these properties, nor for spontaneous activity internally generated. By discarding the
manuscript, he tried to terminate this attempt to relate psychology to brain function permanently.

The 'energy' factor has run like a red thread from Freud's earliest thoughts about psychology and psychopathology
until the last. In 'Gehirn' he speaks of the 'specific energy' of sensory nerves (p. 65, this volume). In his final and
uncompleted work on psychoanalytical theory, An Outline of Psycho-analysis (1940a), Freud wrote:

“The future may teach us to exercise a direct influence by means of particular chemical substances on the
amounts of energy and their distribution in the mental apparatus”

I believe that in that statement he was thinking back to his experiences with cocaine, restating his persistent belief
in the importance of an energy factor in psychology, and anticipating the mode of action of other
psychopharmacological agents. It is of no importance that his earliest thoughts about energy dealt with
measurable conduction processes in nerves and his later thoughts with a non-material, hypothetical parameter of
psychical function. In both instances he was concerned about motivation and its disturbances.

Meanwhile, psychoanalysis has flourished. Its theories, the fundamentals of which were established by Freud, have
been elaborated, altered and simplified, challenged and defended. Practice has changed less than theory and is
probably more consistent among different psychoanalytical communities. While in some locations the demand for
psychoanalytical treatment has diminished and authentic analysis has been displaced to some extent by
inauthentic analysis and by other therapies, psychoanalysis has contributed to a host of psychodynamic therapies,
so that its influence has grown greatly.

The dynamic that Freud saw one hundred years ago was that psychology had to be disengaged from the
anatomy and physiology that was then available and permitted to develop on its own, and then in developed
form slowly recruited to re-engage a far more subtle and sophisticated anatomy, physiology, pharmacology,
and chemistry. As he has taught us in other contexts, needs can often best be satisfied if they are temporarily
renounced. Psychoanalysis, Freud said twenty years after these papers were written,

“hopes to discover the common ground on the basis of which the convergence of physical and mental disorder
will become intelligible. With this aim in view, psychoanalysis must keep itself free from any hypothesis that is
alien to it, whether of an anatomical, chemical or physiological kind, and must operate entirely with purely
psychological auxiliary ideas.” (Freud, 1916-17, p. 211)

In these papers we see the beginning of that renunciation and disengagement; they give us a glimpse of this
crucial moment, the preparation for the application of the psycho-analytical method to the resolution of
problems of human psychology. We are impatient to initiate the process of reengagement, but it continues to
elude us.
Bibliography 6
Michael Gazzaniga – The Mind’s Past – p.11-13
Over a hundred years ago William James lamented, “I wished by
treating Psychology like a natural science, to help her to become
one.”

Well, it never occurred. Psychology, which for many was the
study of mental life, gave way during the past century to other
disciplines. Today the mind sciences are the province of
evolutionary biologists, cognitive scientists, neuroscientists,
psychophysicists, linguists, computer scientists—you name it.
This book is about special truths that these new practitioners of
the study of mind have unearthed.

Psychology itself is dead. Or, to put it another way, psychology is in a funny situation. My college, Dartmouth, is constructing a
magnificent new building for psychology. Yet its four stories go
like this: The basement is all neuroscience. The first floor is
devoted to classrooms and administration. The second floor
houses social psychology, the third floor, cognitive science, and
the fourth, cognitive neuroscience. Why is it called the psychology
building?

Traditions are long lasting and hard to give up. The odd thing is that everyone but its practitioners knows about
the death of psychology. A dean asked the development office why money could not be raised to reimburse the
college for the new psychology building. “Oh, the alumni think it’s a dead topic, you know, sort of just counseling. If
those guys would call themselves the Department of Brain and Cognitive Science, I could raise $25 million in a week.”
The grand questions originally asked by those trained in classical psychology have evolved into matters other
scientists can address. My dear friend the late Stanley Schachter of Columbia University told me just before his
death that his beloved field of social psychology was not, after all, a cumulative science. Yes, scientists keep
asking questions and using the scientific method to answer them, but the answers don’t point to a body of
knowledge where one result leads to another. It was a strong statement—one that he would be the first to qualify.
But he was on to something. The field of psychology is not the field of molecular biology, where new discoveries
building on old ones are made every day. This is not to say that psychological processes and psychological states
are uninteresting or even boring subjects.

On the contrary, they are fascinating pieces of the mysterious unknown that many curious minds struggle to
understand. How the brain enables mind is the question to be answered in the twenty-first century—no doubt about
it.

The next question is how to think about this question. That is the business of this little book. I think the message
here is significant, one important enough to be held up for examination if it is to take hold. My view of how the brain
works is rooted in an evolutionary perspective that moves from the fact that our mental life reflects the actions of
many, perhaps dozens to thousands, of neural devices that are built into our brains at the factory. These devices do
crucial things for us, from managing our walking and breathing to helping us with syllogisms.
There are all kinds and shapes of neural devices, and they are all clever.

At first it is hard to believe that most of these devices do their jobs before we are aware of their actions. We human
beings have a centric view of the world. We think our personal selves are directing the show most of the time. I
argue that recent research shows this is not true but simply appears to be true because of a special device in our left
brain called the interpreter. This one device creates the illusion that we are in charge of our actions, and it does so
by interpreting our past—the prior actions of our nervous system.

Bibliografie 7
J. Graham Beaumont – Introduction to
Neuropsychology, Second Edition, p.3-8;
What is Neuropsychology?

The human brain is a fascinating and enigmatic machine. Weighing only about 3 pounds (1.36 kilograms) and with a volume of about 1,250
cubic centimeters, it has the ability to monitor and control our basic
life support systems, to maintain our posture and direct our
movements, to receive and interpret information about the world
around us, and to store information in a readily accessible form
throughout our lives. It allows us to solve problems that range from
the strictly practical to the highly abstract, to communicate with our
fellow human beings through language, to create new ideas and
imagine things that have never existed, to feel love and happiness and
disappointment, and to experience an awareness of ourselves as
individuals. Not only can the brain undertake such a variety of
different functions, but it can do more or less all of them
simultaneously. How this is achieved is one of the most challenging
and exciting problems faced by contemporary science.

It has to be said at the outset that we are completely ignorant of many of the things that the brain does, and
of how they are done. Nevertheless, very considerable advances have been made in the neurosciences over the
last decade or two, and there is growing confidence among neuroscientists that a real understanding is beginning
to emerge. This feeling is encouraged by the increasing integration of the various disciplines involved in
neuroscience, and a convergence of both experimental findings and theoretical models.

Neuropsychology, as one of the neurosciences, has grown to be a separate field of specialization within
psychology over about the last 40 years, although there has always been an interest in it throughout the 120-
year history of modern scientific psychology. Neuropsychology seeks to understand the relationship between the
brain and behavior, that is, it attempts to explain the way in which the activity of the brain is expressed in observable
behavior. What mechanisms are responsible for human thinking, learning, and emotion, how do these mechanisms
operate, and what are the effects of changes in brain states upon human behavior? There are a variety of ways in
which neuropsychologists conduct their investigations into such questions, but the central theme of each is
that to understand human behavior we need to understand the human brain. A psychology without any
reference to physiology can hardly be complete. The operation of the brain is relevant to human conduct, and the
understanding of how the brain relates to behavior may make a significant contribution to understanding how
other, more purely psychological, factors operate in directing behavior. Just how the brain deals with intelligent
and complex human functions is, in any case, an important subject of investigation in its own right, and one that
has an immediate relevance for those with brain injuries and diseases, as well as a wider relevance for medical
practice.

BRANCHES OF NEUROPSYCHOLOGY

Neuropsychology is often divided into two main areas: clinical neuropsychology and experimental
neuropsychology. The distinction is principally between clinical studies, on brain-injured subjects, and experimental
studies, on normal subjects, although the methods of investigation also differ. The division between the two is not
absolutely clear-cut but it helps to form an initial classification of the kinds of work in which neuropsychologists are
involved.

Clinical neuropsychology deals with patients who have lesions of the brain. These lesions may be the effects of
disease or tumors, may result from physical damage or trauma to the brain, or be the result of other biochemical
changes, perhaps caused by toxic substances. Trauma may be accidental, caused by wounds or collisions; it may
result from some failure in the vascular system supplying blood to the brain; or it may be the intended result of
neurosurgical intervention to correct some neurological problem. The clinical neuropsychologist measures deficits
in intelligence, personality, and sensory–motor functions by specialized testing procedures, and relates the results
to the particular areas of the brain that have been affected. The damaged areas may be clearly circumscribed and
limited in extent, particularly in the case of surgical lesions (when an accurate description of the parts of the brain
that have been removed can be obtained), or may be diffuse, affecting cells throughout much of the brain, as is the
case with certain cerebral diseases. Clinical neuropsychologists employ these measurements not only in the
scientific investigation of brain–behavior relationships, but also in the practical clinical work of aiding diagnosis of
brain lesions and rehabilitating brain-injured patients.

Behavioral neurology, as a form of clinical neuropsychology, also deals with clinical patients, but the emphasis is
upon conceptual rather than operational definitions of behavior. The individual case rather than group statistics is
the focus of attention, and this approach usually involves less formal tests to establish qualitative deviations from
“normal” functioning. Studies in behavioral neurology may often sample broader aspects of behavior than is usual
in clinical neuropsychology. The distinction between clinical neuropsychology and behavioral neurology is not
entirely clear, and it is further blurred by the historical traditions of investigation in different countries,
particularly in the United States, the former Soviet Union, and Great Britain. Examples of clinical work in these
countries are discussed below and in Chapter 15. By contrast, experimental neuropsychologists work with normal
subjects with intact brains. This is the most recent area of neuropsychology to develop and has grown rapidly since
the 1960s, with the invention of a variety of techniques that can be employed in the laboratory to study higher
functions in the brain. There are close links between experimental neuropsychology and general experimental and
cognitive psychology, and the laboratory methods employed in these three areas have strong similarities. Subjects
are generally required to undertake performance tasks while their accuracy or speed of response is recorded, from
which inferences about brain organization can be made. Associated variables, including psychophysiological or
electrophysiological variables, may also be recorded.

COMPARATIVE NEUROPSYCHOLOGY

Although the subject of this book is human neuropsychology, it should not be forgotten that much experimental
neuropsychology has been conducted with animals, although this form of research is now in decline. At one time,
the term neuropsychology was in fact taken to refer to this area, but it is now used more generally and the relative
importance of the animal studies of comparative neuropsychology has decreased. The obvious advantage of
working with animals, ethical issues apart, is that precise lesions can be introduced into the brain and later
confirmed by histology. Changes in the animal’s behavior are observed and can be correlated with the
experimental lesions. The disadvantages are the problems of investigating high-level functions using animals as
subjects (the study of language is ruled out, to take the most obvious example) and the difficulty of generalizing
from the animal brain to the human brain. Although it may be possible to discover in great detail how some
perceptual function is undertaken in the brain of the rat, the cat, or the monkey, it may not necessarily be
undertaken in the same way in the human brain. There are also basic differences in the amount and distribution of
different types of cortical tissue in the brains of the various animals and of humans, which add to the difficulties of
generalization.

Nevertheless, animal studies continue to be important, particularly with regard to the functions of subcortical
systems—those functions located in the structures below the surface mantle of the brain that deal with relatively
basic aspects of sensation, perception, learning, memory, and emotion. These systems are harder to study in
humans, because damage to these regions may interfere much more radically with a whole range of behaviors,
and may often result in death. One of the problems facing contemporary neuropsychology is to integrate the
study of cortical functions and higher-level behaviors, which have generally been studied in humans, with the study
of subcortical structures and more basic behavioral systems, which have been studied in animals. These have
tended to be separate areas of research, although there are now signs of integration between the two. For example,
intelligence is now being discussed not just in terms of human performance on intelligence tests, but also in terms
of underlying basic processes of learning, attention, and motivation that are only understood, in
neuropsychological terms, from animal studies. Sexual behavior is another area where the basic systems are only
open to experimental study in animals, yet must be viewed within the context of socialized and cognitively
controlled behavior in humans.

CONCEPTUAL ISSUES

Neuropsychology suffers philosophical and conceptual difficulties no less than other areas of psychology, and
perhaps more than many. There are two problems in particular of which every student of the subject should be
aware.

1.The first of these springs from the nature of the methods that must be used in neuropsychological
investigation. Descriptions of brain organization can only be relatively distant inferences from the human
performance that is actually observed. The real states of the brain are not observed. Behavioral measures are taken,
and by a line of reasoning that is based on background information about either the general arrangement of the
brain (in the case of experimental neuropsychology) or about the gross changes in the brain of a particular type of
patient (in the case of clinical neuropsychology), conclusions are drawn about what the correlation must be
between brain states and behavior. The one exception to this general rule is in electrophysiological studies and
studies of cerebral blood flow and metabolism through advanced scanning techniques, where actual brain states
can be observed, albeit rather crudely, in “real time” alongside the human performance being measured. This
makes these studies of special importance in neuropsychology.

However, in general, neuropsychological study proceeds only by inference. It is important to remember this in
assessing the validity of many of the findings claimed by neuropsychologists, and also to be particularly vigilant
that the reasoning used in drawing inferences is soundly based and the data not open to alternative
explanations.

2.The second problem is even more fundamental, and is that usually referred to as the mind–body problem. It is
a subject far too complex to receive satisfactory treatment here, but in brief it is concerned with the philosophical
difficulties that arise when we talk about mental events or “mind,” and physiological events or “body,” and try to
relate the two. We first have to decide whether mind and body are, or are not, fundamentally different kinds
of things. If they are, then there are problems in giving explanations that correlate the two. If they are not, then
we have to be careful not to be misled by our everyday language and concepts, which tend to treat mind and body
as if they were different kinds of things. The debate has gone on for some centuries, and is far from being
resolved, but there is a general position accepted by most if not all neuropsychologists. This position is known
as “emergent materialism” or “emergent psychoneural monism.” It rejects the idea that mind and body are
fundamentally different (hence it is “monist” rather than “dualist”) and proposes that all mental states are
states of the brain. Mental events therefore exist, but are not separate entities. However, mental states cannot
be reduced to a set of physical states because the brain is not a physical machine but a biosystem, and so
possesses properties peculiar to living things. The brain is seen as not simply a complex composition of cells, but
as having a structure and an environment. The result is that there are “emergent” properties that include being
able to think and feel and perceive. These properties are emergent just as the sweetness of an apple is an
emergent property. There is nothing in the chemistry or physical structure of the apple that possesses sweetness.
It is the whole object, in interaction with the eater, that produces the quality of sweetness. Mind is therefore seen
as a collection of emergent bioactivities, and this has implications for both theories and methods in
neuropsychology. It means that it is sometimes quite proper and sensible to reduce explanations to lower levels of
description, purely in terms of the physiology or the biochemistry involved. However, it also means that integration
among these lower processes and their description in terms of higher level concepts (concerning the emergent
properties) are both feasible and valuable.

The student first taking an interest in neuropsychology should not be overly concerned about these philosophical
issues; much, if not most, of neuropsychological work is conducted while ignoring them altogether. However, some
position is always implied in any investigation or theoretical model, and it is wise not to lose sight of the
implications of holding a particular position for a satisfactory understanding of how the brain works.

Bibliografie 8

Leon Dănăilă; Mihai Golu – Tratat de neuropsihologie, volumul 1, p.17-19; 21;
Chapter I
THE MIND-BRAIN RELATIONSHIP

Whether neuropsychology is defined more narrowly or more broadly, its specific domain of study remains the mind-brain relationship. Its main epistemological aim lies precisely in elucidating, on experimental and clinical grounds, the nature and essence of this relationship, around which heated disputes and controversies have raged throughout the history of philosophical and scientific thought. It should be stressed that, while the problem of the origin and nature of the mind was recognized and became a matter of intellectual concern from the earliest times, ever since man acquired self-awareness, the study of the structural-functional organization of the brain entered the epistemological circuit much later. The connection between the two entities - mind and brain - was identified and asserted only later in antiquity, just a few centuries before the common era. Until then, the most deeply rooted conviction was that the soul is an attribute of the whole body, the mechanism of its "animation" and "renewal" being considered the act of respiration or the circulation of the blood. Even in the fifth century BC, Hippocrates and Kroton implicated the brain only in thinking, affective processes and states being attributed to the cardiovascular system.

Only in the second century CE did Galen take a more substantial step forward, asserting in a more explicit and complete form the existence of a permanent link between inner mental life and the brain. He was the first to formulate the hypothesis of the direct localization of psychological functions and processes in cerebral structures.
Considering that impressions from the external world enter the cerebral ventricles through the eyes in the form of fluids, he proposed that the optic thalamus was the mechanism in which these fluids combined with the vital fluids arriving from the liver and were transformed, at the level of the vascular system, into psychic fluids (pneuma psihikon or pneuma loghistikon). As naive and puerile as this explanation seems to us today, it was equally advanced and revolutionary for its time. Indeed, the idea that the cerebral ventricles (more precisely, the fluid that bathes them) constitute the immediate material substrate of the mind persisted for more than a millennium and a half. The way the mind-brain problem was subsequently approached and resolved was conditioned both by the evolution of psychological conceptions and testing and by the refinement of the methods and techniques used to investigate and describe the anatomy and physiology of the nervous system. In general, psychological conceptions and theories, evolving within various philosophical systems, detached themselves more and more from their initial concrete, intuitive support and acquired a speculative, abstract character. Even so, human mental life would no longer be treated globally, as something amorphous, but would be subjected to analysis and classification, leading to the identification and description of precise functions and distinct particular capacities.

As for knowledge of the nervous system itself, it would long remain at a vague, hypothetical level, lacking a body of facts obtained through rigorous scientific methods. As a result, the relation of the mind to its neural substrate continued to be conceived in a global, phenomenological manner.

Thus, even in the seventeenth century, Descartes (1649) considered it possible for the entire mind to be localized in a single organ – the epiphysis (pineal gland) – situated centrally at the base of the cerebral hemispheres, a position which, in his view, gave it the role of dispatcher of the "animal spirits", the immediate carriers of the mind. Willis (1664) maintained that the organ of mental life was the corpora striata. Later, Lancisi (1739) linked mental processes to the corpus callosum.

The tendency to localize every psychological function, however complex, in a precisely delimited zone of the brain reached its peak with the Austrian anatomist Franz Gall (1810), who advanced the idea that the cerebral cortex is a layer of integrative centers, each of them serving a particular psychological function. It should be noted that, despite its exaggerated and essentially naive character, Gall's conception has, from today's perspective, a twofold methodological importance:

1.It drew attention for the first time to the differentiated character of the cerebral cortex, at a time when the cortex was still regarded as an amorphous mass.

2.Its ideas about functionally highly specialized cortical centers had a marked influence on the formation of the later narrow localizationist model, with echoes reaching into contemporary cognitive neuroscience.

It is significant that the first attempt to resolve the mind-brain problem took the form of a tendency to localize particular psychological functions and processes in cerebral structures and formations delimited as precisely as possible anatomically. This tendency was continued by the anatomopathological investigations of Paul Broca and, later, Carl Wernicke. The former demonstrated conclusively the differentiated, selective link between a pathological focus in the brain and the disturbance of a particular psychological function. Analyzing postmortem the brains of two patients who had died with severe speech disorders (motor aphasia), Broca (1861, 1865) discovered a lesion of the posterior portion of the inferior frontal convolution in the left hemisphere. On this basis, he concluded that speech has a precise localization, the area he described deserving the name "center of the motor images of words". At the end of his report, presented to the Anthropological Society of Paris, Broca optimistically expressed the hope that centers for the other higher psychological functions would be discovered in the future, and the idea of narrow localizationism thus gained wide acceptance in the neurology of the period. The fact remains that Broca's discoveries gave a strong impetus to clinical investigations of focal disorders, which would play an essential role in the constitution and development of modern neuropsychology.

To summarize, the localizationist neuroanatomical orientation upholds the following main theses: psychological functions have separate cortical representations, linked by white-matter fascicles, which may in turn be made up of sub-fascicles that allow a given type of information to be transferred to different "points" of the brain. Even so, despite its initial experimental solidity, the narrow localizationist model ultimately failed to gain unanimous recognition.

Bibliografie 9

Todd E. Feinberg; Martha J. Farah – Behavioral Neurology and Neuropsychology, p.14-16;
The Rise of Experimental Neuropsychology

Most of the advances described so far in this chapter were made by studying individual patients, or at most a small
series of patients with similar disorders. In many instances,
particularly before the middle of this century, patients'
behavior was studied relatively naturalistically, without
planned protocols or quantitative measurements. In the
1960s and 70s, a different approach to the study of brain-
behavior relations took hold. Neurologists and
neuropsychologists began to design experiments
patterned on research methods in experimental
psychology.

Typical research designs in experimental psychology involved groups of normal subjects given different
experimental treatments (for example, different training or different stimulus materials), and the effects of the
treatments were measured in standardized protocols and compared using statistical methods such as analysis of
variance (ANOVA). In neuropsychology, the "treatments" were, as a rule, naturally occurring brain lesions.
Groups of patients with different lesion sites or behavioral syndromes were tested with standard protocols,
yielding quantitative measures of performance, and these performances were compared across patient groups
and with non-brain-damaged control groups. Unlike the impairments studied previously in single-case designs,
which were so striking that control subjects would generally have been superfluous, experimental
neuropsychology often focused on group differences of a rather subtle nature, which required statistical
analysis to substantiate.
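
To make this group-comparison logic concrete, here is a minimal sketch in Python (assuming scipy is available); the three groups, the naming task, and every score below are invented purely for illustration and are not taken from any study cited here.

# Hedged sketch: hypothetical naming-task scores (out of 60) for two lesion groups
# and a control group, compared with a one-way ANOVA as described above.
from scipy import stats

left_lesion  = [34, 29, 41, 37, 30, 33]
right_lesion = [52, 48, 55, 50, 47, 53]
controls     = [56, 58, 54, 57, 59, 55]

f_stat, p_value = stats.f_oneway(left_lesion, right_lesion, controls)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value only says that mean performance differs somewhere among the groups;
# planned or post hoc comparisons would then locate which groups differ.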

The most common question addressed by these studies concerned localization of function. Often the localization
sought was no more precise than left versus right hemisphere or one quadrant of the brain (which, in the days
before computed tomography, often amounted to left versus right hemisphere with presence or absence of visual
field defects and/or hemiplegia). Given the huge amount of research done during this period on language,
memory, perception, attention, emotion, praxis, and so-called executive functions, it would be hopeless
even to attempt a summary.
The influential research program of the Montreal Neurological Institute also began during this period. In the wake
of William Scoville's discovery that the bilateral medial temporal resection he performed on epileptic patient H.M.
resulted in permanent and dense amnesia, Brenda Milner and her colleagues investigated this patient and groups
of other operated epileptic patients. This enabled them to address questions of functional localization with the
anatomic precision of known surgical lesions. At the same time, another surgical intervention for epilepsy,
callosotomy, also spawned a productive and influential research program. Roger Sperry and his students and
collaborators were able to address a wide variety of questions about hemispheric specialization by studying the
isolated functioning of the human cerebral hemispheres.

In addition to answering questions about localization, the experimental neuropsychology of the sixties and
seventies also uncovered aspects of the functional organization of behavior. By examining patterns of association
and dissociation among abilities over groups of subjects, researchers tried to determine which abilities depend
on the same underlying functional systems and which are functionally independent. For example, the
frequent association of aphasia and apraxia had been taken by some to support the notion that aphasia was not
language-specific but was just one manifestation of a more pervasive loss of the ability to symbolize or represent
("asymbolia"). A classic group study by Goodglass and Kaplan undermined this position by showing that severity
of apraxia and aphasia were uncorrelated in a large sample of left-hemisphere-damaged subjects. A second
example of the use of dissociations between groups of patients from this period is the demonstration of the
functional distinction, by Newcombe and Russell, within vision between pattern recognition and spatial
orientation.
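
The logic of such dissociation studies can be sketched in a few lines (invented 0-10 severity ratings, not Goodglass and Kaplan's data; scipy assumed): if two deficits reflected a single underlying system, their severities should co-vary across patients, so a near-zero correlation argues for functional independence.

# Hedged illustration with invented severity ratings for ten hypothetical patients.
from scipy import stats

aphasia_severity = [2, 7, 5, 9, 3, 6, 8, 1, 4, 7]
apraxia_severity = [6, 2, 8, 4, 5, 9, 1, 7, 3, 5]

r, p = stats.pearsonr(aphasia_severity, apraxia_severity)
print(f"r = {r:.2f}, p = {p:.3f}")   # r near zero: no evidence the two deficits share one system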

By the end of the seventies, experimental neuropsychology had matured to the point where many perceptual,
cognitive, and motor abilities had been associated with particular brain regions, and certain features of the
functional organization of these abilities had been delineated. Accordingly, it was at this time that first editions
of some of the best-known neuropsychology texts appeared, such as those by Hecaen and Albert, Heilman and
Valenstein, Kolb and Whishaw, Springer and Deutsch, and Walsh.

Despite the tremendous progress of this period, experimental neuropsychology remained distinct from and
relatively unknown within academic psychology. Particularly in the United States, but also to a large extent in
Canada and Europe (the three largest contributors to the world's psychology literature), experimental
neuropsychologists tended to work in medical centers rather than university psychology departments and
to publish their work in journals separate from mainstream experimental psychology. An important turning
point in the histories of both neuropsychology and the psychology of normal human function came when
researchers in each area became aware of the other.

THE MARRIAGE OF EXPERIMENTAL NEUROPSYCHOLOGY AND COGNITIVE PSYCHOLOGY

The predominant approach to human experimental psychology in the 1970s was cognitive psychology. The
hallmark of this approach was the assumption that all of cognition (broadly construed to include perception and
motor control) could be viewed as information processing. Although the effects of damage to an information-
processing mechanism might seem to be a good source of clues as to its normal operation, cognitive
psychologists of the seventies were generally quite ignorant of contemporary neuropsychology.

The reason that most cognitive psychologists of the 1970s ignored neuropsychology stemmed from an overly
narrow conception of information processing, based on the digital computer. A basic tenet of cognitive
psychology was the computer analogy for the mind: the mind is to the brain as software is to hardware in a
computer. Given that the same computer can run different programs and the same program can be run on
different computers, this analogy suggests that hardware and software are independent and that the brain is
therefore irrelevant to cognitive psychology. If you want to understand the nature of the program that is the
human mind, studying neuropsychology is as pointless as trying to understand how a computer is programmed
by looking at the circuit boards.

The problem with the computer analogy is that hardware and software are independent only for very special
types of computational systems: those systems that have been engineered, through great effort and
ingenuity, to make the hardware and software independent, enabling one computer to run many programs and
enabling those programs to be portable to other computers. The brain was "designed" by very different
pressures, and there is no reason to believe that, in general, information-processing functions and the
physical substrate of those functions will be independent. In fact, as cognitive psychologists finally began to
learn about neuropsychology, it became apparent that cognitive functions break down in characteristic and highly
informative ways after brain damage. By the early 1980s, cognitive psychology and neuropsychology were finally
in communication with one another. Since then, we have seen an explosion of meetings, books, and new journals
devoted to so-called cognitive neuropsychology. Perhaps more important, existing cognitive psychology journals
have begun to publish neuropsychological studies, and articles in existing neuropsychology and neurology
journals frequently include discussions of the cognitive psychology literature.

Let us take a closer look at the scientific forces that drove this change in disciplinary boundaries. By 1980, both
cognitive psychology and neuropsychology had reached stages of development that were, if not exactly
impasses, points of diminishing returns for the concepts and methods of their own isolated disciplines. In
cognitive psychology, the problem concerned methodologic limitations. By varying stimuli and instructions and
measuring responses and response latencies, cognitive psychologists made inferences about the information
processing that intervened between stimulus and response. But such inferences were indirect, and in some cases
they were incapable of distinguishing between rival theories. In 1978 the cognitive psychologist John Anderson
published an influential paper in which he called this the "identifiability" problem and took as his example the
debate over whether mental images were more like perceptual representations or linguistic representations. He
argued that the field's inability to resolve this issue, despite many years of research, was due to the impossibility
of uniquely identifying internal cognitive processes from stimulus-response relations. He suggested that the
direct study of brain function could, in principle, make a unique identification possible, but he indicated that such
a solution probably lay in the distant future.

That distant future came to pass within the next 10 years, as cognitive psychologists working on a variety of
different topics found that the study of neurologic patients provided a powerful new source of evidence for testing
their theories. In the case of mental imagery, taken by Anderson to be emblematic of the identifiability problem,
the finding that perceptual impairments after brain damage were frequently accompanied by parallel imagery
impairments strongly favored the perceptual hypothesis. The study of learning and memory within cognitive
psychology was revolutionized by the influx of ideas and findings on preserved learning in amnesia, leading to the
hypothesis of multiple memory systems.

In the study of attention, cognitive psychologists had for years focused on the issue of early versus late attentional
selection without achieving a resolution, and here too neurologic disorders were crucial in moving the field
forward. The phenomena of neglect provided dramatic evidence of selection from spatially formatted perceptual
representations, and the variability in neglect manifestations from case to case helped to establish the possibility
of multiple loci for attentional selection as opposed to a single early or late locus. The idea of separate visual
feature maps, supported by cases of acquired color, motion, and depth blindness, provided the inspiration for the
most novel development in recent cognitive theories of attention—namely, feature integration theory.

What did neuropsychology gain from the rapprochement with cognitive psychology? The main benefits were
theoretical rather than methodologic. Traditionally, neuropsychologists studied the localization and functional
organization of abilities, such as speech, reading, memory, object recognition, and so forth. But few would doubt
that each of these abilities depends upon an orchestrated set of component cognitive processes, and it seems far
more likely that the underlying cognitive components, rather than the task-defined abilities, are what is
implemented in localized neural tissue. The theories of cognitive psychology therefore allowed
neuropsychologists to pose questions about the localization and functional organization of the components
of the cognitive architecture, a level of theoretical analysis that was more likely to yield clear and
generalizable findings.

Among patients with reading disorders, for example, some are impaired at reading nonwords (e.g., plif) while
others are impaired at reading irregular words (e.g., yacht). Rather than attempt to localize nonword reading
or irregular word reading per se and delineate them as independent abilities, neuropsychologists have been able
to use a theory of reading developed in cognitive psychology to interpret these disorders in terms of damage
to a whole-word recognition system and a grapheme-to-phoneme translation system, respectively. This
interpretation has the advantage of correctly predicting additional features of patient behavior, such as the
tendency to misread nonwords as words of overall similar appearance when operating with only the whole-
word system.
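
The dual-route account described above can be made concrete with a toy sketch; the tiny lexicon, the letter-to-sound rules, and the function below are invented for illustration and are not the actual model from the cognitive-psychology literature.

# Toy dual-route reading model: a whole-word lexicon plus grapheme-to-phoneme rules.
import difflib

LEXICON = {"yacht": "yot", "have": "hav", "mint": "mint"}          # whole-word route
RULES = {"y": "y", "a": "a", "c": "k", "h": "h", "t": "t", "v": "v",
         "e": "", "m": "m", "i": "i", "n": "n", "p": "p", "l": "l", "f": "f"}  # rule route

def read_aloud(word, lexical_route=True, sublexical_route=True):
    if lexical_route and word in LEXICON:
        return LEXICON[word]                                       # retrieve the stored whole-word form
    if sublexical_route:
        return "".join(RULES.get(ch, "?") for ch in word)          # sound the word out letter by letter
    # Whole-word route only: a nonword is captured by the most similar known word.
    nearest = difflib.get_close_matches(word, list(LEXICON), n=1, cutoff=0.0)
    return LEXICON[nearest[0]] if nearest else "no response"

print(read_aloud("yacht"))                          # intact reader: "yot"
print(read_aloud("yacht", lexical_route=False))     # rule route only: regularized to "yakht"
print(read_aloud("plif"))                           # nonword read by rule: "plif"
print(read_aloud("plif", sublexical_route=False))   # whole-word route only: misread as a similar real word

Lesioning the lexical route in this toy reproduces the regularization errors seen with irregular words, while lesioning the sublexical route reproduces both the failure on nonwords and their tendency to be misread as visually similar real words.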

In recent years the neurology and neuropsychology of every major cognitive system has adopted the theoretical
framework of cognitive psychology in a general way, and in some cases specific theories have been incorporated.
This is reflected in the content and organization of the present book. For the most intensively studied areas of
behavioral neurology and neuropsychology—namely, visual attention, memory, language, frontal lobe function,
and Alzheimer disease—integrated pairs of chapters review the clinical and anatomic aspects of the relevant
disorders and their cognitive theoretical interpretations. Chapters on other topics will cover both the clinical and
theoretical aspects together.

Bibliografie 10
Christian Jarrett – Great Myths of the Brain,
p.1-15; 101-106;
Introduction

“As humans, we can identify galaxies light years away, we can study particles smaller than an atom. But we still haven’t unlocked
the mystery of the three pounds of matter that sits between our
ears.” That was US President Barack Obama speaking in April
2013 at the launch of the multimillion dollar BRAIN Initiative. It
stands for “Brain Research through Advancing Innovative
Neurotechnologies” and the idea is to develop new ways to
visualize the brain in action. The same year the EU announced its
own €1 billion Human Brain Project to create a computer model
of the brain.

This focus on neuroscience isn’t new – back in 1990, US President George H.W. Bush designated the 1990s the “Decade of the
Brain” with a series of public awareness publications and events.
Since then interest and investment in neuroscience has only
grown more intense; some have even spoken of the twenty-first
century as the “Century of the Brain.” Despite our passion for all
things neuro, Obama’s assessment of our current knowledge was accurate. We’ve made great strides in our
understanding of the brain, yet huge mysteries remain. They say a little knowledge can be a dangerous thing and
it is in the context of this excitement and ignorance that brain myths have thrived. By brain myths I mean stories
and misconceptions about the brain and brain-related illness, some so entrenched in everyday talk that large
sections of the population see them as taken-for-granted facts.

With so many misconceptions swirling around, it’s increasingly difficult to tell proper neuroscience from brain
mythology or what one science blogger calls neurobollocks (see neurobollocks.wordpress.com), otherwise known
as neurohype, neurobunk, neurotrash, or neurononsense. Daily newspaper headlines tell us the “brain spot” for
this or that emotion has been identified. Salesmen are capitalizing on the fashion for brain science by placing
the neuro prefix in front of any activity you can think of, from neuroleadership to neuromarketing (see p. 188).
Fringe therapists and self-help gurus borrow freely from neuroscience jargon, spreading a confusing mix of brain
myths and self-improvement propaganda.

In 2014, a journalist and over-enthusiastic neuroscientist even attempted to explain the Iranian nuclear negotiations
(occurring at that time) in terms of basic brain science. Writing in The Atlantic, the authors actually made some
excellent points, especially in terms of historical events and people’s perceptions of fairness. But they undermined
their own credibility by labeling these psychological and historical insights as neuroscience, or by gratuitously
referencing the brain. It’s as if the authors drank brain soup before writing their article, and just as they were
making an interesting historical or political point, they hiccupped out another nonsense neuro reference. This book
takes you on a tour of the most popular, enduring and dangerous of brain myths and misconceptions, from the
widely accepted notion that we use just 10 percent of our brains, to more specific and harmful misunderstandings
about brain illnesses, such as the mistaken idea that you should place an object in the mouth of a person having
an epileptic fit to stop them from swallowing their tongue. I’ll show you examples of writers, filmmakers, and
charlatans spreading brain myths in newspaper headlines and the latest movies. I’ll investigate the myths’ origins
and do my best to use the latest scientific consensus to explain the truth about how the brain really works.

The Urgent Need for Neuro Myth-Busting

When Sanne Dekker at the Vrije Universiteit in Amsterdam and her colleagues surveyed hundreds of British and
Dutch teachers recently about common brain myths pertaining to education, their results were alarming. The
teachers endorsed around half of 15 neuromyths embedded among 32 statements about the brain. What’s more,
these weren’t just any teachers. They were teachers recruited to the survey because they had a particular interest
in using neuroscience to improve teaching. Among the myths the teachers endorsed were the idea that there
are left-brain and right-brain learners and that physical coordination exercises can improve the integration of
function between the brain hemispheres. Worryingly, myths related to quack brain-based teaching were
especially likely to be endorsed by the teachers. Most disconcerting of all, greater general knowledge about the
brain was associated with stronger belief in educational neuromyths – another indication that a little brain
knowledge can be a dangerous thing.

If the people educating the next generation are seduced by brain myths, it’s a sure sign that we need to do more to
improve the public’s understanding of the difference between neurobunk and real neuroscience. Still further
reason to tackle brain myths head on comes from research showing that presenting people, including
psychology students, with correct brain information is not enough – many still endorse the 10 percent myth
and others. Instead what’s needed is a “refutational approach” that first details brain myths and then debunks
them, which is the format I’ll follow through much of this book. Patricia Kowalski and Annette Taylor at the
University of San Diego compared the two teaching approaches in a 2009 study with 65 undergraduate psychology
students. They found that directly refuting brain and psychology myths, compared with simply presenting
accurate facts, significantly improved the students’ performance on a test of psychology facts and fiction at
the end of the semester. Post-semester performance for all students had improved by 34.3 percent, compared
with 53.7 percent for those taught by the refutational approach.

Yet another reason it’s important we get myth-busting is the media’s treatment of neuroscience. When Clíodhna
O'Connor at UCL’s Division of Psychology and Language Sciences, and her colleagues analyzed UK press coverage
of brain research from 2000 to 2010, they found that newspapers frequently misappropriated new neuroscience
findings to bolster their own agenda, often perpetuating brain myths in the process (we’ll see through examples
later in this book that the US press is guilty of spreading neuromyths too). From analyzing thousands of news
articles about the brain, O’Connor found a frequent habit was for journalists to use a fresh neuroscience finding as
the basis for generating new brain myths – dubious self improvement or parenting advice, say, or an alarmist
health warning. Another theme was using neuroscience to bolster group differences, for example, by referring to
“the female brain” or “the gay brain,” as if all people fitting that identity have the same kind of brain.
“[Neuroscience] research was being applied out of context to create dramatic headlines, push thinly disguised
ideological arguments, or support particular policy agendas,” O’Connor and her colleagues concluded.

The Need for Humility

To debunk misconceptions about the brain and present the truth about how the brain really works, I've pored over hundreds of journal articles, consulted the latest reference books and in some cases made
direct contact with the world’s leading experts. I have strived to be as objective as possible, to review the evidence
without a pre-existing agenda. However, anyone who spends time researching brain myths soon discovers that
many of today’s myths were yesterday’s facts. I am presenting you with an account based on the latest
contemporary evidence, but I do so with humility, aware that the facts may change and that people make mistakes.
While the scientific consensus may evolve, what is timeless is to have a skeptical, open-minded approach, to
judge claims on the balance of evidence, and to seek out the truth for its own sake, not in the service of some
other agenda.

Before finishing this Introduction with a primer on basic brain anatomy, I’d like to share with you a contemporary
example of the need for caution and humility in the field of brain mythology. Often myths arise because a single
claim or research finding has particular intuitive appeal. The claim makes sense, it supports a popular argument,
and soon it is cemented as taken-for-granted fact even though its evidence base is weak. This is exactly what
happened in recent years with the popular idea, accepted and spread by many leading neuroscientists, that colorful
images from brain scans are unusually persuasive and beguiling. Yet new evidence suggests this is a modern brain
myth. Two researchers in this area, Martha Farah and Cayce Hook, call this irony the “seductive allure of ‘seductive
allure.’” Brain scan images have been described as seductive since at least the 1990s and today virtually every
cultural commentary on neuroscience mentions the idea that they paralyze our usual powers of rational scrutiny.

Consider an otherwise brilliant essay that psychologist Gary Marcus wrote for the New Yorker late in 2012 about the
rise of neuroimaging: “Fancy color pictures of brains in action became a fixture in media accounts of the human
mind and lulled people into a false sense of comprehension,” he said. Earlier in the year, Steven Poole writing for
the New Statesman put it this way: “the [fMRI] pictures, like religious icons, inspire uncritical devotion.”

What’s the evidence for the seductive power of brain images? It mostly hinges on two key studies. In 2008, David
McCabe and Alan Castel showed that undergraduate participants found the conclusions of a study (watching TV
boosts maths ability) more convincing when accompanied by an fMRI brain scan image than by a bar chart or an
EEG scan. The same year, Deena Weisberg and her colleagues published evidence that naïve adults and
neuroscience students found bad psychological explanations more satisfying when they contained gratuitous
neuroscience information (their paper was titled “The Seductive Allure of Neuroscience Explanations”).

What’s the evidence against the seductive power of brain images? First off, Farah and Hook criticize the 2008
McCabe study. McCabe’s group claimed that the different image types were “informationally equivalent,” but
Farah and Hook point out this isn’t true – the fMRI brain scan images are unique in providing the specific shape and
location of activation in the temporal lobe, which was relevant information for judging the study.

Next came a study published in 2012 by David Gruber and Jacob Dickerson, who found that the presence of brain
images did not affect students’ ratings of the credibility of science news stories. Was this failure to replicate the
seductive allure of brain scans an anomaly? Far from it. Through 2013 no fewer than three further investigations
found the same or a similar null result. This included a paper by Hook and Farah themselves, involving 988
participants across three experiments; and another led by Robert Michael involving 10 separate replication
attempts and nearly 2000 participants. Overall, Michael’s team found that the presence of a brain scan had only
a tiny effect on people’s belief in an accompanying story. The result shows “the ‘amazingly persistent meme of the
overly influential image’ has been wildly overstated,” they concluded. So why have so many of us been seduced by
the idea that brain scan images are powerfully seductive? Farah and Hook say the idea supports non-scanning
psychologists’ anxieties about brain scan research stealing all the funding. Perhaps above all, it just seems so
plausible. Brain scan images really are rather pretty, and the story that they have a powerful persuasive effect is
very believable. Believable, but quite possibly wrong.

Brain scans may be beautiful but the latest evidence suggests they aren’t as beguiling as we once assumed.
It’s a reminder that in being skeptical about neuroscience we must be careful not to create new brain myths of our
own.

Arm Yourself against Neurobunk

This book will guide you through many of the most popular and pervasive neuromyths but more are appearing every
day. To help you tell fact from fiction when encountering brain stories in the news or on TV, here are six simple tips
to follow:

1.Look out for gratuitous neuro references. Just because someone mentions the brain it doesn’t necessarily make
their argument more valid. Writing in The Guardian in 2013, clinical neuropsychologist Vaughan Bell called out a
politician who claimed recently that unemployment is a problem because it has “physical effects on the brain,” as
if it isn’t an important enough issue already for social and practical reasons. This is an example of the mistaken
idea that a neurological reference somehow lends greater authority to an argument, or makes a societal or
behavioral problem somehow more real. You’re also likely to encounter newspaper stories that claim a particular
product or activity really is enjoyable or addictive or harmful because of a brain scan study showing the activation
of reward pathways or some other brain change. Anytime someone is trying to convince you of something, ask
yourself – does the brain reference add anything to what we already knew? Does it really make the argument
more truthful?

2.Look for conflicts of interest. Many of the most outrageous and farfetched brain stories are spread by people
with an agenda. Perhaps they have a book to sell or they’re marketing a new form of training or therapy. A common
tactic used by these people is to invoke the brain to shore up their claims. Popular themes include the idea that
technology or other aspects of modern life are changing the brain in a harmful way, or the opposite – that some
new form of training or therapy leads to real, permanent beneficial brain changes. Often these kinds of brain claims
are mere conjecture, sometimes even from the mouths of neuroscientists or psychologists speaking outside their
own area of specialism. Look for independent opinion from experts who don’t have a vested interest. And check
whether brain claims are backed by quality peer-reviewed evidence (see point 5). Most science journals require
authors to declare conflicts of interest so check for this at the end of relevant published papers.

3.Watch out for grandiose claims. No Lie MRI is a US company that offers brain scan-based lie detection services.
Its home page states, “The technology used by No Lie MRI represents the first and only direct measure of truth
verification and lie detection in human history!” Sound too good to be true? If it does, it probably is.

Words like “revolutionary,” “permanent,” “first ever,” “unlock,” “hidden,” “within seconds,” should all set
alarm bells ringing when uttered in relation to the brain. One check you can perform is to look at the career of the
person making the claims. If they say they’ve developed a revolutionary new brain technique that will for the first
time unlock your hidden potential within seconds, ask yourself why they haven’t applied it to themselves and
become a best-selling artist, Nobel winning scientist, or Olympic athlete.

4.Beware of seductive metaphors. We’d all like to have balance and calm in our lives but this abstract sense of
balance has nothing to do with the literal balance of activity across the two brain hemispheres (see also p. 196)
or other levels of neural function. This doesn’t stop some self-help gurus invoking concepts like “hemispheric
balance” so as to lend a scientific sheen to their lifestyle tips – as if the route to balanced work schedules is having
a balanced brain. Any time that someone attempts to link a metaphorical concept (e.g. deep thinking) with actual
brain activity (e.g. in deep brain areas), it’s highly likely they’re talking rubbish. Also, beware references to
completely made up brain areas. In February 2013, for instance, the Daily Mail reported on research by a German
neurologist who they said had discovered a tell-tale “dark patch” in the “central lobe” of the brains of killers and
rapists. The thing is, there is no such thing as a central lobe!

5.Learn to recognize quality research. Ignore spin and take first-hand testimonials with a pinch of salt. When it
comes to testing the efficacy of brain-based interventions, the gold standard is the randomized, double-blind,
placebo-controlled trial. This means the recipients of the intervention don’t know whether they’ve received the
target intervention or a placebo (a form of inert treatment such as a sugar pill), and the researchers also don’t know
who’s been allocated to which condition. This helps stop motivation, expectation, and bias from creeping into the
results. Related to this, it’s important for the control group to do something that appears like a real intervention,
even though it isn’t. Many trials fail to ensure this is the case. The most robust evidence to look for in relation to
brain claims is the meta-analysis, so try to search for these if you can. They weigh up all the evidence from existing
trials in a given area and help provide an accurate picture of whether a treatment really works or whether a stated
difference really exists.

6.Recognize the difference between causation and correlation (a point I’ll come back to in relation to mirror
neurons in Chapter 5). Many newspaper stories about brain findings actually refer to correlational studies that only
show a single snapshot in time. “People who do more of activity X have a larger brain area Y,” the story might say.
But if the study was correlational we don’t know that the activity caused the larger brain area. The causal
direction could run the other way (people with a larger Y like to do activity X), or some other factor might influence
both X and Y. Trustworthy scientific articles or news stories should draw attention to this limitation and any others.
Indeed, authors who only focus on the evidence that supports their initial hypotheses or beliefs are falling prey to
what’s known as “confirmation bias.” This is a very human tendency, but it’s one that scrupulous scientists and
journalists should deliberately work against in the pursuit of the truth.
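
As a minimal illustration of that last point, the short simulation below (invented numbers, plain Python, no real data) shows how an unmeasured third variable can produce a sizeable correlation between an activity X and a brain measure Y even though neither one influences the other.

# Simulated illustration: X and Y are each driven by a shared confound, never by each other.
import random

random.seed(1)
confound   = [random.gauss(0, 1) for _ in range(1000)]    # an unmeasured third factor
activity_x = [c + random.gauss(0, 1) for c in confound]
brain_y    = [c + random.gauss(0, 1) for c in confound]

def pearson_r(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov  = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

print(f"r(X, Y) = {pearson_r(activity_x, brain_y):.2f}")  # around 0.5, with no causal link at all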

Arming yourself with these six tips will help you tell the difference between a genuine neuroscientist and a
charlatan, and between a considered brain-based news story and hype. If you’re still unsure about a recent
development, you could always look to see if any of the following entertaining expert skeptical bloggers have
shared their views:

www.mindhacks.com - Vaughan Bell (Twitter)

https://www.discovermagazine.com/blog/neuroskeptic - NeuroSkeptic (Twitter)

https://neurocritic.blogspot.com/ - NeuroCritic (Twitter)

http://neurobollocks.wordpress.com – Abandoned since 2015, but still great content on it.

A Primer on Basic Brain Anatomy, Techniques, and Terminology

Hold a human brain in your hands and the first thing you notice is its impressive heaviness. Weighing about three
pounds, the brain feels dense. You also see immediately that there is a distinct groove – the longitudinal fissure –
running front to back and dividing the brain into two halves known as hemispheres. Deep within the brain, the two
hemispheres are joined by the corpus callosum, a thick bundle of connective fibers. The spongy, visible outer layer
of the hemispheres – the cerebral cortex (meaning literally rind or bark) – has a crinkled appearance: a swathe of
swirling hills and valleys, referred to anatomically as gyri and sulci, respectively. The cortex is divided into five
distinct lobes: the frontal lobe, the parietal lobe near the crown of the head, the two temporal lobes at each side
near the ears, and the occipital lobe at the rear. Each lobe is associated with particular domains of mental function.
For instance, the frontal lobe is known to be important for self-control and movement; the parietal lobe for
processing touch and controlling attention; and the occipital lobe is involved in early visual processing. The extent
to which mental functions are localized to specific brain regions has been a matter of debate throughout
neurological history and continues to this day.
Hanging off the back of the brain is the cauliflower-like cerebellum, which almost looks like another mini-brain (in
fact cerebellum means “little brain”). It too is made up of two distinct hemispheres, and remarkably it contains
the majority of the neurons in the central nervous system despite constituting just 10 percent of the brain’s volume.
Traditionally the cerebellum was associated only with learning and motor control (i.e. control of the body’s
movements), but today it is known to be involved in many functions, including emotion, language, pain, and
memory. Holding the brain aloft to study its underside, you see the brain stem sprouting downwards, which would
normally be connected to the spinal cord. The brain stem also projects upwards into the interior of the brain to a
point approximately level with the eyes. Containing distinct regions such as the medulla and pons, the brain stem
is associated with basic life support functions, including control of breathing and heart rate. Reflexes like sneezing
and vomiting are also controlled here. Some commentators refer to the brain stem as “the lizard brain” but this is
a misnomer. Slice the brain into two to study the inner anatomy and you discover that there are a series of fluid-
filled hollows, known as ventricles, which act as a shock-absorption system. You can also see the midbrain that sits
atop the brainstem and plays a part in functions such as eye movements. Above and anterior to the midbrain is the
thalamus – a vital relay station that receives connections from, and connects to, many other brain areas.
Underneath the thalamus is the hypothalamus and pituitary gland, which are involved in the release of hormones
and the regulation of basic needs such as hunger and sexual desire.

Also buried deep in the brain and connected to the thalamus are the hornlike basal ganglia, which are involved in
learning, emotions, and the control of movement. Nearby we also find, one on each side of the brain, the
hippocampi (singular hippocampus) – hippocampus is the Greek word for “sea-horse,” which is what early anatomists believed the
structure resembled. Here too are the almond-shaped amygdala, again one on each side. The hippocampus plays a vital role
in memory and the amygdala is important for memory and learning, especially when emotions are involved. The
collective name for the hippocampus, amygdala, and related parts of the cortex is the limbic system, which is an
important functional network for the emotions.

The brain’s awesome complexity is largely invisible to the naked eye. Within its spongy bulk are approximately 86
billion neurons forming a staggering 100 trillion plus connections. There are also a similar number of glial cells,
which recent research suggests are not mere housekeepers, as was once believed, but are also involved in
information processing. However, we should be careful not to get too reverential about the brain’s construction –
it’s not a perfect design by any means (more about this on p. 135). In the cortex, neurons are arranged into layers,
each containing different types and density of neuron. The popular term for brains – “gray matter” – comes from
the anatomical name for tissue that is mostly made up of neuronal cell bodies. The cerebral cortex is entirely made
up of gray matter, although it looks more pinkish than gray, at least when fresh. This is in contrast to “white matter”
– found in abundance beneath the cortex – which is tissue made up mostly of fat-covered neuronal axons (axons
are a tendril-like part of the neuron that is important for communicating with other neurons). It is the fat-covered
axons that give rise to the whitish appearance of white matter. Neurons communicate with each other across small
gaps called synapses. This is where a chemical messenger (a “neurotransmitter”) is released at the end of the axon
of one neuron, and then absorbed into the dendrite (a branch-like structure) of a receiving neuron. Neurons release
neurotransmitters in this way when they are sufficiently excited by other neurons. Enough excitation causes an
“action potential,” which is when a spike of electrical activity passes the length of the neuron, eventually leading it
to release neurotransmitters. In turn these neurotransmitters can excite or inhibit receiving neurons. They can also
cause slower, longer-lasting changes, for example by altering gene function in the receiving neuron.

Traditionally, insight into the function of different neural areas was derived from research on brain-damaged patients. Significant
advances were made in this way in the nineteenth century, such as the observation that, in most people, language
function is dominated by the left hemisphere. Some patients, such as the railway worker Phineas Gage, have had a
particularly influential effect on the field. The study of particular associations of impairment and brain damage also
remains an important line of brain research to this day. A major difference between modern and historic research
of this kind is that today we can use medical scanning to identify where the brain has been damaged. Before such
technology was available, researchers had to wait until a person had died to perform an autopsy.
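As a rough illustration of the "enough excitation causes an action potential" idea, here is a minimal leaky integrate-and-fire simulation, a standard textbook toy model rather than anything specific to this primer; all parameter values are arbitrary but in a plausible range.

```python
# Leaky integrate-and-fire toy neuron: weak input leaks away, strong input
# drives the membrane potential past threshold and produces spikes.
import numpy as np

dt, t_max = 0.1, 100.0                                        # time step and duration (ms)
tau, v_rest, v_thresh, v_reset = 10.0, -70.0, -55.0, -75.0    # membrane parameters (ms, mV)

times = np.arange(0.0, t_max, dt)
input_current = np.where(times > 20.0, 2.0, 0.5)              # weak drive, then strong excitation

v, spike_times = v_rest, []
for t, i_in in zip(times, input_current):
    dv = (-(v - v_rest) + i_in * 10.0) / tau                  # leak toward rest plus driving input
    v += dv * dt
    if v >= v_thresh:                                         # enough excitation -> "action potential"
        spike_times.append(t)
        v = v_reset                                           # reset after the spike

print(f"{len(spike_times)} spikes; first at t = {spike_times[0]:.1f} ms")
```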

Modern brain imaging methods are used not only to examine the structure of the brain, but also to watch how it
functions. It is in our understanding of brain function that the most exciting findings and controversies are emerging
in modern neuroscience. Today the method used most widely in research of this kind, involving patients and healthy
people, is called functional magnetic resonance imaging (fMRI). The technique exploits the fact that blood is more
oxygenated in highly active parts of the brain. By comparing changes to the oxygenation of the blood throughout
the brain, fMRI can be used to visualize which brain areas are working harder than others. Furthermore, by carefully
monitoring such changes while participants perform controlled tasks in the brain scanner, fMRI can help build a
map of what parts of the brain are involved in different mental functions. Other forms of brain scanning include
Positron Emission Tomography (PET) and Single-Photon Emission Computed Tomography (SPECT), both of which involve
injecting the patient or research participant with a radioactive isotope. Yet another form of imaging called Diffusion
Tensor Imaging (DTI) is based on the passage of water molecules through neural tissue and is used to map the
brain’s connective pathways. DTI produces beautifully complex, colorful wiring diagrams.

The Human Connectome Project, launched in 2009, aims to map all 600 trillion wires in the human brain. An older
brain imaging technique, first used with humans in the 1920s, is electroencephalography (EEG), which involves
monitoring waves of electrical activity via electrodes placed on the scalp. The technique is still used widely in
hospitals and research labs today. The spatial resolution is poor compared with more modern methods such as
fMRI, but an advantage is that fluctuations in activity can be detected at the level of milliseconds (versus seconds
for fMRI). A more recently developed technique that shares the high temporal resolution of EEG is known as
magnetoencephalography (MEG), but it too suffers from a lack of spatial resolution. Brain imaging is not the only
way that contemporary researchers investigate the human brain. Another approach that’s increased hugely in
popularity in recent years is known as transcranial magnetic stimulation (TMS). It involves placing a magnetic coil
over a region of the head, which has the effect of temporarily disrupting neural activity in brain areas beneath that
spot. This method can be used to create what’s called a “virtual lesion” in the brain. This way, researchers can
temporarily knock out functioning in a specific brain area and then look to see what effect this has on mental
functioning. Whereas fMRI shows where brain activity correlates with mental function, TMS has the advantage of
being able to show whether activity in a particular area is necessary for that mental functioning to occur. The
techniques I’ve mentioned so far can all be used in humans and animals.
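The millisecond-scale strength of EEG is easy to demonstrate with simulated data: a 10 Hz "alpha" rhythm buried in noise is recovered from the frequency spectrum of a signal sampled at 250 Hz. This is an invented example, not tied to any particular recording system.

```python
# Recovering a hidden 10 Hz rhythm from a noisy, EEG-like signal via the FFT.
import numpy as np

fs = 250                                    # samples per second
t = np.arange(0, 10, 1 / fs)                # 10 seconds of "recording"
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.5, size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC component
print(f"dominant rhythm at ~{peak:.1f} Hz")
```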

There is also a great deal of brain research that is only (or most often) conducted in animals. This research involves
techniques that are usually deemed too invasive for humans. For example, a significant amount of research with
monkeys and other nonhuman primates involves inserting electrodes into the brain and recording the activity
directly from specific neurons (called single-cell recording). Only rarely is this approach used with humans, for
example, during neurosurgery for severe epilepsy. The direct insertion of electrodes and cannulas into animal brains
can also be used to monitor and alter levels of brain chemicals at highly localized sites. Another ground-breaking
technique that’s currently used in animal research is known as optogenetics. Named 2010 “method of the year” by
the journal Nature Methods, optogenetics involves inserting light-sensitive genes into neurons. These individual
neurons can then be switched on and off by exposing them to different colors of light. New methods for
investigating the brain are being developed all the time, and innovations in the field will accelerate in the next few
years thanks to the launch of the US BRAIN Initiative and the EU Human Brain Project. As I was putting the
finishing touches to this book, the White House announced a proposal to double its investment in the BRAIN
Initiative “from about $100 million in FY [financial year] 2014 to approximately $200 million in FY 2015.”

Myth 18 – The Brain is a Computer

“We have in our head a remarkably powerful computer” (Daniel Kahneman)

An alien landing on Earth today might well conclude that we consider ourselves to be robots. Computer metaphors
are everywhere: in popular psychology books; in the self-help literature (“your mind is an operating system,” says
Dragoș Rouă, “do you run the best version of it?”); and in novels too, as in this example from The Unsanctioned by
Michael Lamke: “It had become a habit of his when deeply troubled to clear his mind of everything in an effort to
let his brain defragment the jumbled bits and pieces into a more organized format.”

The popularity of the mind-as-computer metaphor has to do with the way psychology developed through the last
century. Early on, the dominant “behaviorist” approach in psychology outlawed speculation about the inner
workings of the mind. Psychologists like John Watson in 1913 and Albert Weiss during the following decade argued
the nascent science of psychology should instead concern itself only with outwardly observable and measurable
behavior.

But then in the 1950s, the so-called Cognitive Revolution began, inspired in large part by innovations in computing
and artificial intelligence. Pioneers in the field rejected the constraints of behaviorism and turned their attention to
our inner mental processes, often invoking computer metaphors along the way. In his 1967 book, Cognitive
Psychology, which is credited by some with naming the field, Ulric Neisser wrote: “the task of trying to understand
human cognition is analogous to that of … trying to understand how a computer has been programmed.” Writing in
1980, the British psychologist Alan Allport was unequivocal. “The advent of Artificial
Intelligence,” he said, “is the single most important development in the history of psychology.”

Where past generations likened the brain to a steam engine or a telephone exchange, psychologists today, and
often the general public too, frequently invoke computer-based terminology when describing mental processes. A
particularly popular metaphor is to talk of the mind as software that runs on the hardware of the brain. Skills are
said to be “hard-wired.” The senses are “inputs” and behaviors are the “outputs.” When someone modifies an
action or their speech on the fly, they are said to have performed the process “online.” Researchers interested
in the way we control our bodies talk about “feedback loops.” Eye-movement experts say the jerky saccadic eye
movements performed while we read are “ballistic,” in the sense that their trajectory is “pre-programmed” in
advance, like a rocket. Memory researchers use terms like “capacity,” “processing speed” and “resource limitations”
as if they were talking about a computer. There’s even a popular book to which I contributed, called Mind Hacks,
about using self-experimentation to understand your own brain.

Is the Brain Really a Computer?

The answer to that question depends on how literal we’re being, and what exactly we mean by a computer. Of
course, the brain is not literally made up of transistors, plastic wires, and motherboards. But ultimately both the
brain and the computer are processors of information. This is an old idea. Writing in the seventeenth century, the
English philosopher Thomas Hobbes said “Reason … is nothing but reckoning, that is adding and subtracting.” As
Steven Pinker explains in his book The Blank Slate (2002), the computational theory of mind doesn’t claim that the
mind is a computer, “it says only that we can explain minds and human-made information processors using some
of the same principles.”

Although some scholars find it useful to liken the mind to a computer, critics of the computational approach
have argued that there’s a deal-breaker of a difference between humans and computers. We think, computers
don’t. In his famous Chinese Room analogy published in 1980, the philosopher John Searle asked us to imagine a
man in a sealed room receiving Chinese communications slipped through the door. The man knows no Chinese but
he has instructions on how to process the Chinese symbols and how to provide the appropriate responses to them,
which he does. The Chinese people outside the room will have the impression they are interacting with a Chinese
speaker, but in fact the man has no clue about the meaning of the communication he has just engaged in. Searle’s
point was that the man is like a computer – outwardly both give the appearance of understanding, but in
fact neither knows anything or has any sense of the meaning of what it is doing.

Another critic of computational approaches to the mind is the philosopher and medic Ray Tallis, author of Why The
Mind is Not a Computer (2004). Echoing Searle, Tallis points out that although it’s claimed that computers, like
minds, are both essentially manipulators of symbols, these symbols only actually have meaning to a person who
understands them. We anthropomorphize computers, Tallis says, by describing them as “doing calculations” or
“detecting signals,” and then we apply that same kind of language inappropriately to the neurobiological processes
in the brain. “Computers are only prostheses; they no more do calculations than clocks tell the time,” Tallis wrote
in a 2008 paper. “Clocks help us to tell the time, but they don’t do it by themselves.”

These criticisms of the computer metaphor are all arguably rather philosophical in nature. Other commentators
have pointed out some important technical differences between computers and brains. On his popular Developing
Intelligence blog, Chris Chatham outlines 10 key differences, including the fact that brains are analog whereas
computers are binary. That is, whereas computer transistors are either on or off, the neurons of the brain can
vary their rate of firing and their likelihood of firing based on the inputs they receive from other neurons.

Chatham also highlights the fact that brains have bodies, computers don’t. This is particularly significant in light of
the new field of embodied cognition, which is revealing the ways our bodies affect our thoughts. For example,
washing our hands can alter our moral judgments; feeling warm can affect our take on a person’s character;
and the weight of a book can influence our judgment of its importance (see p. 164). The opportunity to make
hand gestures even helps children learn new math strategies. In each case, it’s tricky to imagine what the
equivalent of these phenomena would be in a computer.

Memory provides another useful example of how, even on a point of similarity, brains do things differently from
computers. Although we and they both store information and retrieve it, we humans do it in a different way from
computers. Our digital friends use what psychologist Gary Marcus calls a “postal-code” system – every piece of
stored information has its own unique address and can therefore be sought out with almost perfect reliability. By
contrast, we have no idea of the precise location of our memories. Our mental vaults work more according to
context and availability. Specific names and dates frequently elude us, but we often remember the gist – for
example, what a person looked like and did for a living, even if we can’t quite pin down his or her name.
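The contrast can be made concrete with a toy example (the names and "features" are invented): an exact key lookup is the "postal-code" scheme, while retrieval by similarity to a partial description behaves more like recall by gist.

```python
# "Postal-code" retrieval needs the exact key; content-addressable retrieval
# finds the closest stored item from a partial, noisy description.
import numpy as np

people = {                                   # features: glasses, beard, doctor, tall, red hair
    "Ana":   np.array([1, 0, 1, 1, 0]),
    "Mihai": np.array([0, 1, 0, 1, 0]),
    "Ioana": np.array([1, 0, 0, 0, 1]),
}

print("Anna" in people)                      # False: one letter off and exact lookup fails

cue = np.array([1, 0, 1, 0, 0])              # partial description: "wears glasses, is a doctor"
def similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

best = max(people, key=lambda name: similarity(people[name], cue))
print(f"best match for the partial description: {best}")   # Ana
```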

Myth: The Computational Theory of the Mind Has Served No Benefit

So, there are important differences between computers and brains, and these differences help explain why artificial
intelligence researchers frequently run into difficulties when trying to simulate abilities in robots that we humans
find easy – such as recognizing faces or catching a Frisbee. But just because our brains are not the same as
computers doesn’t mean that the computer analogy and computational approaches to the mind aren’t useful.
Indeed, when a computer program fails to emulate a feat of human cognition, this suggests to us that the brain
must be using some other method that’s quite different from how the computer has been programmed.

Some of these insights are general – as we learned with memory, brains often take context into account a lot more
than computer programs do, and the brain approach is often highly flexible, able to cope with missing or poor
quality information. Increasingly, attempts to simulate human cognition try to factor in this adaptability using so-
called “neural networks” (inspired by the connections between neurons in the brain), which can “learn” based on
feedback regarding whether they got a previous challenge right or wrong.
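A minimal sketch of learning from feedback, a single perceptron-style unit trained with the delta rule on the logical AND problem, shows the basic idea; it is purely didactic and far simpler than the networks used in cognitive modeling.

```python
# One artificial unit learns the logical AND from right/wrong feedback.
import numpy as np

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(3)
weights, bias, lr = rng.normal(scale=0.1, size=2), 0.0, 0.2

for _ in range(50):                          # repeated feedback gradually corrects the weights
    for x, t in zip(inputs, targets):
        output = 1.0 if x @ weights + bias > 0 else 0.0
        error = t - output                   # feedback: was the previous answer right or wrong?
        weights += lr * error * x
        bias += lr * error

print([int(x @ weights + bias > 0) for x in inputs])   # expected: [0, 0, 0, 1]
```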

In their article for The Psychologist magazine, “What computers have shown us about the mind” Padraic Monaghan
and his colleagues provided examples of insights that have come from attempting to simulate human cognition in
computer models. In the case of reading, for example, computer models that are based on the statistical properties
of a word being pronounced one way rather than another, are better able to simulate the reading of irregular words
than are computer models based on fixed rules of pronunciation. Deliberately impairing computer models running
these kinds of statistical strategies leads to dyslexia-like performance by the computer, which has led to novel clues
into that condition in humans.

Other insights from attempts to model human cognition with computers include: a greater understanding of the
way we form abstract representations of faces, based on an averaging of a face from different angles and in
different conditions; how, with age, children change the details they pay attention to when categorizing objects;
and the way factual knowledge is stored in the brain in a distributed fashion throughout networks of neurons, thus
explaining the emerging pattern of deficits seen in patients with semantic dementia – a neurodegenerative disease
that leads to problems finding words and categorizing objects. Typically rare words are lost first (e.g. the names for
rare birds). This is followed by an inability to distinguish between types of a given category (e.g. all birds are
eventually labeled simply as bird rather than by their species), as if the patient’s concepts are becoming
progressively fuzzy.

There’s no question that attempts to model human cognition using computers have been hugely informative
for psychology and neuroscience. As Monaghan and his co-authors concluded: “computer models … have provided
enormous insight into the way the human mind processes information.” But there is clearly debate among scholars
about how useful the metaphor is and how far it stretches. The philosopher Daniel Dennett summed up the
situation well. “The brain’s a computer,” he said recently, “but it’s so different from any computer that you’re used to.
It’s not like your desktop or your laptop at all.”

Will We Ever Build a Computer Simulation of the Brain?

“It is not impossible to build a human brain and we can do it in 10 years” said the South African-born neuroscientist
Henry Markram during his TEDGlobal talk in 2009. Fast forward four years and Markram’s ambitious ten-year
Human Brain Project was the successful winner of over €1 billion in funding from the EU. The intention is to build a
computer model of the human brain from the bottom up, beginning at the microscopic level of ion channels in
individual neurons.

The project was born out of Markram’s Blue Brain Project, based at the Brain and Mind Institute of the École
Polytechnique Fédérale de Lausanne, which in 2006 successfully modeled a part of the rat’s cortex made up of
around 10 000 neurons. The Human Brain Project aims to accumulate masses of data from around the world and
use the latest supercomputers to do the same thing, but for an entire human brain. One hoped-for practical
outcome is that this will allow endless simulations of brain disorders, in turn leading to new preventative and
treatment approaches. Entering the realms of sci-fi, Markram has also speculated that the final version of his
simulation could achieve consciousness.

Experts are divided as to the credibility of the aims of the Human Brain Project. Among the doubters is
neuroscientist Moshe Abeles at Bar-Ilan University in Israel. He told Wired magazine: “Our ability to understand all
the details of even one brain is practically zero. Therefore, the claim that accumulating more and more data will lead to
understanding how the brain works is hopeless.” However, other experts are more hopeful, even if rather skeptical
about the ten-year time frame. Also quoted in Wired is British computer engineer Steve Furber. “There aren’t any
aspects of Henry’s vision I find problematic,” he said. “Except perhaps his ambition, which is at the same time both
terrifying and necessary.”

Course 2 – Research and Diagnostic Methods in Neuroscience
Video bibliography

All the blue passages in the text are clickable links (and can, heaven forbid, even be read as extra material)
https://www.youtube.com/watch?v=f_hxX_xvHQY – Suzanne Stensas – Neuroanatomy Video Lab – Topographic terminology. Orientation in space.

https://www.youtube.com/watch?v=Zj3RxtJ_Ljc – Allen Institute – Magnifying the human brain

https://www.youtube.com/watch?v=WAkaLEM0gTQ – Allen Institute – Virtual tour: electron microscopy

https://www.youtube.com/watch?v=W1DPfZk5iF8 – Allen Institute – Artificial intelligence in microscopy

https://www.youtube.com/watch?v=m0rHZ_RDdyQ – USC Stevens Neuroimaging and Neuroinformatics Institute – Neurons and synapses

https://www.youtube.com/watch?v=tZcKT4l_JZk – 2 Minute Neuroscience – Electroencephalography (EEG)

https://www.youtube.com/watch?v=86zIa3pGM50 – Jonathan Mayhew – EEG and action potentials

https://www.youtube.com/watch?v=iXXxL0EOJqs – BPM Biosignals – Visual evoked potentials

https://www.youtube.com/watch?v=u50HPRe3rOY – Translational Brain Mapping Program – Mapping language during an awake neurosurgical procedure

https://www.youtube.com/watch?v=N2apCx1rlIQ – 2 Minute Neuroscience – Neuroimaging

https://www.youtube.com/watch?v=eFmKUQSKDXc – Michael Posner on research methods in cognitive neuroscience

https://www.youtube.com/watch?v=rgftzF6MwpI – Karl Friston – Neuroimaging

https://www.youtube.com/watch?v=JJyf2lvB-Ps – Johns Hopkins Medicine – Diagnostic cerebral angiography

https://www.youtube.com/watch?v=BLfwZ1NPNKY – Alt Shift X – Magnetoencephalography – measuring neural activity with magnetism

https://www.youtube.com/watch?v=kgInT8hbDuQ&t – Futurescape – Recording the brain with MEG

https://www.youtube.com/watch?v=Hx7_SlvWmEs – Wellcome Trust – Building a new kind of magnetoencephalograph

https://www.youtube.com/watch?v=qabTdk928eQ – Doug Lake – The difference between a CT scan and an MRI scan

https://www.youtube.com/watch?v=RXDAXdHANUI – Jerome Maller – How does an MRI scanner work?

https://www.youtube.com/watch?v=BmQR57V5TVU – University of Melbourne – Introduction to functional MRI

https://www.youtube.com/watch?v=zHb6rBMIhp0&t – Christopher Hess – The physical principles of neuroimaging

https://www.youtube.com/watch?v=S4u-tDbs6WI – Christopher Hess – Fundamentals of neuroimaging interpretation

https://www.youtube.com/watch?v=ZL-Tr1KSMKY – Tor Wager – Principles of fMRI. Introduction to fMRI

https://www.youtube.com/watch?v=tQyMqRqHwao – Martin Lindquist – Principles of fMRI. Analyzing fMRI data

https://www.youtube.com/watch?v=OuRdQJMU5ro – Principles of fMRI. fMRI data structure and terminology

https://www.youtube.com/watch?v=ZiFQRFv4IBk – Nancy Kanwisher – Diffusion imaging and tractography

https://www.youtube.com/watch?v=YqsfgMZ0ZtM – Louis Sokoloff – A pioneer of positron emission tomography

https://www.youtube.com/watch?v=yrTy03O0gWw – Imperial College London – How does positron emission tomography (PET) work?

https://www.youtube.com/watch?v=r3TiTfMNLw8 – ZP Rad – What is PET/MRI?

https://www.youtube.com/watch?v=WZRaWiCvyT4 – NIH Clinical Center – A PET/MRI scanner

https://www.youtube.com/watch?v=Lq5rIILcVgA – CNN – Deep brain stimulation for treating depression

https://www.youtube.com/watch?v=bwchix_YRUM – Daniel Barton – Transcranial magnetic stimulation. What is it and how does it work?

https://www.youtube.com/watch?v=HTvZ_djHQkE – Transcranial magnetic stimulation. A treatment for depression.

https://www.youtube.com/watch?v=JliczINA__Y – McGill University – Brenda Milner. Neuropsychologist.

https://www.youtube.com/watch?v=fIR6JTt_Xsc – Inside the Psychologist’s Studio with Brenda Milner

https://www.youtube.com/watch?v=lqoFBvmRMSw – Celeste Campbell – What exactly does a neuropsychologist do?

https://www.youtube.com/watch?v=DkDlGUWSLFY – Robert Duff – What is neuropsychological assessment?


[Concept-map slide, key themes:]
localizationism vs. holism vs. equipotentialism; dualism vs. monism; materialism vs. idealism
MYTHS: the pineal gland, phrenology
CUI PRODEST? Who holds how much political power in which era, and how do they decide "scientific truth"?
from neurology, to neuropsychology, to cognitive psychology, to cognitive neuroscience
neurobullshit (conflicts of interest and gratuitous references to neuroimaging irrelevant to the claims being made)
the (in)accessibility of the brain – the skull, the blood-brain barrier, CSF, religion, politics, epistemology, finance
localizationist neuroanatomy vs. holist neurophysiology
organic vs. functional becomes organic vs. "psychological" in the absence of access to neurophysiology (de facto dualism)
psychoanalysis as the dominant force in clinical psychology; the abandonment of neuroscience in psychiatry
the anatomo-clinical method, microscopy, EEG, PET, CT, MRI, fMRI, MEG, tMS/tDCS


Bibliography 1
David J. Linden – Mintea ca întâmplare. Cum ne-a oferit evoluția creierului iubirea, memoria, visele și pe Dumnezeu
[The Accidental Mind], pp. 11-17;

When I was in junior high school, in 1970s California, a popular joke was to ask: "Do you want to get rid of 3
kilograms of hideous fat?" If the answer was yes, it was met with the punchline: "Then cut off your head! Ha-ha-ha!"

The brain did not enjoy a place of honor in the collective imagination of my classmates, that much is clear. Like
many others, I felt relieved when junior high drew to a close. Yet many years later I was just as bothered by the
opposite attitude. Especially when reading books or magazines, or watching educational programs, I was struck by
a kind of veneration of the brain. Talk about it is often carried on in a breathless voice, overcome with emotion. In
these presentations, the brain is "a kilogram and a half of amazingly efficient tissue, more powerful than the largest
supercomputer" or "the seat of the mind, the pinnacle of biological design." What I find problematic about such
statements is not the deep gratitude for the fact that mental function resides in the brain, which is indeed amazing.
Rather, it is the assumption that since the mind resides in the brain, and the mind is a remarkable achievement, the
design and functioning of the brain must be elegant and efficient. In short, many people imagine that the brain is
well designed.

Nothing could be further from the truth. The brain is, to use one of my favorite words, a kludge: inefficient,
inelegant, and unfathomable in its design, and yet it works. More expressively, in the phrase of the military historian
Jackson Granholm, a kludge is "an ill-assorted collection of poorly matching parts, forming a distressing whole."
What I hope to show here is that at every level of its organization, from regions and circuits down to cells and
molecules, the brain is an inelegant and inefficient agglomeration of stuff that nonetheless works surprisingly well.
The brain is not the ultimate general-purpose supercomputer. It was not designed all at once, by a genius, on a blank
sheet of paper. Rather, it is a very strange edifice that reflects millions of years of evolutionary history. In many
cases, the brain adopted solutions to particular problems in the distant past, solutions that persisted over time and
were recycled for other uses, or that severely limited the possibility of later change. As François Jacob, a pioneer of
molecular biology, put it: "Evolution is a tinkerer, not an engineer." The important point about this idea is not merely
that it challenges the notion of optimized design. Rather, an appreciation of the brain's convoluted construction can
provide insight into some of the most profound and distinctively human aspects of experience, both in everyday
behavior and in situations of injury and disease.

With all of the above in mind, let us take a look at the brain and see what we can make out about its design. What
organizing principles can we identify? To this end, imagine that we have before us a freshly dissected adult human
brain (Figure 1.1). What you will see is an elongated, grayish-pink object weighing about 1.4 kg. Its outer surface,
called the cortex, is covered in thick ridges that form deep grooves. The pattern of these grooves and ridges looks
as though it might be variable, like a fingerprint, but it is in fact very similar across all human brains. Hanging off
the back is a structure the size of a flattened baseball, with small diagonal grooves. It is called the cerebellum, which
means "little brain." Emerging from the underside of the brain, somewhat toward the rear, is a thick stalk called the
brain stem. We will pass over the lowest part of the brain stem, which narrows to form the top of the spinal cord.
Careful observation would reveal the nerves, called cranial nerves, that carry information from the eyes, ears, nose,
tongue, and face to the brain stem.

An obvious feature of the brain is its symmetry: seen from above, a long front-to-back groove divides the cortex
(the word means "bark," and it is the thick covering of the brain) into two equal halves. If we cut the brain completely
in two along this front-to-back groove and then turn the cut surface of the right half toward us, we see the view
illustrated at the bottom of Figure 1.1.

[Figure axis labels: front of the head (rostral); back of the head (caudal)]

Figure 1.1 The human brain. The first image shows the intact brain viewed from the left side; the second shows the
brain cut in half in the midline plane and then opened so that we can look straight onto its right half.

Looking at this image, it becomes clear that the brain is not a homogeneous lump of stuff. There is a variety of
shapes, colors, and textures of brain tissue across the brain's regions, but these tell us nothing about the functions
of those different regions. One of the most useful ways to investigate the function of these regions is to examine
people who have suffered damage to various parts of the brain. Such investigations have been complemented by
animal experiments in which small regions of the brain were damaged surgically or by administering various
substances, after which the animal's bodily functions and behavior were carefully observed.

Bibliography 2
Charles Watson; Matthew Kirkcaldie; George Paxinos – The Brain. An Introduction to Functional Neuroanatomy,
pp. 154-155; 160; 162-165;
Techniques for Studying the Brain

Most of the information we have about the brain has come from the study of thin sections that have been
stained to show nerve cells and their processes. The
most famous of these studies are those conducted by a
Spanish neuroanatomist, Santiago Ramón y Cajal.
Cajal used a special silver stain to study the main neuron
types found in the brain. During the second half of the
twentieth century, a range of techniques were
developed that enable us to trace connections between
different parts of the brain. In recent years, a new
generation of imaging techniques has allowed us to
look at brain anatomy in live subjects.

Cutting thin sections of the brain


Microscopy requires tissue to be sliced (sectioned) very thinly. The brain is usually prepared for sectioning by
treatment with a chemical preservative such as formalin, but fresh tissue can be sectioned using a microtome with
a vibrating blade, or by cutting sections of brain that has been frozen. Thin sections are cut on a microtome (a
machine that can cut sections at thicknesses between 5 and 100 micrometers). When fixed tissue is prepared for
sectioning, it is usually dehydrated in alcohol and then embedded in paraffin wax or celloidin. Each of these
methods presents specific challenges: fixation can permanently change some of the molecules in a neuron, whereas
frozen sections are more difficult to handle and may become torn.

Staining brain sections


The routine ways to stain brain sections are with a stain for neuronal nuclei and RNA (most commonly the Nissl
stain) and a stain for myelinated axons (such as the Weigert stain or Luxol fast blue). However, more information
can be gained by using histochemical stains and tracers for neuronal connections. Nissl stains use a variety of
dyes (e.g. thionin, cresyl violet, fluorescent compounds) to show charged structures (Nissl bodies) in the soma of
neurons and glia. The Nissl stain is most intense in nucleoli and in the rough endoplasmic reticulum of neurons. For
myelinated axons in the nervous system, a variety of techniques selectively label the unique physical properties of
the densely wound membranes. Some preparation methods deposit silver or haematoxylin in the protein scaffold of
the membranes. Other myelin stains use dyes such as luxol fast blue or osmium salts, which stain the fatty content
of the sheaths.
Histochemical staining

Histochemical stains are used to mark particular parts of cells in the brain based on physical and
chemical properties. Typically they involve
chemical reactions which render specific cellular
components into particular chemical states,
followed by application of dyes attracted to
those properties, or reactions in which colored
products are deposited on specific types of
structures. Histochemical techniques for
detecting enzymes usually work best on unfixed
or lightly fixed sections, whereas some of the
more chemically aggressive methods require
robustly preserved material.

The Golgi silver impregnation technique relies on chemical preparation of thick blocks of tissue,
after which grains of metallic silver crystallize
inside the membranes of individual cells,
producing a dense black precipitate that
highlights every detail of the cell body and
dendrites against a golden background. If this
occurred in every cell the block would be solid
black, but for unknown reasons only about 1% of
neurons are stained by Golgi methods. This
allows extraordinary detail to be seen in a tiny
subset of cells, and enabled Cajal and other
investigators of neuroanatomy to view and infer
the connectivity of the nervous system in
amazing detail. Frequently, only the cell body
and dendrites are labeled by Golgi deposits, and myelinated axons are not stained.

Cell culture

Another way of studying nervous system cells is to remove them from the living animal (the in vivo setting,
meaning ‘in life’) and keep them alive and growing in an artificial setting (in vitro, meaning ‘in glass’). To do this the
cells must be kept in a culture medium, a solution of ions and nutrients designed to resemble the extracellular fluid
environment in which the cells normally live. As long as the medium allows for gas exchange across its surface, or
by bubbling gas mixes through it, cells can stay alive for weeks and even grow and differentiate.

Monocultures use isolated preparations of a single cell type, selected by the choice of tissue collected and the
medium used to keep cells alive. This allows large groups of similar cells to respond to experimental conditions in
large numbers. The weakness of this technique is that the cells are deprived of their normal interactions with
other cells and are therefore in a situation which is very different from their normal environment in vivo. Co-
cultures use multiple cell types grown on the same surface or in close proximity to each other in order to better
reproduce the interactions of cells. When the mix of cell types is the same as those found in the original tissue it is
referred to as an organotypic culture. Slice culture refers to the technique of taking slices about 1 mm thick from
chilled but still-living tissue and maintaining them in a culture medium.

In this way the relationships between cell types, and many of the interconnections between them found in vivo, are
preserved intact, and by careful selection of the slice plane it may be possible to investigate quite complex
connected systems, such as thalamo-cortical interactions. Removing any nervous tissue from its normal context in
the organism will distort its function to a greater or lesser extent. Because of this, the usefulness of data derived
from culture preparations depends both on the care taken to compensate for the change of environment, and on
the choice of culture technique in order to minimize the effects on the system under study.

Non-invasive imaging techniques

The most common images of the human nervous system are provided by non-
invasive or minimally invasive scanning
techniques that can be applied to living
tissue. Normal X-ray images cannot
reveal much about brain structure, but
contrast media injected into the
bloodstream, or air introduced into the
ventricular spaces, can be used to enhance
X-ray images of the brain. Unfortunately,
these enhancement procedures are
technically difficult and occasionally
dangerous. A significant improvement in
X-ray imaging came in the 1970s from
computer reconstruction of multiple X-ray
images from a moving source, using
mathematical techniques to deduce the
three-dimensional distribution of tissue.
These computed tomography (CT) scans
can show gross outlines of nervous system
structures, and can detect alterations in
density, caused by strokes, or some types
of tumors.

A major breakthrough in the 1980s resulted in the use of magnetic resonance
(MR) for imaging the brain. MR relies on
the ability to excite specific types of
atoms–usually hydrogen in water
molecules–and to image them in a
systematic way. This allows three
dimensional maps to be obtained. Since
neuropil and fiber tracts vary in their
hydrogen content, detailed delineation of
nervous system structures can be
achieved, with the resolution dependent
on the strength of the magnetic field used.

Early MR imaging used field strengths of 1-3 Tesla, yielding resolutions of 1-3
millimeters, but recent small-volume
MR scanners use 4-16 Tesla fields, which
can resolve structures in the 20-50
micrometer range. Diffusion tensor imaging (DTI) uses MR scanners to obtain information about fiber orientation
in the brain by structuring the scanning pulses to pick up how freely molecules are moving by diffusion along axons.
Using three axes of diffusion scanning, every point in the brain is assigned an intensity value along with a
mathematical description of the freedom of movement in three dimensions (called a tensor). The result is an image
that gives an indication of which way axon bundles run in different parts of the brain.
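The "tensor" part can be made concrete: from the three eigenvalues of a 3x3 diffusion tensor one can compute the mean diffusivity and the fractional anisotropy (FA), the standard index of how directional diffusion is. The tensor below is invented, with a typical white-matter-like profile.

```python
# From a diffusion tensor to mean diffusivity (MD) and fractional anisotropy (FA).
import numpy as np

tensor = np.array([[1.7, 0.0, 0.0],        # units on the order of 10^-3 mm^2/s
                   [0.0, 0.3, 0.0],
                   [0.0, 0.0, 0.3]])

evals = np.linalg.eigvalsh(tensor)         # diffusivities along the three principal axes
md = evals.mean()                          # mean diffusivity
fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))

print(f"MD = {md:.2f}, FA = {fa:.2f}")     # high FA -> strongly oriented diffusion (fiber bundle)
```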

Functional imaging

Functional imaging methods depend on detecting changes in blood flow or metabolic consumption in active
regions of the nervous system. Functional MRI (fMRI – also called Blood-Oxygen-Level-Dependent MRI, or BOLD
MRI) depends on differences in MRI signal intensity produced by oxygenated blood levels in tissue. A reference
structural scan is made while the subject is resting. Additional scans are performed while the subject performs
a task designed to access a particular ability or mental state, and differences in intensity are found by
subtracting the control scan values. The differences are then color-coded and overlaid on the structural scan.
Blood flow changes are mediated by astrocytes and depend on neuron-astrocyte interactions during periods
of high activity. These changes are greatest in the cerebral cortex, which is why fMRI studies tend to show most
significant changes in cortical regions. Although other parts of the nervous system may be just as involved in the
task, they may not manipulate their blood flow as much and hence remain undetected. Another problem with
fMRI is deciding whether the increased activity is due to active processing, suppression, or activity that is only
peripherally related to the task. Positron emission tomography (PET) and single photon emission computed
tomography (SPECT) use radioactive substances administered intravenously. A three-dimensional distribution map
is produced by scanning for a long period with a gamma ray detector. The radioactive markers used are designed
to be taken up by active cells but are resistant to being broken down, therefore they remain in the cells and emit
radiation for a brief period. As such this technique is not recommended for frequent or ongoing use in children.

Electrophysiology

Electrical recording and stimulation techniques have been used to study the nervous system for 200 years. They
rely on the fact that the activity of neurons and glia is associated with changing electrical potentials, and that these
electrical events can also be artificially provoked by applying electrical currents. In recent times, this field has
tended to focus on understanding how the nervous system codes and processes sensory and motor control
information in terms of action potentials.

Sensitive amplifiers are used to magnify the voltage changes caused by graded potentials and action potentials in receptors,
primary afferent neurons and other regions of the nervous system. Electrodes may be constructed from glass
pipettes, sharpened wires with insulated coatings, silver/silver chloride wires, or carbon fibers. The electrodes can
be inserted inside cells, ‘patched’ onto the cell membrane, or positioned extracellularly or near groups of neurons
to record field potentials. Amplified signals can be monitored (visually and audibly), digitized, timed, averaged, and
classified.

Extracellular recordings use sharp, insulated electrodes to penetrate into neural tissue and measure electrical
potentials for recording and analysis. Electrode design and construction influence the types of signals that can be
detected. For example, the electrical impedance of the electrode dictates the size of the tissue volume in which it
detects signals. A low-impedance electrode might collect data from tens or hundreds of neurons, whereas a high-
impedance electrode might be able to isolate the responses of a single cell from the background of general activity.
Recently, array electrodes with several (4-100) large-area, low-impedance electrodes have been used to sample
activity across a large volume of tissue; comparison of the signals seen by each electrode allows spike sorting, which
attributes each measured action potential to an individual cell, based on consistencies in which electrodes detect it
most strongly. In this way the activity of dozens of individual neurons can be measured simultaneously.
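The very first step of that analysis, detecting candidate spikes as threshold crossings on a noisy extracellular trace, can be sketched as follows (simulated data; real spike sorting then clusters waveform shapes and, with arrays, their relative amplitudes across electrodes).

```python
# Threshold-based spike detection on a simulated extracellular voltage trace.
import numpy as np

fs = 30_000                                      # 30 kHz sampling, typical for spike recording
rng = np.random.default_rng(4)
trace = rng.normal(scale=10.0, size=fs)          # 1 s of background noise (microvolts)

true_spike_starts = np.array([3_000, 9_000, 15_000, 21_000, 27_000])
for s in true_spike_starts:
    trace[s:s + 30] -= 80.0                      # crude negative-going spike waveforms

threshold = -5 * np.median(np.abs(trace)) / 0.6745   # robust estimate of the noise level
crossings = np.where((trace[1:] < threshold) & (trace[:-1] >= threshold))[0]
print(f"threshold = {threshold:.1f} uV, detected {len(crossings)} spikes")
```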

The most refined type of electrophysiological investigation of cells is the patch clamp, in which a small area of
membrane is sealed into a glass electrode tip and becomes part of an electrical circuit. Current balancing electronics
permit an extraordinarily detailed investigation of ions passing through the membrane and the resultant changes
in voltage. Patch clamps can be used on slice-cultured cells, or in living brains, so that their responses can be
explored in a realistic setting. The glass electrode also allows different solutions to be applied to the membrane
surface. Patch clamp amplification is so sensitive that the flickering open-close behavior of individual ion channels
has been recorded at nanosecond time-scales.

Sensory electrophysiology

Computer control of stimuli enables systematic analysis of the effect of sensory input on particular neurons.
Electrophysiological experiments require careful controls, such as the systematic variation of stimulus parameters,
and careful monitoring of anesthesia and physiology in order to make the results accurate and reproducible. The
electrical properties of the recording setup are also crucial, since the presence of noise can ruin the detection of
responses. Shielding, earthing, signal processing, and digitization all need to be optimized.

Recording neuronal activity from the skin surface

The large-scale electrical activity of groups of excitable cells stirs up tiny electric currents in the tissue around
them. These currents can be recorded and studied using electrodes placed on the skin over the area of interest.
Although the signal is a combination of millions of individual action potentials, the massed activity can take on
different qualities under various circumstances, usually described in terms of frequency and amplitude. EEG
(electroencephalography) is an example of this technique applied to recording brain activity.
By placing electrodes at multiple points on the scalp, EEG recordings can provide significant information about the
spatial distribution, amplitude, timing and frequency composition of the electrical potentials from the brain.
Because skin, bone, membranes and fluid tend to spread the signal out and reduce its size, EEG recordings
depict large-scale electrical events in the cortex lasting from milliseconds to several hours. At these longer time
scales and larger spatial scales, the mass action of the tissue is strongly influenced by astrocytes as well as neurons
and reflects changes in the overall activity and excitability of the cortex. These patterns shift dramatically through
the course of sleeping and waking, as well as during epileptic seizures. Careful control of stimulus presentation
and the use of computer averaging allow components of the EEG to be extracted as event-related potentials
(ERPs). Magnetoencephalography (MEG) measures similar scales of activity but instead uses sensitive quantum
devices to detect the minuscule magnetic fields given off by electrical events. MEG has somewhat lower resolution
than EEG but can deduce the signal sources in three dimensions, providing a better visualization of the brain
structures involved.
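A small simulation (invented waveform and noise levels) shows why stimulus-locked averaging works: the background EEG cancels out across trials while the event-related potential survives.

```python
# Averaging stimulus-locked epochs pulls an ERP out of ongoing "EEG" noise.
import numpy as np

fs, n_trials, epoch_len = 250, 200, 250          # 1-second epochs sampled at 250 Hz
t = np.arange(epoch_len) / fs
erp_true = 10.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # a bump peaking at 300 ms

rng = np.random.default_rng(5)
epochs = erp_true + rng.normal(scale=20.0, size=(n_trials, epoch_len))  # buried in noise

average = epochs.mean(axis=0)
print(f"single-trial SNR ~ {erp_true.max() / 20.0:.1f}, "
      f"after averaging ~ {erp_true.max() / (20.0 / np.sqrt(n_trials)):.1f}")
print(f"ERP peak recovered at ~{t[np.argmax(average)] * 1000:.0f} ms")
```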

Bibliography 3
Todd E. Feinberg; Martha J. Farah – Behavioral Neurology and Neuropsychology, pp. 17-20

NEW TOOLS FOR THE STUDY OF MIND AND BRAIN
In the past two decades, behavioral neurology and
neuropsychology have been transformed not only by the
influx of theoretical ideas from cognitive psychology but
also by the advent of powerful new methods for studying
brain activity during cognition. The first of these methods
to find wide use was functional neuroimaging.

Functional Neuroimaging

Following its introduction in the 1970s, positron emission tomography (PET) was quickly embraced by researchers
interested in brain-behavior relations. This technique
provides images of regional glucose utilization, blood flow, oxygen consumption, or receptor density in the
brains of live humans. Resting studies, in which subjects are scanned while resting passively, have provided
a window on differences between normal and pathologic brain function in a number of neurologic and
psychiatric conditions. With the use of radioactive ligands, abnormalities can be localized to specific
neurotransmitter systems as well as specific anatomic regions. Activation studies, in which separate images
are collected while normal subjects perform different tasks (typically one or more active tasks and one resting
baseline) yielded new insights on the localization of cognitive processes. These localizations were not studied
region by region, as necessitated by the lesion technique, but could be apprehended simultaneously in a
whole intact brain.

Positron emission tomography was soon joined by other techniques for measuring regional brain activity,
each of which has its own strengths and weaknesses. Single photon emission computed tomography
(SPECT) was quickly adapted for some of the same applications as PET, providing a less expensive but also
less quantifiable and spatially less accurate method for obtaining images of regional cerebral blood flow.
With new developments in the measurement and analysis of electromagnetic signals, the relatively old
techniques of electroencephalography (EEG) and event-related potentials (ERPs), as well as
magnetoencephalography (MEG), joined the ranks of functional imaging techniques allowing some degree
of anatomic localization of brain activity, with temporal resolution that is superior to the blood flow and
metabolic techniques. Most recently, functional magnetic resonance imaging (fMRI) has provided a
particularly attractive package of reasonably good spatial and temporal resolution, using techniques that are
noninvasive and can be implemented with equipment available for clinical purposes in many hospitals.

Much of the early work with functional neuroimaging could be considered a form of "calibration," in that
researchers sought to confirm well-established principles of functional neuroanatomy using the new
techniques — for example, demonstrating that visual stimulation activates visual cortex. As functional
neuroimaging matured, researchers began to address new questions, to which the answers were not already
known in advance. An important development in this second wave of research was the introduction of
theories and methods from cognitive psychology, which specified the component cognitive processes involved
in performing complex tasks and provided a means of isolating them experimentally. In neuroimaging studies of
normal subjects, as with the purely behavioral studies of patients, the entities most likely to yield clear and consistent
localizations are these component cognitive processes and not the tasks themselves. Starting in the mid-1980s, a
collaboration between cognitive psychologist Michael Posner and neurologist Marcus Raichle at Washington
University led to a series of pioneering studies in which the neural circuits underlying language, reading, and
attention were studied by PET. Since then, researchers at Washington University and a growing number of other
centers around the world have adapted neuroimaging techniques to all manner of topics in behavioral neurology and
neuropsychology. This progress is reflected in many of the chapters of this book, illustrating the synergism that is
possible between behavioral studies with patients and neuroimaging studies with both patients and normal subjects.

Computational Modeling of Higher Brain Function

A second and even more recent methodologic development is computational modeling of higher brain function.
Computational models enable us to test hypotheses about the functioning of complex, interactive systems and about
the effects of lesions to individual parts of such a system by building, running, and "lesioning" the models.

The roots of the computational approach can be traced back to earlier thinking in both behavioral neurology and
cognitive psychology. Within behavioral neurology, the perennial critiques of localizationism can be viewed as an early
expression of the need to consider the brain as "more than the sum of its parts" for purposes of understanding
cognition. Aleksandr Luria's concept of "functional systems" placed explicit emphasis on the importance of the
system or circuit as the correct level of analysis for understanding brain function, and this point has been
reemphasized more recently by such writers as Marsel Mesulam, Kenneth Heilman, and Patricia Goldman-Rakic.

The most radical reconceptualization of brain function in terms of global system properties as opposed to local
centers can be attributed to Marcel Kinsbourne, who has argued that local phenomena such as spreading activation
or competition between reciprocally connected areas can lead the system as a whole to become "captured" in certain
global states of attention, awareness, or asymmetrical hemispheric control of behavior. Within cognitive psychology
a parallel evolution has taken place, from discrete-stage models of human information processing, in which
cognition unfolds through a series of functionally and temporally isolable stages, to models in which
information is processed simultaneously and interactively over a network of processors. The latter type
of model, similar in spirit to behavioral neurology's system-level theorizing, often takes the form of a
running computer simulation in cognitive psychology. Such models have been termed "parallel distributed
processing" (PDP) models, calling attention to the parallel or simultaneous nature of processing at multiple loci
within the model and the distributed as opposed to localist nature of representation and processing within
the model. They have also been called "artificial neural networks," calling attention to the analogy
between the units in the network and neurons in the brain. Like neurons, PDP units are highly
interconnected; they process information by summing inputs received from other units across
excitatory and inhibitory connections and sending outputs to a large number of other units in the same
way.
The initial development of such computational models in cognitive psychology was largely due to David
Rumelhart, James McClelland, and their collaborators. In their interactive activation model of word recognition, for
example, units representing letters pass activation to units representing words. In addition to activation flowing
from letters to words, it also flows between words (through inhibitory connections, thus helping the most strongly
supported word "win out" over other possible words) and from words to letters (through excitatory
connections, thus explaining the previously puzzling finding that letters are perceived more clearly in words than
in isolation).
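A toy network in that spirit (not a reimplementation of the original interactive activation model; the connection strengths and update rules here are arbitrary) shows the three kinds of flow at work: letters excite words, words compete through mutual inhibition, and the winning word feeds activation back to its letters.

```python
# Interactive-activation-style toy: bottom-up excitation, word-word inhibition,
# and word-to-letter feedback let the best-supported word win out.
import numpy as np

letters = ["W", "O", "R", "K"]
words = {"WORK": [1, 1, 1, 1], "WORD": [1, 1, 1, 0], "FORK": [0, 1, 1, 1]}
W = np.array(list(words.values()), dtype=float)    # word <- letter connection matrix

letter_act = np.array([1.0, 1.0, 1.0, 0.6])        # the final K is partly obscured
word_act = np.zeros(len(words))

for _ in range(30):
    bottom_up = W @ letter_act                     # excitation from letters to words
    inhibition = word_act.sum() - word_act         # competition between word units
    word_act = np.clip(word_act + 0.1 * (bottom_up - 1.5 * inhibition - 2.0), 0, 1)
    letter_act = np.clip(letter_act + 0.05 * (W.T @ word_act), 0, 1)   # feedback to letters

k = letters.index("K")
winner = list(words)[int(np.argmax(word_act))]
print(f"winning word: {winner}; activation of the obscured K after feedback: {letter_act[k]:.2f}")
```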

By embodying the general ideas of distributedness, parallelism, and interactivity in concrete computer
simulations, the specific predictions of hypotheses could be derived and tested. This is valuable because
predictions about the effects of local lesions in a highly interactive network on the behavior of the network as a
whole can be difficult to derive by intuition alone and can sometimes be quite counterintuitive. Pioneering
work in the application of PDP models to problems in behavioral neurology and neuropsychology was done
in the realm of acquired disorders of reading by Geoffrey Hinton, David Plaut, and Tim Shallice, Karalyn
Patterson and colleagues, and Michael Mozer and Marlene Behrmann. Since then, such models have been
applied to disorders of perception, attention, memory, frontal lobe function, and language.

Bibliography 4
Edward E. Smith; Susan Nolen-Hoeksema; Barbara L. Fredrickson; Geoffrey R. Loftus – Introducere în psihologie
[Introduction to Psychology], 14th edition, pp. 66-68;
Pictures of the living brain

Sophisticated computer-based methods that have only recently become feasible allow us to obtain detailed pictures of the living human brain without causing the patient distress or harm. Before these techniques were developed, the precise localization and identification of most brain lesions could be accomplished only through exploratory neurosurgery, through complicated neurological diagnosis, or at autopsy after the patient's death. One of these recent techniques is computerized axial tomography (CAT or CT). In this procedure, a narrow beam of X-rays is passed through the patient's head and the amount of radiation that emerges is measured. The revolutionary aspect of the technique is that measurements can be made from hundreds of different orientations (or axes) through the skull.

The data from these measurements are fed into a computer, which constructs a cross-sectional image of the brain that can then be photographed or displayed on a monitor. The "slice" (tomo is an ancient Greek word meaning "slice" or "cut"), or cross section, can then be taken at any angle and at any level. A newer and more powerful technique is magnetic resonance imaging (MRI), in which the scanners use very strong magnetic fields, radio-frequency pulses, and computers to compose an image. In this procedure, the patient is placed inside a doughnut-shaped tunnel surrounded by a huge magnet that generates a very strong magnetic field. When a given part of the body is placed in a strong magnetic field and exposed to radio waves of a particular frequency, the tissues emit a signal that can be measured. As with CT scanners, hundreds of thousands of measurements can be made and then processed by a computer to produce a two-dimensional image of the part of the body under examination. Scientists usually call this technique nuclear magnetic resonance (NMR), because it measures the change in the energy level of the nuclei of hydrogen atoms produced by the radio-frequency waves. However, many physicians omit the word nuclear, because the public might take it to refer to nuclear radiation.

MRI offers greater precision than CT in diagnosing disorders of the brain and spinal cord. For example, an MRI cross section through the brain shows features characteristic of multiple sclerosis that are not detectable with a CT scanner; in the past, diagnosing this disease required hospitalization and the injection of a contrast dye into the canals surrounding the spinal cord. MRI is also useful for detecting abnormalities of the spinal cord and of the structures at the base of the brain, such as herniated discs, tumors, and congenital malformations. Beyond the anatomical detail of the brain that CT and MRI provide, it is often desirable to assess the levels of activity at different places in the brain. A procedure based on computerized scanning, positron emission tomography (PET), provides this kind of information. The technique relies on the fact that every cell in the body needs energy for its metabolic processes. In the brain, neurons use glucose (obtained from the bloodstream) as their main source of energy. A small amount of a radioactive tracer can be mixed with the glucose so that each glucose molecule carries a tiny radioactive tag (a label). If this mixture is injected into the bloodstream, after a few minutes the brain's cells begin to use the radioactively labeled glucose just as they use ordinary glucose. The PET scanner is, in essence, a highly sensitive detector of radioactivity. The most active neurons require the most glucose and are therefore the most radioactive.

Fig. 1 A PET scan of a human subject, showing that different areas of the brain are involved in different ways of processing words.

The PET scanner measures the amount of radioactivity and sends the information to a computer, which constructs a cross-sectional image of the brain in which different colors represent different levels of neural activity.

The measurement of radioactivity is based on the emission of positrons, which are positively charged particles (hence the name positron emission tomography). Brain disorders (such as epilepsy, blood clots, and brain tumors) can be identified with this technique. PET scans comparing the brains of people with schizophrenia with those of normal individuals have shown differences in metabolic levels in certain cortical areas (Schultz et al., 2002). PET has also been used to investigate the brain areas that become active during mental functions (such as solving mathematical problems, listening to music, or speaking), in order to identify the brain structures involved in these activities (Posner, 1993). CT, MRI, and PET are proving to be valuable tools for studying the relationship between brain and behavior. They are an example of how progress in one field of science opens new horizons in another thanks to advances in technology (Pechura and Martin, 1991; Raichle, 1994). For example, PET studies can be used to examine differences in neural activity between the two cerebral hemispheres. These differences in activity between the hemispheres are called cerebral asymmetries.

Bibliografie 5
Joel E. Morgan; Joseph H. Ricker – Textbook of Clinical Neuropsychology - 2nd Edition, p.87-91;
Neuroanatomy for the Neuropsychologist

Christopher M. Filley and Erin D. Bigler

Introduction

The details of human neuroanatomy are vast, intricate, and continually expanding. As a result, the study of neuroanatomy may seem forbidding to clinicians and researchers whose focus is on clinical behavioral assessment. Nevertheless, a working knowledge of neuroanatomy is fundamental for neuropsychologists. This chapter will endeavor to develop such an understanding, presenting an overview of human neuroanatomy while emphasizing the most relevant aspects for those engaged in the neuropsychological study of higher functions.

Since the introduction of computed tomography (CT) in the early 1970s, the fidelity of brain imaging to visualize
gross brain anatomy has improved at a rapid pace hastened by the development of magnetic resonance (MR)
imaging (MRI). Today, much of neuroanatomy is taught via neuroimaging methods (Leichnetz, 2006; Nowinski
& Chua, 2013). This chapter will use the basic information from the first edition of this book, as the fundamentals
of neuroanatomy have not changed, and fuse this traditional approach with MR methods of imaging that highlight
neuroanatomical detail.
The origins of human behavior and consciousness are an enduring source of fascination. Few have not had
occasion to ponder the sources of thought and feeling, and the personal immediacy of daily conscious experience
is an inescapable aspect of human existence. Whereas richly descriptive literary and artistic accounts of mental life
have been offered by the humanities for generations, the biomedical sciences have also advanced our
understanding of these phenomena through formal investigation of the nervous system.

Appendix: Structural Neuroimaging Basics for Understanding Neuroanatomy

Viewing neuroanatomy from brain imaging typically involves either CT or MRI, with MRI clearly superior for
anatomical detail. In the same subject, Figure 6.15 compares CT with various MR pulse sequences that have
different sensitivities to tissue type. In the mid-1990s MR diffusion tensor imaging (DTI) came on the scene, with the
discovery that aggregate white matter tracts could be identified and extracted from the image because healthy axonal
membranes constrain water diffusion in the direction perpendicular to the orientation of the fiber tract. By assessing
directionality of water diffusion, fiber tract projections may be inferred. As shown in Figure 6.16, the diffusion scan
from which DTI is derived has a rather fuzzy appearance in native space, but the actual diffusion color maps are
rich in information about the directionality of water diffusion where green reflects anterior-to-posterior projecting
tracts, warm colors (orange to red) side-to-side projections, and cool colors (blues) vertically oriented tracts. Figures
6.6, 6.8b and 6.12a and b all present white matter fiber tracts derived from DTI. Understanding neuroanatomy from
neuroimaging is facilitated by the sensitivity of both CT and MRI in detecting differences in white matter and gray
matter. Because specific white matter and gray matter boundaries may be distinctly differentiated with high-field
MRI, the actual gray matter cortical ribbon and subcortical nuclei can be readily identified, as shown in Figure 6.17.
Also, CSF has very different signal intensity from brain parenchyma, meaning it too can be segmented as
shown in Figure 6.17. Segmenting tissue also provides the basis for identifying classic brain regions, like the
hippocampus as presented in Figure 6.17.
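
As a rough illustration of how such directional color maps can be derived, the sketch below eigen-decomposes a single voxel's 3x3 diffusion tensor, takes the principal diffusion direction, and maps its left-right, anterior-posterior, and vertical components to red, green, and blue, scaled by fractional anisotropy; the tensor values are illustrative assumptions, not measured data.

```python
# Minimal DTI color-map sketch: principal diffusion direction -> RGB
# (red = left-right, green = anterior-posterior, blue = superior-inferior),
# scaled by fractional anisotropy (FA) so isotropic voxels stay dark.
import numpy as np

def direction_color(D):
    """D: 3x3 symmetric diffusion tensor. Returns (FA, RGB color)."""
    evals, evecs = np.linalg.eigh(D)                 # eigenvalues in ascending order
    l1, l2, l3 = evals[::-1]
    md = evals.mean()                                # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    principal = evecs[:, np.argmax(evals)]           # principal diffusion direction
    return fa, fa * np.abs(principal)                # |x|,|y|,|z| -> R,G,B

# A voxel whose diffusion strongly prefers the anterior-posterior (y) axis:
print(direction_color(np.diag([0.3e-3, 1.7e-3, 0.3e-3])))   # high FA, green-dominated color
# A nearly isotropic voxel (e.g., gray matter): low FA, dark color.
print(direction_color(np.diag([0.8e-3, 0.75e-3, 0.85e-3])))
```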

Figure 6.15 Comparison of CT imaging in the axial plane with other standard MRI pulse sequences, all from the same individual and all at approximately the same level and imaging plane. Note how each imaging sequence highlights differences in tissue type (see Table 6.3 for tissue characterization). FLAIR = Fluid Attenuated Inversion Recovery sequence; GRE = Gradient Recalled Echo sequence; PD = Proton Density-weighted sequence.
By defining the boundaries of the hippocampus, that region of interest (ROI) may be extracted from the image and
depicted in three-dimensional space, also demonstrated in Figure 6.17. Using similar techniques, any
neuroanatomical ROI may be extracted from an image showing its anatomical position in relation to other
structures as well as quantified in terms of volume, surface area, and shape, to name the most common quantitative
measurements. Figure 6.18 shows the same sagittal view of Figures 6.4c and 6.14 but this time with a vertical line
showing the approximate level of a cut through the frontal and anterior temporal lobes (although at the mid
sagittal level the anterior temporal lobe cannot be visualized in the mid-sagittal cut), with the resulting coronal
image below (on the left). Adjacent to the coronal image from the MRI is a formalin-fixed coronal cut of a
postmortem brain in approximately the same plane. Note the similarity of the MR image with that of the
postmortem image, proof of the anatomical approximation of MRI findings to identify gross anatomy. From this
image, the beautiful symmetry of the typical developed human brain also becomes apparent. Notice how the
structures in one hemisphere mirror the other. This symmetry principle applies throughout the brain as
depicted in a different coronal section more posterior to the position previously shown in Figure 6.19 or in the
axial plane in Figure 6.20.

By applying the similarity and symmetry principles to understanding age-typical brain anatomy, in most cases
a scan image may be straightforwardly identified as normal in appearance or not. The last piece of a general
overview needed to understand anatomy from imaging is how the underlying physics of CT and MRI provides
the basis for generating the resulting image.

Figure 6.16 Diffusion imaging showing the diffusion scan in native space in the top center, compared to the T2- and T1-weighted images on either side, with the actual color map centered in the bottom row, bordered by the apparent diffusion coefficient (ADC) map on the bottom left and the T2-weighted anatomical image.

CT is based on x-ray beam technology, in which the physical density of tissue determines how strongly the x-ray beam is attenuated as it passes through skin, the skull, and brain parenchyma. Reconstructing this information in two- or three-dimensional space provides an image, as shown in the top left of Figure 6.15. By convention, on CT, bone is white, reflecting the greatest density encountered by the x-ray beam, whereas CSF and air pockets (as in a sinus area) provide the least density and appear dark in a CT image. Because white matter is largely composed of myelinated axons, it has a different density and water content compared to gray matter, which is composed of cell bodies. Accordingly, in viewing CT, white matter is darker gray, gray matter is lighter gray, CSF is dark gray to black, air is black, and bone is bright white.
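
The reconstruction step can be illustrated with a toy unfiltered back-projection, a deliberate simplification of what clinical scanners actually do (they use filtered reconstruction); the phantom, angles, and all numbers below are arbitrary assumptions.

```python
# Toy CT sketch: line sums of tissue "density" at many angles (projections)
# are smeared back across the image to recover a blurry estimate of the slice.
import numpy as np
from scipy.ndimage import rotate

phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0       # soft-tissue-like square
phantom[28:36, 28:36] = 2.0       # denser, bone-like core

angles = np.arange(0, 180, 5)
projections = [rotate(phantom, a, reshape=False, order=1).sum(axis=0) for a in angles]

recon = np.zeros_like(phantom)
for a, p in zip(angles, projections):
    smear = np.tile(p, (phantom.shape[0], 1))            # smear each projection back
    recon += rotate(smear, -a, reshape=False, order=1)   # into the original orientation
recon /= len(angles)

print("densest pixel in phantom:", np.unravel_index(phantom.argmax(), phantom.shape))
print("densest pixel in blurry reconstruction:", np.unravel_index(recon.argmax(), recon.shape))
```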
The MR signal is the result of a resonance interaction between hydrogen nuclei and externally applied
magnetic fields spatially encoded to provide a mapping of the image area in two or three dimensions. The
signal intensity depends on the density and the magnetic environment of the hydrogen nuclei (i.e., protons). Since
white matter and gray matter differ in water content and have characteristically different MR signal
properties, MR images of the brain show visible and distinguishable differences between gray and white matter,
as depicted in the various illustrations within this chapter, especially Figures 6.15 and 6.16. How
distinctly white and gray matter can be differentiated depends on the pulse sequence used, which will yield different
findings as outlined in Table 6.3. The use of innovative methods for varying the magnetic field strength, the delays
between the sending and receiving of the radio waves, and the acquisition and display of the signal intensity allows
a wide range of images to be produced.

For example, the behavior of the protons is characterized by two time constants, called T1 and T2.

T1 reflects the rapidity with which protons become realigned with the magnetic field after a radio frequency
(RF) pulse. Scans that are T1-weighted tend to show greater detail but less contrast between structures; these
images are therefore optimum for showing anatomy.

T2 reflects the decay of in-phase precession (desynchronization or "dephasing") of protons after the pulse.
Scans that are T2-weighted generally show normal structures as having an intermediate (gray) intensity, while
fluid and many pathologic abnormalities appear with high intensity (white).

These images provide excellent contrast between normal and abnormal structures and are, therefore, used
for identifying both anatomy and pathology.

Sequences that provide an average of T1 and T2 weighting are called proton density sequences. The
appearance (brightness) on the various sequences can be used to characterize the tissue. The true inversion
recovery sequence shown in Figures 6.19 and 6.20 depicts the exquisite detail that can be achieved with MRI for
portraying anatomy.
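
A minimal sketch of the two time constants just described, using the standard exponential forms Mz(t) = M0(1 - exp(-t/T1)) and Mxy(t) = M0 exp(-t/T2); the tissue values and sampling times are rough, field-strength-dependent assumptions used only to show how gray/white contrast emerges from when the signal is sampled.

```python
# Sketch of longitudinal (T1) recovery and transverse (T2) decay with
# illustrative, approximate gray/white matter values (milliseconds).
import numpy as np

def longitudinal(t, T1, M0=1.0):
    """Mz recovery toward alignment with the main field after an RF pulse."""
    return M0 * (1 - np.exp(-t / T1))

def transverse(t, T2, M0=1.0):
    """Mxy decay (dephasing of in-phase precession) after the pulse."""
    return M0 * np.exp(-t / T2)

tissues = {"gray matter": (1100.0, 95.0), "white matter": (800.0, 75.0)}  # (T1, T2), assumed
for name, (T1, T2) in tissues.items():
    print(f"{name:12s}  Mz at 500 ms: {longitudinal(500, T1):.2f}   "
          f"Mxy at 90 ms: {transverse(90, T2):.2f}")
# The gap between the two tissues' values at a given sampling time is what
# appears as gray/white contrast on T1- or T2-weighted images.
```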

Figure 6.17 Standard T1-weighted coronal image that has been segmented to differentiate gray matter from white matter and CSF. The image is then classified into identifiable regions of interest or actual anatomical structures.

For example, in Figure 6.19 the very thin band of gray matter that forms the claustrum may be visualized. Another sequence that uses subtle changes in magnetic field strength, called gradient echo (GRE), allows excellent image detail in short imaging times and has the added advantage of being sensitive to the presence of blood as well as blood breakdown products (hemosiderin) resulting from hemorrhage. A susceptibility-weighted imaging (SWI) sequence that uses a GRE pulse sequence is particularly sensitive in detecting venous blood, as shown in Figure 6.21, and, in pathological conditions, in demonstrating the presence of microhemorrhages. SWI impressively demonstrates the complex architecture of venous blood in a healthy individual, as seen in Figure 6.21 (the same individual shown in Figures 6.18 and 6.20). The fluid attenuated inversion recovery (FLAIR) sequence is particularly sensitive to white matter pathology, but within a normal brain, as shown in Figure 6.15, the parenchymal signal offers little distinction between white and gray matter.
Figure 6.18 The mid-sagittal view shown at the top of this figure has the downward arrow showing the coronal plane where the approximate cut occurred to generate the image in the lower left panel. The lower right panel shows a similar location in a formalin-fixed postmortem brain sectioned at approximately the same level. Note the similarity of the postmortem section to the MRI-derived coronal image as well as the general symmetry of the brain.

Figure 6.19 This is a coronal image using a true inversion recovery sequence that provides exquisite anatomical detail. Note how each hemisphere is the mirror of the other in terms of the distribution and organization of major brain areas and ROIs: (1) interhemispheric fissure; (2) the number sits in the central white matter of the frontal lobe, with the arrow pointing to the caudate (gray matter) and lateral ventricle (dark space); (3) the lower part of the number sits in the corpus callosum, with the top of the number in the cingulum bundle within the cingulate gyrus, and the arrow points to the body of the fornix; (4) thalamus; (5) the number sits in the lenticular nucleus, which is formed by the lighter (meaning more white matter) globus pallidus (to the right of the number) and the putamen (darker gray, to the left of the number); (6) hippocampus; (7) superior temporal gyrus, which forms the top of the temporal lobe, followed in descending order by the middle temporal gyrus, inferior temporal gyrus, fusiform gyrus, and parahippocampal gyrus; (8) Sylvian fissure to the left of the number, frontal lobe above, temporal lobe below, and, to the right of the number, the insular cortex.

Bibliografie 6
Angelo Mosso; Marcus Raichle; Gordon M. Shepherd – Angelo Mosso's Circulation of Blood in the Human Brain, p.20-26;

Mosso's impressive work on the brain and its circulation garnered the attention of no less a figure than William James. As noted previously in the Brief Biography, in his monumental two-volume text Principles of Psychology (James, 1890), James devotes an entire chapter, "On Some General Conditions of Brain Activity," to the topic, wherein he states, "All parts of the cortex, when electrically excited, produce alterations both of respiration and circulation" (Volume 1, p. 97). This remarkably foresighted statement for its time explicitly references the work of Mosso (for some additional details see the end note). Despite a promising beginning,
interest in the relationship between brain function and
brain blood flow virtually ceased during the first quarter
of the 20th century. Undoubtedly, this was due in part to a
lack of tools sufficiently sophisticated to pursue this line of
research. In addition, the work of Leonard Hill, Hunterian
professor of the Royal College of Surgeons in England, was
probably influential (Hill, 1896). His eminence as a physiologist overshadowed the inadequacy of his own
experiments that wrongly led him to conclude that no relationship existed between brain function and brain
circulation.

It was not until the end of World War II that Seymour Kety and his colleagues opened the next chapter in studies of
brain circulation and metabolism. In 1948, Kety and Carl Schmidt (1948) developed the first quantitative method
for measuring human whole brain blood flow and metabolism. Their work provided the first quantitative
measurement of the enormous burden the brain places on the energy budget of the body. They documented that
while the brain is only 2% of the body weight, it consumes 20% of the energy. Interestingly, Mosso
demonstrated the unique sensitivity of the brain to its nutritional requirements many years before. Because Kety
and Schmidt's initial measurements were confined to the whole brain, they were not suitable for brain mapping.
However, in 1955, Kety and colleagues introduced an in vivo tissue autoradiographic measurement of regional
blood flow in laboratory animals (Landau et al., 1955), which provided the first glimpse of quantitative regional
changes in blood flow in the brain related directly to brain function, just as Mosso had predicted. Derivatives of
this tissue autoradiographic technique many years later became important for the measurement of blood flow in
humans with positron emission tomography (PET), which provided a means of quantifying the spatial distribution
of radiotracers in tissue without the need for invasive autoradiography (see later). In 1963, following up on the
important early observations by Kety and colleagues, a pair of Scandinavian investigators, David Ingvar (Sweden)
and Niels Lassen (Denmark), and their colleagues collaboratively devised methods for the measurement of regional
cerebral blood flow in humans (Ingvar and Risberg, 1965). That regional cerebral blood flow reflects the mental
state of humans was clear from their work and that of others (for a review of this work see Lassen et al., 1978).
In 1970, it became possible for the first time to relate regional oxygen consumption in the human brain to blood
flow as well (Ter-Pogossian et al., 1970).

In 1973, X-ray computed tomography (X-ray CT) was introduced by its inventor Godfrey Hounsfield (1973). It
is hard to overestimate the importance of this invention for clinical medicine, as well as being a critical stimulus for
the development of other imaging technologies. In creating CT, Hounsfield had arrived at a practical solution to the
problem of producing three-dimensional transaxial tomographic images of an intact object from data obtained by
passing highly focused X-ray beams through the object and recording their attenuation. Hounsfield's invention
received enormous attention and quite literally changed the way in which we looked at the human brain. Gone,
also, were difficult-to-interpret, unpleasant, and sometimes dangerous clinical techniques like
pneumoencephalography. CT was, however, an anatomical tool. Function was to be the province of PET and
MRI. The first out of the box in 1975 was positron emission tomography, or PET, which was developed (Phelps et
al., 1975; Ter-Pogossian et al., 1975) to take advantage of the unique decay scheme of positron-emitting
radionuclides (e.g., 15O, 11C, and 18F). With PET, there was now a way to perform quantitative tissue autoradiography
in humans obtaining detailed measurements of blood flow, metabolism, pharmacology, and biochemistry in health
and disease (for a review see Raichle, 1983).

Also in 1975, Louis Sokoloff and his colleagues at the National Institutes of Health introduced another activity
marker in the form of 14C-2-deoxyglucose, which was to play a key role in studies of functional brain mapping in
animals not possible in the human. This isotope of glucose, lacking an oxygen on the number 2 carbon atom, is
taken up by cells and phosphorylated but is not a substrate for further metabolism in the cell. In large doses it had
been used in experiments to block glucose metabolism. Sokoloff and colleagues labeled it with 14C and injected
it into laboratory rats in tracer amounts. Because active cells are exquisitely dependent for their energy needs
on glucose and oxygen delivered by the local vasculature, the labeled compound was trapped unmetabolized
in the active cells and could be visualized by autoradiography to show locations where cells had been active.
This technique provided our first detailed view of regional brain metabolism in the mammalian brain and its
relationship to brain function. The method was introduced by Kennedy et al. (1975; see also Sokoloff, 1984), who
confirmed its validity by demonstrating, among other things, ocular dominance columns in monkey visual cortex.
One of the first new findings with the method, reported in the same year, was evidence not seen before that in the
awake rat, odor stimulation elicits activity patterns in the thin layer of glomeruli in the olfactory bulb (Sharp et al.,
1975).

Other new findings soon followed: mapping of focal seizures (Collins, 1978), circadian rhythms in the
suprachiasmatic nucleus (Schwartz and Gainer, 1977), and changes in single barrels (Durham and Woolsey, 1977).
The method could even be extended to invertebrates, demonstrating with tritiated deoxyglucose odor-elicited
patterns in Drosophila (Rodrigues and Buchner, 1984) and labeling in single neurons in molluscs (Sejnowski et al.,
1980). Animal experiments thus played a critical role in building the legacy of Mosso. In 1979, the 14C-deoxyglucose
technique was adapted for PET utilizing 18F-fluorodeoxyglucose (Reivich et al., 1979; Phelps et al., 1979) and is
now widely used in clinical medicine as well as in research.

From 1984 to 1990, there was an explosion of technical developments associated with functional brain
mapping with PET, including task analysis by image subtraction, stereotaxic image registration and
normalization, and image averaging (e.g., see Petersen et al., 1988). The development of these strategies
became critical to the later development of fMRI.
In 1986, the first of two papers was published (Fox and Raichle, 1986; Fox et al., 1988) that surprised the brain
blood flow and metabolism world by demonstrating with PET that when brain blood flow changes, it does so
more than oxygen consumption. The result was that changes in brain activity were heralded by changes in the
oxygenation of hemoglobin. Importantly, MRI is very sensitive to the oxygenation of hemoglobin (Pauling and
Coryell, 1936), setting the stage for the later development of fMRI. In 1990, Seiji Ogawa and his colleagues at Bell
Laboratories proposed the use of deoxyhemoglobin as an MRI contrast agent for functional brain imaging and
coined the term BOLD for blood oxygen level-dependent contrast. BOLD fMRI was, however, not the first
functional imaging done with MRI. That occurred in 1991 when Jack Belliveau and his colleagues performed the
first fMRI using an intravenously administered contrast agent (Belliveau et al., 1991). While this work signaled
the arrival of fMRI, it could not compete with PET because only a limited number of measurements could be
made in each subject due to the need to readminister the contrast agent. That all changed in 1992, when four
groups almost simultaneously introduced BOLD fMRI (Bandettini et al., 1992; Frahm et al., 1992; Kwong et
al., 1992; Ogawa et al., 1992).

From that time BOLD fMRI has transformed both animal and human studies of brain function. One driving force
has been ever more powerful magnets, currently up to 7 Tesla for humans and 12 Tesla for animal experiments. This
makes it possible to achieve resolutions far below the regional level, in the mouse down to 100-µm voxels or below,
enabling the investigation of changes in blood flow at the level of layers, barrels, and glomeruli (Yang et al., 1998;
Xu et al., 2003). Use of two-photon microscopy in animal experiments is revealing blood flow changes in relation to
sensory responses at the glomerular, cellular, and capillary level (Lecoq et al., 2009). At these levels, Mosso's early
vision merges with modern cellular and molecular brain studies.

For cognitive studies in humans, the BOLD fMRI package was completed in 1996 with the introduction of event-
related fMRI (Buckner et al., 1998), which freed investigators to employ the true sophistication of behavioral
paradigms from cognitive psychology. The impact of fMRI on cognitive neuroscience has been remarkable.
Since the introduction of fMRI, there have been nearly 13,000 refereed publications employing the technique
largely but not exclusively in humans of all ages. Most of this work is being performed on MR instruments
dedicated to research, in contrast to the early days in which work was done, after hours, on hospital instruments.

The study of human cognition with blood flow-based methods such as PET and MRI was aided greatly by the
involvement of cognitive psychologists in the 1980s whose experimental designs for dissecting human
behaviors using information-processing theory fit extremely well with emerging functional brain imaging
strategies (Posner and Raichle, 1994). It may well have been the combination of cognitive psychology and
systems neuroscience with brain imaging that lifted this work from a state of indifference and obscurity in the
neuroscience community in the 1970s to its current role of prominence. Remarkably, the basic behavioral strategy
employed in this work was developed at almost the same time that Mosso was making his observations on the
effects of behavior on brain circulation.

This strategy for dissecting human behavior that became the centerpiece of functional brain imaging was based on
a concept introduced by a contemporary of Mosso, the Dutch physiologist Franciscus C. Donders (for a recent
review of his work in English, see Draaisma, 2002). In 1868, Donders proposed a general method to measure
thought processes based on a simple logic. He subtracted the time needed to respond to a light (say, by pressing
a key) from the time needed to respond to a particular color of light. He found that discriminating color required
about 50 msec. In this way, Donders isolated and measured a mental process for the first time by subtracting a
control state (i.e., responding to a light) from a task state (i.e., discriminating the color of the light). In the
modern era of functional brain imaging, this strategy was first fully implemented in a study of single word
processing (Petersen et al., 1988). Since then it has been exploited with exponentially increasing sophistication in
the functional imaging world. In 1987, a meeting attended by representatives of the scientific community and
various research foundations was held at the headquarters of the James S. McDonnell Foundation in St Louis. At
this meeting, it was decided that the time had come to explore the possibility of developing a new field of
research that combined the quantitative measurement of behavior, as exemplified by the branch of
psychology known as cognitive psychology, and systems neuroscience at the human level employing the
rapidly developing techniques in functional brain imaging. After 2 years of planning, a program, largely focused
on training, was initiated with funds from the McDonnell Foundation and the Pew Charitable Trusts. In addition to
institutional and individual investigator funding, the program ran a summer workshop first at Harvard and then at
Dartmouth. The McDonnell-Pew program in cognitive neuroscience continued for 13 years and quite literally
created the field of cognitive neuroscience. A cadre of newly trained young people was prepared to take proper
advantage of the developments in imaging and other cognitive methods. Two important new directions have
emerged.

First, one of the most remarkable discoveries to emerge from functional imaging of the brain is the role of
ongoing or intrinsic activity. It consumes the vast majority of the brain's enormous energy budget (fully 20% of
the body's energy requirements in adults) as compared to task-evoked activity (Raichle, 2010a, 2010b; Raichle and
Mintun, 2006). Furthermore, this ongoing activity exhibits a remarkable degree of organization that can be
interrogated through the spontaneous fluctuations in BOLD fMRI (Fox and Raichle, 2007; Raichle, 2010b, 2011) in
subjects of all ages who are awake, asleep, or under general anesthesia. Its study also has revealed many new
insights into the pathophysiology of disease (Zhang and Raichle, 2010).

Despite the several observations we have chosen to highlight and the many more that fill his book, it would have
been impossible for Mosso and his contemporaries to imagine what their work foretold about brain activity
mapping as we see it today. And the story continues to unfold. At present, the work includes not only the creation
of brain maps during a countless variety of mental states including quiet repose and unconsciousness, in
humans as well as in laboratory animals, but also how these maps change across the life span in health and
disease.

A critical challenge is to establish a deeper understanding of the nature of the brain imaging signals, particularly the BOLD signal, as a means of relating brain to behavior. This work includes attempts to understand not only its relationship to brain metabolism (Raichle and Mintun, 2006) but also its relationship to the underlying neurophysiology of individual neurons and microcircuits (e.g., see He et al., 2008; Logothetis et al., 2001). When thoughtfully considered, it has become apparent that all parties to this important endeavor need to pause and consider carefully how brain circulation, metabolism, biochemistry, and electricity relate to one another and to our understanding of human behavior. A call for such a broad-ranging view of science is not new or related just to the brain (e.g., see Smith, 1968). In sum, as we harvest the fruits of our present labors and contemplate the future, we should have a deeper appreciation for the roles played by Mosso and the many other pioneers on whose shoulders we stand. We hope that the translation of this important early work will serve as a reminder of our debt to Angelo Mosso, an individual who truly exemplified what it means to be a great scientist. He was someone, to paraphrase his own words, whose accomplishments richly merit being plucked from undeserved neglect and made a model against which to judge our own advances.

Bibliografie 7
Robert Newman – Neuropolis: A Brain Science Survival Guide, p.9-15;

1. VOXEL & I
From the get-go, it is important to remind ourselves that brain-imaging does not actually film your brain in
action.

There is no live action footage of thoughts or feelings. No one will ever be able to read your mind — except your
mum. Brains do not light up during functional magnetic resonance imaging (fMRI) and electroencephalography
(EEG). Strictly speaking fMRI and EEG are not techniques of brain imaging but of blood imaging (fMRI) or electrical
imaging (EEG), since they map blood flows or electrical activity on the scalp to different brain regions on the
working hypothesis that active neurons devour more oxygen, and blood is the brain's oxygen delivery service.

On 17 May 2016 the Proceedings of the National Academy of Sciences of the USA published the first comprehensive
review of 25 years of fMRI data. The conclusions were damning:

In theory, we should find 5 per cent false positives ... but instead we found that the most common software
packages for fMRI analysis ... can result in false-positive rates of up to 70 per cent. These results question the
validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.

In 2009, the journal Perspectives on Psychological Science published 'Puzzlingly High Correlations in fMRI Studies
of Emotion, Personality and Social Cognition' (original title: 'Voodoo Correlations in Social Neuroscience'). Two
of the paper's authors, Ed Vul and Harold Pashler, first became suspicious when they heard a conference speaker
claim he could predict from brain images how quickly someone would walk out of a room two hours later. This
had to be voodoo. Imagine a scenario in which roaring floodwaters smash the office windows. Your colleagues flee,
but then turn back to see you stranded in the rising water.

'Save yourself!' they cry.

'Run for your life!'

'You go on ahead,' you holler back.

'I'm not gonna make it. I decided a couple of hours back on an airy saunter through the doorway.'

'Well, sashay for your life! Mince like you've never minced before!'

Vul et al. set about re-examining the data. They surveyed the authors of 55 published fMRI papers and found that
half acknowledged using a strategy that cherry-picked only those voxels exceeding chosen thresholds. These
cherry-picked voxels were then averaged out as if they were the average of all voxels, not just the ones that fit the
hypothesis they were supposed to prove. This strategy, says Ed Vul, 'inflates correlations while yielding
reassuring-looking scattergrams.' Voxels are the organisation of statistical correlations into cuboid 3D pixels. Each
cube represents a selective sample of billions of brain cells. They provide a computer-generated image of what
brain activity would look like if cherry-picked statistics matched raw data.

Together the cubes build a Minecraft map of the mind.

1.To form each cuboid voxel, you collate all the neuronal clusters that have a blood oxygen level of x at split-second
0.0000001 with all the ones that have a value x at split-second 0.0000007.

2.Junk all the non-x brain cell activity going on between 0.0000002 and 0.0000006. (Call it 'noise'.)

3.Now amalgamate your cherry-picked voxel with other voxels (themselves boxes of cherry-picked data), and there
you have your fMRI picture showing which region of the brain spontaneously 'lights up' when we are thinking about
love or loss or buying a house.

There you have the murky world of the technicolour voxel. Ed Vul et al. call this strategy 'non-independent
analysis'. To illustrate how this strategy inflates correlations they used it to show how daily share values on the New
York Stock Exchange could be accurately 'predicted' by the recorded fluctuations in temperature at a weather
station on Adak Island in Alaska. Here's how it works:

Non-independent analysis simply skims the strongest correlations between each of the 3,315 stocks being
offered on Wall Street, and finds a handful whose value appears to strongly correlate with the previous day's
temperature drops on the windswept Alaskan tundra. 'For $50, we will provide the list of stocks to any interested
reader,' wrote Ed Vul. 'That way, you can buy the stock every morning when the weather station posts a drop in
temperature, and sell when the temperature goes up.' Not long after came the banking crash. It turned out that
Wall Street had been using some 'non-independent analysis' of its own.
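
The inflation Vul and colleagues describe is easy to reproduce with pure noise. The sketch below, an illustration rather than a re-analysis of their data, correlates thousands of unrelated random series with a target, keeps only the ones that cross an arbitrary threshold, and then reports the average correlation of that hand-picked subset.

```python
# Demonstration with pure noise of why "non-independent analysis" inflates
# correlations: select only the series that happen to correlate with the target,
# then report the correlation computed on that cherry-picked subset.
import numpy as np

rng = np.random.default_rng(1)
n_days, n_series = 100, 3315                # e.g., days of weather vs. "stocks" (or voxels)
target = rng.normal(size=n_days)            # the weather station / seed timecourse
noise_series = rng.normal(size=(n_series, n_days))   # unrelated to the target by construction

corrs = np.array([np.corrcoef(target, s)[0, 1] for s in noise_series])
selected = corrs[np.abs(corrs) > 0.25]      # keep only the "impressive-looking" ones

print(f"true relationship: none; mean |r| over all series: {np.abs(corrs).mean():.3f}")
print(f"{selected.size} cherry-picked series; their mean |r|: {np.abs(selected).mean():.3f}")
```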

In the same way that fMRI false positives are boxed up into cuboid voxels, Wall Street was boxing bad debt into
mortgage bonds and 'collateralised debt obligations', or CDOs, better known as the wobbly stack of Jenga blocks
Ryan Gosling eloquently demolishes in The Big Short. And yet ponzi voxels were to rescue ponzi bonds.
Newsrooms used brain-imaging data to explain the banking crash. Neuroscience helped shift the blame from
banks to brains, and from rich to poor. It turned out that the system of short-term greed that caused the banking
crash was the limbic system. Neuroeconomists popped up on the nightly news to explain that sub-prime
mortgages were entered into by people who let their limbic system's urge for instant gratification triumph
over the prudence of their prefrontal lobes. Those who lack the mental strength to resist the limbic system's short-
term greed, it turned out, would always make bad property investments. Grotesquely, ponzi voxels rescued ponzi
bonds by shifting the blame onto the feckless poor.

A huge jar of sweets in a sweetshop window

To want to understand human behaviour is a human need, but frustratingly the answers are always complex and
incomplete. There is no royal road to the truth, just a multiplicity of weakly-acting causal pathways. And so when
we are shown a new technology that appears to answer our deepest questions, it is only human for us to want to fill
our boots. EEG and fMRI are what we have been looking for all along: shiny machines that produce simple
answers to complex questions. Better yet, these answers come in the form of vivid arrangements of 3D voxels,
like a huge jar of sweets in a sweet shop window. In the rush for a quick-fix answer to a complex problem did any
neuroeconomist or Newsnight presenter ever think to blame their own limbic system for overpowering the
prefrontal cortex? Why wasn't the way they themselves snatched at simplistic answers symptomatic of short-term
neural reward circuitry? One of several experiments to which neuroeconomists alluded in the wake of the banking crash was an investigation into 'neural reward circuitry', which measured blood oxygen levels in different brain areas when people were offered five dollars now, and when they were offered forty dollars six weeks from now. The instant five-dollar cash offer represents the sub-prime mortgage. But this is the economics of the Wendy house. 'Only a behavioural economist,' says philosopher and neuroscientist Raymond Tallis, 'would regard responses to a simple imaginary choice [$5 now or $40 later] as an adequate model for the complex business of securing a mortgage. Even the most foolish and "impulsive" mortgage decision requires an enormous amount of future planning, persistence, clerical activity, toing and froing, and a clear determination to sustain you through the million little steps it involves. I would love to meet the limbic system that could drive all that.'

I am keen to draw a sharp distinction between MRI's medical applications and its use in waffle about the neural
basis of poor investment decisions and the like. Brain-imaging helps oncologists track the success of different
treatments in halting the spread of brain tumours. MRI can show the rate at which dementia is progressing. It
can be used to assess the extent of damage caused by a stroke and to predict the likely recovery of brain and
body function. I would probably not be able to walk but for MRI. Thanks to the magnetic resonance imaging
machine at London's Royal Free Hospital, surgeons could tell at a glance that they needed to perform an
emergency discectomy and laminectomy on my spine. The Registrar told me there was a two per cent chance
that I would emerge from surgery doubly incontinent, in a wheelchair and in unbearable agony for the rest of my
life. Sign here. But thanks to the skill and expertise of the surgeons, and thanks to magnetic resonance imaging
showing them exactly where to go and what to do when they got there, I am back on my feet.
None of the medical applications just mentioned involve voxels, those 3D pixels made from crunched
numbers. The voxel is a monument to the confusion of mythology with science. The wonderful medical uses of MRI
lend credibility to all the mythologising. It's only twenty-five years since brain-imaging got going. You might think
the novelty of brain-imaging would make us less prone to mythologise the brain, but in fact it makes us more so.
We have been mythologising the heart long enough to know when we are doing it. We don't confuse love hearts
for real ones. With brains it is different.

Bibliografie 8
Russell A. Poldrack; Jeanette A. Mumford; Thomas E. Nichols – Handbook of Functional MRI Data Analysis, p.ix; 1-9; 173-178;

Preface

Functional magnetic resonance imaging (fMRI) has, in less than two decades, become the most commonly used method for the study of human brain function. fMRI is a technique that uses magnetic resonance imaging to measure brain activity by measuring changes in the local oxygenation of blood, which in turn reflects the amount of local brain activity. The analysis of fMRI data is exceedingly complex, requiring the use of sophisticated techniques from signal and image processing and statistics in order to go from the raw data to the finished product, which is generally a statistical map showing which brain regions responded to some particular manipulation of mental or perceptual functions. There are now several software packages available for the processing and analysis of fMRI data, several of which are freely available.

Since its development in the early 1990s, fMRI has taken the scientific world by storm. This growth is easy to
see from the plot of the number of papers that mention the technique in the PubMed database of biomedical
literature, shown in Figure 1.1. Back in 1996 it was possible to sit down and read the entirety of the fMRI
literature in a week, whereas now it is barely feasible to read all of the fMRI papers that were published in the
previous week! The reason for this explosion in interest is that fMRI provides an unprecedented ability to safely
and noninvasively image brain activity with very good spatial resolution and relatively good temporal resolution
compared to previous methods such as positron emission tomography (PET).

Figure 1.2 shows an example of what is known as the hemodynamic response, which is the increase in blood flow
that follows a brief period of neuronal activity. There are two facts about the hemodynamic response that
underlie the basic features of BOLD fMRI and determine how the data must be analyzed.

1.First, the hemodynamic response is slow; whereas neuronal activity may only last milliseconds, the increase in
blood flow that follows this activity takes about 5 seconds to reach its maximum. This peak is followed by a long
undershoot that does not fully return to baseline for at least 15-20 seconds.

2.Second, the hemodynamic response can, to a first approximation, be treated as a linear time-invariant
system (Cohen, 1997; Boynton et al., 1996; Dale, 1999). This topic will be discussed in much greater detail in Chapter
5, but in essence the idea is that the response to a long train of neuronal activity can be determined by adding
together shifted versions of the response to a shorter train of activity. This linearity makes it possible to create
a straightforward statistical model that describes the timecourse of hemodynamic signals that would be expected
given some particular timecourse of neuronal activity, using the mathematical operation of convolution (a minimal
sketch of this convolution step appears below).
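
A minimal sketch of that convolution step, using an assumed double-gamma response shape rather than any package's canonical HRF:

```python
# Linear-time-invariant sketch: the predicted BOLD timecourse is the neural event
# train convolved with an assumed (double-gamma) hemodynamic response function.
import numpy as np
from scipy.stats import gamma

dt = 0.1                                            # seconds per sample
t = np.arange(0, 30, dt)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)     # peak near 5 s, later undershoot (assumed shape)
hrf /= hrf.max()

events = np.zeros(600)                              # one minute of "neural activity" at 0.1 s resolution
events[[50, 70, 300]] = 1.0                         # brief events at 5, 7, and 30 s

predicted = np.convolve(events, hrf)[:events.size]  # sum of shifted single-event responses
print("predicted signal peaks at t =", round(predicted.argmax() * dt, 1), "s")
```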

1.1.2 Magnetic resonance imaging

Figure 1.1 A plot of the number of citations in the PubMed database matching the query [“fMRI” OR “functional MRI”
OR “functional magnetic resonance imaging”] for every year since 1992.

The incredible capabilities of magnetic resonance imaging (MRI) can hardly be overstated. In less than 10
minutes, it is possible to obtain images of the human brain that rival the quality of a postmortem examination,
in a completely safe and noninvasive way. Before the development of MRI, imaging primarily relied upon the use
of ionizing radiation (as used in X-rays, computed tomography, and positron emission tomography). In addition to
the safety concerns about radiation, none of these techniques could provide the flexibility to image the broad range
of tissue characteristics that can be measured with MRI. Thus, the establishment of MRI as a standard medical
imaging tool in the 1980s led to a revolution in the ability to see inside the human body.

1.2 The emergence of cognitive neuroscience

Our fascination with how the brain and mind are related is about as old as humanity itself. Until the development
of neuroimaging methods, the only way to understand how mental function is organized in the brain was to
examine the brains of individuals who had suffered damage due to stroke, infection, or injury. It was through these
kinds of studies that many early discoveries were made about the localization of mental functions in the brain
(though many of these have come into question subsequently). However, progress was limited by the many
difficulties that arise in studying brain-damaged patients (Shallice, 1988). In order to better understand how mental
functions relate to brain processes in the normal state, researchers needed a way to image brain function while
individuals performed mental tasks designed to manipulate specific mental processes.

Figure 1.2. An example of the hemodynamic responses evoked in area V1 by a contrast-reversing checkerboard
displayed for 500 ms. The four different lines are data from four different individuals, showing how variable these
responses can be across people. The MRI signal was measured every 250 ms, which accounts for the noisiness of
the plots. (Data courtesy of Stephen Engel, University of Minnesota)

In the 1980s several groups of researchers (principally at Washington University in St. Louis and the Karolinska
Institute in Sweden) began to use positron emission tomography (PET) to ask these questions. PET measures the
breakdown of radioactive materials within the body. By using radioactive tracers that are attached to biologically
important molecules (such as water or glucose), it can measure aspects of brain function such as blood flow or
glucose metabolism. PET showed that it was possible to localize mental functions in the brain, providing the first
glimpses into the neural organization of cognition in normal individuals (e.g., Posner et al., 1988). However, the
use of PET was limited due to safety concerns about radiation exposure, and due to the scarce availability of
PET systems.
fMRI provided exactly the tool that cognitive neuroscience was looking for.

1.First, it was safe, which meant that it could be used in a broad range of individuals, who could be scanned
repeatedly many times if necessary. It could also be used with children, who could not take part in PET studies
unless the scan was medically necessary.

2.Second, by the 1990s MRI systems had proliferated, such that nearly every medical center had at least one
scanner and often several. Because fMRI could be performed on many standard MRI scanners (and today on nearly
all of them), it was accessible to many more researchers than PET had been.

3.Finally, fMRI had some important technical benefits over PET. In particular, its spatial resolution (i.e., its
ability to resolve small structures) was vastly better than PET. In addition, whereas PET required scans lasting at
least a minute, with fMRI it was possible to examine events happening much more quickly. Cognitive
neuroscientists around the world quickly jumped on the bandwagon, and thus the growth spurt of fMRI began.

1.3 A brief history of fMRI analysis

Figure 1.3. Early fMRI images from Kwong et al. (1992). The left panel shows a set of images starting with the baseline
image (top left), and followed by subtraction images taken at different points during either visual stimulation or
rest. The right panel shows the timecourse of a region of interest in visual cortex, showing signal increases that
occur during periods of visual stimulation.

When the first fMRI researchers collected their data in the early 1990s, they also had to create the tools to
analyze the data, as there was no "off-the-shelf" software for analysis of fMRI data. The first experimental
designs and analytic approaches were inspired by analysis of blood flow data using PET. In PET blood flow studies,
acquisition of each image takes at least one minute, and a single task is repeated for the entire acquisition. The
individual images are then compared using simple statistical procedures such as a t-test between task and resting
images. Inspired by this approach, early studies created activation maps by simply subtracting the average
activation during one task from activation during another.
For example, in the study by Kwong et al. (1992), blocks of visual stimulation were alternated with blocks of no
stimulation. As shown in Figure 1.3, the changes in signal in the visual cortex were evident even from inspection of
single subtraction images. In order to obtain statistical evidence for this effect, the images acquired during the
stimulation blocks were compared to the images from the no-stimulation blocks using a simple paired t-test.
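
A minimal sketch of this early block-design analysis, on simulated numbers rather than the original data: average the signal within alternating rest and stimulation blocks and compare the block means with a paired t-test.

```python
# Block-design sketch: simulate one voxel across alternating rest/task blocks,
# average each block, and compare block means with a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_pairs, scans_per_block = 8, 10
voxel = []
for b in range(n_pairs * 2):                      # alternating rest/task blocks
    active = (b % 2 == 1)
    mean_signal = 101.0 if active else 100.0      # roughly 1% signal change during task (assumed)
    voxel.append(rng.normal(mean_signal, 0.8, scans_per_block))
voxel = np.array(voxel)

rest_means = voxel[0::2].mean(axis=1)
task_means = voxel[1::2].mean(axis=1)
t, p = stats.ttest_rel(task_means, rest_means)    # paired across rest/task block pairs
print(f"t({n_pairs - 1}) = {t:.2f}, p = {p:.4f}")
```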
This approach provided an easy way to find activation, but its limitations quickly became evident.

1.First, it required long blocks of stimulation (similar to PET scans) in order to allow the signal to reach a steady
state. Although feasible, this approach in essence wasted the increased temporal resolution available from fMRI
data.

2.Second, the simple t-test approach did not take into account the complex temporal structure of fMRI data,
which violated the assumptions of the statistics. Researchers soon realized that the greater temporal resolution
of fMRI relative to PET permitted the use of event-related (ER) designs, where the individual impact of relatively
brief individual stimuli could be assessed. The first such studies used trials that were spaced very widely in time (in
order to allow the hemodynamic response to return to baseline) and averaged the responses across a time
window centered around each trial (Buckner et al., 1996). However, the limitations of such slow event-related
designs were quickly evident; in particular, it required a great amount of scan time to collect relatively few
trials. The modeling of trials that occurred more rapidly in time required a more fundamental understanding of the
BOLD hemodynamic response function (HRF). A set of foundational studies (Boynton et al., 1996; Vazquez & Noll, 1998;
Dale & Buckner, 1997) established the range of event-related fMRI designs for which the BOLD response behaved
as a linear time-invariant system, which was roughly for events separated by at least 2 seconds.

The noise in raw fMRI data also was a challenge, particularly with regard to the extreme low frequency
variation referred to as "drift." Early work systematically examined the sources and nature of this noise and
characterized it as a combination of physiological effects and scanner instabilities (Smith et al., 1999; Zarahn
et al., 1997; Aguirre et al., 1997), though the sources of drift remain somewhat poorly understood. The drift was
modeled by a combination of filters or nuisance regressors, or using temporal autocorrelation models (Woolrich et
al., 2001).
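
One simple version of the nuisance-regressor approach to drift can be sketched as follows; the polynomial order and the simulated drift are illustrative assumptions, not the specific filters used by any particular package.

```python
# Drift-removal sketch: fit low-order polynomial nuisance regressors to a voxel
# timecourse by least squares and subtract the fitted slow trend.
import numpy as np

rng = np.random.default_rng(3)
n = 200
t = np.arange(n) / n                                    # scan index, normalized to [0, 1)
drift = 4.0 * t + 1.5 * np.sin(np.pi * t)               # slow, low-frequency drift (simulated)
voxel = rng.normal(0.0, 1.0, n) + drift

X = np.column_stack([np.ones(n), t, t**2])              # constant + linear + quadratic trend
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
detrended = voxel - X @ beta                            # remove the fitted slow component

print("std before:", round(voxel.std(), 2), " after detrending:", round(detrended.std(), 2))
```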

Since 2000, a new approach to fMRI analysis has become increasingly common, which attempts to analyze the
information present in patterns of activity rather than the response at individual voxels. Known variously as
multi-voxel pattern analysis (MVPA), pattern information analysis, or machine learning, these methods
attempt to determine the degree to which different conditions (such as different stimulus classes) can be
distinguished on the basis of fMRI activation patterns, and also to understand what kind of information is present
in those patterns. A particular innovation of this set of methods is that they focus on making predictions about new
data, rather than simply describing the patterns that exist in a particular data set.
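
A minimal sketch of the pattern-classification idea, using scikit-learn purely for illustration (the text does not prescribe a particular package): train a classifier on multi-voxel patterns and ask whether it predicts the condition of held-out trials better than chance.

```python
# MVPA-style sketch: cross-validated decoding of two simulated stimulus classes
# from multi-voxel activation patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)          # two stimulus classes
patterns = rng.normal(0, 1, (n_trials, n_voxels))
patterns[labels == 1, :5] += 0.8                   # weak class information in a few voxels (assumed)

acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5)
print("cross-validated decoding accuracy:", acc.mean().round(2), "(chance = 0.5)")
```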

1.4 Major components of fMRI analysis

The analysis of fMRI data is made complex by a number of factors.

1.First, the data are liable to a number of artifacts, such as those caused by head movement.

2.Second, there are a number of sources of variability in the data, including variability between individuals and
variability across time within individuals.

3.Third, the dimensionality of the data is very large, which causes a number of challenges in comparison to the
small datasets that many scientists are accustomed to working with.

The major components of fMRI analysis are meant to deal with each of these problems.

They include:
• Quality control: Ensuring that the data are not corrupted by artifacts.

• Distortion correction: The correction of spatial distortions that often occur in fMRI images.

• Motion correction: The realignment of scans across time to correct for head motion.

• Slice timing correction: The correction of differences in timing across different slices in the image.

• Spatial normalization: The alignment of data from different individuals into a common spatial framework so that
their data can be combined for a group analysis.

• Spatial smoothing: The intentional blurring of the data in order to reduce noise (a brief sketch of this step appears after this list).

• Temporal filtering: The filtering of the data in time to remove low-frequency noise.

• Statistical modeling: The fitting of a statistical model to the data in order to estimate the response to a task or
stimulus.

• Statistical inference: The estimation of statistical significance of the results, correcting for the large number of
statistical tests performed across the brain.

• Visualization: Visualization of the results and estimation of effect sizes.

The goal of this book is to outline the procedures involved in each of these steps.
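
As one concrete example from this list, here is a minimal sketch of spatial smoothing under assumed voxel dimensions: blur a simulated volume with a Gaussian kernel whose size is specified as FWHM in millimeters and converted to a per-axis sigma in voxels.

```python
# Spatial smoothing sketch: Gaussian blur of a volume, with kernel size given
# as FWHM (mm) and converted to sigma in voxel units for each axis.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
volume = rng.normal(0, 1, (64, 64, 30))              # a fake single-timepoint volume
voxel_size_mm = np.array([3.0, 3.0, 4.0])            # assumed voxel dimensions

fwhm_mm = 8.0
sigma_vox = (fwhm_mm / voxel_size_mm) / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> sigma
smoothed = gaussian_filter(volume, sigma=sigma_vox)

print("voxelwise std before:", volume.std().round(2),
      " after smoothing:", smoothed.std().round(2))
```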

Software packages for fMRI analysis

In the early days of fMRI, nearly every lab had its own home-grown software package for data analysis, and
there was little consistency between the procedures across different labs. As fMRI matured, several of these in-
house software packages began to be distributed to other laboratories, and over time several of them came to be
distributed as full-fledged analysis suites, able to perform all aspects of analysis of an fMRI study.

Package          Developer                    Platforms              Licensing
SPM              University College London    MATLAB (Win/Linux)     Open-source
FSL              Oxford University            Linux, Mac             Open-source
AFNI             NIH                          Linux                  Open-source
Brain Voyager    Brain Innovation             Windows, Mac, Linux    Closed-source

Table 1.1 An overview of major fMRI software packages

1.5.1 SPM

SPM (which stands for Statistical Parametric Mapping) was the first widely used and openly distributed software
package for fMRI analysis. Developed by Karl Friston and colleagues in the lab then known as the Functional
Imaging Lab (or FIL) at University College London, it started in the early 1990s as a program for analysis of PET data
and was then adapted in the mid-1990s for analysis of fMRI data. It remains the most popular software package for
fMRI analysis. SPM is built in MATLAB, which makes it accessible on a very broad range of computer platforms. In
addition, MATLAB code is relatively readable, which makes it easy to look at the code and see exactly what is being
done by the programs.

Even if one does not use SPM as a primary analysis package, many of the MATLAB functions in the SPM
package are useful for processing data, reading and writing data files, and other functions. SPM is also
extensible through its toolbox functionality, and a large number of extensions are available via the SPM Web site.
One unique feature of SPM is its connectivity modeling tools, including psychophysiological interaction (Section
8.2.4) and dynamic causal modeling (Section 8.3.4). The visualization tools available with SPM are relatively limited,
and many users take advantage of other packages for visualization.
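SPM's own MATLAB utilities are not reproduced here, but the kind of low-level file handling they provide (reading an image, working with the voxel array, writing a result) can be sketched in Python with the nibabel library; the file names are placeholders.

```python
import nibabel as nib

img = nib.load("bold.nii.gz")        # reads the header, affine, and (lazily) the voxel data
data = img.get_fdata()               # voxel array, e.g. shape (x, y, z, t) for a 4D series
mean_volume = data.mean(axis=-1)     # a simple derived image: the temporal mean

out = nib.Nifti1Image(mean_volume, affine=img.affine)
nib.save(out, "bold_mean.nii.gz")    # write the result back to disk in NIfTI format
```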

1.5.2 FSL

FSL (which stands for FMRIB Software Library) was created by Stephen Smith and colleagues at Oxford University,
and first released in 2000. FSL has gained substantial popularity in recent years, due to its implementation of a
number of cutting-edge techniques.

1. First, FSL has been at the forefront of statistical modeling for fMRI data, developing and implementing a number of novel modeling, estimation, and inference techniques that are implemented in its FEAT, FLAME, and RANDOMISE modules.

2. Second, FSL includes a robust toolbox for independent components analysis (ICA; see Section 8.2.5.2), which has become very popular both for artifact detection and for modeling of resting-state fMRI data (a toy illustration of the idea behind ICA appears after this list).

3. Third, FSL includes a sophisticated set of tools for analysis of diffusion tensor imaging data, which is used to
analyze the structure of white matter. FSL includes an increasingly powerful visualization tool called FSLView,
which includes the ability to overlay a number of probabilistic atlases and to view time series as a movie. Another
major advantage of FSL is its integration with grid computing, which allows for the use of computing clusters to
greatly speed the analysis of very large datasets.
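FSL's ICA implementation itself is not shown here, but the idea behind an ICA decomposition, separating mixed measurements into statistically independent components that can then be labeled as signal or artifact, can be illustrated with scikit-learn's FastICA on simulated time-by-voxel data; the simulated sources and dimensions are assumptions made only for the illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 200, 300

# Two simulated sources: a slow oscillation and a square-wave "artifact", each with its own spatial map.
time_courses = np.column_stack([np.sin(np.linspace(0, 20, n_timepoints)),
                                np.sign(np.sin(np.linspace(0.1, 7, n_timepoints)))])
spatial_maps = rng.normal(size=(2, n_voxels))
data = time_courses @ spatial_maps + 0.5 * rng.normal(size=(n_timepoints, n_voxels))

# ICA unmixes the data into independent components; in fMRI these are inspected
# and labeled either as signal of interest or as artifacts to be removed.
ica = FastICA(n_components=2, random_state=0)
recovered_time_courses = ica.fit_transform(data)   # shape (n_timepoints, n_components)
recovered_maps = ica.mixing_.T                     # shape (n_components, n_voxels)
print(recovered_time_courses.shape, recovered_maps.shape)
```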

1.5.3 AFNI

AFNI (which stands for Analysis of Functional NeuroImages) was created by Robert Cox and his colleagues, first at the Medical College of Wisconsin and then at the National Institute of Mental Health. AFNI was developed
during the very early days of fMRI and has retained a loyal following. Its primary strength is in its very powerful
and flexible visualization abilities, including the ability to integrate visualization of volumes and cortical
surfaces using the SUMA toolbox. AFNI's statistical modeling and inference tools have historically been less
sophisticated than those available in SPM and FSL. However, recent work has integrated AFNI with the R statistical
package, which allows use of more sophisticated modeling techniques available within R.

1.5.4 Other important software packages

Brain Voyager, produced by Rainer Goebel and colleagues at Brain Innovation, is the major commercial software
package for fMRI analysis. It is available for all major computing platforms and is particularly known for its ease of
use and refined user interface.

FreeSurfer is a package for anatomical MRI analysis developed by Bruce Fischl and colleagues at the Massachusetts
General Hospital. Even though it is not an fMRI analysis package per se, it has become increasingly useful for fMRI
analysis because it provides the means to automatically generate both cortical surface models and anatomical
parcellations with a minimum of human input.

Visualizing, localizing, and reporting fMRI data

The dimensionality of fMRI data is so large that, in order to understand the data, it is necessary to use
visualization tools that make it easier to see the larger patterns in the data. Parts of this chapter are adapted
from Devlin & Poldrack (2007) and Poldrack (2007).

10.1 Visualizing activation data

It is most useful to visualize fMRI data using a tool that provides simultaneous viewing in all three canonical
orientations at once (see Figure 10.1), which is available in all of the major analysis packages. Because we wish to
view the activation data overlaid on brain anatomy, it is necessary to choose an anatomical image to serve as an
underlay. This anatomical image should be as faithful as possible to the functional image being overlaid.

When viewing an individual participant's activation, the most accurate representation is obtained by overlaying the
statistical maps onto that individual's own anatomical scan coregistered to the functional data. When viewing
activation from a group analysis, the underlay should reflect the anatomical variability in the group as well as the
smoothing that has been applied to the fMRI data. Overlaying the activation on the anatomical image from a single subject, or on a single-subject template, implies a degree of anatomical precision that is not actually
present in the functional data. Instead, the activation should be visualized on an average structural image from
the group coregistered to the functional data, preferably after applying the same amount of spatial smoothing
as was applied to the functional data. Although this appears less precise due to the blurring of macroanatomical
landmarks, it accurately reflects the imprecision in the functional data due to underlying anatomical variability
and smoothing. After choosing an appropriate underlay, it is then necessary to choose how to visualize the
statistical map. Most commonly, the map is visualized after thresholding, showing only those regions that exhibit
significant activation at a particular threshold. For individual maps and exploratory group analyses, it is
often useful to visualize the data at a relatively weak threshold (e.g., p < .01, uncorrected) in order to obtain a
global view of the data. It can also be useful to visualize the data in an unthresholded manner, using truecolor
maps (see Figure 10.2). This gives a global impression of the nature of the statistical map and also provides a view of which regions are not represented in the statistical map due to dropout or artifacts, as well as showing both positive and negative effects. For inference, however, we always suggest using a corrected threshold.

Figure 10.1. An example of group fMRI activation overlaid on orthogonal sections, viewed using FSLView. The underlay is the mean anatomical image from the group of subjects.

Another way to view activation data is to project them onto the cortical surface, as shown in Figure 10.2. This can be a very useful way to visualize activation, as it provides a three-dimensional perspective that can be difficult to gain from anatomical slices alone. In addition, data can be registered by alignment of cortical surface features and then analyzed in surface space, which can sometimes provide better alignment across subjects than
volume-based alignment (e.g., Desai et al., 2005). However, these methods often require substantial processing
time and manual intervention to accurately reconstruct the cortical surface. Recently, a method has been
developed that allows projection of individual or group functional activation onto a population-based surface atlas
(Van Essen, 2005). This method, known as multifiducial mapping, maps the activation data from a group analysis
onto the cortical surfaces of a group of (different) subjects and then averages those mappings, thus avoiding the
bias that would result from mapping group data onto a single subject's surface. Although individual
reconstruction will remain the gold standard for mapping activation data to the cortical surface, the
multifiducial mapping technique (implemented in the CARET software package) provides a useful means for
viewing projections of group activation data on a population-averaged cortical surface.
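CARET and multifiducial mapping are not demonstrated here, but the basic operation of projecting a volumetric group map onto an average cortical surface can be sketched with nilearn and the fsaverage template; the input file name group_zstat.nii.gz and the threshold are assumptions.

```python
from nilearn import datasets, surface, plotting

fsaverage = datasets.fetch_surf_fsaverage()              # population-average surface template
texture = surface.vol_to_surf("group_zstat.nii.gz",      # sample the volume along the cortical mesh
                              fsaverage.pial_left)

plotting.plot_surf_stat_map(fsaverage.infl_left, texture,
                            hemi="left", threshold=3.1,  # show suprathreshold vertices only
                            bg_map=fsaverage.sulc_left,
                            title="Group activation, left hemisphere")
plotting.show()
```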

Figure 10.2. Various ways of viewing fMRI activation maps. Left panel: Standard thresholded activation maps, showing positive activation in red/yellow and negative activation in blue/green. Middle panel: True color map of
the same data, showing positive signals in red and negative signals in blue, created using the MRIcron software
package. Right panel: Rendering of the same data onto the cortical surface, using the CARET software package.

SPM includes two visualization tools that are often used but can result in mislocalization if used
inappropriately.

1. The first is the "glass brain" rendering that has been used since the early days of PET analysis (see Figure 10.3). This is a three-dimensional projection known as a "maximum intensity projection," as it reflects the strongest signal along each projection through the brain for the given axis. Using the glass brain figures to localize activation requires substantial practice, and even then it is difficult to be very precise. In addition, if large amounts of activation are present, as in the left panel of Figure 10.3, it can be difficult to differentiate activated regions.

2. The second is the SPM "rendering" tool, which is often used to project activation onto the brain surface for visualization (see Figure 10.4). It is important to note, however, that this is not a proper rendering; rather, it is more similar to the maximum intensity projection used for the glass brain. Our recommendation for visualization of group activation on the cortical surface would be to project the data into cortical surface space, using CARET or FreeSurfer, which will lead to more precise localization.
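For comparison, the sketch below produces the two generic kinds of volumetric display discussed above, a maximum-intensity "glass brain" projection and a thresholded overlay on orthogonal anatomical sections, using nilearn as a stand-in for the SPM tools; the file name and threshold are assumptions.

```python
from nilearn import plotting

stat_img = "group_zstat.nii.gz"   # assumed group-level statistical map

# Maximum-intensity "glass brain" projection: a quick overview, but imprecise for localization.
plotting.plot_glass_brain(stat_img, threshold=3.1, colorbar=True,
                          title="Glass brain (maximum intensity projection)")

# Thresholded overlay on orthogonal anatomical sections: better suited to localizing activation.
plotting.plot_stat_map(stat_img, threshold=3.1, display_mode="ortho",
                       title="Thresholded map on anatomical sections")
plotting.show()
```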

10.2 Localizing activation

One of the most common questions on the mailing lists for fMRI analysis software packages is some variant of the
following: “I've run an analysis and obtained an activation map. How can I determine what anatomical regions
are activated?"

As outlined by Devlin & Poldrack (2007), there are two approaches to this question.

1. One (which we refer to as "black-box anatomy") is to use an automated tool to identify activation locations, which requires no knowledge of neuroanatomy.

2. The alternative, which we strongly suggest, is to use knowledge of neuroanatomy along with a variety of atlases to determine which anatomical regions are activated.

Figure 10.3. An example of SPM’s glass brain visualization, showing maximum intensity projections along each axis.
At a lower threshold (shown in the left panel), there are a relatively large number of regions active, which makes
it relatively difficult to determine the location of activation. At a higher threshold (right panel), it becomes easier
to identify the location of activated regions.

Having overlaid activation on the appropriate anatomical image, there are a number of neuroanatomical atlases
available to assist in localization of activation, as well as some useful Web sites (listed on the book Web site).
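As one concrete example of consulting an atlas programmatically, the sketch below looks up the probabilistic Harvard-Oxford cortical label at a single MNI coordinate using nilearn; the coordinate is arbitrary, and, in line with the argument that follows, such automated labels should supplement rather than replace anatomical knowledge.

```python
import numpy as np
from nilearn import datasets, image

atlas = datasets.fetch_atlas_harvard_oxford("cort-maxprob-thr25-2mm")
atlas_img = image.load_img(atlas.maps)
atlas_data = atlas_img.get_fdata()

mni_xyz = np.array([-44.0, 24.0, 10.0, 1.0])            # an arbitrary MNI coordinate (x, y, z, 1)
i, j, k = np.round(np.linalg.inv(atlas_img.affine) @ mni_xyz)[:3].astype(int)

label_index = int(atlas_data[i, j, k])
print(atlas.labels[label_index])                        # index 0 corresponds to "Background"
```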

10.2.1 The Talairach atlas

Probably the best known atlas for human brain anatomy is the one developed by Jean
Talairach (Talairach, 1967; Talairach & Tournoux, 1988). However, as we argued in Devlin & Poldrack (2007), there
are a number of reasons why we do not recommend using this atlas or the various automated tools derived from
it. In short, we believe that localization based on the Talairach atlas is an alluringly easy but bad option. It is easy
because the atlas has an overlaid coordinate system, which makes it trivial to identify an activated location.

It is nonetheless a bad option because this provides a false sense of precision and accuracy, for a number of
reasons:

1. The atlas is based on the single brain of a 60-year-old woman, and is therefore representative neither of the population as a whole nor of any individual.

2. Almost all major analysis packages use the templates based on the MNI305 atlas (Montreal Neurological
Institute) as their target for spatial normalization, which are population-based (and therefore representative)
templates. An extra step is needed to convert coordinates into Talairach space, and this introduces additional
registration error. Worse still, there is no consensus regarding how to perform this transformation (Brett et
al., 2002; Carmack et al., 2004; Lancaster et al., 2007) and therefore the chosen method biases the results,
introducing additional variation and therefore reducing accuracy.

3. The atlas is based on a single left hemisphere that was reflected to model the other hemisphere. However,
there are well-known hemispheric asymmetries in normal individuals (e.g., location of Heschl's gyrus, length of
precentral gyrus), such that assuming symmetry across hemispheres will result in additional inaccuracy.
4. The Talairach atlas is labeled with Brodmann's areas, but the precision of these labels is highly misleading. The labels were transferred manually from Brodmann's map by Talairach, and even according to Talairach the mapping is uncertain (cf. Brett et al., 2002; Uylings et al., 2005).

For all of these reasons, the Talairach atlas is not a good choice. Likewise, we believe that automated coordinate-based labeling methods based on the Talairach atlas (such as the Talairach Daemon; Lancaster et al., 2000) are problematic. We believe that it is much better to take the nominally more difficult, but far more accurate, route of using an anatomical atlas rather than one based on coordinates.

Figure 10.4. The top panel shows an activation map displayed using the SPM Render function, while the bottom
section shows the same data rendered to the cortical surface using CARET's multifiducial mapping function. The
CARET rendering provides substantially greater anatomical detail.
Bibliography 9
Christian Jarrett – Great Myths of the Brain, p. 177-192;
Chapter 6 – Technology and Food Myths

A disproportionate amount of neuroscience press coverage is focused on revelations from brain scan studies, with frequent claims about how neuroimaging technology promises to transform everything from lie detection to marketing. Other myths have emerged about the potential harm of modern technology, especially the Internet, for our brains. Paradoxically, there are also tall tales about the potential for computer-based brain training and neurofeedback to bypass traditional forms of exercise and practice and tune our brains to perfection. This chapter takes an objective look at all this technology hype and fear mongering. It concludes by examining myths that have grown up around the notion of "brain foods" that supposedly boost our IQ or stave off dementia.

Myth 27 - Brain Scans Can Read Your Mind

The mind used to be a black box, welded shut to the outside world. Clues came from introspection, brain-damaged patients, and ingenious psychology experiments that probed the mind's probable inner workings by testing the limits of perception and memory. But then in the 1990s, functional magnetic resonance brain imaging (fMRI) came along and the lid of the box was flung wide open.

Another form of brain scanning known as PET (positron emission tomography) had been around since the late
1960s but it required injection of radioactive isotopes. With the advent of fMRI, which is noninvasive, psychologists
could more easily recruit participants to perform mental tasks in the scanner and then watch as activity
increased in specific regions of their brains. A staggering 130 000 fMRI-based research papers and counting
have since been published.

The technology continues to attract funding and media interest in equally generous measure. Psychologists
working with these impressive new brain imaging machines have gained the scientific credence the discipline
craved for so long. Now they have technical props for staring into the mind, rivaling the powerful telescopes used
by astronomers for gazing out into space. And, at last, psychological research produces tangible images showing
the brain in action (although it still measures brain activity indirectly, via changes to blood flow). The private
subjective world has been turned public and objective. “Few events are as thrilling to scientists as the arrival of a novel
method that enables them to see the previously invisible,” wrote psychologist Mara Mather and her colleagues in a
2013 special journal issue devoted to the field.

It’s been a dream for newspaper and magazine editors too. Each new brain imaging study comes with a splash of
colorful anatomical images. They look clever and “sciencey,” but at the same time the blobs of color are intuitive.
“There, look!” the pictures call out, “This is where the brain was hard at work.”

The headlines have gushed with all the vigor of blood through the brain. “For the first time in history,” gasped the
Irish Times in 2007, “it is becoming possible to read someone else’s mind with the power of science rather than human
intuition or superstition.” Other times there’s a tone of paranoia in the media reports. “They know what you’re
thinking: brain scanning makes progress,” claimed The Sunday Times in 2009. In the hands of many journalists,
brain imaging machines have metamorphosed into all-seeing mind-reading devices able to extract our deepest
thoughts and desires. Such paranoia is not entirely without basis. Today tech companies are appearing on the scene
claiming the technology can be used for lie detection, to predict shoppers’ choices, and more.

Mind-Reading Hype

The media coverage of brain scanning results often overplays the power of the technology. A review conducted by
Eric Racine and his colleagues in 2005 found that 67 percent of stories contained no explanation of the limitations
of the technique and just 5 percent were critical. At least twice in recent years The New York Times has run op-
ed columns making grand claims about brain scanning results. Both times, the columns have provoked the
opprobrium of neuroscientists and psychologists. In 2007, under the headline "This Is Your Brain On Politics," a group led by Marco Iacoboni reported that they'd used fMRI to scan the brains of swing voters while they looked at pictures and videos of political candidates.

Based on overall brain activity, and the recorded activity levels in specific areas, Iacoboni and his colleagues made
a number of surprisingly specific claims. For example, because the sight of Republican presidential candidate Mitt
Romney triggered initial high activity in the amygdala – a region involved in emotions – the researchers said this
was a sign of voter anxiety. John Edwards, meanwhile, triggered lots of insula activity in voters who didn’t favor
him, thus showing their emotions “can be quite powerful.” Obama had his own problems – watching him triggered
little overall brain activity. “Our findings suggest that Obama has yet to create an impression,” wrote Iacoboni et
al.

Shortly after the column was published, a group of some of the most eminent neuroimaging experts in the USA and
Europe, including Chris Frith at UCL and Liz Phelps at New York University, wrote to The New York Times, to correct
the false impression given by the column that “it is possible to directly read the minds of potential voters by looking
at their brain activity while they viewed presidential candidates.”

The expert correspondents pointed out that it’s not possible to infer specific mental states from the activity in
specific brain regions. The amygdala, for example, is associated with “arousal and positive emotions,” not just
anxiety (more on this point later). Of course, we also now know, contrary to the signs reported in the flaky op-ed,
that in little over a year Obama would be elected the first African American president of the USA.

A similar misrepresentation of brain imaging occurred on the same op-ed pages in 2011, but this time in relation to
iPhones. Martin Lindstrom, a self-described guru on consumer behavior, wrote that he’d conducted a brain imaging
study, and that the sight and sound of people’s iPhones led to activity in their insular cortex – a sign, he claimed,
that people truly love their phones. “The subjects’ brains responded to the sound of their phones as they would
respond to the presence or proximity of a girlfriend, boyfriend or family member,” Lindstrom pronounced.

Again, the mainstream neuroimaging community was furious and frustrated at this crude presentation of
brain scanning as a trusty mind-reading tool. This time a group of well over forty international neuroimaging
experts led by Russell Poldrack at the University of Texas at Austin wrote to the paper to express their concerns:
“The region that he points to as being ‘associated with feelings of love and compassion’ (the insular cortex) is a brain
region that is active in as many as one third of all brain imaging studies.”

Of course, it’s not just The New York Times that falls for these spurious mind-reading claims. There’s a near
constant stream of stories that extrapolate too ambitiously from the results of brain scanning experiments and
depict them as far-seeing mind-reading machines. “Brain Scans Could Reveal If Your Relationship Will Last”
announced the LiveScience website in time for Valentine’s 2012, before going on to explain that using fMRI,
“scientists can spot telltale regions of your brain glowing joyously when you look at a photograph of your beloved.”
In an article headlined “This Is Your Brain on Shopping,” Forbes explained in 2007 that “scientists have discovered
that they can correctly predict whether subjects will decide to make a purchase.”

Other ways the media (mis)represent brain imaging results include what Racine and his colleagues called “neuro-
realism” – for example, using measures of brain activity to claim that the pain relief associated with acupuncture is
“real,” not imaginary; and “neuro-essentialism,” whereby the imaging results are used to personify and empower
the brain, as in “The brain can do x” or “How the brain stores languages.” Also known as the “mereological fallacy,”
some critics say discussion of this kind is misleading because of course the brain can’t do anything on its own. As
Peter Hankins puts it on his Conscious Entities blog, “Only you as a whole entity can do anything like thinking or
believing.”

It’s reassuring to note that despite all the hype, a survey of public beliefs published in 2011 found that the majority
of the general public aren’t so gullible, at least not in the UK. Joanna Wardlaw and her colleagues obtained
responses from 660 members of the British public and only 34 percent believed fMRI could be used, to some extent,
to find out what they were thinking (61 percent said “not at all”).

The Messy Reality

You wouldn’t know it from the media stories but an ongoing problem for neuroscientists (and journalists eager for
a story) is exactly how to interpret the complex dance of brain activity revealed by fMRI. The brain is always busy,
whether it is engaged in an experimental task or not, so there are endless fluctuations throughout the entire
organ. Increased activity is also ambiguous – it can be a sign of increased inhibition, not just excitation. And
people’s brains differ in their behavior from one day to the next, from one minute to the next. A cognition-brain
correlation in one situation doesn’t guarantee it will exist in another. Additionally, each brain is unique – my brain
doesn’t behave in exactly the same way as yours. The challenge of devising carefully controlled studies and knowing
how to interpret them is therefore monumental.

It’s perhaps no surprise that the field has found itself in difficulties in recent years. A scandal blew up in 2009 when
Ed Vul and his colleagues at the Massachusetts Institute of Technology analyzed a batch of neuroimaging papers
published in respected social neuroscience journals and reported that many contained serious statistical errors
that may have led to erroneous results. Originally released online with the title “Voodoo Correlations in Social
Neuroscience” their paper was quickly renamed in more tactful terms in an attempt to calm the fierce controversy
that had started brewing before the paper was even officially published.

The specific claim of Vul and his methodological whistleblowers was that many of the brain imaging studies they
examined were guilty of a kind of “double-dipping” – researchers first performed an all-over analysis to find a brain
region(s) that responds to the condition of interest (e.g. feelings of social rejection), before going on to test their
hypothesis on data collected in just that brain region. The cardinal sin is that the same data were often used
in both stages. Vul’s team argued that by following this procedure, it would have been nearly impossible for the
studies not to find a significant brain-behavior correlation. Some of the authors of these criticized papers
published robust rebuttals, but the damage to the reputation of the field had been done.

Using the same kind of flawed statistics that have been deployed in a minority of poor brain imaging studies, another provocative paper, published in 2009 (and the winner of an Ig Nobel Prize in 2012), showed evidence of
apparently meaningful brain activity in a dead Atlantic salmon! Then in 2013, a landmark review paper published in
the prestigious journal Nature Reviews Neuroscience made the case that most studies in neuroscience, including
structural brain imaging (and it’s likely the same is true for functional brain imaging), are woefully underpowered,
involve too few subjects to have a good chance to detect real effects, and therefore risk the discovery of
spurious findings.

In relation to the mind-reading claims made in newspapers – be that love for our spouse or for the latest gadget we
wish to buy – two more specific points are worth outlining in more detail.
1. The first was alluded to in those letters from experts to The New York Times. It’s the problem of reverse
inference. Many times, journalists (and many scientists too), jump to conclusions about the meaning of
brain imaging results based on past evidence concerning the function of specific brain areas. For
example, because the amygdala has many times been shown to be active when people are anxious,
it's all too tempting to interpret signs of extra amygdala activity in a new experiment as evidence of
anxiety, even if the context is completely different. But as Poldrack and his fellow correspondents
explained in their letter to The New York Times, this is dubious logic. Activity in the amygdala is
associated with lots of other flavors of emotion and mental experience besides anxiety. And extra
anxiety can manifest in increased activity in lots of other brain areas besides, and not always including, the
amygdala. The same argument applies to other brain regions often cited in mind-reading style newspaper
reports, including the insula, the anterior cingulate, and subregions of the frontal lobes.
2. The second key point to bear in mind when considering mind-reading newspaper stories is that brain
scanning results are often based on the average activity recorded from the brains of several
participants. On average, a group of people may show increased activity in a particular brain region when
they look at a picture of someone they really love, but not when they look at someone they don’t. And yet
this peak of activity might not be evident in any one person’s brain. Mood, tiredness, personality, hunger,
time of day, task instructions, experience and countless random factors can all interfere with the specific
patterns of activity observed at any one time in a single individual. Averaging across many participants
helps cancel out all this noise. Crucially, this important averaging process makes a nonsense of the way
that findings are often presented in the news, as if scientists can see inside your brain and tell what
you are thinking.

Protecting the Neuroimaging Baby

We’ve seen how the media tends to hype and oversimplify brain imaging science but it’s important that we don’t
throw the baby out with the bath water and undervalue the contribution of this exciting field. Let’s be clear,
virtually all experts agree that fMRI machines are valuable tools that provide a window into the workings of
the mind. When parts of the brain work harder, more blood rushes to that area to resupply it with oxygenated
blood. The scanning machine can detect these changes, so when the mind is busy, we get to see what the brain is
doing at the same time. A generation ago this would have seemed little short of a miracle. The technology can
be used to test, complement, and refine psychological models of mental processes. It’s also improved our
understanding of functional brain anatomy and the dynamic interplay of brain networks.

A 2013 special issue of the journal Perspectives on Psychological Science was devoted to the contribution made by
fMRI to psychological theory. Among the essays, Denise Park and Ian McDonough said the technology had raised
an important challenge to traditional views of aging as a process of purely passive decline (in fact, the brain
responds and adapts to lost function). Michael Rugg and Sharon Thompson-Schill pointed to the way brain imaging
has furthered our understanding of the distinct but overlapping ways color is represented in the brain when we see
it, compared with when we remember it – important results that feed into our understanding of perception and
memory more generally.

It is also true that there are exciting laboratory examples of brain scan data being decoded as a way to estimate
the likely content of people’s thoughts and feelings. For example, in 2011, researchers at the University of
California, Berkeley, showed that they could descramble visual cortex activity recorded by fMRI from a person’s
brain as he or she viewed movie clips, thereby deducing what bit of film was being shown at the time the brain
activity was recorded. Or consider a paper from 2013, in which researchers at Carnegie Mellon University showed
they could identify the emotion a person was experiencing based on the patterns of brain activity shown by that
person when they’d experienced the emotion previously.

These results are amazing, there’s no doubt about that. But they don’t yet mean a neuroscientist can put you
in a brain scanner and read your thoughts straight off. As the lead researcher of the California movie study told
PBS NewsHour, “We’re not doing mind-reading here. We’re not really peering into your brain and reconstructing
pictures in your head. We’re reading your brain activity and using that brain activity to reconstruct what you
saw. And those are two very, very different things.” It’s worth reiterating – the researchers didn’t have insight into
the content of the movie viewers’ minds. Rather, they could tell from a portion of brain activity what was being
shown to the viewers at that time.

Regarding the Carnegie Mellon emotion study, this was performed with actors simulating emotions and the
decoding technique had to be trained first to create the data to allow later emotions to be interpreted from brain
activity. Again, this is brilliant science worth celebrating and sharing, but it’s not out and out mind reading of
genuine emotion. We need to strike a balance between recognizing progress in this field without overstating
what’s been achieved. As the Neuroskeptic blogger said on Twitter recently, “The true path of neuroscience, between
neurohype and neurohumbug, is a narrow one, and you won’t find easy answers on it.”

“Mind-Reading” Applications

It’s in the rush to extrapolate from the kind of exciting lab findings mentioned above to potential real-life
applications that brain mythology often overtakes fact. Despite the caveats and caution that are needed in brain
imaging, there is a growing band of entrepreneurs and chancers/visionaries (depending on your perspective) who
have already started to look for ways to use fMRI brain-scanners as mind-reading tools in the real world. Let’s take
a look at three of these developments in turn and see if we can separate fact from fiction.

Lie detection

Nowhere is brain imaging hype proving more controversial than in the context of the law, and the idea of the fMRI
scanner as a lie-detection device. Here the mainstream media is probably more gullible than the general public. In
the British public survey I mentioned earlier, 62 percent of respondents said they thought fMRI could detect lies “to
some extent” – which is probably a fair assessment (although 5.6 percent said “very well,” which is an
exaggeration).

One of the earliest companies to move into this territory was No Lie MRI, based in California, which started offering
its services commercially in 2006. The company website (www.noliemri.com) boasts that its “technology …
represents the first and only direct measure of truth verification and lie detection in human history!”

That’s quite a claim and it’s perhaps little surprise that many people have already been tempted to use the
company to resolve private disputes, paying around $10 000 a go according to a New Yorker article published in
2007. One of the first clients was apparently a deli owner in South Carolina who wanted proof that he hadn’t lied
about not setting fire to his own establishment. Jealous spouses and paranoid governments too have supposedly
expressed an interest in the technology.

Still, at the time of writing, fMRI-based lie detection has not been admitted in a criminal court. An important test
case took place in the summer of 2012 in Montgomery County. This was prior to the trial of Gary Smith, who stood
accused of shooting his roommate to death in 2006. Smith claimed that his friend killed himself and he turned to
No Lie MRI to help prove his case. The scans suggested he was telling the truth and the judge Eric M. Johnson
allowed both the prosecution (who objected) and defense to debate whether the evidence could be admitted to
the trial. Intriguingly, both sides cited the same academic paper to argue their side. No Lie MRI was represented
in court by the psychiatrist Frank Haist; the prosecution called on the New York University psychologist Liz Phelps.

In the end the judge ruled that the evidence was inadmissible. In his published comments, he revealed that the
defense claimed that 25 studies had been published on fMRI lie detection, none of them finding that the technology
does not work. However, the judge noted that “the tepid approval of a few scholars through 25 journal articles does
not persuade this Court that such acceptance exists.” The prosecution on the other hand argued that the validity of
fMRI lie detection was not accepted by the mainstream neuroimaging community, and moreover, that only
9 percent of the studies cited by the defense actually used fMRI. Summing up, the judge writes that “The
competing motions, expert testimonies, and submitted articles reveal a debate that is far from settled in the scientific
community. This scientific method is a recent development which has not been thoroughly vetted.”

The paper by Giorgio Ganis that both sides cited to support their case involved 26 participants having their brains
scanned while they watched six dates appear successively on a screen. The participants' task was to press a button
to indicate whether any of the dates was their birth date. For those in a truth-telling condition, none of the dates
was their birth date, so they simply indicated “no” each time. For the liars, one of the dates was their birth date, but
their task was to pretend it wasn’t and to answer “no” throughout. The process was repeated several times. The
experimental set up was designed to mimic a real-life situation akin, for example, to a suspect looking at a selection
of knives and indicating which, if any, he owned.

The result that pleased No Lie MRI and the defense was that participants in the lying condition exhibited
heightened activity across the front of their brains when they lied. A simple computer algorithm could process
the brain scan data and indicate with 100 percent accuracy, based on this heightened activity, which participants
were liars.

But there’s a twist that pleased the prosecution, who cited this study as a reason to throw out the fMRI evidence
in the Montgomery trial. In a follow up, Ganis and his colleagues taught lying participants a simple cheating
technique – to wiggle their toe or finger imperceptibly whenever three of the irrelevant dates appeared on-
screen. Doing this completely messed up the results so that liars could no longer be distinguished from truth
tellers.

The reason the cheating strategy worked is that looking out for specific dates for toe wiggling lent those dates
salience, just as for the true birth dates. Experts say that the same cheating strategy could probably work simply by
calling to mind a particular thought or memory when irrelevant dates appeared, without the need to move toes or
fingers. Ganis and his colleagues concluded that it was important for more research on “counter-measures” like this
to be conducted before brain imaging can be used as a valid lie-detection method. Incidentally, exactly the same
kind of problems bedevil the standard polygraph test.

A more recent paper published in 2013 also showed that participants involved in a mock crime were able to scupper
a lie-detection method based on brainwaves (recorded from the scalp by EEG) simply by suppressing their
memories of the crime. “Our findings would suggest that the use of most brain activity guilt detection tests in legal
settings could be of limited value,” said co-author Jon Simons at Cambridge University. “We are not saying that all
tests are flawed,” he added, “just that the tests are not necessarily as good as some people claim.”

It’s important that the neuroscience community get its facts in order about the reliability of brain-scanning
techniques in legal settings, not least because there’s tentative evidence that judges and jurors are unusually
persuaded by brain-based evidence. For instance, a 2011 study by David McCabe and his colleagues showed that
mock jurors were more persuaded by evidence from a brain scan that suggested a murder suspect was lying, as
compared with similar evidence from the traditional polygraph or from technology that measures heat
patterns on the face. The good news is that a simple message about the limitations of brain scanning appeared to
be enough to reduce the allure of the technology.

Regardless of whether brain scan lie detection is ever allowed in court, it’s worth noting that there are already signs
that neuroscience evidence is becoming more common. According to an analysis in 2011, by the Royal Society, of
documented US court cases in which neurological or behavioral genetics evidence has been submitted in defense,
199 and 205 took place in 2008 and 2009, respectively, compared with 101 and 105 in 2005 and 2006 – a clear sign
of an upward trend.

A breakthrough Italian case from 2011 gives us some further insights into the way things could be headed. On this
occasion, the judge scored a first for her country by allowing the submission of brain imaging and genetic evidence,
which was used by the defense to argue that a convicted female murderer had brain abnormalities, including
reduced gray matter in frontal regions involved in inhibition, and in the insula, which they said was implicated in
aggression (you’ll recall that this is the same brain region that Martin Lindstrom in The New York Times linked with
love when discussing iPhones). A biologist also claimed that the woman had a version of the supposed MAO-A
“warrior gene.” If true, this would mean that her body produces lower levels of an enzyme involved in regulating
neurotransmitter levels, potentially leading her to have a proclivity for aggression. However, recent research has
challenged the notion of a warrior gene – variants of MAO-A have been linked with many behavioral and
psychological outcomes, many unrelated to violence. Nonetheless, based on the aforementioned expert
neurobiological testimony, the judge commuted the woman’s sentence from 30 to 20 years.

More recently, late in 2013, a US federal jury failed to come to a unanimous decision over whether the killer John
McCluskey should receive a death sentence. This might be because the defense had presented brain imaging and
other evidence to suggest that McCluskey had various neurological abnormalities, including small frontal lobes,
that interfered with his level of "intent." The trial outcome led Wired magazine to ask: "Did Brain Scans Just Save
a Convicted Murderer From the Death Penalty?”

Also in 2013, a new study even raised the specter of brain imaging being used to predict the likelihood of a person
committing a crime in the future. A group led by Kent Kiehl at the Mind Research Network in New Mexico scanned
the brains of 96 male prisoners shortly before they were released. In the scanner, the men completed a test of
inhibitory control and impulsivity. Four years later, Kiehl’s team looked to see what had become of the prisoners.
Their key finding was that those prisoners who had displayed less activity in the anterior cingulate cortex (ACC), at
the front of the brain, were twice as likely to have re-offended over the ensuing years.

Kiehl told the press the result had “significant ramifications for the future of how our society deals with criminal justice
and offenders.” However, The Neurocritic blog pointed out that had the prisoners been dealt with based on their
brain scans, 40 percent of those with low activity in the ACC would have been wrongly categorized as re-
offenders, while 46 percent of the real re-offenders would have been released. “It’s not all that impressive and
completely inadmissible as evidence for decision-making purposes,” said The Neurocritic.

Neuromarketing

A completely different area where the supposed mind-reading capabilities of fMRI are generating excitement is in
marketing and consumer behavior. According to an article published in the Daily Telegraph in 2013,
“neuromarketing, as it’s known, has completely revolutionized the worlds of advertising and marketing.” But contrast
this hype with the verdict of a review of neuromarketing published in the prestigious review journal Nature Reviews
Neuroscience in 2010. “It is not yet clear whether neuroimaging provides better data than other marketing methods,”
wrote the authors Dan Ariely and Gregory Berns.

A specific claim of neuromarketing enthusiasts is that they can use brain scanning to find out things about consumer
preferences and future behavior that are beyond the reach of conventional marketing practices, such as the focus
group, audience screening, taste trial or product test. The aforementioned Daily Telegraph article, which was based
around the work of Steve Sands, chairman of the neuromarketing firm Sands Research, put it like this: “Only when
he places an EEG cap on the head of his test subjects … can Sands really tell whether they like what they’re seeing.”

Could he not ask them? Or follow up their behavior to see if they consume the relevant products? This is the
problem that neuromarketing faces – a lot of the time it simply isn’t clear that it is providing information that
wouldn’t be available far more cheaply through conventional means. Let’s consider an example cited by The
Telegraph. Apparently, research by Sands showed that when people’s brains were scanned as they watched a
recent hit advertisement for Volkswagen’s VW Passat, involving a young boy dressed as Darth Vader, the results
(sorry, the “neuroengagement scores”) went off the chart. This advert later proved to be hugely popular after it was
shown at the Super Bowl and the neuromarketing company chalked this up as a major success for its techniques.
What we’re not told is whether traditional audience research – showing the advert to a small pilot audience –
would have come to the exact same conclusion. Ariely and Berns were certainly far more conservative in their
review: “it is still unknown whether neuroimaging can prospectively reveal whether an advertisement will be effective.”

A further problem with this field is that many of its claims appear in newspaper articles or magazines, rather
than in peer-reviewed scientific journals. One pertinent example that garnered lots of press attention was an
investigation by Innerscope Research that purportedly used viewers’ physiological responses to movie trailers to
predict successfully which of the movies would take more earnings at the box office. The finding was reported on
the Fast Company business magazine website under the headline: “How your brain can predict blockbusters.”

Why be skeptical?

1. First off, the research didn’t measure brain activity at all! It involved “biometric belts” that measured
sweat, breathing, and heart rate.
2. Another problem from a scientific perspective is that the research is not available for scrutiny in a
scholarly journal. And once again, the doubt remains – could the same finding not have emerged simply
by asking people how much they enjoyed the different trailers? In their 2013 book Brainwashed, Sally
Satel and Scott Lilienfeld call this the problem of “neuroredundancy.”

I’ve expressed a great deal of skepticism, but it would be unfair to say that neuromarketing doesn’t have promise.
Although it’s early days, and there’s been an inordinate amount of hype, there are ways in which brain scanning
techniques could complement the traditional marketing armamentarium.

For instance, let’s say a new food product is a hit with consumers taking part in a conventional taste test. A potential
problem for the food company and its researchers is not knowing why the new product has been such a hit. Brain
scanning and other physiological measures could potentially identify what makes this product distinct – for
example, higher fat content tends to be associated with extra activity in one brain area (the insula), while an
enjoyable taste is usually associated with more activity in the orbitofrontal cortex. Of course there’s a need to
beware the risks of reverse inference discussed earlier. But picking apart these patterns of activity could perhaps
give unique insight into the reasons behind the popularity of a new product. As Ariely and Berns put it, using
neuroimaging this way "could provide hidden information about products that would otherwise be unobtainable."

Communicating with vegetative patients

Leaving the commercial world aside, a completely different potential application of fMRI as a mind-reading tool is
in healthcare, as a means of communicating with brain-damaged patients in a persistent vegetative state (PVS; that
is, they show no outward signs of awareness) or a minimally conscious state (they show fleeting signs of awareness
but have no reliable ability to communicate). Again, the media have tended to hype the story – “Scientists read the
minds of the living dead” was The Independent’s take in 2010, but at least this time there does appear to be solid
justification for the excitement.

Most of the work using fMRI in this way has been conducted by Adrian Owen’s lab at the Brain and Mind Institute
at the University of Western Ontario. The general approach the researchers have taken is to first contrast the
patterns of brain activity in healthy volunteers when they engage in two distinct mental activities – such as
imagining walking around their house versus playing tennis. Next, the same instructions have been given to PVS
patients, with several having shown the same contrasting patterns of brain activity as the healthy controls, as if
they too followed the imagery instructions.

The most exciting evidence to date has then come from using these imagery tasks as a form of communication tool.
In a study published in 2010, Owen and his colleagues appeared to have conducted a conversation with a male PVS
patient (later his diagnosis was changed to minimally conscious), by having him imagine playing tennis to answer
“no” to questions, and to imagine navigating his house to answer “yes.” Despite having no outward ability to
communicate, the man appeared to manipulate his own brain activity in such a way as to answer five out of six
questions correctly, such as “Is your father’s name Alexander?”

It’s not yet clear how many PVS patients this approach will work with, and there’s still mystery regarding the nature
of these patients’ level of conscious experience, but the implications are enormous. The technique may offer a
way to communicate with patients who might have been presumed lost forever. An obvious next step is to ask
vegetative patients whether they would like to go on living (on life-support) or not, but this is an ethical minefield.
Back in 2010 Owen told me that this is something he and his colleagues plan to look at in the future, subject to the
“appropriate ethical frameworks being set up.” He added that “only further research will tell us what kind of
consciousness these patients have.”

It’s also important to note that this line of research has not been immune from criticism. For instance, in 2012,
two weeks after Adrian Owen’s work was featured in a BBC Panorama documentary The Mind Reader: Unlocking
My Voice, a group of physicians led by Lynne Turner-Stokes at King’s College London School of Medicine wrote a
critical letter to the BMJ. The doctors challenged the claim made in the program that 20 percent of PVS patients
show signs of mental activity when their brains are scanned, and they suggested that one of the two supposedly
PVS patients in the documentary was in fact minimally conscious. This distinction between PVS and minimally
conscious is crucial because signs of awareness in a PVS patient would suggest they’d been misdiagnosed by
the usual bedside tests, whereas signs of awareness in a minimally conscious patient would be expected (see
more on coma myths on p. 273). However, Owen’s group and the TV show producers wrote an angry response
pointing out that the PVS patient in question, named Scott, had been consistently diagnosed as having PVS for
over a decade. “The fact that these authors [led by Turner-Stokes] took Scott’s fleeting movement, shown in the
program, to indicate a purposeful (‘minimally conscious’) response shows why it is so important that the diagnosis is
made in person, by an experienced neurologist, using internationally agreed criteria.”

We’ve seen that functional brain imaging is a fantastically exciting technology that is revolutionizing psychology
and cognitive neuroscience. But it’s a challenging field of science, so new findings should be treated with caution.
The media tend to exaggerate brain imaging results and oversimplify the implications of new findings.
Entrepreneurs, pioneers, and chancers are also looking for ways to exploit the technology in real life, from
neuromarketing to lie detection. This is an incredibly fast moving area and today’s hype could well be tomorrow’s
reality.

Myth: Brain Scans Can Diagnose Psychiatric Illness

A field where the promise of brain scanning is most brazenly over-reaching itself is psychiatric diagnosis. This
practice is not currently sanctioned by the American Psychiatric Association (APA) nor any other medical body and
for good reason – most psychiatric disorders are officially diagnosed based on behavioral symptoms and people’s
subjective feelings and beliefs. The neural correlates of mental disorder are not distinct enough to distinguish
between conditions or to tell a troubled brain from a healthy state. Moreover, any useful differences that do
exist are only reliable at a group level, not when studying an individual.

Unfortunately, this hasn’t stopped commercial brain scanning enterprises stepping into the market place.
“Images of the brain give special insight that is extraordinarily valuable and which leads to more targeted treatment
and better treatment success,” claims the Amen Clinics, an organization that boasts of having conducted brain scans
of 80 000 people across 90 countries. Other copy-cat clinics include Dr SPECT Scan and CereScan.

The lucrative Amen Clinics are led by the psychiatrist and TV-regular Daniel Amen and they use a form of brain
scanning called single photon emission computed tomography (SPECT). According to neuroscientist and brain
imaging expert Matt Wall based at Royal Holloway, University of London, the choice of SPECT is apt – “it’s the
cheapest of the 3D imaging techniques that can show functional activation,” but Wall adds: “the spatial resolution of
the images is crap, and no one uses it for clinical or research work that much anymore.”
Bibliography 10
Mihai Ioan Botez – Neuropsihologie clinică și neurologia comportamentului,
p.69-78;
The neuropsychological examination

Muriel D. Lezak

The development, indeed the rapid growth, of neuropsychological examination over the last two decades reflects the growing appreciation of its value for diagnosis, patient care, and research in clinical neurology, psychiatry, clinical psychology, and rehabilitation. In practice, the neuropsychological examination serves one or more purposes. For example, information contributing to diagnostic studies, or data arising from research, can give those caring for the patient a set of descriptive data on which a rehabilitation program can be outlined. In such cases, the same information can supply the data needed to determine the extent of the cerebral lesions.

The first systematic applications of the neuropsychological examination concerned diagnosis, because identifying organic brain damage was the primary concern of neuropsychologists at the time. As neuropsychologists became more sophisticated, they paid greater attention to localizing cerebral lesions and even to the need to determine the nature of those lesions on neuropsychological grounds. Recent advances in neuroimaging techniques have nevertheless greatly reduced the number of cases in which the neuropsychological examination is useful for localizing diagnosis. Even so, the neuropsychological examination continues to make important contributions to the differential diagnosis of dementia and depression in elderly patients, as well as to identifying the behavioral disturbances associated with brain disease at its onset. Many years after a craniocerebral trauma or an intoxication, neuropsychological deficits may be the only residual phenomena capable of indicating the presence of a brain lesion.

Nowadays, the most frequent reason for requesting a neuropsychological examination is to obtain precise and sensitive descriptions of behavior that can be used for better patient care, for treatment, and for planning rehabilitation activities. For example, the goals of rehabilitation treatment, as well as its procedures, must rest on detailed and pertinent assessments of behavior. Evaluating drug treatment or a possible surgical intervention requires detailed and appropriate examinations. Repeated neuropsychological examinations are useful for estimating the degree and quality of improvement or deterioration.

Datele neuropsihologice descriptive permit pacienților sau celor care îi îngrijesc să ia decizii deseori dificile, atât
din punct de vedere practic cât şi din punct de vedere legal şi financiar, decizii care se impun deseori în cazul bolilor
neurologice. Contribuțiile examenului neuropsihologic la cercetarea neurologică clinică s-au dovedit a fi foarte
valoroase. Acestea includ evaluarea tratamentului, punerea la punct a unor criterii de clasificare a
comportamentului ca şi descrierea datelor comportamentale din bolile neurologice. Examenul
neuropsihologic are un rol fundamental în înțelegerea ştiințifică a comportamentului uman, ca şi în înțelegerea
organizării funcționale a sistemului nervos. Mai mult, informațiile obținute din examenul neuropsihologic au o
importanță din ce în ce mai mare în domeniul legal, pentru că, din ce în ce mai mult, avocații şi judecătoria sunt
puşi în situația de a identifica acele cazuri în care se pune problema funcției cerebrale a clientului sau acuzatului.
Chestiunea litigioasă cea mai frecvent întâlnită priveşte evaluarea incapacității pacientului, cu alte cuvinte în ce
măsură deficiența comportamentală poate limita capacitatea pacientului de a-şi câştiga existența sau poate crea
inconveniente familiei bolnavului după un traumatism cranian sau după o altă leziune cerebrală survenită
accidental sau la locul de muncă. Datele neuropsihologice sunt de asemenea pertinente pentru o serie întreagă
de alte proceduri legale, cum ar fi cele care privesc capacitatea intelectuală de a scrie un testament, custodia
copiilor, ca şi drepturile privitoare la muncă sau la pensie.

1. COMPONENTE FUNCȚIONALE ALE COMPORTAMENTULUI

Epistemologia occidentală tradițională a furnizat o schemă tridimensională de conceptualizare a componentelor psihologice ale comportamentului. În cadrul acestei scheme, funcțiile intelectuale — adică funcțiile cognitive —
sunt diferențiate în raport cu două alte categorii majore de activități non-intelectuale: emotivitatea și motivația, precum și de funcțiile executorii, ce presupun capacități de a iniția efectiv și de a transforma în realitate
sau de a efectua un comportament dirijat spre un scop precis. Aceste concepte au facilitat cercetătorilor
identificarea și clasificarea fenomenelor comportamentale. Aceste construcții conceptuale au dobândit
astăzi o valoare empirică, în sensul că cercetarea neuropsihologică a demonstrat că aceste concepte
corespund sistemelor structurale majore ale creierului. Acest sistem clasic de organizare a observațiilor
comportamentale furnizează o abordare preconcepută a conceptelor comportamentale ce aparțin examenului
neuropsihologic.

1.1. Dimensiunea intelectuală

Prelucrarea informației reprezintă activitatea funcțiilor intelectuale. Funcțiile perceptive selecționează, organizează şi clasifică stimulii primiți de organism. Funcțiile memoriei codifică şi înmagazinează aceste
informații, pe care gândirea le poate prelucra conceptual, sau care pot determina o reacție sub forma
unei activități. Ambele emisfere cerebrale, la majoritatea indivizilor sunt specializate în prelucrarea şi
înmagazinarea informației, conform caracteristicilor liniare sau configuraționale ale acestei informații. Mai mult,
emisferele sunt organizate bilateral, în scopul de a furniza prelucrarea selectivă şi înmagazinarea informației
în fiecare din modalitățile perceptive, ca şi în combinațiile acestora. Această organizare multidimensională
a creierului favorizează prelucrarea informației înalt specializate în cadrul mai multor subsisteme distincte,
ce privesc percepția si memoria.

Mai multe capacități diferite şi independente contribuie la activitatea conceptuală şi la modelele de reacție.
Varietatea funcțiilor mentale şi independența lor relativă, una față de alta, devin evidente când se compară
diversele tipuri de deficite. De exemplu, un bolnav care nu mai este capabil de a face clasificări conceptuale
poate fi capabil încă să efectueze operații aritmetice combinate; mulți bolnavi afazici reuşesc să-şi facă
cunoscute intențiile şi sentimentele, cu ajutorul expresiei faciale şi a gesturilor. Putem considera că
examenul neuropsihologic al funcțiilor intelectuale majore presupune examenul mai multor sisteme funcționale
diferite şi distincte.

Cu mult înainte ca relația dintre anumite activități mentale discrete şi organizarea structurală cerebrală să fie
cunoscută, funcțiile cognitive au fost identificate şi măsurate cu multă exactitate. Numai după al doilea război
mondial, posibilitatea de a evalua comportamentul cognitiv a fost aplicată prin observații sistematice ale funcției
mentale la subiecții cu leziuni cerebrale, ca şi prin studii psihometrice pe scară largă, care corelau funcțiile
cerebrale cu comportamentul. Aceste studii au constituit punctul de plecare al bazelor empirice şi metodologice
în evaluările comportamentale ale funcțiilor cerebrale.

1.2. Dimensiunile non-intelectuale

Este mult mai dificilă individualizarea şi măsurarea emoțiilor si a funcțiilor de execuție, decât a
comportamentului cognitiv. Prin faptul că mai multe tulburări cerebrale pot modifica comportamentul
emoțional, deseori de o manieră caracteristică, înțelegem de ce avem nevoie de tehnici precise şi fiabile pentru a
individualiza şi a cuantifica aspectele specifice ale comportamentului emoțional. Cu toate acestea, multe
răspunsuri emoționale sunt multideterminate, ceea ce face dificilă, dacă nu imposibilă, diferențierea precisă între
efectele leziunii, predispozițiile personalității şi reacțiile situaționale, cu atât mai mult cu cât deseori aceşti factori
interacționează.

Testul cel mai larg utilizat, pentru studiul efectelor leziunilor cerebrale asupra caracterului sau tulburărilor
emoționale, este un chestionar "adevărat" sau "fals", adică Inventarul Multifazic al Personalității Minnesota
(MMPI) care a fost pus la punct în anii '30, în scopul de a stabili un diagnostic diferențial la pacienții psihiatrici.
De când a fost conceput, o parte a clasificărilor diagnostice pe care se bazează acest test s-a demodat (isterie,
deviație psihopatică, inversiune sexuală masculină, psihastenie); altele au dobândit o semnificație
suplimentară sau mai sofisticată (schizofrenie, hipomanie). Aceste clasificări diagnostice au fost revizuite între timp, dar nomenclatura demodată a scalelor testului nu a fost schimbată. Cu toate acestea, obiecția cea mai serioasă la utilizarea acestui test în
neuropsihologie este că, fiind dezvoltat plecând de la o populație de pacienți psihiatrici, nu există o bază
valabilă pentru a face interpretări ale modelelor de reacție la pacienții cu leziuni cerebrale. Cu toate că studiile
asupra MMPI au furnizat câteva descrieri generale ale răspunsurilor emoționale şi ale trăsăturilor personalității
asociate unor tulburări cerebrale specifice, aceste studii nu au îmbogățit înțelegerea comportamentului
emoțional asociat leziunilor cerebrale. Tulburările funcțiilor executorii privesc capacitățile necesare pentru
inițierea, planificarea şi execuția activităților, ca şi autoreglarea acestora.

Funcțiile executorii sunt funcții de bază pentru orice activitate independentă dirijată către un scop. Mai mult,
unele tulburări ale funcțiilor executorii specifice se asociază unor leziuni focale (majoritatea interesând arii bine
definite din lobii frontali). Cu toate acestea, din mai multe motive, funcțiile executorii sfidează încă o definiție
riguroasă plecând de la examenul clinic. De exemplu, atingerile funcțiilor cognitive, mai evidente şi mai familiare,
au tendința de a masca tulburările funcțiilor executorii. Mai mult, natura structurii examenului dă destul de rar
pacientului ocazia de a avea o activitate de execuție. Destul de des, examinatorii care nu sunt la curent cu
tulburările de comportament ce pot însoţi o leziune cerebrală, riscă să confunde simptomele mai grave
ale disfuncţiei de execuţie, ca apatia sau impulsivitatea, cu problemele emoţionale sau de personalitate.
2. EVALUAREA ATINGERII NEUROPSIHOLOGICE

2.1. Defectele cognitive şi deficitele cognitive

Dată fiind experiența de lungă durată a psihologiei în identificarea şi măsurarea funcțiilor cognitive, nu este
surprinzător că examinarea neuropsihologică s-a concentrat întotdeauna asupra acestor funcții. Evaluarea
atingerii cognitive priveşte gravitatea acesteia, maniera în care această atingere perturbă comportamentul
bolnavului în general, interrelațiile diverselor defecte şi deficite, ca şi corelațiile dintre aspectele particulare ale
atingerii cognitive şi neuropatologice. Atingerea cognitivă poate apărea fie sub formă de deficite ale unor abilități,
ale cunoaşterii sau ale inteligenței, fie sub formă de defecte specifice.

Fenomene patognomonice (ca inatenția la ceea ce se petrece la stânga liniei mediane a pacientului) sau
simptome comportamentale caracteristice (ca limbajul telegrafic din afazia Broca) sunt defecte cognitive.
Numeroase dintre ele sunt evidente pentru orice observator, majoritatea vor fi identificate la un examen
neurologic atent. Evaluări psihologice precise le pot documenta gravitatea şi importanța. Un examen
neuropsihologic meticulos va revizui sistematic funcțiile ce pot evidenția defecte subtile, inaccesibile unui
observator neexperimentat.

Deficitele cognitive apar sub formă de pierderi relative ale abilităților sau cunoştințelor. Nu sunt necesare evaluări
detaliate pentru a identifica prezența de deficite atunci când acestea sunt foarte evidente, cum ar fi
imposibilitatea de a relata ce s-a întâmplat în urmă cu cinci minute. Trebuie să menționăm că deseori deficitele
cognitive semnificative nu sunt evidente, pentru că bolnavul nu are nevoie să utilizeze capacitatea afectată, cum
este cazul a numeroase persoane vârstnice care nu au ocazia să facă desene sau să efectueze alte activități
vizuospațiale. De asemenea, mulți bolnavi au de altfel tendința să evite activitățile ce reclamă utilizarea abilităților
afectate. Mai mult, atingerile funcţionale secundare leziunilor cerebrale pot diminua nivelul la care bolnavul poate
efectua activităţi cognitive complexe, dar nu într-un grad în care deficitul să poată deveni evident pentru un
observator naiv. De exemplu, după o leziune cerebrală, o persoană cu abilităţi matematice superioare sau chiar
medii-superioare, care este capabilă să rezolve probleme aritmetice la un nivel semnificativ scăzut pentru ea,
poate avea încă rezultate în cadrul mediei. În astfel de cazuri, deficitele izolate nu pot fi diagnosticate, iar acuzele
pacientului privind tulburările mentale pot fi greşit interpretate până în momentul în care examinatorul - utilizând
istoricul cazului ca şi performanţele cele mai bune la teste - este capabil să facă o evaluare adecvată a nivelului
funcţiilor cognitive premorbide ale bolnavului, în raport cu nivelul actual, mai curând scăzut.

Evaluarea nivelului cognitiv premorbid poate ajuta, ca standard de comparaţie, pentru a determina dacă există o afectare relativă a activităţilor cognitive complexe. Ar fi mult mai ușor dacă ar exista o documentaţie
a funcţionării intelectuale premorbide pentru toţi cei cu leziuni cerebrale cunoscute sau suspicionate, pentru că,
în această situaţie, performanța premorbidă ar putea sluji drept standard de comparaţie pentru performanţa
actuală. Dat fiind că foarte puţini indivizi îşi fac examenul neuropsihologic complet când sunt adulţi tineri şi
sănătoşi, în majoritatea cazurilor standardul de comparaţie va trebui evaluat în raport cu informaţiile disponibile
la momentul în care leziunea cerebrală era deja cunoscută sau suspicionată. Cu toate că unii examinatori au
utilizat medii populaţionale pentru standardul de comparaţie, acest gen de practică este inadecvat şi poate fi
înşelător, pentru că, prin definiţie, jumătate din populaţie are rezultate în cadrul mediei capacităţilor sale;
un sfert din populaţie are rezultate superioare; restul populaţiei are rezultate sub medie.

Compararea rezultatelor testelor la acelaşi individ poate da cel mai adecvat standard de comparaţie, atunci când
cele mai bune rezultate ale individului sunt considerate cele mai bune indicatoare ale testului, în privinţa
capacităţii premorbide. Desigur, un nivel profesional premorbid înalt sau marile realizări personale ale bolnavului
permit mai bine examinatorului să aibă criterii directe de identificare a unui standard de comparaţie valabil, în
raport cu performanţa actuală. Existenţa unui deficit funcţional este demonstrată atunci când un deficit relativ
interesează anumite funcţii specifice în raport cu altele. De exemplu, dacă testele de organizare vizuospaţială au
rezultate inferioare faţă de testele de limbaj şi de raţionament verbal, se poate vorbi de un deficit neuropsihologic.
Cercetarea modelelor de deficit presupune familiarizarea cu expresia comportamentală a mai multor varietăţi de
leziuni şi boli cerebrale. Abilităţile clinice şi experienţa neuropsihologică a examinatorului au un rol important în
interpretarea corectă a modelelor de deficit, întrucât deficitele cognitive pot fi determinate şi de alte circumstanţe
decât afectarea cerebrală.

Astfel, situaţii ce ţin de cultură, de factori educaţionali sau de alţi factori sociali pot amplifica dezvoltarea anumitor
funcţii în detrimentul altora, iar deprivările au de asemenea, tendinţa de a diminua capacitatea cognitivă.
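
Ca ilustrare a logicii comparației intraindividuale descrise mai sus, schița de mai jos (în Python, cu scoruri z pur ipotetice) folosește cea mai bună performanță actuală a pacientului drept estimare a standardului de comparație și semnalează funcțiile aflate cu cel puțin o abatere standard sub acest standard; pragul ales este o convenție didactică, nu un criteriu clinic.

```python
# Schiță ilustrativă (valori ipotetice): cel mai bun rezultat actual este folosit ca
# estimare a standardului de comparație, iar celelalte scoruri sunt raportate la el.

scoruri_z = {                              # scoruri standardizate (z), ipotetice
    "vocabular": 0.8,
    "rationament_verbal": 0.6,
    "organizare_vizuospatiala": -1.4,
    "memorie_verbala": 0.3,
}

PRAG_DEFICIT = 1.0                         # convenție didactică: >= 1 abatere standard

standard_comparatie = max(scoruri_z.values())   # cea mai bună performanță actuală

for test, z in scoruri_z.items():
    diferenta = standard_comparatie - z
    if diferenta >= PRAG_DEFICIT:
        print(f"{test}: z = {z:+.1f} (cu {diferenta:.1f} SD sub standard) -> posibil deficit relativ")
    else:
        print(f"{test}: z = {z:+.1f} -> în limitele profilului individual")
```

În practică, estimarea premorbidă integrează și istoricul educațional, profesional și datele anamnestice, nu doar cele mai bune scoruri actuale.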

2.2. Rolul examenului formal prin teste

Sursele datelor examenului neuropsihologic sunt observaţiile, istoricul bolii, diferitele rezultate ale explorărilor,
ca şi teste psihometrice şi tehnici de examinare structurate. Cu ajutorul observaţiei, examinatorul poate afla multe
despre comportamentele emoţionale şi sociale ale pacientului, ca şi despre funcţionarea deficitară a funcţiilor
cognitive şi executorii. Istoricul şi explorările sunt indispensabile pentru crearea unui context în cadrul căruia să
poată fi formulate ipoteze de evaluare; istoricul bolii şi explorările sunt, de asemenea, necesare examinatorului
pentru a interpreta observaţiile şi performanţele la teste.

Testele pot servi mai multor scopuri.

• Ele pot fi utilizate pentru a stabili un standard de comparaţie.

• Ele pot indica prezenţa unei tulburări cerebrale.


• Ele sunt indispensabile pentru a face comparaţii precise între performanţe, atât în ceea ce priveşte
diferite funcţii, cât şi în privinţa aceloraşi funcţii, la momente de timp diferite. Ele pot revela
caracteristicile disfuncţiei neuropsihologice.

În practica clinică, dacă testele sunt organizate în baterii, ele pot servi la ghidarea examenului. Seria formală
de teste poate garanta standardizarea şi reproductibilitatea cercetării neuropsihologice. În neuropsihologia
clinică, numeroase teste îşi au originea în testele psihometrice care măsurau abilităţile mentale, funcţiile
academice şi diversele aptitudini. Tehnologia psihometrică a contribuit, de asemenea, şi la introducerea
conceptelor şi tehnicilor de standardizare a testelor, a furnizat date normative, intabulări şi o eşalonare a
unităţilor, ca şi metode de evaluare a fidelităţii şi validităţii examenului neuropsihologic. Toate acestea fac posibilă
existenţa unei metodologii a testelor obiective pentru evaluarea funcţiilor cognitive în cadrul unei tehnologii
psihometrice bine dezvoltate. Aceste avantaje ale tehnologiei psihometrice reprezintă motivele majore pentru
care scalele de inteligenţă Wechsler sunt utilizate atât de frecvent şi sunt atât de respectate de neuropsihologi.

Câteva teste şi tehnici de măsurare s-au adăugat repertoriului de teste neuropsihologice, provenind din surse
complet diferite. Punerea la punct a testului labirintului al lui Porteus şi al matricilor progresive al lui Raven a
răspuns nevoii de teste care măsoară nivelul intelectual independent de nivelul cultural. Lauretta Bender a construit
testul Bender-Gestalt pentru a măsura dezvoltarea percepţiei la copii. Programele de teste ce măsoară
aptitudinile, pentru selecţionarea sau avansarea de angajaţi în industrie, au contribuit la testul lui MacQuarrie
pentru aptitudini mecanice şi la Purdue Pegboard Test. Mai mult, neuropsihologii au elaborat noi teste specifice
care vizează expresia comportamentală în funcţie de organicitatea cerebrală.

2.3. Selecția testelor

Sunt disponibile sute de teste, ce permit examinarea funcţiilor cognitive sub diversele lor aspecte, combinări şi
moduri de prezentare. Mai mult, în fiecare an, în literatură şi în cataloagele fabricanţilor apar zeci de noi teste.
Această abundenţă de teste permite experimentatorilor inovatori să identifice majoritatea tulburărilor cognitive
de origine organică, şi deseori să le descrie cu o precizie rafinată. Pe de altă parte, pentru a putea stabili gradul deficitului sau toţi parametrii stării de funcţionare cognitivă, sunt necesare date de bază. Aceste date
trebuie să includă informaţii asupra tuturor funcţiilor, inclusiv cele puţin afectate de tulburările neuropsihologice.
Din acest motiv, mulţi examinatori utilizează întotdeauna aceeaşi baterie pusă la punct pentru uz general,
deoarece majoritatea acestor baterii au avantajul de a măsura un anumit număr de funcţii şi permit astfel o
evaluare globală a subiectului.

Mai multe din aceste baterii au fost puse la punct de alte persoane decât cele care le utilizează, cum ar fi bateria
Halstead-Reitan. Altele au fost asamblate de înşişi examinatorii. Un alt avantaj al utilizării unei baterii compuse
este acela că faptul de a avea o experienţă particulară cu o serie de teste creşte capacitatea examinatorului de a
se servi de aceste teste; aceasta îi aprofundează cunoştinţele, facilitatea de a administra testele în condiţii
maximale, precizia sa în interpretarea testelor, abilitatea de a percepe răspunsuri rare sau subtile, ca şi valoarea
subtestelor. Bateriile compuse au şi limite importante în ceea ce priveşte examenul neuropsihologic. Astfel,
atunci când sunt utilizate invariabil, într-o manieră mecanică, examinatorul poate supratesta sau subtesta
subiectul. El poate pierde timp preţios pentru a aduna date pe care istoricul cazului, simpla observaţie sau
examenul deja efectuat de un alt examinator le-au demonstrat deja.

Pe de altă parte, examinatorul care aderă strict la o anumită baterie de teste ajunge astfel să ignore deficite
semnificative, dacă testele care măsoară tulburările pacientului nu fac parte din bateria sa favorită. De
exemplu, forma standard a BHR nu conţine teste de învăţare, nici verbale, nici vizuale, dar două din sub-teste
măsoară viteza bilaterală a motricităţii, chiar dacă este deja sigur că forţa unui braţ este diminuată. Majoritatea
neuropsihologilor examinatori au găsit un compromis între o examinare individuală şi utilizarea unei baterii rigide.
De obicei, în practică se alege un set-nucleu de teste care permite o trecere generală în revistă a funcţiilor
cognitive. Datele de bază, obţinute printr-un examen destul de standardizat şi extins, permit de obicei indicarea
problemelor care cer o evaluare mai detaliată, mai individualizată, cu ajutorul unor teste cunoscute şi bine
stabilite. În evaluarea specifică a domeniilor disfuncţionale, examinatorul poate apela şi la teste experimentale
sau nonstandardizate, care sunt deosebit de pertinente pentru tulburarea studiată.

2.4. Interpretarea datelor

2.4.1. Aplicarea şi limitele rezultatelor testelor în măsurarea deficitelor

Rezultatele testelor sunt esenţialmente abstractizări ale observaţiilor. Rezultatul unui test este o expresie matematică ce reprezintă o performanţă evaluată în raport cu performanţa unui grup de referinţă, pe baza unei sarcini bine definite. De aici se vede că rezultatul serveşte drept reper obiectiv pentru comparaţii de-a lungul unei singure dimensiuni. Testele obiective reclamă o uniformitate a unităţilor, procedurilor
de administrare, standarde ce determină rezultatele şi grupele normative. Rezultatele standardizate permit
comparaţia între toate performanţele la diverse teste şi între toate celelalte teste ale aceluiaşi subiect, ca şi între
rezultatele obţinute de subiecţi diferiţi sau de grupuri normative particulare.
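
Pentru a concretiza ideea de rezultat standardizat, schița de mai jos (Python; mediile și abaterile standard normative sunt inventate, doar pentru ilustrare) arată cum un scor brut devine un scor z raportat la un grup de referință, ceea ce permite compararea unor probe cu unități de măsură diferite.

```python
# Schiță minimală: transformarea unui scor brut în scor z față de un grup normativ
# (mediile și abaterile standard de mai jos sunt ipotetice, doar pentru ilustrare).

def scor_z(scor_brut: float, medie_norma: float, sd_norma: float) -> float:
    """Câte abateri standard se află scorul brut față de media grupului de referință."""
    return (scor_brut - medie_norma) / sd_norma

# Două probe cu unități diferite devin comparabile pe aceeași scală z:
z_fluenta = scor_z(scor_brut=31, medie_norma=42.0, sd_norma=9.0)    # nr. de cuvinte produse
z_cuburi  = scor_z(scor_brut=38, medie_norma=32.0, sd_norma=10.0)   # puncte la o probă vizuospațială

print(f"Fluență verbală: z = {z_fluenta:+.2f}")
print(f"Probă vizuospațială: z = {z_cuburi:+.2f}")
```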

Aceste caracteristici ale rezultatelor testelor le fac deosebit de utile în examenul neuropsihologic. Obiectivitatea
lor le creşte validitatea în obţinerea unui standard de comparaţie pentru pacienţii care nu suferă de o deteriorare
globală sau al căror istoric nu sugerează că performanţa lor ar putea fi alterată de factori sociali.

Comparaţii intraindividuale ale rezultatelor permit examinatorului să facă o diferenţiere între punctele forte şi
punctele slabe ale capacităţilor cognitive. Mai mult, o serie de suferinţe cerebrale pot afecta diferit funcţiile
cognitive, producând modele de rezultate diferite şi deseori foarte particulare. De exemplu, la debutul bolii
Alzheimer, rezultatele la testele de construcţie din cuburi (Block Design), de substituţie a simbolurilor şi de fluenţă verbală diminuează, cu toate că memoria verbală imediată, ca şi performanţa în privinţa mai multor abilităţi verbale — chiar
sarcini de raţionament — rămân nemodificate sau apropiate de nivelul premorbid. Pe de altă parte, cu toate că
leziunile ariei irigate de artera cerebrală medie stângă afectează şi abilitatea de a îndeplini comenzi de secvenţe
inversate, este totuşi mai probabil să fie implicate memoria verbală imediată şi abilităţile verbale, decât abilitatea
de construcție.

Totuşi, pentru că este vorba de abstractizări, rezultatele testelor nu prezintă decât o descriere artificial atenuată
a performanţei pacientului. Rezultatele izolate pot fi fals interpretate. Când rezultatele sunt prezentate fără o informație calitativă privind modul în care subiectul a emis răspunsurile, ele pot fi uneori mai bune decât ar merita subiectul sau, dimpotrivă, capacităţi reale pot rămâne nedetectate din cauza unui sistem restrictiv de notare sau a procedurii de administrare a testelor.

Factorii calitativi ai performanței unui pacient adaugă informaţiile necesare pentru o evaluare precisă şi o utilizare
prudentă a rezultatelor testelor. Modul în care pacientul răspunde la sarcinile examenului relevă aspecte
importante ale comportamentului, cum ar fi strategiile de rezolvare a problemelor, comportamentele
autodirective, ramificaţiile practice ale problemelor de atenţie sau învăţare şi alte caracteristici ale performanţei
dirijate. Maniera în care pacientul rezolvă testele poate fi semnificativă pentru prezicerea ajustării sale sociale şi
pentru planificarea readaptării având şi un interes diagnostic.

Un grup de referinţă adecvat este, de asemenea, necesar pentru interpretarea rezultatelor testelor. Cu toate
că, din punct de vedere demografic, vârsta este cea mai importantă variabilă care afectează funcţionarea
cognitivă, multe teste nu au un sistem de notare care să ia în considerare factorul vârstă. Eroarea cea mai
frecventă apare când vârsta nu este luată în considerare şi când performanţele persoanelor vârstnice sunt evaluate
după norme stabilite pentru persoane mai tinere. În acest caz, încetinirea normală şi pierderile flexibilităţii
mentale care însoţesc îmbătrânirea produc rezultate care apar anormale. Întrucât diferitele funcţii reacţionează diferit la procesul de îmbătrânire, fiecare test trebuie să aibă propriile sale norme, adaptate la diferite vârste.
Educaţia contribuie la exprimarea multor funcţii cognitive. Efectele educaţiei asupra testelor verbale si a altor
abilităti academice sunt cunoscute de multă vreme. Recent, cercetările au demonstrat, de asemenea, că şi teste
care erau considerate fără legătură cu abilităţile academice, sunt afectate de educaţie: de exemplu, testele de
percepţie, testele de abilitate de a desena și testele de memorie.
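
Schița următoare (Python, cu norme complet ipotetice, organizate pe benzi de vârstă) ilustrează eroarea descrisă mai sus: același scor brut pare în limite normale când este raportat la norme adecvate vârstei, dar apare „anormal” dacă este judecat după normele adulților tineri.

```python
# Schiță ilustrativă: același scor brut, evaluat cu norme potrivite vârstei
# vs. norme pentru adulți tineri (toate valorile normative sunt ipotetice).

NORME_IPOTETICE = {                  # (medie, abatere standard) pe benzi de vârstă
    (20, 39): (50.0, 10.0),
    (40, 59): (46.0, 10.0),
    (60, 79): (40.0, 11.0),
}

def z_pentru_varsta(scor_brut: float, varsta: int) -> float:
    for (v_min, v_max), (medie, sd) in NORME_IPOTETICE.items():
        if v_min <= varsta <= v_max:
            return (scor_brut - medie) / sd
    raise ValueError("Nu există norme pentru această vârstă în tabelul ilustrativ.")

scor = 36
print(f"Pacient de 72 de ani, norme adecvate vârstei: z = {z_pentru_varsta(scor, 72):+.2f}")
print(f"Același scor judecat după norme de adult tânăr: z = {(scor - 50.0) / 10.0:+.2f}")
```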
2.4.2. Metodele de interpretare a rezultatelor testelor

Trei metode si-au dovedit valoarea în examenul neuropsihologic. Întrucât fiecare metodă convine mai bine unor
probleme diferite sau cu teste diferite, toate cele trei metode pot fi utilizate pentru interpretarea datelor unei
singure evaluări.

2.4.2.1. Rezultatele-limită

Au fost puse la punct câteva ansambluri de teste pentru a putea detecta organicitatea cerebrală sau o disfuncţie
cerebrală specifică din punct de vedere neuropsihologic. Aceste teste sunt bazate pe observaţii conform cărora
pacienţii cu o anumită afecţiune sunt susceptibili de a demonstra unele caracteristici patologice, rareori observate
la persoanele normale (de exemplu: majoritatea indivizilor pot găsi precis mijlocul unor linii orizontale, dar
pacienţii cu hemineglijență au tendinţa de a indica mijlocul liniei către un punct situat la dreapta centrului, testul
numindu-se line bisection task).

În scopuri diagnostice, rezultatul limită care permite o distincţie eficace între subiecţii care au caracteristica
patologică de studiat şi cei care nu o au, trebuie determinat pentru fiecare test sau serie de teste, cu o evaluare
adecvată în ceea ce priveşte vârsta şi diferenţele de educaţie. Unii subiecţi care au boala studiată pot să nu
prezinte caracteristica patologică şi obţin deci rezultate în limitele normalului, iar unii subiecţi martor normali
demonstrează caracteristicile studiate într-un grad anormal, alterând astfel rezultatele în direcţia patologică.
Astfel, rezultatele-limită nu pot decât să indice probabilitatea ca o leziune cerebrală sau o diminuare a
funcţiei studiate să fie prezentă sau absentă. Mai mult, această abordare nu poate fi utilizată decât pentru o
evaluare brută, întrucât rezultatele unui singur test — oricare ar fi depărtarea faţă de rezultatul normal — pot
numai să indice un domeniu general de perturbare, dar nu pot nici să descrie natura acestuia, nici să evidenţieze
tulburările asociate.
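
Ca exemplu al logicii rezultatului-limită, schița de mai jos (Python) clasifică performanța la o probă de bisecție a liniilor pe baza unui prag ipotetic; în practică, pragul se stabilește pe date normative, iar clasificarea rămâne probabilistă (există fals pozitive și fals negative).

```python
# Schiță ilustrativă pentru logica rezultatului-limită (cutoff), pe exemplul
# bisecției de linii: deviația marcajului față de mijlocul real, în milimetri.
# Pragul de 6 mm este pur ipotetic; în practică el se stabilește pe date normative.

PRAG_DEVIATIE_MM = 6.0

def clasifica_bisectie(deviatii_mm):
    """Deviații pozitive = marcaj la dreapta centrului (sugestiv pentru neglijare stângă)."""
    deviatie_medie = sum(deviatii_mm) / len(deviatii_mm)
    suspect = deviatie_medie > PRAG_DEVIATIE_MM
    return deviatie_medie, suspect

probe = [9.5, 12.0, 7.5, 10.0]          # valori ipotetice, în mm
medie, suspect = clasifica_bisectie(probe)
print(f"Deviație medie: {medie:.1f} mm -> {'peste' if suspect else 'sub'} pragul ilustrativ")

# Observație: unele persoane fără leziune pot depăși pragul (fals pozitive), iar unii
# pacienți pot rămâne sub prag (fals negative); cutoff-ul indică doar o probabilitate.
```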

2.4.2.2. Disocierea dublă

Această procedură utilizează trei sau mai multe teste, pentru a identifica bazele funcţionale ale unei tulburări
complexe. Performanţele la majoritatea testelor cognitive implică două sau mai multe funcţii diferite; cu toate acestea, este posibil ca numai una din aceste funcţii să fie responsabilă de deficit. Pentru a putea determina care funcție a contribuit la eşecul execuţiei unui test, se pot compara rezultatele la diferite teste care măsoară un element în comun. De
exemplu, un rezultat slab la un test de fluiditate verbală, în cursul căruia subiectul trebuie să denumească unităţi
dintr-o categorie dată într-un timp limitat, poate reflecta o disfuncţie a abilităţii verbale de a-și aminti sau o
încetinire mentală la un pacient non-afazic. Pentru a putea delimita care din acești doi factori contribuie la
diminuarea fluidităţii verbale, examinatorul trebuie să compare rezultatele la unul sau mai multe teste cronometrate care implică funcţii pe care pacientul le-a efectuat bine atunci când nu era cronometrat, precum şi la teste non-cronometrate care depind de abilitatea verbală de a-şi aminti. Dacă bolnavul suferă de o încetinire mentală, atunci şi alte teste cronometrate vor avea, de asemenea, rezultate diminuate; dacă problema se datorează unei
disfuncţii a abilităţii verbale de a-şi aminti, relatarea unei povestiri sau terminarea frazelor vor fi slabe, chiar
în absenţa limitelor de timp pentru îndeplinirea testării.
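
Raționamentul din paragraful de mai sus poate fi redat schematic astfel (Python; scorurile z, denumirile probelor și pragul de −1 sunt pur ipotetice): se verifică dacă probele cronometrate fără componentă de reactualizare verbală sunt și ele scăzute, respectiv dacă probele de reactualizare verbală necronometrate sunt scăzute.

```python
# Schiță ilustrativă a raționamentului de disociere: care factor (încetinire mentală
# vs. dificultate de reactualizare verbală) explică un scor slab la fluența verbală?
# Scorurile z și pragul de -1 sunt ipotetice.

PRAG = -1.0

scoruri_z = {
    "fluenta_verbala":              -1.8,   # proba-țintă: cronometrată + reactualizare verbală
    "cautare_vizuala_cronometrata": -1.6,   # cronometrată, fără reactualizare verbală
    "substitutie_simboluri":        -1.5,   # cronometrată, fără reactualizare verbală
    "rememorare_poveste":           -0.2,   # reactualizare verbală, fără limită de timp
    "completare_fraze":             -0.1,   # reactualizare verbală, fără limită de timp
}

slab = {test: z <= PRAG for test, z in scoruri_z.items()}

incetinire = slab["cautare_vizuala_cronometrata"] and slab["substitutie_simboluri"]
reactualizare = slab["rememorare_poveste"] and slab["completare_fraze"]

if incetinire and not reactualizare:
    print("Tabloul sugerează mai degrabă o încetinire mentală generalizată.")
elif reactualizare and not incetinire:
    print("Tabloul sugerează mai degrabă o dificultate de reactualizare verbală.")
else:
    print("Tablou mixt sau neconcludent; sunt necesare probe suplimentare.")
```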

2.4.2.3. Analiza modelelor

Această abordare ia în considerare toate rezultatele testelor. Examinatorul compară modelul de puncte slabe şi
forte, indicate de rezultatele testelor, cu tipuri de deficite caracteristice pentru anumite boli sau sedii ale leziunilor,
sau evaluează datele, pentru a şti dacă au o semnificaţie neuropsihologică sau neuroanatomică. Folosirea
competentă a analizei modelelor presupune o sofisticare neuropsihologică apreciabilă.
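
Doar ca ilustrare a ideii de comparare a profilului cu modele caracteristice (nu ca procedură clinică), schița de mai jos (Python) compară un profil ipotetic de scoruri z cu două „prototipuri” la fel de ipotetice, folosind corelația Pearson ca măsură grosieră de asemănare a formei profilului.

```python
# Schiță ilustrativă: compararea profilului de scoruri z al unui pacient cu două
# "modele" de deficit (prototipuri pur ipotetice). Nu este o procedură clinică,
# ci doar o ilustrare a logicii analizei modelelor.

from math import sqrt

def pearson(x, y):
    """Corelația Pearson între două liste de aceeași lungime."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# ordinea probelor: cuburi, substituție simboluri, fluență verbală, memorie imediată, raționament verbal
profil_pacient           = [-1.6, -1.4, -1.2, -0.1,  0.0]
prototip_alzheimer_debut = [-1.5, -1.3, -1.4,  0.0,  0.1]   # model ipotetic
prototip_leziune_stanga  = [-0.2, -0.5, -1.5, -1.4, -1.3]   # model ipotetic

for nume, prototip in [("debut de tip Alzheimer", prototip_alzheimer_debut),
                       ("leziune emisferică stângă", prototip_leziune_stanga)]:
    print(f"Asemănare cu modelul '{nume}': r = {pearson(profil_pacient, prototip):+.2f}")
```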

2.4.3. Interpretarea integrată


În evaluarea performanței pacientului în funcție de rezultatele testelor, majoritatea neuropsihologilor iau în
considerare antecedentele cazului şi informațiile privind personalitatea pacientului şi situația sa actuală. Ei acordă
atenție si caracteristicilor neuropsihologice, ca idiosincraziile de limbaj, emotivitatea inadecvată, neglijența sau
precauțiile excesive, distractibilitatea, perseverarea şi confabulația. Aceste caracteristici pot furniza
observatorului avizat tot atâtea informații privind starea neuropsihologică a pacientului ca şi rezultatele testelor, sau chiar mai multe. Dat fiind că procedurile testării formale constituie baza majorității
evaluărilor neuropsihologice, o interpretare validă a datelor presupune o integrare precisă a tuturor
informațiilor culese.

3. PROCEDURILE EXAMENULUI NEUROPSIHOLOGIC

3.1. Nucleul bateriei de teste: scalele de inteligență Wechsler

Majoritatea evaluărilor neuropsihologice conțin cel puțin câteva subteste ale scalelor de inteligență Wechsler.
Cele unsprezece teste examinează mai multe aspecte ale funcțiilor cognitive, incluzând

• gama de vocabular și informaţii de bază;
• raţionamentul practic, aritmetic şi vizual (în imagini);
• abstractizări vizuale şi vizuospaţiale;
• recunoaşterea imaginilor;
• analiza şi sinteza vizuo-spaţială şi direcţia motorie;
• capacitatea imediată de atenţie;
• depistarea vizuomotorie şi
• viteza reacţiilor.

Scalele Wechsler nu au bază teoretică sau empirică în neuropsihologie. Cu toate acestea, cei care le utilizează
beneficiază de o bogăţie de informaţii clinice şi statistice care sunt produsul a zeci de ani de cercetări pe aceste
instrumente şi care permit "inferenţe" neuropsihologice valide pe baza performanţelor la scalele Wechsler. La origine, scalele Wechsler au fost puse la punct pentru a măsura inteligenţa generală la adulţi. Wechsler a reţinut valorile "QI", derivate din conceptul de vârstă mentală folosit pentru măsurarea inteligenţei copiilor; acestea se bazau pe însumarea rezultatelor testelor individuale. S-a dovedit că vârsta mentală este superfluă în evaluarea adultului şi că însumarea rezultatelor maschează rezultatele detaliate ale testelor individuale. Dacă evaluarea se bazează pe rezultatele distincte ale subtestelor, evaluate în funcţie de vârsta subiectului şi pe factorii calitativi ai performanţei, mai mult decât pe rezultatul global al QI, atunci subtestele Wechsler devin măsuri neuropsihologice destul de sensibile.
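
Schița de mai jos (Python, cu scoruri scalate ipotetice) ilustrează de ce rezultatul global poate masca un profil informativ: scorurile scalate Wechsler au, prin convenție, media 10 și abaterea standard 3 în populația de referință, iar aici abaterile sunt raportate la media personală a subiectului; pragul de 3 puncte este doar o convenție ilustrativă.

```python
# Schiță ilustrativă: de ce scorul global (QI) poate masca un profil informativ.
# Scorurile scalate Wechsler au, prin convenție, media 10 și abaterea standard 3
# în populația de referință; valorile pacientului de mai jos sunt ipotetice.

subteste = {
    "vocabular": 13, "informatii": 12, "similitudini": 12, "aritmetica": 11,
    "memorie_cifre": 11, "cuburi": 6, "cod": 5, "aranjare_imagini": 7,
}

media_personala = sum(subteste.values()) / len(subteste)
print(f"Media personală a subtestelor: {media_personala:.1f} (aparent 'în medie')")

for nume, scor in subteste.items():
    abatere = scor - media_personala
    marcaj = "  <-- semnificativ sub profilul propriu" if abatere <= -3 else ""
    print(f"{nume:18s} {scor:2d} ({abatere:+.1f}){marcaj}")
```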

3.2. Funcţiile perceptuale

Numeroase teste adecvate examinărilor neuropsihologice au fost puse la punct pentru examinarea celor trei
modalităţi perceptuale majore: văzul, auzul şi atingerea. Diversitatea tehnicilor de studiu al tulburărilor
perceptuale reflectă modurile diferite în care subfuncţiile contribuie la competenţa perceptuală.

3.2.1. Văzul

Printre diferitele abordări ale examinării funcţiilor vizuoperceptuale, se află teste de recunoaştere a culorilor;
pentru inatenţia spaţială unilaterală (hemineglijență); pentru aprecierea relativității mărimii, distanţei şi
unghiurilor; pentru organizarea vizuală; pentru rezoluția iluziilor; pentru baleiajul vizual; pentru recunoaşterea de
fotografii sau imagini cu obiecte sau scene familiare, într-o perspectivă obişnuită sau particulară, pentru
recunoaşterea figurilor familiare sau nu, cu sau fără expresie emoţională, cu unităţi camuflate sau cu fragmente
de unităţi printre alte unități. Se examinează înţelegerea relaţiilor spaţiale, mai ales cerând estimarea vizuală a
distanţelor sau unghiurilor, rotaţii mentale sau reconstrucţii vizuale.
3.2.2. Auzul

Examenul tulburărilor afazice a favorizat dezvoltarea de tehnici înalt rafinate şi discriminatorii pentru a evalua
percepţia auditivă. Deci, bateriile de afazie examinează diferite aspecte ale percepţiei auditive, cum ar fi
discriminarea sunetelor, a ritmului şi a tonalităţii vocilor, aprecierea combinaţiilor de sunete, de silabe sau cuvinte
şi înţelegerea verbală. Au fost elaborate teste specifice, care explorează percepţia elementelor muzicale şi alte
fenomene auditive, pentru cercetarea neuropsihologică, dar şi cu utilitate clinică.

3.2.3. Atingerea

Funcţiile spaţiale pot fi, de asemenea, evaluate numai prin atingere. Alte funcţii tactile pe care le măsoară testele
sunt recunoaşterea obiectelor şi a formelor, diferenţierea texturii şi a presiunii, precum şi identificarea degetelor.

3.3. Memoria

Este evident că examenul clinic al memoriei trebuie să evalueze capacitatea de memorie imediată, memoria de
scurtă durată, reţinerea în urma interferenţei şi în timp (adică învăţarea); acest examen va facilita diferenţierea între eficacitatea stocării şi abilitatea de a-şi aminti.
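
O schiță minimală (Python, cu valori ipotetice) a acestei diferențieri: curba de învățare pe încercări, apoi compararea rapelului liber amânat cu recunoașterea; pragurile folosite sunt arbitrare, doar pentru a ilustra logica, nu criterii clinice.

```python
# Schiță ilustrativă: diferențierea grosieră între o problemă de reactualizare
# (rapel liber slab, recunoaștere bună) și una de stocare (ambele slabe).
# Lista are 15 cuvinte; valorile și pragurile sunt ipotetice.

NR_CUVINTE = 15

rapel_pe_incercari = [5, 7, 9, 10, 11]      # învățare pe 5 încercări (rapel liber)
rapel_amanat = 4                            # rapel liber după un interval de timp
recunoastere_corecta = 14                   # itemi recunoscuți corect din 15

print(f"Curba de învățare (rapel liber): {rapel_pe_incercari}")
print(f"Rapel amânat: {rapel_amanat}/{NR_CUVINTE}, recunoaștere: {recunoastere_corecta}/{NR_CUVINTE}")

if rapel_amanat / NR_CUVINTE < 0.5 and recunoastere_corecta / NR_CUVINTE >= 0.8:
    print("Discrepanța rapel/recunoaștere sugerează mai degrabă o dificultate de reactualizare.")
elif recunoastere_corecta / NR_CUVINTE < 0.5:
    print("Recunoașterea slabă sugerează mai degrabă o problemă de stocare/consolidare.")
else:
    print("Tablou neconcludent pe aceste date ilustrative.")
```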

Deficitele de memorie pot fi împărţite pe de o parte în

• amnezii axiale (în care sunt afectate structurile sistemului limbic), care sunt supramodale, adică implică
toate canalele perceptive şi toate sistemele de prelucrare a informaţiei și pe de altă parte în

• amnezii corticale, în care leziunea este lateralizată, în sensul de a afecta fie memoria verbală, fie
memoria non-verbală.

Au fost elaborate mai multe teste de memorie verbală, pentru a studia probleme de cercetare foarte limitate, sau
pentru a măsura tulburările de memorie specifice asociate unei entităţi morbide particulare. Acest lucru a adus
după sine o varietate de teste de memorie verbală, fiecare măsurând câteva aspecte diferite ale memoriei verbale.
Există teste care utilizează cifre, litere sau consoane; liste lungi sau scurte de cuvinte abstracte sau concrete, de
cuvinte asociate sau neasociate, sau cuvinte perechi, fraze scurte sau lungi sau fragmente de povestire. Acestea
cer răspunsuri imediate sau întârziate, care sunt măsurate prin rapel sau prin recunoaştere cu sau fără repetare
sau interferentă.

Examenul memoriei non-verbale nu este uşor de făcut, pentru că prezintă probleme specifice. Pacienţii cu
tulburări motorii la mâna dominantă nu pot efectua teste la care trebuie să deseneze răspunsul. Mai mult,
întrucât deseori imaginile, desenele şi chiar structurile de sunete utilizate în testele de memorie non-verbală pot
fi codificate verbal, mai ales dacă este vorba de pacienţi alerţi şi/sau înalt motivaţi care au învăţat să aplice strategii
verbale pentru a rezolva probleme vizuospaţiale, testele care dau măsura pură a memoriei non-verbale sunt relativ rare. Problema verbalizării stimulului a fost ocolită prin tehnici interesante, utilizând imagini sau desene pe
o grilă bidimensională.

Aceste dificultăţi de măsurare a memoriei non-verbale au făcut dificilă posibilitatea de a obţine teste de memorie
şi de învăţare în serii paralele, pentru a putea examina aspecte comparabile ale memoriei şi învăţării verbale şi
non-verbale. De exemplu, din cele şapte teste ale scalei de memorie Wechsler revizuite care măsoară funcţiile
memoriei, se consideră că numai trei reprezintă măsurători ale memoriei vizuale; dar unul dintre ele (învăţarea
asociată vizuală) se traduce uşor în idei verbale. Majoritatea altor baterii de memorie conţin, de altfel, mai multe
unităţi verbale ce pot fi verbalizate, decât unităţi strict non-verbale. Pentru o evaluare detaliată a memoriei, mai
ales când se presupune existenţa unor disfuncţii lateralizate, examinatorul nu trebuie să utilizeze o singură baterie
de teste de memorie, ci să selecţioneze teste adecvate problemelor fiecărui individ.

3.4. Gândirea

Deşi capacităţile de raţionament şi de abstractizare (formarea de concepte) tind să varieze împreună, în anumite
circumstanțe, aceste funcţii pot fi afectate diferit; de aceea, ele trebuie examinate separat.
Raționamentul poate fi examinat cu stimuli verbali sau vizuali. Testele cel mai frecvent utilizate pentru
raţionament solicită judecăţi de bun simţ sau soluţii la probleme practice (de exemplu, testul "înţelegere" al
scalelor de inteligenţă Wechsler). În anumite cazuri, aceste teste măsoară mai curând cunoştinţele şi experienţa
decât abilitatea de a raţiona, aşa cum este cazul la pacienţi cu demenţe uşoare, care nu mai pot face judecăţi şi
nici rezolva probleme cotidiene, dar pot încă să-şi amintească o anume înţelepciune populară. În testele de
raţionament, un răspuns corect poate fi mai puţin interesant decât maniera de tratare a problemei de către
pacient. Problemele care implică relaţii matematice, propoziţionale, secvenţiale sau spaţiale demonstrează
procese logice sau deficitul lor. În ciuda faptului că mai multe teste foarte frecvent folosite pentru raţionamentul
logic nu necesită un răspuns verbal, majoritatea acestor teste pot fi exploatate verbal. Este cazul testului seriilor
de imagini (aranjarea imaginilor) din scalele Wechsler, al testului matricilor progresive Raven şi al testului categoriilor.

3.5. Funcţiile expresive

3.5.1. Cuvântul şi limbajul

Disfuncţiile afazice, adică tulburările funcţiilor de comunicare care sunt rezultatul lezării capacităţilor de formulare
a limbajului, sunt responsabile de majoritatea deficitelor de înţelegere a cuvintelor vorbite sau scrise şi a altor forme simbolice, ca şi de deficitele de producere a materialului verbal.

O evaluare precisă şi utilă a tuturor dimensiunilor afecţiunilor afazice necesită un examen multidimensional al
funcţiilor cuvântului şi limbajului, pentru a determina în mod precis caracterul, deseori individualizat, al
tulburărilor de comunicare. Au fost puse la punct mai multe baterii de afazie, pentru a permite examinatorului să
facă o trecere în revistă completă a tuturor funcţiilor diferite, care pot fi afectate de afazie şi de tulburările
asociate. Testele de triere a afaziei pot semnala prezenţa unei tulburări de comunicare, dar nu permit nici descrierea fină şi detaliată, nici măsurătorile gradate necesare unei evaluări clinice utile a limbajului şi a funcţiilor acestuia.

3.5.2. Scrisul

Tulburările de scris sunt deseori asociate afecţiunilor afazice, dar pot apărea si în alte afecţiuni cerebrale, cum ar
fi leziunile emisferei drepte sau demenţa. De exemplu, perseverările sau intruziunile în scris pot fi primele semne
ale unei leziuni cerebrale organice evolutive. Din acest motiv, examenul neuropsihologic ar trebui să includă un
eşantion de scris adecvat; nu pot fi decelate defecte subtile analizând numai câteva cuvinte.

3.5.3. Abilităţile practice

Apraxiile sunt dificultăţi de efectuare a mişcărilor învăţate, cum ar fi gesturi sau acţiuni rafinate. Au tendinţa de a
fi asociate cu afazia. Examenul controlului voluntar al gesturilor şi actelor voluntare de fineţe face parte din mai
multe baterii de afazie şi ar trebui aplicat de rutină în examinările bolnavilor cu tulburări de comunicare.

3.5.4. Construcţia

Cu toate că incapacitatea de a desena, de a copia desene sau de a combina cuburi sau bastonaşe într-o construcţie
precisă a fost tradiţional inclusă printre apraxii, multe din tulburările construcţionale nu sunt datorate unei
deficienţe de performanţă în efectuarea mişcărilor fine. Ele au mai curând tendinţa de a fi rezultatul unor
dificultăţi vizuospaţiale de percepţie sau de organizare, sau dificultăţi de a transforma un concept vizuospaţial
într-un obiect real. Evaluarea calitativă, a manierei în care pacientul îndeplineşte o sarcină de construcție, poate
clarifica natura problemei vizuospațiale a pacientului. Gradul de severitate al tulburărilor construcționale este
măsurat prin sarcini specifice, cu dificultate gradată. Tulburările construcționale minore, de exemplu, pot să nu
apară în cazul îndeplinirii unor sarcini simple, mai ales când acestea pot fi rezolvate verbal. Varietatea disfuncțiilor
construcționale impune o evaluare bazată pe utilizarea mai multor testări. Unii pacienți pot copia precis un desen
de cuburi, dar nu pot asambla cuburile în lipsa unui model; alții pot fi capabili să îndeplinească o testare de
construcție când sunt prezente toate elementele soluției (de exemplu, cuburile sau bucățile unui puzzle), fără să
fie capabili să deseneze un obiect din memorie.

3.6. Eficacitatea intelectuală

Există câteva activități mentale importante care nu sunt funcții cognitive în sine, dar care sunt necesare pentru
funcționarea cognitivă. Printre acestea se află atenția, orientarea si funcțiile executorii.

3.6.1. Funcțiile atenției

Evaluarea funcțiilor atenției trebuie să includă teste simple de atenție, de concentrare şi de depistare mentală.
Testele de atenție măsoară cantitatea de informație care poate fi reținută într-un timp dat şi utilizează ca
material stimul serii de cifre, de cuvinte sau de fraze. Testele de concentrare cer subiectului să-şi dirijeze şi să-şi
mențină atenția pentru o perioadă (de exemplu, PASAT); se utilizează si teste în care se bifează anumite
cifre sau litere care se găsesc printre rânduri de alte cifre, litere sau mici desene. Depistarea mentală şi abilitatea
de a menține două sau mai multe gânduri pot fi examinate printr-o serie de scăderi sau alte teste ce presupun
efectuarea unor comenzi invers.

Testul Armitage (trail making) B, de exemplu, este un test de depistare larg utilizat, în care subiectul trebuie să
traseze o linie printr-o serie alternativă de cifre şi litere, dispersate pe o pagină. Teste ca testul lui Stroop
evaluează capacitatea de a rezista distragerii, cerând subiectului să denumească culorile diferitelor tuşuri cu care
sunt imprimate numele câtorva culori (de exemplu, cuvântul "verde" poate fi imprimat în albastru, roşu, galben şi
uneori verde). Forma cuvântului imprimat creează o puternică interferență cognitivă, pe care subiectul trebuie s-
o ignore pentru a putea denumi corect culoarea scrisului.
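
Dintre numeroasele variante de cotare existente, una uzuală exprimă interferența Stroop ca diferență sau raport între condiția incongruentă și o condiție de bază; schița de mai jos (Python) folosește timpi pur ipotetici pentru același număr de itemi.

```python
# Schiță ilustrativă: un mod uzual (dintre mai multe variante existente) de a exprima
# interferența Stroop este diferența sau raportul dintre condiția incongruentă
# (denumirea culorii cernelii pentru cuvinte-culoare nepotrivite) și o condiție de bază.
# Timpii de mai jos sunt ipotetici, în secunde, pentru același număr de itemi.

timp_baza_sec = 62.0          # ex.: denumirea culorii unor plaje de culoare
timp_incongruent_sec = 118.0  # ex.: denumirea culorii cernelii la cuvinte incongruente

interferenta_diferenta = timp_incongruent_sec - timp_baza_sec
interferenta_raport = timp_incongruent_sec / timp_baza_sec

print(f"Interferență (diferență): {interferenta_diferenta:.0f} s")
print(f"Interferență (raport): {interferenta_raport:.2f}")
```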

3.6.2. Orientarea

Orientarea are mai multe aspecte ce devin evidente când eficacitatea cognitivă este întreruptă şi rezultă
dezorientarea. Orientarea personală este în general evaluată în cadrul examenului stării mentale, punând întrebări informale.
Scalele de măsurare a dezorientării în funcție de timp sunt utile mai ales în cercetare, întrucât dezorientarea într-
un caz individual devine evidentă punând subiectului întrebări simple. Pentru varietățile de orientare spațială
există mai multe teste care evaluează orientarea dreapta-stânga, orientarea privind părțile corpului, orientarea în
spațiul peri-corporal şi memoria topografică. Testele cantitative de orientare spațială sunt utile în cercetare;
utilizarea lor clinică constă în potențialul de a evidenția defectele subtile.

3.6.3. Funcțiile executorii sau de control

Întrucât examinatorul clinic planifică, dirijează şi administrează examene clinice şi deseori le inițiază, evaluarea
tipică prezintă un paradox pentru cei care doresc să evalueze funcțiile executorii. Cum poate examinatorul să
examineze, să planifice, să dirijeze şi să mențină controlul asupra examenului într-o manieră care să poată permite
inițierea, planificarea şi comportamentul dirijat către un scop din partea subiectului? Puținele teste care vizează
funcțiile executorii presupun ca subiectul să creeze ceva ce trebuie planificat, definit şi structurat chiar de subiect,
cum ar fi un desen, o povestire sau o construcție. Dovezi ale deficitului autoreglării apar deseori în lucrările scrise
şi în desene, cum ar fi perseverări, omisiuni, confuzii în elementele desenului, sau alte erori. Comportamentul
autoreglator şi autocorector, sau absența sa, pot fi observate în orice moment al examenului.

3.6.4. Funcțiile motorii


Testele funcţiilor motorii pot pune în valoare semnele subtile ale unei slăbiciuni lateralizate sau ale unei încetiniri
motorii, ca şi probleme de coordonare şi de control motor. Câteva teste ale funcţiilor motorii sunt deseori incluse
în examenul neuropsihologic şi măsoară coordonarea manuală fină (Grooved Pegboard), secvenţele complexe ale
mișcărilor, viteza răspunsului motor (de exemplu, oscilaţia digitală şi forţa de prindere, adică dinamometrul).
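
Ca ilustrare, schița de mai jos (Python, cu valori ipotetice) compară viteza de tapping între mâini; regula empirică des citată (și des criticată) a unui avantaj de circa 10% al mâinii dominante este folosită aici doar ca reper orientativ, nu ca normă.

```python
# Schiță ilustrativă: compararea vitezei de tapping între mâini. Regula empirică
# a unui avantaj de ~10% al mâinii dominante este doar un reper orientativ;
# valorile de mai jos sunt ipotetice (lovituri / 10 secunde).

tapping_dominanta = 38.0       # mâna dreaptă, dominantă
tapping_nondominanta = 45.0    # mâna stângă

raport = tapping_dominanta / tapping_nondominanta
print(f"Raport dominantă/non-dominantă: {raport:.2f} (așteptare euristică: ~1.10)")

if raport < 1.0:
    print("Mâna dominantă este mai lentă decât cea non-dominantă: posibil indiciu de")
    print("încetinire lateralizată, de verificat cu alte probe motorii și cu anamneza.")
```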

3.7. Bateriile

Neuropsihologii care preferă avantajele standardizării şi familiaritatea cu un eşantion înalt selecţionat de teste,
faţă de provocarea şi riscurile unui examen individualizat, ce serveşte necesităţilor particulare şi capacităţilor
fiecărui subiect, vor prelua toate sau cele mai multe proceduri de examen din una sau mai multe baterii de teste,
care sunt deja disponibile. O baterie familiară şi deseori utilizată în cercetare poate deveni o puternică unealtă de
investigare pentru un examinator dotat, care cunoaşte întrebările la care poate da răspuns bateria, întrebările care
impun un test suplimentar, comoditatea măsurătorilor standardizate şi aplicabilitatea normelor disponibile. Din
nefericire, neuropsihologii examinatori care nu dispun de un antrenament clinic suficient pentru a efectua această
muncă, au tendinţa să administreze şi să interpreteze bateriile preformate într-un mod mecanic, fără să se
întrebe dacă bateria pe care au ales-o este adecvată si fără să cunoască munca de cercetare care stă la baza
acestor proceduri, norme sau forme de interpretare. Există multe baterii neuropsihologice şi teste gata de a fi
utilizate. Ele diferă ca format, metodă, bază teoretică şi aplicabilitate. Aceste diferenţe sunt ilustrate de bateriile
cel mai frecvent utilizate astăzi.

3.7.1. Bateria Halstead-Reitan (BHR)

Această baterie a fost imaginată la origine de psihologul Ward Halstead, pentru a investiga disfuncţiile cognitive asociate leziunilor de lobi frontali. Adăugând bateriei lui Halstead o scală de inteligenţă Wechsler, precum şi teste de depistare vizuală şi mentală, de integritate tactilă şi de afazie, Reitan a creat o baterie sensibilă la efectele neuropsihologice ale îmbătrânirii şi ale diverselor leziuni şi procese patologice cerebrale localizate, atunci când
este interpretată de un expert. Dimpotrivă, când această baterie este utilizată pentru a identifica bolnavii
psihiatrici cu o leziune cerebrală, valoarea sa (de altfel a majorităţii testelor neuropsihologice şi combinațiilor
acestora) devine problematică. BHR a inspirat mai multe variante, elaborate pentru a standardiza administrarea,
punctajul şi procedurile de interpretare, pentru a-i creşte sensibilitatea pentru crizele epileptice şi pentru a o face
accesibilă unor examene frecvente şi repetate. Întrucât BHR nu conţine teste de învăţare vizuală sau verbală,
deseori examinatorii administrează odată cu bateria unul sau mai multe teste din scala de memorie Wechsler.

3.7.2. Contribuţiile la examenul neuropsihologic

Această baterie publicată conţine douăsprezece teste puse la punct de profesorul Arthur Benton, colaboratorii şi
studenţii săi. Această baterie nu face atât o evaluare cuprinzătoare, cât reprezintă o resursă de evaluare ce conţine
teste proiectate să depisteze deficite specifice. Fiecare din aceste teste standardizate poate fi utilizat singur sau
în combinaţie cu alte teste standardizate. Acest set de teste este adresat nevoilor examinatorului care preferă să
facă un examen individualizat pentru fiecare bolnav.

3.7.3. Investigaţia neuropsihologică a lui Luria

Manualul acestui examen a fost pregătit de Christensen şi ne furnizează un sumar sistematizat al metodei
profesorului Luria, cu mai multe tehnici şi inovaţii interesante pentru folosirea testelor standardizate, şi materialul
pe care l-a pus la punct pentru a explora disfuncţiile neuropsihologice. Manualul încearcă să expliciteze abordarea
ipotetico-deductivă a profesorului Luria, furnizând instrucţiuni pentru conducerea unei evaluări generale a
sistemelor funcţionale majore. Ca şi Luria, Christensen se ocupă de disfuncţiile verbale şi de afecţiunile
asociate lor, cu relativă profunzime, dar acordă mai puţină atenţie problemelor non-verbale de percepţie,
reacţie şi memorie.

3.7.4. Bateria neuropsihologică Luria-Nebraska (BNLN)


Nici investigaţiile neuropsihologice individualizate ale lui Luria, nici cele două sute de tehnici specifice de examen, compilate de Christensen, nu se pretează la o baterie rigidă sau la o analiză psihometrică a datelor.
Totuşi, acest material, care este eterogen şi deci puţin cuantificabil într-o manieră corespunzătoare a fost
prezentat sub formă de scale care sunt în general echivalente cu subsecţiile ce corespund organizării pe care Luria
o dădea tehnicilor sale. Ca şi majoritatea altor teste care evaluează mai multe funcţii cognitive, perceptuale,
motorii, această baterie va identifica, cu o frecvenţă mai mare decât cea probabilistică, prezenţa leziunilor
cerebrale. Dimpotrivă, din cauza mai multor erori psihometrice (de exemplu, compoziţia eterogenă a grilelor,
atribuirea unor valori arbitrare pe scale, validitatea discutabilă a conţinutului mai multor scale), s-au pus
numeroase întrebări privind valoarea sa clinică în sine, şi felul în care este cuantificată si interpretată.

4. VIITORUL EXAMINĂRII NEUROPSIHOLOGICE

4.1. Dezvoltarea testelor care măsoară funcțiile

Scalele de inteligenţă Wechsler sunt testele cel mai frecvent utilizate astăzi în neuropsihologie, cu toate că nu au
o bază neuropsihologică raţională. Wechsler le-a construit pe baza experienţei sale anterioare şi pe baza bunului
simţ; datele privind modul lor de asociere cu funcţiile cerebrale au apărut mai târziu. Din păcate, revizuirea din
1981 a bateriei Wechsler (WAIS-R) nu ia în considerare cunoştinţele actuale despre funcţiile cerebrale şi nu
produce rezultate echivalente cu cele ale WAIS. Toate scalele de inteligenţă Wechsler conţin aceleaşi teste
existente deja în prima scală de inteligenţă Wechsler-Bellevue, din 1939. Aceste teste furnizează date
neuropsihologice complexe (adică măsoară mai multe funcţii în acelaşi timp) ce necesită o interpretare sofisticată
pentru că sunt foarte susceptibile de a fi greşit interpretate. Bateria, foarte populară, Halstead-Reitan propune
teste ale funcţiilor neuropsihologice mai pure decât testele lui Wechsler. Totuşi, aceste teste implică şi ele
analiza simultană a mai multor funcţii neuropsihologice şi, la fel ca testele lui Wechsler, sunt vulnerabile la
efectele vârstei, educaţiei şi altor variabile demografice. De fapt, pertinenţa funcţională a numeroase teste
utilizate azi în neuropsihologie este slab definită şi face din interpretarea testelor mai mult o artă decât o ştiinţă.
Desigur că teste bine standardizate având o pertinenţă funcţională bine definită şi bine demonstrată, ar creşte
mult validitatea şi fidelitatea interpretărilor testelor neuropsihologice. Testele noii baterii a lui Benton reprezintă
un mare pas înainte, dar mai rămân foarte multe de făcut.

5. CINE AR TREBUI SĂ EFECTUEZE EXAMENUL NEUROPSIHOLOGIC?

O evaluare neuropsihologică responsabilă necesită cunoştințe de psihologie şi cunoştințe asupra funcționării creierului. Examinatorul în neuropsihologie trebuie să fie antrenat substanțial în privința relațiilor normale dintre
creier şi comportament, şi a modificărilor comportamentale datorate neuropatologiei. El trebuie să cunoască în
profunzime toate psihopatologiile. Mai mult, trebuie să dispună de un antrenament avansat în administrarea
testelor şi în teoria măsurătorilor şi să fie foarte familiarizat cu diferitele forme şi tehnici de evaluare
psihologică. Competența în examenul neuropsihologic nu poate fi dobândită nici printr-un curs de vară, nici
făcând câteva lucrări de neurologie, nici participând la un seminar de sfârşit de săptămână sau chiar la o
duzină de mici seminarii. Ea nu poate fi dobândită nici numai prin lectură, nici lucrând exclusiv cu pacienți
psihiatrici sau în clinici de sănătate mentală, cu toate că familiarizarea cu literatura şi experiența cu populația
psihiatrică sunt necesare pentru dobândirea expertizei neuropsihologice. Neuropsihologia clinică nu este
numai o tehnică (precum biofeedback-ul sau hipnoza) care să poată fi adăugată practicii unui psiholog clinician
sau a unui psihoterapeut.

Nici neurologii nu pot deveni experți în examinarea neuropsihologică cu aceeaşi uşurință cu care învață să
utilizeze tehnicile specifice de diagnostic. Neuropsihologia clinică este o disciplină înalt tehnică şi ştiințifică, ce
necesită cunoştințe şi stăpânirea unor tehnici specializate. Examenul neuropsihologic este una din abilitățile
dobândite după o activitate clinică în neuropsihologie. Mulți neuropsihologi au debutat ca psihologi clinicieni şi au
efectuat unul sau mai mulți ani de studii superioare sau postdoctorale în neuropsihologie, dobândind astfel o
experiență substanțială în neurologia clinică şi în ştiințe neurologice. Alții au venit în neuropsihologie grație
experienței în psihologia cognitivă, a dezvoltării, experimentală, fiziologică sau psihofiziologică. Ei şi-au îmbogățit
cunoştințele de bază prin lucrări postdoctorale elaborate sub supravegherea unor specialişti în psihologia clinică
şi în neuropsihologie. Medicii, majoritatea lor neurologi sau psihiatri, care au devenit neuropsihologi şi-au
consacrat mai mulți ani învățării teoriilor şi tehnicilor psihologice. Conştiința valorii neuropsihologiei clinice şi a necesității examinărilor neuropsihologice s-a impus însă mult mai repede decât au putut fi formați neuropsihologi cu o specializare adecvată. Ca rezultat, mulți clinicieni naivi, ignorând implicațiile şi complexitatea funcțiilor
cerebrale şi a neuropatologiei, dar aflați sub presiunea patronilor şi colegilor lor, sau chiar a potențialilor lor clienți,
care îi împing să le furnizeze evaluări neuropsihologice, au început să facă examinări neuropsihologice fără să țină
seama de riscuri sau de răul potențial pe care îl puteau face clienților lor. Răul cel mai evident este făcut atunci
când clinicianul neantrenat adecvat încearcă să pună un diagnostic. Eroarea diagnostică cea mai frecventă
este confuzia între o afecțiune potențial tratabilă şi una fără tratament, cum ar fi diagnosticarea bolii
Alzheimer la un pacient cu hidrocefalie normotensivă. Situația erorii inverse, când se dă ca diagnostic o afecțiune absentă, poate avea de asemenea repercusiuni serioase pentru pacient şi familia sa. Acest lucru este
evident mai ales când un copil imatur sau perturbat emoțional este diagnosticat ca având o leziune cerebrală şi,
în consecință, este lipsit de o educație normală şi de un tratament adecvat. În ambele situații au fost inițiate proceduri legale, care au creat jurisprudență împotriva practicării incorecte a neuropsihologiei.

Consecințele cele mai subtile ale examinării neuropsihologice, efectuate de persoane insuficient pregătite, apar
când examinatorul ignoră stările emoționale ale pacientului său, problemele sale sociale şi problemele cotidiene,
pentru că a interpretat o rugăminte de a-i fi efectuată o evaluare neuropsihologică drept o cerere de asistență
diagnostică. În aceste cazuri, procesul de evaluare se opreşte când examinatorul novice ajunge la concluzia
diagnostică, în ciuda faptului evident că pacienții cu deficite neuropsihologice, ca şi familiile lor, au nevoie de o planificare comportamentală, de informații şi explicații utile pentru a compensa deficitele pacientului şi pentru a ameliora calitatea vieții lor. Planificarea evaluării şi a îngrijirilor pe care le necesită pacientul cere o examinare mult mai sensibilă şi mai avizată decât simpla identificare a simptomelor evidente ale disfuncțiilor neuropsihologice.

Numeroase tulburări de comportament ale pacienților cu leziuni cerebrale pot fi ameliorate prin sfaturi înțelepte.
Din păcate, clinicienii inferior echipați nu sunt conştienți de existența unor astfel de tulburări, iar pacientul nu
poate deci beneficia de o asistență eficace. Foarte conştienți de răul produs de o instruire inadecvată,
neuropsihologii din Canada şi Statele Unite au elaborat norme de calificare pentru formarea şi
practicarea neuropsihologiei clinice. Primul pas a fost făcut în 1980, prin recunoașterea de către Asociația Psihologilor Americani (APA) a unei divizii de neuropsihologie clinică (divizia 40), care este acum parte integrantă a APA şi care se preocupă direct de punerea în practică a
programelor necesare. În 1984, American Board of Clinical Neuropsychology, care a fost creată pentru a
evalua competența practicienilor în neuropsihologie, a devenit o comisie oficială specializată sub egida American
Board of Professional Psychology (ABPP), recunoscută de APA. Astfel, instituțiile psihologice din Statele Unite şi Canada fac eforturi pentru a răspunde problemelor create de o disciplină nouă şi căutată.
Cursul 3 – Epistemologie: metaștiință și open science
Bibliografie video

Toate pasajele albastre din text sunt linkuri pe care se poate da click (și, Doamne-ferește,
citi și în plus)
https://www.youtube.com/watch?v=1uzsuCFUQ68 – The Royal Institution – De ce știința NU e “doar o teorie”.

https://www.youtube.com/watch?v=ySC-3ov9V88 – Matt Parker – Nu e doar o teorie?

https://www.youtube.com/watch?v=0Rnq1NpHdmw – John Oliver – Studii științifice

https://www.youtube.com/watch?v=v778svukrtU – Dorothy Bishop – Criza reproductibilității

https://www.youtube.com/watch?v=SPtWPBapyF0 – Nick Brown – Limitele psihologiei științifice

https://www.youtube.com/watch?v=tufAPd1NITQ – Research Methods and Statistics – Practici problematice de cercetare (QRPs)

https://www.youtube.com/watch?v=2MDNvKXdLEM – Michael Aranda – De ce întreaga psihologie are probleme.

https://www.youtube.com/watch?v=uF1AF20R5as – Brian Nosek – Despre criza reproductibilității și open science în psihologie

https://www.youtube.com/watch?v=uN3Q-s-CtTc – Uri Simonsohn în cadrul BITSS – Psihologia fals-pozitivelor. Cum să scoți orice rezultat vrei din grade de libertate neconstrânse – discuție despre Simmons, Nelson, & Simonsohn (2011)

https://www.youtube.com/watch?v=V7pvYLZkcK4 – Uri Simonsohn în cadrul BITSS – Curba P – Cum ar trebui să arate valorile lui p într-o literatură onestă? Cum arată în psihologie? – discuție despre Simonsohn, Nelson, & Simmons (2014)

https://www.youtube.com/watch?v=DXJsKpGY-dI – Uri Simonsohn în cadrul BITSS – Open data și detectarea cercetărilor frauduloase – introducere

https://www.youtube.com/watch?v=SbfJTa6O7IM – Uri Simonsohn în cadrul BITSS – Open data și detectarea cercetărilor frauduloase – Exemplul 1 – discuție despre Sanna et al. (2011)

https://www.youtube.com/watch?v=cY_u5Al9lcs – Uri Simonsohn în cadrul BITSS – Open data și detectarea cercetărilor frauduloase – Exemplul 2 – discuție despre Smeesters & Liu (2011)

Neuroștiință:

https://www.youtube.com/watch?v=rJjHjnzmvDI – 2 Minute Neuroscience - RMNf

https://www.youtube.com/watch?v=b64qvG2Jgro – Molly Crocket – Atenție la “neuro-gunoi”

https://www.youtube.com/watch?v=_AqTwAoT6dw – BPM Biosignals – Artefacte EEG – “Minte” sau doar activitate musculară?

https://www.youtube.com/watch?v=8thDuVfqCCM – Britt Garner – Există o criză a RMN-ului funcțional?

https://www.youtube.com/watch?v=bcoK3ZokPV8 – Andrew Jahn – Sunt toate rezultatele RMNf greșite?

https://www.youtube.com/watch?v=M8UQcnJevdo – Russ Poldrack – De la inferențe inverse la clasificarea


tiparelor

https://www.youtube.com/watch?v=nVLeMY6TLkk – Andrew Jahn – Bias statistic în RMNf. Circularitate și cum s-o


eviți

https://www.youtube.com/watch?v=fT63aEhnAYI – Tor Wager – Principiile RMNf – Modulul 4 – Probleme


psihologice și inferențe

https://www.youtube.com/watch?v=PzzZa_E8wzo – Tor Wager – Principiile RMNf – Modulul 4b – Inferențe


inverse

https://www.youtube.com/watch?v=lwy2k8YQ-cM – Tor Wager – Modulul 11 - Design experimental I – principii


psihologice

https://www.youtube.com/watch?v=ofl4p1Pyqes – Tor Wager – Modulul 12a Design experimental II – Tipuri de


design

https://www.youtube.com/watch?v=5w5S46n4CPg - Tor Wager – Modulul 12b - Design experimental II – Tipuri de


design

https://www.youtube.com/watch?v=4QY3YEUSwB4 – Tor Wager – Principiile RMNf – Crize în neuroștiință

https://www.youtube.com/watch?v=WOWRGrZGays – Tor Wager – Principiile RMNf – Crize în neuroștiință 2

https://www.youtube.com/watch?v=1yeekmfxAM4 – Tor Wager – Principiile RMNf – Utilizarea meta-analizelor


pentru inferență

Key themes (lecture concept map):

localizationist neuroanatomy vs. holist neurophysiology; "organic vs. functional" becomes "organic vs. psychological" wherever neurophysiology is inaccessible (a de facto dualism); psychoanalysis as the dominant force in clinical psychology and the abandonment of neuroscience in psychiatry

methods: the anatomo-clinical method, microscopy, EEG, PET, CT, MRI, fMRI, MEG, TMS/tDCS

publish or perish; undeclared conflicts of interest (financial + academic fame); journals' neophilia; the bias against disconfirming hypotheses; the profitable industry of psychotherapy and clinical-psychology training programmes; the bias of awarding research grants preferentially to novelty-chasing studies that make implausibly general claims

publication bias; p-hacking; HARKing; statistical power; the ritualistic obsession with NHST; researcher degrees of freedom; l-hacking (cherry-picking by selectively citing the literature that supports the hypothesis); the correlation/causation confusion; claims made directly to the press and never published in journals; confirmation bias vs. falsifiability; naive neurorealism; absurd reverse inferences; correction for multiple comparisons; circular analysis ("voodoo correlations")


Bibliography 1
Daniel David – Castele de nisip: Știință și pseudoștiință în psihopatologie (Sand Castles: Science and Pseudoscience in Psychopathology), p. 5-6; 20-21; 229-240; 242-248;
Motto: "Die and become!" (Goethe, Selige Sehnsucht)

Dedication: This work is dedicated to those colleagues and professionals for whom science is the opportunity to wonder, critically and actively, at the spectacle of the world, and to take full delight in one's own contributions to that spectacle which have so far survived one's continual attempts to sabotage them.

Acknowledgements: the appearance of this work was spurred by people to whom I owe special thanks. To paraphrase a famous phrase, Guy Montgomery and Dana Bovbjerg of the Mount Sinai School of Medicine, USA, woke me from my dogmatic slumber and taught me what science, research and scientific practice mean, at a time when I believed I already knew and that nothing could surprise me any more.

Albert Ellis, of the Albert Ellis Institute, USA, was the man who inspired my critical and reflective attitude towards everything I do and who taught me never to lose confidence and to keep building critically even when many people are against me and against what I do. "It WOULD BE BETTER if things were otherwise, but nowhere is it written that things MUST NOT be as bad as they are, so keep building and keep being critical of yourself," he would say. Steven Lynn, of SUNY at Binghamton, USA, and Irving Kirsch, of the University of Connecticut, USA, were friends and models of a critical and constructive attitude towards the clinical field, and they influenced what I have done here.

Paradoxically, I must also thank those "professionals" who gave me the opportunity to exercise my critical sense through the sophisms, banal solemnities and truisms they produced and elevated to the rank of "works" and "scientific" discourse.

About the author: Daniel David obtained his bachelor's degree in psychology in 1996 at Babeș-Bolyai University and his PhD in psychology (1999) at the same university. He pursued postgraduate studies (1999-2002) in clinical psychology and psychopathology at the Mount Sinai School of Medicine, in the United States of America, and completed a training programme (1998-2002) in cognitive and behavioural psychotherapy at the Albert Ellis Institute and the Academy of Cognitive Therapy, USA.

A WORD TO THE READER

I want to state explicitly that the critical approach here will be, from my point of view, an honest one. I have often seen good works demolished by hypocritical attacks, and weak works raised to the rank of "masterpieces" on the basis of reviewers' personal interests and motivations. How can this be done? Simple. Although we know that current scientific methodology has a series of limits and problems, we have adopted it as the best methodology for advancing knowledge that we have at this moment, without claiming that it is perfect. Well, invoking the general limits of scientific methodology in order to criticize a target work or field is hypocrisy (this is how valuable works are often criticized unfairly!). A target work must be criticized for the additional problems it has over and above the general problems of scientific methodology; otherwise we end up levelling the same criticism at every work. Likewise, subsuming the unforced errors of a weak work under the "limits of the scientific method" is just as dishonest (this is how weak works are often rescued!).

PART III. ROMANIAN REALITIES. CONCLUSIONS AND DISCUSSION.

Chapter 6. Clinical psychology in Romania, or the flight from the Tower of Babel. A scientifically grounded pamphlet.

In an extensive study of clinical psychology publications, I reviewed all the publications by Romanian authors available in the international databases (PsycINFO and MEDLINE) in order to investigate the international visibility and impact of Romanian clinical psychology. I also compared the main Romanian university centres with one another in terms of the development of clinical psychology. I started from the idea that valuable works in clinical psychology and psychotherapy ought to be published in important scientific journals, accessible through PsycINFO and MEDLINE. In a country where scientific contribution is evaluated quantitatively – a large number of single-author books! – establishing international standards that value the quality of scientific output is necessary. I believe that research in clinical psychology in Romania is of poor quality. Below I list some of the paradoxes of Romanian research in this field, perceived through my own experience, which led me to this hypothesis; I then investigate the validity of this hypothesis empirically, in order to go beyond the limits of my own perception.

• Promotion standards still require a large number of single-author books (for details, see the norms of the National Council for the Attestation of University Degrees, Diplomas and Certificates). For instance, as a rule, more than four single-author volumes are required for the title of full professor. This happens here at a time when, internationally, science is interdisciplinary and the fundamental works in any field are written by several authors in collaboration. It seems we still believe that one author can write four volumes alone and also make a contribution to the scientific literature! Often these volumes are mere summaries of other works that add nothing new scientifically; they merely satisfy promotion standards. When they are not "reworkings" of other international works, some of these single-author volumes promote banal solemnities and truisms. Looking at a few American universities (SUNY at Binghamton, New School University), I noticed that many professors there would not satisfy the criteria for being a professor in Romania! Some American professors do not even have single-author books in their field. Instead, they have published dozens of co-authored articles in prestigious journals. In serious science, the emphasis is on articles, ranked in importance according to content (experimental articles, non-experimental and qualitative empirical studies, theoretical syntheses, and only then works of local interest) and according to the impact factor of the journal in which they were published. This is done because these indicators reflect the dynamics of science; single-author books – often already outdated on publication because of the new developments that took place during the long time it took to write and print them – do not reflect the dynamics of science efficiently. A serious scientist usually publishes a single-author volume after having published a large number of personal studies through which he has something to say in the field, and then summarizes his ideas in a book. Here things are often the other way round: I know Romanian authors whose number of books approaches their number of scientific articles!

• The obsession with a large number of single-author books also has negative effects on teaching. Abroad, each target field has a few textbooks written by the leaders of that field, used in almost all university centres; often, these leaders are at prestigious universities such as Columbia, Duke, Harvard, Stanford or Yale. Having a culture that values merit, professors at other universities use these textbooks as a starting point which, of course, they improve with their own contributions and works. In our country, however, every holder of a clinical psychology course writes his own textbook and forces his students to learn it. Such authors often do not cite the colleagues in the country who wrote on the topic before them or, if they do borrow their ideas, they "forget" to cite them. At the news that a Romanian colleague had published an article in a prestigious journal abroad, I have often heard replies of the kind: "sure, he had the time, because...". In other words, differences in ability, intelligence and quality of thinking are not acknowledged as expressing themselves in performance (when the performance is belatedly acknowledged at all); instead they are attributed to factors that have nothing necessarily to do with the intellectual qualities of the person who achieved it (he had time, he had materials, he does no administrative work, he has no children, no money worries, etc.). This shows the absence of a culture of merit and a dilated "ego", with everyone considering that they could be just as good as, or better than, anyone in the field. In this context, I have often wondered about the status of plagiarism in the Romanian specialist literature. I believe that much of what we call plagiarism should not be considered plagiarism at all, since it often lacks the element of intent; plagiarism here is an expression of the fact that we have high self-esteem and an "ego" without limits! What I describe here is similar to the situation in Lewis Carroll's Alice's Adventures in Wonderland in which, although the runners in the race finished at different distances from one another, the verdict of the Dodo, an interesting character in the story, was: "Everybody has won, and all must have prizes." If, through discourses like the Dodo's, we destroy the scientific elite, we destroy science! Science is elitist! Although many do science, few produce scientific work! Research has shown that the science that counts is produced by a small number of researchers who publish in a handful of high-impact journals in the field (Garfield, 1990; 1996). The rest is noise produced by the remnants of a socialist academic environment which, like a factory, fulfils its social function by providing work for simple labourers who call themselves "scientists" but manufacture products saleable only on an artificially created market – academic promotion!

• Romanian scientific life often seems to be a constant conspiracy of pygmies, set against the backdrop of the Miorița myth. Professionals of international stature are treated at home as curiosities that must be marginalized or isolated so that they do not keep reminding us of our smallness. Although the pygmies' ferocious ambition often exceeds their intellectual and professional capacity, their solidarity is extraordinary, and the means they use are of a sick ingenuity when people of value have to be blocked. To avoid giving identifiable examples that might affect individual lives and careers, I will only say that each of us (I certainly can!) can probably think of cases in which people of value found no place in Romanian departments but ended up teaching at prestigious universities abroad, or of generously funded projects from which the genuine specialists were excluded through the concerted effort of the pygmies, second-rate specialists. How, under these conditions, can we have science with international impact, and why should we be surprised that Romanian professionals abroad sometimes no longer define themselves as Romanian, remaining loyal instead to the country that valued them?!

• Moreover, the specialist scientific literature lacks a genuine review process. This allows anyone to mutilate the image of the field with personal intuitions, gross conceptual and methodological errors, or with the "scientific revolutions" they initiate. Nowhere in the international literature have I seen as many instances of "a new perspective", "a paradigm shift", "we must reconceptualize" or "a synthetic and integrative approach to the literature" as in the Romanian literature. Reading such works, you get the impression that their authors are international professionals and opinion leaders in the field; sadly, they are local leaders, deluding the students and colleagues who depend on them into believing that they are the Great Wizard of Oz. These are "fairground clamours" made by the baron visiting his estate and frightening the local serfs (students, doctoral candidates, etc.), not scientific revolutions with impact in the field made by professionals. I have seen authors who, not having read thoroughly the original works of the authors they were citing or criticizing – often reading them through second-hand sources – filled the gaps in their reading with text of their own, with astonishing nonchalance. They put there whatever they thought the author would have said, or whatever suited them in justifying their "revolution". This strategy of building straw men only in order to knock them down and thereby display one's own contribution is typical of the Romanian scientific literature. For example, the behavioural approach in psychology and the positivist-empirical approach in works on qualitative methodology have been attacked in this way. The behavioural approach in psychology, to give just one example, has often been (and still is) presented as a caricature in the Romanian literature (Ralea, 1954). The idea is often wrongly conveyed that it is history, or that it is a limited approach without heuristic value that must be replaced by an interactionist-systemic approach. We Romanians have become specialists in exasperatingly interactionist-systemic approaches through which we integrate everything that exists in the literature, thus gaining a more "complete" perspective on the phenomenon! Unfortunately, these complex approaches are neither known nor taken seriously by anyone, and the behavioural approach is not history; on the contrary, it is in full expansion (Dobson, 2002).

Moreover, the minimal conditions for conducting scientific research in this field to international standards are often forgotten in Romanian research. These conditions keep to a minimum the differences between how you actually did the research and how you say you did it when you publish the results. In addition, they prevent the fabrication of facts, favouring the verifiability and falsifiability of the theories investigated. If we do not respect these criteria, we mislead the scientific community, because when we publish an article we imply that we did respect them; constrained by publication requirements, we present our approach as logically ordered even though in practice we did not follow it. In other words, we fabricate evidence and empirical support for what we want to show, rather than following the real phenomenon we studied, along the lines described by Goodman (1978); a small simulation illustrating the consequences of such practices follows this list:

• We formulated or reformulated the hypotheses after we had obtained the data;

• We eliminated hypotheses that did not fit the data;

• We eliminated data that did not suit us, etc. (none of this appears in the published article);
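
A minimal simulation sketch (not from David's text) of how the practices listed above inflate error rates: even when every effect is null, measuring several outcomes and reporting whichever one "worked" yields far more than 5% false positives. The sample sizes and number of outcomes below are arbitrary illustrative choices.

    # Python sketch: false-positive inflation from flexible outcome selection (a QRP).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_per_group, n_outcomes = 5000, 20, 5

    false_positives = 0
    for _ in range(n_studies):
        # Two groups, five outcome measures, and no true difference on any of them.
        a = rng.normal(size=(n_per_group, n_outcomes))
        b = rng.normal(size=(n_per_group, n_outcomes))
        pvals = [stats.ttest_ind(a[:, j], b[:, j]).pvalue for j in range(n_outcomes)]
        if min(pvals) < 0.05:          # report whichever outcome "confirmed" the hypothesis
            false_positives += 1

    print(false_positives / n_studies)  # roughly 0.2 instead of the nominal 0.05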

Sometimes works (e.g., doctoral theses) are published and considered scientifically important despite flagrant methodological errors. I am not speaking here of errors in the mathematical-statistical processing (these may be due to the statistical software or to printing errors), but of the wrong choice of statistical test (e.g., using a test designed for independent samples on dependent samples) or of the obstinate use of statistical significance (often itself applied incorrectly!; see Cohen, 1994, for details) without also considering effect sizes and normative comparisons (see Kendall et al., 2000, for the appropriate methodology).
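
A brief sketch (not from the text) of the independent- vs. dependent-samples error mentioned above, using SciPy: for paired data (the same participants measured twice), ttest_rel is the appropriate call, while ttest_ind ignores the pairing and generally misstates the p-value. The numbers are invented.

    # Python sketch: paired (dependent) vs. independent two-sample t-tests.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    pre = rng.normal(50, 10, size=30)        # scores of 30 people before an intervention
    post = pre + rng.normal(2, 3, size=30)   # the same people re-tested: dependent samples

    print(stats.ttest_rel(pre, post).pvalue)  # correct here: accounts for the pairing
    print(stats.ttest_ind(pre, post).pvalue)  # wrong here: treats the two samples as independent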

Another typical case is the thoroughly unprofessional use of the constructs of mediation and moderation. Although Baron and Kenny showed back in 1986 how these concepts differ both theoretically and in statistical analysis, Romanian authors still use them as synonyms, thereby producing important conceptual errors. Then there are the frequent errors with multiple comparisons. For example, if we run three comparisons with the t-test, the threshold for statistical significance is no longer 0.05 but 0.05/3 ≈ 0.017 (for details on how this Bonferroni correction is applied, see David et al., 2003).
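
A worked sketch (not from the text) of the Bonferroni correction just described, with invented p-values:

    # Python sketch: Bonferroni correction for three t-test comparisons.
    from statsmodels.stats.multitest import multipletests

    alpha, n_comparisons = 0.05, 3
    threshold = alpha / n_comparisons           # 0.05 / 3 ≈ 0.0167
    pvals = [0.030, 0.012, 0.20]                # hypothetical p-values from the three tests

    print(threshold, [p < threshold for p in pvals])   # only p = 0.012 survives the correction
    # The same correction via statsmodels:
    reject, p_adjusted, _, _ = multipletests(pvals, alpha=alpha, method="bonferroni")
    print(reject, p_adjusted)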

How many conclusions resting on such analyses are in fact empty words without scientific support?!

Conversely, failing to take statistical power into account (e.g., the minimum number of participants needed in the samples in order to detect an effect that is actually there) has led to real effects being dismissed as non-significant. How many interesting conclusions have died this way?
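
A minimal sketch (not from the text) of the a priori power calculation this implies, using the standard normal approximation for a two-group comparison; the effect size, alpha and power targets below are conventional illustrative values.

    # Python sketch: how many participants per group are needed to detect a medium effect?
    from scipy.stats import norm

    d, alpha, power = 0.5, 0.05, 0.80                  # Cohen's d, two-sided alpha, target power
    z_alpha, z_beta = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
    print(round(n_per_group))                          # about 63 per group

    # The same formula solved for power with only 15 per group:
    achieved = norm.cdf(d * (15 / 2) ** 0.5 - z_alpha)
    print(round(achieved, 2))                          # about 0.28: a real effect is usually missed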

From a theoretical point of view, it is frightening for this field that an orthodox Cartesian dualism is perpetuated in practice. Medical colleagues often claim that if there is a biochemical lesion, then clearly there is nothing a psychological intervention can do. It is so easily forgotten that psychological intervention has biochemical consequences (Furmark et al., 2002; Kirsch, 1990). Some psychologist colleagues become "drunk" with pleasure when they discover that certain psychological phenomena have a biochemical counterpart; some of them have made a professional name for themselves by endlessly probing the biochemical substrate of psychological phenomena. Presumably this attitude gives them the certainty that what they study exists (in a physical sense)! It is a pity that they still need this certainty when psychology as a science justified itself decades ago. These are the people I call the best psychologists among physicians and the least psychological of psychologists!
Through the lens of this fierce dualism, I have watched research being reborn in Romania that died in other countries with the move to a bio-psycho-social model, such as studies showing that hypnotic phenomena are accompanied by changes in the brain. Research targeting the biological substrate of psychological phenomena is to be encouraged only if it aims to discover neurobiological and biochemical patterns that are specific to specific psychological phenomena. In that direction, it can contribute to the development of new technologies in psychopathology. If, however, such research shows simplistically, as often happens, that certain phenomena – hypnotic phenomena, the placebo effect, emotional states – simply have a neural counterpart, then it amounts to banal solemnities, produced by the half-educated.

This is how we have come, today, to link the prefrontal lobe to nearly every psychological phenomenon, from pain to emotion and thought. Haven't we known for decades that the mind is a function of the brain?! Why, then, dear colleagues, are you amazed and delighted when you "discover" that the specific psychological phenomena you study have a neural counterpart?! Surely you did not believe that these phenomena, which we all experience, exist in thin air!

Without claiming to clarify here the relationship between the psychological and the biological (it is not clarified in the specialist literature either!), I will briefly present how this relationship should be viewed on the basis of the data we currently have. Just as Muhammad Ali and Cassius Clay are different names for the same person, the psychological and the biomedical languages are descriptions of the same phenomenon. Just as it is absurd to say that Muhammad Ali is the cause of Cassius Clay or vice versa, it makes no sense to keep asking whether the biological causes the psychological or, conversely, whether psychological factors cause biological changes.

For example, consider the pessimistic cognitive schemas observed in depression: clearly they do not exist in thin air. Some neurobiological reality presumably corresponds to them – it might, for instance, involve a serotonin deficit. The cognitive schema is a cause of a depressive feeling. To this feeling there obviously also corresponds a neurobiological reality (hypothetically: a reduced amplitude of the excitatory postsynaptic potential). So the psychological theory of depression and the biological one are not opposed, nor are they alternatives. They are descriptions of the same phenomenon in different languages. In this context, causal discourse must be kept within the same language. The statement "cognitive schemas cause the depressive feeling" might correspond, in biomedical language and following our earlier hypothesis, to the statement "the serotonin deficit produces a reduced amplitude of the excitatory postsynaptic potential". The serotonin deficit does not "make" us have negative cognitive schemas; it is a neurobiological description of them.

So, dear colleagues who "study" the neurobiological substrate of psychological phenomena: must we really still say, after all this time, that every psychological phenomenon has a neural counterpart, and that therefore, if your research does not show that this correspondence is a specific one – ideally one from which we could predict the efficacy of drug or psychological treatments (to say nothing of clarifying the cause/effect relationship) – you are producing nothing but truisms and works useful only for professional promotion?

We will probably have to wait a while for things to settle. Fairground stars tend to gather around themselves and promote (only up to the level at which they do not feel threatened) serfs and dull muses fraudulently labelled scientists and experts, rather than professionals (who are, unfortunately for them, unimpressed by fairground glitter). As a good friend of mine used to say, we must wait for a new generation – a more modest, more critical and more self-reflective one, I hope – to reach the levers of power and change something, bringing normality to the field.

I will not go on with further examples and will stop here. To be consistent with myself, I must support everything I have claimed here. I will do so both through a case analysis and through an empirical analysis from which we can infer how general these problems are in the Romanian academic environment. If I am wrong and my claims are false, then we would expect Romanian clinical psychology to be internationally visible and high-performing, separating value from non-value. Let us see how things stand.
To avoid conflicts with other colleagues in the profession, I will critically analyse my own scientific work. I hope thereby to set an example of critical self-analysis and of a way to break the egocentrism that leads to an unjustifiably inflated ego. In my first clinical psychology works, following the models that existed in Romania at the time, in the 1990s, I produced an "unknown revolution" in almost every one. Had I worked in a scientific environment that was genuinely high-performing in clinical psychology, a real review process would probably have stopped me from doing so. Like any young researcher, I deserved a serious review process, but I either did not receive one or received one so distorted and emotional that there was nothing to learn from it.

I was brought down to earth and resized both my revolutions and my style of doing science after spending a long period at the Mount Sinai School of Medicine, in the United States, starting at the end of the 1990s. There I learned that a revolutionary article can be written only if your previous work has demonstrated that you are the person who should write it, and only if it passes a serious review process and reaches high-impact scientific journals where the scientific community can discuss it seriously. Otherwise it is merely an expression of one's feeling of life, as Rudolf Carnap would put it, not a scientific endeavour.

Against these criteria, I have resized my entire scientific output. Stylistically, I do not value my works written before 2000. To salvage them and test their substantive value, I scaled them down stylistically from revolutions to simple contributions and tried to publish them abroad. Those that passed this test are scientific works today; the rest are my products from a romantic period, expressions of the lack of a rigorous and serious model of doing science in my field of interest. The works written after 2000 are the ones that define me and that express an enthusiastic critical rationalism in everything I do.

With full responsibility I state that a work published in Romania that claims to be revolutionary, with impact in the field, cannot be considered as such until it passes the test of international review. Let me be clearly understood: I do not consider that only work published abroad is good work. I appreciate the honest and sincere Romanian authors who have made important contributions to the development of psychology in Romania through publications at home; these authors, however, sized their works correctly and did not claim revolutionary contributions to the field. The Romanian scientific community and the current rules of publishing in clinical psychology are so naive that they cannot separate value from non-value. In Paul Feyerabend's terms, Romanian clinical psychology is still governed by the law of whoever shouts loudest, not by scientific value.

Returning to our investigation – which will, in fact, confirm the claims above – the results obtained by analysing the databases mentioned earlier are presented in Tables 5 and 6 (for details on these results and on the methodology used to obtain them, see David et al., 2002). The results show that Romanian clinical psychology is internationally visible, which is remarkable given the restrictions psychology had to endure during the communist period. Babeș-Bolyai University, through the publications of the present author and of a small group of colleagues, contributed significantly to this visibility, the data showing that clinical psychology in Cluj is the best developed in the country. I note that, at the time these analyses were carried out, the journal of the Romanian Association of Cognitive Sciences, Cogniție Creier Comportament (Cognition, Brain, Behavior), was not yet indexed in PsycINFO. The Cluj group tends to publish its scientific results in this journal, so the fact that clinical psychology in Cluj has the highest international visibility without this being driven by the Cluj authors' publications in their own journal is something to be appreciated. Unfortunately, this visibility is modest if we compare it with American psychology, or even with considerably more modest ones, such as Finnish psychology (David et al., 2002). To increase the visibility of Romanian clinical psychology, works must be published in visible and influential journals of the international scientific literature. A clearer conditioning of academic promotion on this criterion would probably lead to greater visibility of Romanian clinical psychology on the international scene.

Table 5. The international visibility of Romanian clinical psychology

University centre                              PsycINFO                 MEDLINE
Alexandru Ioan Cuza University, Iași           VD – 44    VR – 2.3      VD – 2
Babeș-Bolyai University, Cluj-Napoca           VD – 84    VR – 9.3      VD – 12
University of Bucharest                        VD – 76    VR – 8.4      VD – 6
West University of Timișoara                   VD – 3     VR – 0.27     VD – 0

LEGEND

VD – direct visibility. Calculated as the number of articles published by the members of each psychology department (in post in 2001) up to and including 2001, as indexed in the databases mentioned above.

VR – visibility relative to departmental resources. Calculated by dividing VD by the number of members of each psychology department (in post in 2001). (A short worked sketch of this calculation follows below.)
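
A short worked sketch (not from the text) of the VR formula in the legend: VR is simply VD divided by department size. The member counts below are hypothetical, chosen only so that the ratios come out close to the VR values reported in Table 5.

    # Python sketch: relative visibility (VR) = direct visibility (VD) / number of department members.
    vd_psychinfo = {"Iasi": 44, "Cluj-Napoca": 84, "Bucharest": 76, "Timisoara": 3}
    members = {"Iasi": 19, "Cluj-Napoca": 9, "Bucharest": 9, "Timisoara": 11}   # hypothetical headcounts

    for centre, vd in vd_psychinfo.items():
        print(centre, round(vd / members[centre], 2))   # approximately 2.32, 9.33, 8.44, 0.27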

An analysis of the impact factor of Romanian clinical psychology paints a similar picture. Impact was analysed by counting the number of articles or papers published in ISI-ranked journals, that is, journals ranked by the Institute for Scientific Information (ISI). ISI-ranked journals constitute the elite of scientific journals. For instance, MEDLINE indexes over 4,000 journals and PsycINFO around 1,300. Even so, recent research has shown that just 150 journals account for half of all citations in a field and for a quarter of everything that is published (Garfield, 1990; 1996). Moreover, 2,000 journals account for 85% of all publications and 95% of all cited articles. Each ISI-ranked journal has a computed impact factor, which reflects that journal's influence in the field. In a simplified description, the impact factor of journal X is calculated by dividing the number of citations received in other journals by all the articles published in journal X in a given year, by the total number of articles published in journal X during that year. Research has shown that the impact factor is a good predictor of future citations of the articles published in a journal.
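
A minimal sketch (not from the text) of the impact-factor arithmetic. It uses the standard two-year definition employed by ISI (citations received in year Y by the items a journal published in the two preceding years, divided by the number of citable items it published in those years), which refines the simplified one-year description above; all counts are invented.

    # Python sketch: two-year journal impact factor for an invented "journal X" in 2003.
    citations_in_2003_to = {2001: 180, 2002: 140}   # citations received in 2003 by items from 2001 and 2002
    citable_items = {2001: 100, 2002: 110}          # citable items journal X published in 2001 and 2002

    impact_factor_2003 = sum(citations_in_2003_to.values()) / sum(citable_items.values())
    print(round(impact_factor_2003, 2))             # (180 + 140) / (100 + 110) ≈ 1.52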

Although Babeș-Bolyai University, through the author's works, has the greatest impact in the country on international clinical psychology, Romanian clinical psychology is not an important player in international science (David et al., 2002). So even if Romanian clinical psychology is visible, its products are not yet competitive and, consequently, we do not play a significant role on the international scientific scene.

An effective strategy for increasing the impact of Romanian clinical psychology on the international scene would probably be:

• Making academic promotion conditional on publications in ISI-ranked journals.

• Preferentially allocating financial resources and research grants to those who have already proven their competence through scientific contributions of international value.

Let me mention here a case I consider illustrative. A research grant proposal that international reviewers had rated excellent – "state of the art" – and that received international funding for five years was also submitted, in order to test how scientific grants are allocated in Romania, to the National Council for Scientific Research in Higher Education (CNCSIS). This proposal was scored 83.33 points out of a possible 100 and ranked somewhere in the second third of all applications. Moreover, this decision was taken with a total lack of transparency, without our being given the reviewers' comments on which the evaluation was based – a normal practice in any serious institution that funds research. Well, this means either that the CNCSIS criteria are extremely rigorous and its reviewers are international research leaders in clinical psychology and psychopathology, who thereby identified problems in the proposal that the international experts missed, or that resources are allocated on criteria other than the scientific value of the application. Which of the two possibilities is the real one, I do not know; I only know that Romanian clinical psychology is barely visible and without impact internationally!

Confirming the criticisms above, at the recent fifth CNCSIS conference (2003) the president of this institution acknowledged problems with the rigour of the evaluations carried out by CNCSIS experts and argued for strengthening the rules for grant evaluation and for the selection of expert reviewers.

Bibliography 2

Ben Goldacre – I Think You'll Find It's a Bit More Complicated Than That, p. 3-13;

HOW SCIENCE WORKS

Why Won't Professor Susan Greenfield Publish This Theory in a Scientific Journal?

Guardian, 22 October 2011

This week Baroness Susan Greenfield, Professor of Pharmacology at Oxford, apparently announced that computer games are
causing dementia in children. This would be very concerning
scientific information; but it comes to us from the opening of a
new wing at an expensive boarding school, not an academic
conference. Then a spokesperson told a gaming site that’s not
really what she meant. But they couldn’t say what she does
mean.

Two months ago the same professor linked internet use with
the rise in autism diagnoses (not for the first time), then pulled back when autism charities and an Oxford
professor of psychology raised concerns. Similar claims go back a very long way. They seem changeable, but
serious.

It’s with some trepidation that anyone writes about Professor Greenfield’s claims. When I raised concerns, she said
I was like the epidemiologists who denied that smoking caused cancer. Other critics find themselves derided
in the media as sexist. When Professor Dorothy Bishop raised concerns, Professor Greenfield responded: ‘It’s
not really for Dorothy to comment on how I run my career.’

But I have one, humble, question: why, in over five years of appearing in the media raising these grave worries,
has Professor Greenfield of Oxford University never simply published the claims in an academic paper?

A scientist with enduring concerns about a serious widespread risk would normally set out their concerns clearly, to
other scientists, in a scientific paper, and for one simple reason. Science has authority, not because of white
coats or titles, but because of precision and transparency: you explain your theory, set out your evidence, and
reference the studies that support your case. Other scientists can then read it, see if you’ve fairly represented
the evidence, and decide whether the methods of the papers you’ve cited really do produce results that
meaningfully support your hypothesis.

Perhaps there are gaps in our knowledge? Great. The phrase ‘more research is needed’ has famously been banned
by the British Medical Journal, because it’s uninformative: a scientific paper is the place to clearly describe the
gaps in our knowledge, and specify new experiments that might resolve these uncertainties.

But the value of a scientific publication goes beyond this simple benefit of all relevant information appearing,
unambiguously, in one place. It’s also a way to communicate your ideas to your scientific peers, and invite them to
express an informed view.

In this regard, I don’t mean peer review, the ‘least-worst’ system settled on for deciding whether a paper is worth
publishing, where other academics decide if it’s accurate, novel, and so on. This is often represented as some kind
of policing system for truth, but in reality some dreadful nonsense gets published, and mercifully so: shaky
material of some small value can be published into the buyer-beware professional literature of academic science;
then the academic readers of this literature, who are trained to critically appraise a scientific case, can make their
own judgement.

And it is this second stage of review by your peers – after publication –that is so important in science. If there
are flaws in your case, responses can be written, as letters to the academic journal, or even whole new papers.
If there is merit in your work, then new ideas and research will be triggered. That is the real process of science.

If a scientist sidesteps their scientific peers, and chooses to take an apparently changeable, frightening and
technical scientific case directly to the public, then that is a deliberate decision, and one that can’t realistically
go unnoticed. The lay public might find your case superficially appealing, but they may not be fully able to judge
the merits of all your technical evidence.

I think these serious scientific concerns belong, at least once, in a clear scientific paper. I don’t see how this
suggestion is inappropriate, or impudent, and in all seriousness, I can’t see an argument against it. I hope it won’t
elicit an accusation of sexism, or of participation in a cover-up. I hope that it will simply result in an Oxford science
professor writing a scientific paper, about a scientific claim of great public health importance, that she has made
repeatedly – but confusingly – for at least half a decade.
Cherry-Picking Is Bad. At Least Warn Us When You Do It

Guardian, 24 September 2011

Last week the Daily Mail and Radio 4’s Today programme took some bait from Aric Sigman, an author of popular-
sciencey books about the merits of traditional values. ‘Sending babies and toddlers to daycare could do untold
damage to the development of their brains and their future health,’ explained the Mail.

These news stories were based on a scientific paper by Sigman in the Biologist. It misrepresents individual studies,
as Professor Dorothy Bishop demonstrated almost immediately, and it cherry-picks the scientific literature,
selectively referencing only the studies that support Sigman’s view. Normally this charge of cherry-picking
would take a column of effort to prove, but this time Sigman himself admits it, frankly, in a PDF posted on his
own website.

Let me explain why this behaviour is a problem. Nobody reading the Biologist, or its press release, could
possibly have known that the evidence presented was deliberately incomplete. That is, in my opinion, an act
of deceit by the journal; but it also illustrates one of the most important principles in science, and one of the
most bafflingly recent to emerge.

Here is the paradox. In science, we design every individual experiment as cleanly as possible. In a trial comparing
two pills, for example, we make sure that participants don’t know which pill they’re getting, so that their
expectations don’t change the symptoms they report. We design experiments carefully like this to exclude bias:
to isolate individual factors, and ensure that the findings we get really do reflect the thing we’re trying to measure.

But individual experiments are not the end of the story. There is a second, crucial process in science, which is
synthesising that evidence together to create a coherent picture.

In the very recent past, this was done badly. In the 1980s, researchers such as Cynthia Mulrow produced damning
research showing that review articles in academic journals and textbooks, which everyone had trusted, actually
presented a distorted and unrepresentative view when compared with a systematic search of the academic
literature. After struggling to exclude bias from every individual study, doctors and academics would then
synthesise that evidence together with frightening arbitrariness.

The science of ‘systematic reviews’ that grew from this research is exactly that: a science. It’s a series of
reproducible methods for searching information, to ensure that your evidence synthesis is as free from bias as your
individual experiments. You describe not just what you found, but how you looked, which research databases
you used, what search terms you typed, and so on. This apparently obvious manoeuvre has revolutionised the
science of medicine.

What does that have to do with Aric Sigman, the Society of Biologists, and their journal, the Biologist? Well, this
article was not a systematic review, the cleanest form of research summary, and it was not presented as one. But
it also wasn’t a reasonable summary of the research literature, and that wasn’t just a function of Sigman’s
unconscious desire to make a case: it was entirely deliberate. A deliberately incomplete view of the literature, as
I hope I’ve explained, isn’t a neutral or marginal failure. It is exactly as bad as a deliberately flawed experiment,
and to present it to readers without warning is bizarre.
Blame is not interesting, but I got in touch with the Society of Biology, as I think we’re more entitled to have high
expectations of them than of Sigman, who is, after all, some guy writing fun books in Brighton. They agree that
what they did was wrong, that mistakes were made, and that they will do differently in future.

Here’s why I don’t think that’s true. The last time they did exactly the same thing, not long ago, with another
deliberately incomplete article from Sigman, I wrote to the journal, the editor, and the editorial board, setting out
these concerns very clearly. The Biologist has actively decided to continue publishing these pieces by Sigman,
without warning. They get the journal huge publicity; and fair enough. I’m no policeman. But in the two-actor
process of communication, until it explains to its readers that it knowingly presents cherry-picked papers without
warning – and makes a public commitment to stop – it’s for readers to decide whether they can trust what the
journal publishes.

Kids Who Spot Bullshit, and the Adults Who Get Upset About It

Guardian, 28 May 2011

Brain Gym is a schools programme I’ve been writing about since 2003. It’s a series of elaborate physical movements
with silly pseudoscientific justifications: you wiggle your head back and forth, because that gets more blood
into your frontal lobes for clearer thinking; you contort your fingers together to improve some unnamed ‘energy
flow’. They’re keen on drinking water, because ‘processed foods’ – I’m quoting the Brain Gym Teacher’s Manual –
‘do not contain water’. You pay hundreds of thousands of pounds for Brain Gym, and it’s still done in hundreds of
state schools across the UK.

This week I got an email from a science teacher about a thirteen-year-old pupil. Both have to remain anonymous.
This pupil wrote an article about Brain Gym for her school paper, explaining why it’s nonsense: the essay is
respectful, straightforward, and factual. But the school decided this article couldn’t be printed, because it would
offend the teachers in the junior school who use Brain Gym.

Now, this is weak-minded, and perhaps even vicious. More interesting, though, is how often children are able to
spot bullshit, and how often adults want to shut them up.

Emily Rosa is the youngest person ever to have published a scientific paper in the Journal of the American Medical
Association, one of the most influential medical journals in the world. At the age of nine she saw a TV programme
about nurses who practise ‘Therapeutic Touch’, claiming they can detect and manipulate a ‘human energy field’ by
hovering their hands above a patient.

For her school science-fair project, Rosa conceived and executed an experiment to test if they really could detect
this ‘field’. Twenty-one experienced practitioners put their palms on a table, behind a screen. Rosa flipped a coin,
hovered her hand over the therapist’s left or right palm accordingly, and waited for them to say which it was:
the therapists performed no better than chance, and with 280 attempts there was sufficient statistical power
to show that these claims were bunk.
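
An aside (not part of Goldacre's text): the power claim can be checked directly. With 280 coin-flip attempts and chance guessing, roughly 155 correct answers are needed for a one-sided test at the 5% level, and a practitioner who was genuinely right even 65% of the time (an arbitrary illustrative figure) would almost certainly clear that bar.

    # Python sketch: significance threshold and power for 280 yes/no attempts.
    from scipy.stats import binom

    n = 280
    k_crit = int(binom.ppf(0.95, n, 0.5)) + 1    # smallest k with P(X >= k) <= .05 under chance
    power_65 = binom.sf(k_crit - 1, n, 0.65)     # chance of clearing k_crit if true accuracy were 65%
    print(k_crit, round(power_65, 3))            # roughly 155, and a power close to 1.0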

Therapeutic Touch practitioners, including some in university posts, were deeply unhappy: they insisted loudly
that JAMA was wrong to publish the study.

Closer to home is Rhys Morgan, a schoolboy with Crohn’s Disease. Last year, chatting on www.crohnsforum.com,
he saw people recommending ‘Miracle Mineral Solution’, which turned out to be industrial bleach, sold with a dreary
conspiracy theory to cure Aids, cancer, and so on.
At the age of fifteen, he was perfectly capable of exploring the evidence, finding official documents, and
explaining why it was dangerous. The adults banned him. Since then he’s got his story on The One Show, while
the Chief Medical Officer for Wales, the Food Standards Agency and Trading Standards have waded in to support
him.

People wring their hands over how to make science relevant and accessible, but newspapers hand us one answer
on a plate every week, with their barrage of claims on what’s good for you or bad for you: it’s evidence-based
medicine. If every school taught the basics – randomised trials, blinding, cohort studies, and why systematic reviews
are better than cherry-picking your evidence – it would help everyone navigate the world, and learn some of the
most important ideas in the whole of science.

But even before that happens, we can feel optimistic. Information is more easily accessible now than ever before,
and smart, motivated people can sidestep traditional routes to obtain knowledge and disseminate it. A child can
know more about evidence than their peers, and more than adults, and more than their own teachers; they can tell
the world what they know, and they can have an impact.

So the future is bright. And if you’re one of the teachers who stopped a child’s essay from being published
because it dared to challenge your colleagues for promoting the ludicrousness of Brain Gym, then really:
shame on you.

How Myths Are Made

Guardian, 8 August 2009

In a formal academic paper, every claim is referenced to another academic paper: either an original research
paper, describing a piece of primary research in a laboratory or on patients; or a review paper which summarises an
area. This convention gives us an opportunity to study how ideas spread, and myths grow, because in theory you
could trace who references what, and how, to see an entire belief system evolve from the original data. Such an
analysis was published this month in the British Medical Journal, and it is quietly seminal.

Steven Greenberg from Harvard Medical School focused on an arbitrary hypothesis: the specifics are irrelevant to
us, but his case study was the idea that a protein called β amyloid is produced in the skeletal muscle of patients who
have a condition called ‘inclusion body myositis’. Hundreds of papers have been written on this, with thousands
of citations between them. Using network theory, Greenberg produced a map of interlocking relationships, to
demonstrate who cited what.

By looking at this network of citations he could identify the intersections with the most incoming and outgoing
traffic. These are the papers with the greatest ‘authority’ (Google uses the same principle to rank webpages in
its search results). All of the ten most influential papers expressed the view that β amyloid is produced in the
muscle of patients with IBM. In reality, this is not supported by the totality of the evidence. So how did this
situation arise?
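
An aside (not part of Goldacre's article): the "authority" idea can be illustrated with the networkx library on an invented toy citation graph. HITS authority scores and PageRank both reward the nodes that citation traffic converges on, while work that nobody cites never enters the flow. Every node and edge below is made up.

    # Python sketch: authority scores in a toy citation network (edge u -> v means "u cites v").
    import networkx as nx

    g = nx.DiGraph([
        ("paper_X", "review_A"), ("paper_Y", "review_A"), ("paper_Z", "review_A"),
        ("review_A", "lab_1"), ("review_A", "lab_2"), ("review_A", "lab_3"),
        ("paper_X", "lab_1"), ("paper_Y", "lab_2"),
        ("lab_4", "lab_1"),   # "lab_4" reports contradictory data: it cites others, but nobody cites it
    ])

    hubs, authorities = nx.hits(g)                        # HITS hub and authority scores
    print(sorted(authorities.items(), key=lambda kv: -kv[1]))
    print(nx.pagerank(g))                                 # PageRank, the principle Google popularised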

Firstly, we can trace how basic laboratory work was referenced. Four lab papers did find β amyloid in IBM patients’
muscle tissue, and these were among the top ten most influential papers. But looking at the whole network, there
were also six very similar primary research papers, describing similar lab experiments, which are isolated from the
interlocking web of citation traffic, meaning that they received no or few citations. These papers, unsurprisingly,
contained data that contradicted the popular hypothesis. Crucially, no other papers refuted or critiqued this
contradictory data. Instead, those publications were simply ignored.
Using the interlocking web of citations, you can see how this happened. A small number of review papers funnelled
large amounts of traffic through the network, with 63 per cent of all citation paths flowing through one review
paper, and 95 per cent of all citation paths flowing through just four review papers by the same research group.
These papers acted like a lens, collecting and focusing citations – and scientists’ attention – on the papers
supporting the hypothesis, in testament to the power of a well-received review paper.

But Greenberg went beyond just documenting bias in what research was referenced in each review paper. By
studying the network, in which review papers are themselves cited by future research papers, he showed how these
reviews exerted influence beyond their own individual readerships, and distorted the subsequent discourse, by
setting a frame around only some papers.

And by studying the citations in detail, he went further again. Some papers did cite research that contradicted
the popular hypothesis, for example, but distorted it. One laboratory paper reported no β amyloid in three of five
patients with IBM, and its presence in only a ‘few fibres’ in the remaining two patients; but three subsequent papers
cited these data, saying that they ‘confirmed’ the hypothesis. This is an exaggeration at best, but the power of
the social network theory approach is to show what happened next: over the following ten years, these three
supportive citations were the root of 7,848 supportive citation paths, producing chains of false claim in the
network, amplifying the distortion.

Similarly, many papers presented aspects of the β amyloid hypothesis as a theory – but gradually, through
incremental misstatement, in a chain of references, these papers came to be cited as if they proved the
hypothesis as a fact, with experimental evidence, which they did not.
This is the story of how myths and misapprehensions arise. Greenberg might have found a mess, but instead he
found a web of systematic and self-reinforcing distortion, resulting in the creation of a myth, ultimately
retarding our understanding of a disease, and so harming patients. That’s why systematic reviews are important,
that’s why incremental misstatement matters, and that’s why ghost writing should be stopped.

Publish or Be Damned

Guardian, 3 June 2006

I have a very long memory. So often with ‘science by press release’, newspapers will cover a story even though
the scientific paper doesn’t exist, assuming it’s around the corner. In February 2004 the Daily Mail was saying
that cod liver oil is ‘nature’s superdrug’. The Independent wrote: ‘They’re not yet saying it can enable you to stop
a bullet or leap tall buildings, but it’s not far short of that.’ These glowing stories were based on a press release
from Cardiff University, describing a study looking at the effect of cod liver oil on some enzymes – no idea which –
that have something to do with cartilage – no idea what. I had no way of knowing whether the study was significant,
valid or reliable. Nobody did, because it wasn’t published. No methods, results, conclusions to appraise.
Nothing.

In 1998 Dr. Arpad Pusztai announced through the telly that genetically modified potatoes ‘caused toxicity to rats’.
Everyone was extremely interested in this research. So what had he done in his lab? What were they fed? What
had he measured? A year later the paper was published, and it was significantly flawed. Nobody had been able
to replicate his data and verify the supposed danger of GM, because we hadn’t seen the write-up, the academic
paper. How could anyone examine, let alone have a chance to rebut, Pusztai’s claims? Peer review is just the start;
then we have open scrutiny by the scientific community, and independent replication.

So anyway, I wrote at the time that these cod liver oil people at Cardiff University were jolly irresponsible, that
patients would worry, GPs would have no answers for them, and so on. This week I contacted Cardiff and said:
This is what I said last year, now where’s the paper? Prof John Harwood responded through the press office: ‘Mr
Goldacre is quite right in asserting that scientists have to be very certain of their facts before making public
statements or publishing data.’

I’m a doctor, but it’s good to know we agree. If puzzling.

‘Because of that,’ continued Prof Harwood, ‘Professor Caterson and my laboratory are continuing to work on
samples.’

Right …

‘I’m afraid this takes a long time and much longer than journalists or public relations firms often realise. So, I
regret he will have to be patient before Professor Caterson or myself are prepared to comment in detail.’

How kind. And only slightly patronising. I don’t want them to comment on fish oil. It’s seventeen months after
‘nature’s superdrug’: I just want to know where the published paper is. In 2014, after being patient for a decade
as requested, I contacted Prof Caterson and the Cardiff Press Office again. They confirmed that the research has
never been published in a journal. Nobody can read or critique the methods or results, and the only public trace
is a skeletal description describing a brief conference presentation. This document is four paragraphs long. The
press release was seventeen paragraphs long. I’ll try them again in a decade.
Bibliography 3
Joseph P. Forgas; Roy F. Baumeister – The Social Psychology of Gullibility, p. 279-280; 284-293; 295-297;
Scientific Gullibility

Lee Jussim, Sean T. Stevens, Nathan Honeycutt, Stephanie M. Anglin, Nicholas Fox

“Gullible” means easily deceived or cheated. In this chapter, we focus on the deception aspect of gullibility. What does gullibility have
to do with social psychology? Scientific gullibility occurs when
individuals, including scientists, are “too easily persuaded that some
claim or conclusion is true, when, in fact, the evidence is inadequate
to support that claim or conclusion.” In this chapter, we review
evidence of the sources and manifestations of scientific gullibility in
(mostly) social psychology, and also identify some potential
preventatives.

Before continuing, some clarifications are necessary. We have no insight into, and make no claims about, what any scientist “thinks” or
“believes.” What we can address, however, are statements that have
appeared in scholarship. In this chapter, when a paper is written as if
some claim is true, we take that to mean that it is “accepted,”
“believed,” “assumed to be valid,” and/or “that the scientist was
persuaded that the claim was valid and justified.” When we do this, we refer exclusively to written statements in
the text, rather than to someone’s “beliefs,” about which we have no direct information. Issues of whether and why
scientists might make claims in scientific scholarship that they do not truly believe are beyond the scope of this
chapter, though they have been addressed elsewhere (e.g., Anomaly, 2017).

Furthermore, we distinguish scientific gullibility from being wrong. Scientists are human, and make mistakes. Even
fundamental scientific methods and statistics incorporate uncertainty, so that, sometimes, a well-conducted study
could produce a false result – evidence for a phenomenon, even though the phenomenon does not exist, or evidence
against the existence of some phenomenon that does. Thus, scientific gullibility is more than being wrong; error is
baked into the nature of scientific exploration.

We define scientific gullibility as being wrong, in regards to the strength and/or veracity of a scientific finding,
when the reasons and/or evidence for knowing better were readily available. Thus, demonstrating scientific
gullibility means showing that

(1) scientists have often believed something, although it was untrue, and

(2) there was ample basis for them to have known it was untrue.

Overview

Why should scientists be interested in better understanding their own gullibility? We think it is because most of
us do not want to be gullible (see Cooper & Avery, Chapter 16 this volume). Although there may be a small number
who care more about personal success, they are likely rare exceptions. Most researchers genuinely want to know
the truth and want to produce true findings. They want to be able to critically understand the existing literature,
rather than believe that false claims are true. A better understanding of scientific gullibility then, can
(1) reduce the propensity to believe scientific claims that are not true; and

(2) increase awareness of the logical, evidentiary, methodological, and statistical issues that can call attention
to claims that warrant increased skeptical scrutiny.

In this context, then, we suggest the following five flags of gullibility as a starting point; we also welcome
suggestions for additional symptoms of gullibility:

Criterion 1. Generalization of claims that are based on data obtained from small, potentially unrepresentative
samples.

Criterion 2. Causal inference(s) drawn from correlational data.

Criterion 3. Scholarship offering opposing evidence, an opposing argument, or a critical evaluation of the claim
being presented as fact is overlooked (e.g., not cited).

Criterion 4. Claims, and possibly generalized conclusions, are made without citing empirical evidence
supporting them.

Criterion 5. Overlooking (e.g., not citing and/or engaging with) obvious and well-established (in the existing
scientific literature) alternative explanations.

Glorification of p < .05: “It Was Published, Therefore It Is a Fact”

Scientism refers to exaggerated faith in the products of science (Haack, 2012; Pigliucci, 2018). One particular
manifestation of scientism is reification of a conclusion based on its having been published in a peer-reviewed
journal. These arguments are plausibly interpretable as drawing an equivalence between “peer-reviewed
publication” and “so well established that it would be perverse to believe otherwise” (for examples, see, e.g.,
Fiske, 2016; Jost et al., 2009). They are sometimes accompanied with suggestions that those who criticize such
work are either malicious or incompetent (Fiske, 2016; Jost et al., 2009; Sabeti, 2018), and thus reflect this sort of
scientism. Because the ability to cite even several peer-reviewed publications in support of some
conclusion does not make the conclusion true, this is particularly problematic (see, e.g., Flore & Wicherts, 2015;
Jussim, 2012; Jussim, Crawford, Anglin, Stevens et al., 2016; Simonsohn et al., 2014).

One of the most important gatekeepers for an article entering a peer-reviewed journal is a statistically
significant result, or p < .05 (Simmons, Nelson, & Simonsohn, 2011). The undue reification of “peer reviewed” as
“fact” itself implies a reification of p < .05, to the extent that p < .05 is a necessary finding to get some empirical
work published (Nuijten, Hartgerink, van Assen, Epskamp, & Wicherts, 2016). Here is a list of conclusions that are
not justified by p < .05:

• The researcher’s conclusion is an established fact.


• The main findings are reliable or reproducible.
• The difference or relationship observed is real, valid, or bona fide.
• The difference or relationship observed cannot be attributed to chance.

In fact, the only thing p < .05 might establish, as typically used, is that the observed result, or one more extreme,
has less than a 5% chance of occurring, if the null is true. Even that conclusion is contingent on both the underlying
assumptions not being too severely violated, and on the researcher not employing questionable research
practices to reach p < .05 (Simmons et al., 2011).

It gets worse from there. P-values between .01 and .05 are relatively improbable when the effect under study is truly
nonzero (Simonsohn et al., 2014). When a series of studies produces a predominance of p-values testing the key hypotheses
in this range, the pattern of results obtained (despite each study reaching p < .05) can be more improbable than
the obtained results are under the null for each study.
Consider a three-experiment sequence in which one-degree-of-freedom F-tests of the main hypothesis, with error
degrees of freedom of 52, 50, and 63, have values of 5.34, 4.18, and 4.78, respectively, and correspond to effect sizes
ranging from d = .55 to .64. The corresponding p-values are .025, .046, and .033, respectively. If we assume an
average underlying effect size of d = .60, the probability of getting three p-values between .01 and .05 is itself .014
(this probability can be easily obtained from the website http://rpsychologist.com/d3/pdist).
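
The figure quoted above can also be reproduced directly from the noncentral t distribution. The Python sketch below is our own illustration (not the chapter authors' code) and assumes two-tailed, one-degree-of-freedom tests in equal-n two-group designs, so the noncentrality parameter is d·sqrt(n/2); under those assumptions it yields roughly .24–.25 per study and a joint probability near .014–.015, in line with the value cited.

```python
# Illustrative sketch only: probability that a two-tailed p-value falls in
# (.01, .05) given a true effect of d = .60, for the three error dfs above.
# Equal group sizes are assumed (total N ~ error df + 2).
from scipy import stats

def prob_p_between(d, df_error, lo=0.01, hi=0.05):
    n_per_group = (df_error + 2) / 2              # assumption: two equal-sized groups
    ncp = d * (n_per_group / 2) ** 0.5            # noncentrality parameter

    def power(alpha):                             # P(|t| exceeds the two-tailed cutoff)
        crit = stats.t.ppf(1 - alpha / 2, df_error)
        return stats.nct.sf(crit, df_error, ncp) + stats.nct.cdf(-crit, df_error, ncp)

    return power(hi) - power(lo)                  # p lands between .05 and .01

per_study = [prob_p_between(0.60, df) for df in (52, 50, 63)]
joint = per_study[0] * per_study[1] * per_study[2]
print([round(p, 3) for p in per_study], round(joint, 3))
```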

In other words, the likelihood of obtaining this pattern of results with a true effect size of d = .60 is even lower
than the likelihood of obtaining those results under the null. This is not some concocted hypothetical. It is
exactly the results reported in one of the most influential papers in all of social psychology, the first paper to
produce evidence that stereotype threat undermines women’s math performance; a paper that, according to
Google Scholar, has been cited over 3,000 times (Spencer, Steele, & Quinn, 1999).

There are two bottom lines here.

1. Treating conclusions as facts because they appear in peer-reviewed journals is not justified.

2. Treating findings as “real” or “credible” simply because they obtained p < .05 is not justified.

Some claims in some peer-reviewed articles are justified and some statistical findings do provide strong evidence
in support of some claim. Excess scientism occurs, however, when the quality of the evidence, and the strength of
the conclusions reached on the basis of that evidence, are not critically evaluated, and, instead, the mere fact of
publication and p < .05 are presented as or presumed to be a basis for believing some claim is true.

Status Quo and Status Biases

Status Quo Biases

Laypeople are prone to metacognitive myopia (see Fiedler, Chapter 7 this volume), and are often biased toward
maintaining the current scientific consensus on a topic (Samuelson & Zeckhauser, 1988). Moreover, people often
hold a false belief in the law of small numbers, erroneously believing that a small sample is representative of the population
and that a study is more likely to replicate than the laws of chance would predict (Tversky & Kahneman, 1971).
Seminal studies may thus be perceived as holding an exaggerated level of truth.

Does metacognitive myopia impact social psychologists? There are good reasons to think it does. When a
paper, or finding, achieves canonical status it may be widely accepted by social psychologists as “truth.” It can
be quite difficult to change the canon once some finding has been published and integrated into common discourse
in the field (Jussim, Crawford, Anglin, Stevens et al., 2016). This is so even when stronger contradictory evidence
emerges (Jussim, 2012). Papers that challenge accepted or preferred conclusions in the literature may be held to a
higher threshold for publication. For example, replication studies regularly report samples much larger than the
original study (see Table 15.1), suggesting they have been held to a higher methodological and evidentiary
standard.

This may, in part, result from repeated citations of the “canonical” finding, as mere repetition can increase the
subjective truth of a message (see e.g., Myers, Chapter 5 this volume; Unkelbach & Koch, Chapter 3 this volume).
This repetition-truth effect could be particularly potent in cases where the “canonical” finding is consistent with a
preferred narrative in the field. Indeed, there may be a number of zombie theories remaining in psychology
despite substantial and sustained criticism (e.g., Meehl, 1990), as even when an article is retracted, scientists
continue to cite it (Greitemeyer, 2014). When the original authors acknowledge that new evidence invalidates their
previous conclusions, people are less likely to continue to believe the overturned findings (Eriksson & Simpson,
2013). However, researchers do not always declare they were wrong, even in the face of evidence to the
contrary.
Table 15.1 Social psychology bias for the status quo?

Each entry lists the publication, its narrative, key aspects of its methods, and citation counts given as total / since the failed replication was published.

Darley and Gross (1983). Narrative: stereotypes lead to their own confirmation; stereotype bias in the presence
but not the absence of individuating information. Methods: people judge targets with vs. without relevant
individuating information; single experiment; N = 59–68, depending on analysis. Citations: 1355 / 1154.

Baron, Albright, and Malloy (1995). Narrative: failed replication of Darley and Gross (1983); positive result in the
opposite direction: stereotype bias in the absence of individuating information, and individuating information
eliminated stereotype bias. Methods: close replication (and extension) of Darley and Gross (1983); two
experiments; total N = 161. Citations: 75 / 72.

Spencer et al. (1999). Narrative: stereotype threat for women and math; apprehension of being judged by the
negative stereotype leads to poorer math performance. Methods: three experiments; total N = 177.
Citations: 3023 / 294.

Finnigan and Corker (2016). Narrative: failed replication of the stereotype threat effect in Chalabaev, Major,
Sarrazin, & Cury (2012), modeled closely on Spencer et al. (1999); no significant main effect or interaction
effect for threat or performance avoidance goals. Methods: pre-registered; close replication of Chalabaev et al.
(2012) and extension of Spencer et al. (1999); single experiment; total N = 590. Citations: 9 / 9.

Bargh, Chen, and Burrows (1996). Narrative: automatic effects of stereotypes on behavior. Methods: two
experiments; total N = 60. Citations: 4387 / 1570.

Doyen, Klein, Pichon, and Cleeremans (2012). Narrative: failed replication of Bargh et al. (1996); no effects of
stereotypes on behavior except when experimenters were not blind to condition. Methods: two close
replication and extension experiments; total N = 170. Citations: 404 / 386.

Snyder and Swann (1978). Narrative: people seek to confirm their interpersonal expectations. Methods: four
experiments; total N = 198; people chose among confirmatory or disconfirmatory leading questions (no option
was provided for asking diagnostic questions). Citations: 1152 / 1060.

Trope and Bassok (1983). Narrative: people rarely seek to confirm their interpersonal expectations; instead, they
seek diagnostic information. Methods: three experiments; conceptual replication; total N = 342; people could
seek information varying in the extent to which it was diagnostic vs. confirmatory. Citations: 166 / 161.

Note: Citation counts were obtained from Google Scholar (January 28, 2017).

Table 15.1 shows how studies that have been subject to critiques and failed pre-registered replications
continue to be cited far more frequently than either the critiques or the failed replications, even after those
critiques and failures have appeared. Although blunt declarations that situations are more powerful than
individual differences are no longer common in the social psychological literature, the emphasis on the power of
the situation manifests as blank slatism and as a belief in “cosmic egalitarianism” – the idea that, but for situations,
there would be no mean differences between any demographic groups on any socially important or valued
characteristics (Pinker, 2002; Winegard et al., 2018). Thus, the examples presented here are not historical
oddities; they reflect a state of scientific gullibility in social psychology.

Status Biases

One of the great arguments for the privileged status of science is universalism (Merton, 1942/1973); scientific
claims are supposed to be evaluated on the basis of the quality of the evidence rather than the status of the
person making the claim. The latter can be referred to as a status bias and it may play a role in influencing
scientists’ perceptions and interpretations of research. Sometimes referred to as an eminence obsession (Vazire,
2017), or the “Matthew Effect” (Merton, 1968), the principle underlying status bias is that the “rich get richer.”

Having a PhD from a prestigious university, currently being employed by a prestigious university, and/or
having an abundance of grant money, awards, publications, and citations, are used as a heuristic for
evaluating work. That is, the work of scientists fitting into one or more of these categories may frequently get a
pass, and be evaluated less critically (Vazire, 2017).

Empirically, status biases have been demonstrated in a variety of academic contexts. Peer reviewers for a
prominent clinical orthopedic journal were more likely to accept, and evaluated more positively, papers from
prestigious authors in their field than identical papers evaluated under double-blind conditions (Okike, Hug,
Kocher, & Leopold, 2016). In the field of computer science research, conference paper submissions from famous
authors, top universities, and top companies were accepted at a significantly greater rate by single-blind reviewers
than those who were double-blind (Tomkins, Zhang, & Heavlin, 2017). Peters and Ceci (1982) demonstrated a
similar effect on publishing in psychology journals, reinforcing the self-fulfilling nature of institutional-level
stereotypes.

Evidence of Scientific Gullibility

Thus far we have defined scientific gullibility, articulated standards for distinguishing scientific gullibility from
simply being wrong, reviewed basic standards of evidence, and reviewed the evidence regarding potential social
psychological factors that lead judgments to depart from evidence. But is there any evidence of actual scientific
gullibility in social psychology? One might assume that scientific gullibility occurs rarely among social
psychologists. We are in no position to reach conclusions about how often any of these forms of gullibility manifest,
because that would require performing some sort of systematic and representative sampling of claims in social
psychology, which we have not done.

Instead, in the next section, we take a different approach. We identify examples of prominent social psychological
claims that not only turned out to be wrong, but that were wrong because scientists made one or more of the
mistakes we have identified. In each case, we identify the original claim, show why it is likely erroneous, and
discuss the reasons this should have been known and acknowledged.

Conclusions Without Data: The Curious Case of Stereotype “Inaccuracy”

Scientific articles routinely declare stereotypes to be inaccurate either without a single citation, or by citing an
article that declares stereotype inaccuracy without citing empirical evidence. We call this “the black hole at the
bottom of declarations of stereotype inaccuracy” (Jussim, Crawford, Anglin, Chambers et al., 2016), and give some
examples: “[S]tereotypes are maladaptive forms of categories because their content does not correspond to what
is going on in the environment” (Bargh & Chartrand, 1999, p. 467). “To stereotype is to allow those pictures to
dominate our thinking, leading us to assign identical characteristics to any person in a group, regardless of the
actual variation among members of that group” (Aronson, 2008, p. 309). No evidence was provided to support
either claim.

Even the American Psychological Association (APA), in its official pronouncements, has not avoided the inexorable
pull of this conceptual black hole. APA first declares: “Stereotypes ‘are not necessarily any more or less inaccurate,
biased, or logically faulty than are any other kinds of cognitive generalizations,’ and they need not inevitably
lead to discriminatory conduct” (APA, 1991, p. 1064). They go on to declare: “The problem is that stereotypes about
groups of people often are overgeneralizations and are either inaccurate or do not apply to the individual group member
in question ([Heilman, 1983], note 11, at 271)”

The APA referenced Heilman (1983), which does declare stereotypes to be inaccurate. It also reviews evidence of
bias and discrimination. But it neither provides nor reviews empirical evidence of stereotype inaccuracy. A similar
pattern occurs when Ellemers (2018, p. 278) declares, “Thus, if there is a kernel of truth underlying gender
stereotypes, it is a tiny kernel” without citing scholarship that assessed the accuracy of gender stereotypes.

These cases of claims without evidence regarding inaccuracy pervade the stereotype literature (see Jussim, 2012;
Jussim, Crawford, Anglin, Chambers et al., 2016, for reviews). It may be that the claim is so common that most
scientists simply presume there is evidence behind it – after all, why would so many scientists make such a
claim, without evidence? (see Duarte et al., 2015; Jussim, 2012; Jussim, Crawford, Anglin, Chambers et al., 2016;
Jussim, Crawford, Anglin, Stevens et al., 2016, for some possible answers). Given this state of affairs, it seems likely
that when the next publication declares stereotypes to be inaccurate without citing any evidence, it, too, will be
accepted.

Large Claims, Small Samples

Studies with very small samples rarely produce clear evidence for any conclusion; and, yet, some of the most
famous and influential social psychological findings are based on such studies. Social priming is one example of
this. One of the most influential findings in all of social psychology, priming elderly stereotypes causing people
to walk more slowly (Bargh, Chen, & Burrows, 1996, with over 4,000 citations as of this writing), was based on two
studies with sample sizes of 30 each. It should not be surprising that forensic analyses show that the findings of
this and similar studies are extraordinarily unlikely to replicate (Schimmack, Heene, & Kesavan, 2017), and that
this particular study has been subject to actual failures to replicate (Doyen, Klein, Pichon, & Cleeremans, 2012).

A more recent example involves power posing, the idea that expansive poses can improve one’s life (Carney,
Cuddy, & Yap, 2010). That is an extraordinarily confident claim for a study based on 42 people. It should not be
surprising, therefore, that most of its claims simply do not hold up under scrutiny (Simmons & Simonsohn, 2017) or
attempts at replication (Ranehill et al., 2015).

Failure to Eliminate Experimenter Effects

Experimenter effects occur when researchers evoke hypothesis-confirming behavior from their research
participants, something that has been well known for over 50 years (e.g., Rosenthal & Fode, 1963). Nonetheless,
research suggests that only about one-quarter of the articles in Journal of Personality and Social Psychology
and Psychological Science that involved live interactions between experimenters and participants explicitly
reported blinding those experimenters to the hypotheses or experimental conditions (Jussim, Crawford, Anglin,
Stevens et al., 2016; Klein et al., 2012).

Although it is impossible to know the extent to which this has created illusory support for psychological hypotheses,
this state of affairs nonetheless warrants a high level of skepticism about findings in any published
report that has not explicitly reported experimenter blindness. This analysis is not purely hypothetical. In a rare
case of researchers correcting their own research, Lane et al. (2015) reported failures to replicate their earlier
findings (Mikolajczak et al., 2010, same team). They noted that experimenters had not previously been blind to
condition, which may have caused a phantom effect.

Research has also demonstrated that some priming “effects” occurred only when experimenters were not
blind to condition (Gilder & Heerey, 2018). Much, if not all, social psychological experimentation that involves
interactions between experimenters and participants, and that fails to blind experimenters, warrants high levels
of skepticism, pending successful (preferably pre-registered) replications that do blind experimenters to
hypothesis and conditions. Based on content analysis of the social psychological literature (Jussim, Crawford,
Anglin, Stevens et al., 2016; Klein et al., 2012), this may constitute a large portion of the social psychological
experimental literature.

Inferring Causation from Correlation


Inferring causality from correlation happens with regularity in psychology (e.g., Nunes et al., 2017), and, as we
show here, in work on intergroup relations. Gaps between demographic groups are routinely presumed to reflect
discrimination, which, like any correlation (in this case, between group membership and some outcome, such as
distribution into occupations, graduate admissions, income, etc.), might, but does not necessarily, explain the gap.

For example, when men receive greater shares of some desirable outcome, sexism is often the go-to explanation
(e.g., Ledgerwood, Haines, & Ratliff, 2015; van der Lee & Ellemers, 2015), even when alternative explanations are
not even considered (Jussim, 2017b). Sometimes, it is the go-to explanation even when an alternative explanation
(such as Simpson’s paradox) better explains the discrepancy (e.g., Albers, 2015; Bickel, Hammel, & O’Connell, 1975).

Similarly, measures of implicit prejudice were once presented as powerful sources of discrimination (e.g.,
Banaji & Greenwald, 2013) based on “compelling narratives.” The logic seemed to be something like

(1) implicit prejudice is pervasive,

(2) inequality is pervasive,

(3) therefore, implicit prejudice probably explains much inequality.

We call this a “phantom” correlation because the argument could be and was made in the absence of any direct
empirical link between any measure of implicit prejudice and any real-world gap. Indeed, even the more modest
goal of linking implicit prejudice to discrimination has proven difficult (Mitchell, 2018). It should not be surprising,
therefore, to discover that evidence indicates that implicit measures predict discrimination weakly at best (e.g.,
Forscher et al., 2016).

Furthermore, evidence has been vindicating the view proposed by Arkes and Tetlock (2004) that implicit “bias”
measures seem to reflect social realities more than they cause them (Payne, Vuletich, & Lundberg, 2017;
Rubinstein, Jussim, & Stevens, 2018). Thus, although it may well be true that there is implicit bias, and it is
clearly true that there is considerable inequality of all sorts between various demographic groups, whether the
main causal direction is from bias to inequality, or from inequality to “bias” remains unclear. This seems like an
example of scientific gullibility, not because the implicit bias causes inequality link is known to be “wrong,” but
because dubious and controversial evidence has been treated as the type of well-established “fact”
appropriate for influencing policy and law (Mitchell, 2018).

Overlooking Contrary Scholarship

The “power of the situation” is one of those canonical, bedrock “findings” emblematic of social psychology. It is true
that there is good evidence that situations are sometimes quite powerful (Milgram, 1974). But the stronger
claim that also appears to have widespread acceptance is that personality and individual differences have little
to no effect once the impact of the situation is accounted for (see e.g., Jost & Kruglanski, 2002; Ross & Nisbett,
1991). The persistence of an emphasis on the power of the situation in a good deal of social psychological
scholarship provides one example of overlooking scholarship that has produced contrary evidence (Funder, 2006,
2009).

There are many problems with this claim, but with respect to scientific gullibility the key one is that it is usually
made without actually comparing the “power of the situation” to evidence that bears on the “power of individual
differences.” The typical effect size for a situational effect on behavior is about the same as the typical effect size
for a personality characteristic – and both are rather large relative to other social psychological effects (Fleeson,
2004; Fleeson & Noftle, 2008; Funder, 2006, 2009). It is not “gullibility” to believe in the “power of the
situation” simply out of ignorance of the individual differences data. It is gullibility to make such claims without
identifying and reviewing such evidence.

The Fundamental Publication Error: Correctives do not Necessarily Produce Correction


The fundamental publication error refers to the belief that just because some corrective to some scientific error
has been published, there has been scientific self-correction (Jussim, 2017a). A failure to self-correct can
occur, even if a corrective has been published, by ignoring the correction, especially in outlets that are intended
to reflect the canon. With most of the examples presented here, not only are the original claims maintained in
violation of fundamental norms of scientific evidence, but ample corrections have been published. Nonetheless,
the erroneous claims persist. Despite the fact that dozens of studies have empirically demonstrated the
accuracy of gender and race stereotypes, claims that such stereotypes are inaccurate still appear in
“authoritative” sources (e.g., Ellemers, 2018; see Jussim, Crawford, & Rubinstein, 2015 for a review). Similarly, the
assumption that inequality reflects discrimination, without consideration of alternatives, is widespread (see, e.g.,
reviews by Hermanson, 2017; Stern, 2018; Winegard, Clark, & Hasty, 2018).

Reducing Scientific Gullibility

Changing Methods and Practices

Some researchers are actively working on ways to reduce gullibility and increase valid interpretations of published
findings, many of which are aimed at reforming the academic incentive structure. Simply put, within academia,
publications represent credibility and currency. The more a researcher publishes, and the more those
publications are cited by others in the field, the more their credibility as a researcher increases. This can then
lead to more publications, promotions, and funding opportunities. Thus, publishing one’s findings is essential, and
one of the most prominent gatekeepers of publication is the p<.05 threshold. Yet, such a metric can
promote questionable research practices (Simmons et al., 2011; Simonsohn et al., 2014). These findings may
constitute an example of Goodhart’s Law – that when a measure becomes a desirable target it ceases to
be a good measure (Koehrsen, 2018) – at work among researchers.

One intervention aimed at reducing behaviors that artificially increase the prevalence of p-values just below
0.05 is preregistration. Preregistration requires a researcher to detail a study’s hypotheses, methods, and
proposed statistical analyses prior to collecting data (Nosek & Lakens, 2014). By pre-registering a study, researchers
are not prevented from performing exploratory data analysis, but they are prevented from reporting exploratory
findings as confirmatory (Gelman, 2013).

Because of growing recognition of the power of pre-registration to produce valid science, some journals have even
begun embracing the registered report. A registered report is a proposal to conduct a study with clearly defined
methods and statistical tests that is peer reviewed before data collection. Because a decision to publish is made not
on the nature or statistical significance of the findings, but on the importance of the question and the quality of the
methods, publication biases are reduced. Additionally, researchers and journals have started data-sharing
repositories to encourage the sharing of non-published supporting material and raw data. Openly sharing methods
and collected data allows increased oversight by the entire research community and promotes collaboration.
Together, open research materials, preregistration, and registered reports all discourage scientific gullibility by
shedding daylight on the research practices and findings, opening studies to skeptical evaluation by other scientists,
and therefore, increasing clarity of findings and decreasing the influence of the types of status and status quo biases
discussed earlier.

Credibility Categories

Recently, Pashler and De Ruiter (2017) proposed three credibility classes of research.

Class 1, the most credible, is based on work that has been

• published,
• successfully replicated by several pre-registered studies, and
• in which publication biases, HARKing (Kerr, 1998), and p-hacking can all be ruled out as explanations
for the effect.

Work that meets this standard can be considered a scientific fact, in the Gouldian sense of being well established.

Class 2 research is strongly suggestive but falls short of being a well-established “fact.” It might include

• many published studies, but


• there are few, if any, pre-registered successful replications, and
• HARKing and p-hacking have not been ruled out.

Class 3 evidence is that yielded by a small number of small sample studies, without pre-registered
replications, and without checks against HARKing and p-hacking. Such studies are preliminary and should not
be taken as providing strong evidence of anything, pending stronger tests and pre-registered successful
replications.
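
To make the three classes concrete, here is a minimal Python sketch of how such a credibility label might be assigned to a body of evidence. The encoding and thresholds (for example, treating two or more successful pre-registered replications as "several", or five published studies as "many") are our own illustrative assumptions, not part of Pashler and De Ruiter's proposal.

```python
# Illustrative encoding only; the thresholds below are assumptions,
# not Pashler and De Ruiter's definitions.
from dataclasses import dataclass

@dataclass
class Evidence:
    n_published_studies: int        # published studies reporting the effect
    n_prereg_replications: int      # successful pre-registered replications
    biases_ruled_out: bool          # publication bias, HARKing, p-hacking excluded

def credibility_class(e: Evidence) -> int:
    if e.n_prereg_replications >= 2 and e.biases_ruled_out:
        return 1                    # well established ("fact" in the Gouldian sense)
    if e.n_published_studies >= 5:  # "many" studies, assumed threshold
        return 2                    # strongly suggestive, short of fact
    return 3                        # preliminary; strong conclusions unwarranted

print(credibility_class(Evidence(n_published_studies=2,
                                 n_prereg_replications=0,
                                 biases_ruled_out=False)))   # -> 3
```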

Pashler and De Ruiter’s (2017) system could have prevented social psychology from taking findings such as
stereotype threat (Steele & Aronson, 1995), social priming (Bargh et al., 1996), and power posing (Carney et
al., 2010) as “well established.” Had the field not had a norm of excessive scientism, and, instead, treated these
findings as suggestive, and warranting large-scale pre-registered replication attempts, much of the current
“replication crisis” may have been avoided. To be fair, the value of pre-registration was not widely recognized
until relatively recently, which may help explain why it was not used. But our main point remains intact; absent pre-
registration, or large, high-powered replications, such work should have been considered preliminary and
suggestive at best, especially considering the small sample sizes on which it was based.

Pashler and De Ruiter’s system is an important contribution to understanding when past literature in social
psychology provides a strong versus weak evidentiary basis for or against some theory, hypothesis, or
phenomenon. Nonetheless, we also think it is less important that researchers use this exact system, than it is that
they develop some systematic way of assigning credibility to research based on factors such as sample size,
consideration of alternative explanations, pre-registration, open data, and materials, etc. In fact, the field’s view of
how to evaluate research credibility is still evolving, and Pashler and De Ruiter’s system is not the final word;
it is more an initial attempt to systematize the strength of past evidence. Whatever system one uses, we predict
that a closer attention to the credibility of research, rather than a simple acceptance of something as fact just
because it was published, will go a long way to reducing scientific gullibility.

Conclusion

Scientific gullibility is a major problem because it has contributed to the development of a dubious scientific
“canon” – findings that are taken as so well established that they are part of the social psychological fundament,
as evidenced by their endorsement by the American Psychological Association, and their appearance in outlets that
are supposed to reflect only the most well-established phenomena, such as handbook and annual review chapters.
Gullibility begins with

• treating results from small sample size studies as well established “facts,”
• a lack of transparency surrounding data analysis,
• failure to understand limitations of statistical analyses,
• underestimation of the power of publication biases,
• or an over-reliance on p<.05.

Researchers also sometimes give undue credibility to

• papers that oversell findings,
• papers that tell compelling narratives that aren’t substantiated by the data, or
• papers that report data supporting desired conclusions with insufficient skepticism.

Findings that have been roundly refuted or called into question in the empirical literature are often not
extirpated from the canon.

In this chapter, we articulated and provided evidence for six scientific gullibility red flags that can and do appear in
the research literature:

(1) large claims being made from small and/or potentially unrepresentative samples,

(2) many published reports of experiments do not state that experimenters were blind to hypotheses and
conditions,

(3) correlational data being used as evidence of causality,

(4) ignoring scholarship articulating clear opposing evidence or arguments,

(5) putting forth strong claims or conclusions that lack a foundation in empirical evidence, and

(6) neglecting to consider plausible alternative explanations for findings.

Although we are not claiming that the whole social psychological literature reflects gullibility, it is also true that
little is currently of sufficient quality to fall into Pashler and de Ruiter’s class 1 of “established fact.” On the other
hand, we see no evidence of consensus in the field to use their system. Absent some such system, however, it
remains unclear which areas of social psychology have produced sound science and established facts, and which
have been suggestive at best and entirely false at worst. Our hope is that by revealing these influences on, standards
for recognizing, and ways to limit scientific gullibility, we
have contributed something towards social psychology
producing a canon that is based on valid and well-justified
claims.

Bibliografie 4
Chris Chambers – The Seven Deadly Sins of
Psychology: A Manifesto for Reforming the
Culture of Scientific Practice, p.2-17; 23-45;
History may look back on 2011 as the year that changed
psychology forever. It all began when the Journal of
Personality and Social Psychology published an article called
"Feeling the Future: Experimental Evidence for Anomalous
Retroactive Influences on Cognition and Affect." The paper,
written by Daryl Bem of Cornell University, reported a series
of experiments on psi or "precognition," a supernatural
phenomenon that supposedly enables people to see events
in the future. Bem, himself a reputable psychologist, took an
innovative approach to studying psi. Instead of using
discredited parapsychological methods such as card tasks or
dice tests, he selected a series of gold-standard psychological
techniques and modified them in clever ways.
One such method was a reversed priming task. In a typical priming task, people decide whether a picture shown on
a computer screen is linked to a positive or negative emotion. So, for example, the participant might decide whether
a picture of kittens is pleasant or unpleasant. If a word that "primes” the same emotion is presented immediately
before the picture (such as the word "joy" followed by the picture of kittens), then people find it easier to judge the
emotion of the picture, and they respond faster. But if the prime and target trigger opposite emotions then the
task becomes more difficult because the emotions conflict (e.g., the word "murder" followed by kittens). To
test for the existence of precognition, Bem reversed the order of this experiment and found that primes delivered
after people had responded seemed to influence their reaction times. He also reported similar "retroactive"
effects on memory. In one of his experiments, people were overall better at recalling specific words from a list that
were also included in a practice task, with the catch that the so-called practice was undertaken after the recall
task rather than before. On this basis, Bem argued that the participants were able to benefit in the past from
practice they had completed in the future. As you might expect, Bem's results generated a flood of confusion and
controversy. How could an event in the future possibly influence someone's reaction time or memory in the past?
If precognition truly did exist, in even a tiny minority of the population, how is it that casinos or stock markets turn
profits? And how could such a bizarre conclusion find a home in a reputable scientific journal?

Scrutiny at first turned to Bem's experimental procedures. Perhaps there was some flaw in the methods that could
explain his results, such as failing to randomize the order of events, or some other subtle experimental error. But
these aspects of the experiment seemed to pass muster, leaving the research community facing a dilemma. If true,
precognition would be the most sensational discovery in modern science. We would have to accept the existence
of time travel and reshape our entire understanding of cause and effect.

But if false, Bem's results would instead point to deep flaws in standard research practices — after all, if
accepted practices could generate such nonsensical findings, how can any published findings in psychology be
trusted?

And so psychologists faced an unenviable choice between, on the one hand, accepting an impossible scientific
conclusion and, on the other hand, swallowing an unpalatable professional reality. The scientific community was
instinctively skeptical of Bem's conclusions. Responding to a preprint of the article that appeared in late 2010, the
psychologist Joachim Krueger said: "My personal view is that this is ridiculous and can't be true." After all,
extraordinary claims require extraordinary evidence, and despite being published in a prestigious journal, the
statistical strength of Bem's evidence was considered far from extraordinary.

Bem himself realized that his results defied explanation and stressed the need for independent researchers to
replicate his findings. Yet doing so proved more challenging than you might imagine. One replication attempt by
Chris French and Stuart Ritchie showed no evidence whatsoever of precognition but was rejected by the same
journal that published Bem's paper. In this case the journal didn't even bother to peer review French and
Ritchie's paper before rejecting it, explaining that it "does not publish replication studies, whether successful
or unsuccessful." This decision may sound bizarre, but, as we will see, contempt for replication is common in
psychology compared with more established sciences. The most prominent psychology journals selectively
publish findings that they consider to be original, novel, neat, and above all positive. This publication bias, also
known as the "file-drawer effect," means that studies that fail to show statistically significant effects, or that
reproduce the work of others, have such low priority that they are effectively censored from the scientific record.
They either end up in the file drawer or are never conducted in the first place.
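
A short simulation makes the mechanism concrete. In the Python sketch below (illustrative assumptions only: 1,000 two-group studies of an effect that does not exist, with 30 participants per group), filtering the literature on p < .05 leaves roughly fifty published studies, every one of them "positive".

```python
# Illustrative simulation of the file-drawer effect: only statistically
# significant results of a true null effect ever get "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_null_study(n=30):
    a = rng.normal(0, 1, n)                 # control group, no true effect
    b = rng.normal(0, 1, n)                 # "treatment" group, no true effect
    return stats.ttest_ind(a, b).pvalue

p_values = [run_null_study() for _ in range(1000)]
published = [p for p in p_values if p < .05]  # journals keep only "positive" results

print(f"{len(published)} of {len(p_values)} studies published;")
print("100% of the published record reports a significant effect that does not exist.")
```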

Publication bias is one form of what is arguably the most powerful fallacy in human reasoning: confirmation
bias. When we fall prey to confirmation bias, we seek out and favor evidence that agrees with our existing beliefs,
while at the same time ignoring or devaluing evidence that doesn't. Confirmation bias corrupts psychological
science in several ways.
In its simplest form, it favors the publication of positive results—that is, hypothesis tests that reveal statistically
significant differences or associations between conditions (e.g., A is greater than B; A is related to B, vs. A is the
same as B; A is unrelated to B).

More insidiously, it contrives a measure of scientific reproducibility in which it is possible to replicate but never
falsify previous findings, and it encourages altering the hypotheses of experiments after the fact to "predict"
unexpected outcomes. One of the most troubling aspects of psychology is that the academic community has
refused to unanimously condemn such behavior. On the contrary, many psychologists acquiesce to these
practices and even embrace them as survival skills in a culture where researchers must publish or perish. Within
months of appearing in a top academic journal, Bem's claims about precognition were having a powerful, albeit
unintended, effect on the psychological community. Established methods and accepted publishing practices fell
under renewed scrutiny for producing results that appear convincing but are almost certainly false. As psychologist
Eric-Jan Wagenmakers and colleagues noted in a statistical demolition of Bem's paper: "Our assessment suggests
that something is deeply wrong with the way experimental psychologists design their studies and report their statistical
results." With these words, the storm had broken.

A Brief History of the "Yes Man"

To understand the different ways that bias influences psychological science, we need to take a step back and
consider the historical origins and basic research on confirmation bias. Philosophers and scholars have long
recognized the "yes man" of human reasoning. As early as the fifth century BC, the historian Thucydides noted
words to the effect that "[w]hen a man finds a conclusion agreeable, he accepts it without argument, but when
he finds it disagreeable, he will bring against it all the forces of logic and reason." Similar sentiments were echoed
by Dante, Bacon, and Tolstoy. By the mid-twentieth century, the question had evolved from one of philosophy to
one of science, as psychologists devised ways to measure confirmation bias in controlled laboratory experiments.

Since the mid-1950s, a convergence of studies has suggested that when people are faced with a set of observations
(data) and a possible explanation (hypothesis), they favor tests of the hypothesis that seek to confirm it rather
than falsify it. In other words, people prefer to ask questions to which the answer is "yes," ignoring the maxim
of philosopher Georg Henrik von Wright that "no confirming instance of a law is a verifying instance, but any
disconfirming instance is a falsifying instance."

Psychologist Peter Wason was one of the first researchers to provide laboratory evidence of confirmation bias. In
one of several innovative experiments conducted in the 1960s and 1970s, he gave participants a sequence of
numbers, such as 2-4-6, and asked them to figure out the rule that produced it (in this case: three numbers in
increasing order of magnitude). Having formed a hypothesis, participants were then allowed to write down their
own sequence, after which they were told whether their sequence was consistent or inconsistent with the actual
rule. Wason found that participants showed a strong bias to test various hypotheses by confirming them, even
when the outcome of doing so failed to eliminate plausible alternatives (such as three even numbers). Wason's
participants used this strategy despite being told in advance that "your aim is not simply to find numbers which
conform to the rule, but to discover the rule itself." Since then, many studies have explored the basis of
confirmation bias in a range of laboratory-controlled situations.

Perhaps the most famous of these is the ingenious Selection Task, which was also developed by Wason in 1968.
The Selection Task works like this. Suppose I were to show you four cards on a table, labeled D, B, 3, and 7. I tell
you that if the card shows a letter on one side then it will have a number on the other side, and I provide you with a
more specific rule (hypothesis) that may be true or false: "If there is a D on one side of any card, then there is a 3
on its other side." Finally, I ask you to tell me which cards you would need to turn over in order to determine
whether this rule is true or false. Leaving an informative card unturned or turning over an uninformative card
(i.e., one that doesn't test the rule) would be considered an incorrect response. Before reading further, take a
moment and ask yourself, which cards would you choose and which would you avoid? If you chose D and avoided B
then you're in good company. Both responses are correct and are made by the majority of participants. Selecting D
seeks to test the rule by confirming it, whereas avoiding B is correct because the flip side would be uninformative
regardless of the outcome. Did you choose 3? Wason found that most participants did, even though 3 should be
avoided. This is because if the flip side isn't a D, we learn nothing—the rule states that cards with D on one side
are paired with a 3 on the other, not that D is the only letter to be paired with a 3 (drawing such a conclusion would
be a logical fallacy known as "affirming the consequent"). And even if the flip side is a D then the outcome would
be consistent with the rule but wouldn't confirm it, for exactly the same reason. Finally, did you choose 7 or avoid
it? Interestingly, Wason found that few participants selected 7, even though doing so is correct—in fact, it is just as
correct as selecting D. If the flip side to 7 were discovered to be a D then the rule would be categorically disproven—
a logical test of what's known as the "contrapositive." And herein lies the key result: the fact that most
participants correctly select D but fail to select 7 provides evidence that people seek to test rules or hypotheses
by confirming them rather than by falsifying them.
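
The logic of the task can be spelled out in a few lines of code. The Python sketch below is our own illustration (not part of Wason's materials): it marks a card as worth turning over only if some possible hidden face could falsify the rule "if D on one side, then 3 on the other", and it singles out D and 7, exactly the two logically informative cards.

```python
# Illustrative sketch: which cards can falsify "if D on one side, then 3 on the other"?
# Each card has a letter on one side and a number on the other.
LETTERS, NUMBERS = ("D", "B"), ("3", "7")

def falsifies(letter, number):
    return letter == "D" and number != "3"   # the only way the rule can fail

def worth_turning(visible):
    if visible in LETTERS:                   # hidden side is a number
        return any(falsifies(visible, n) for n in NUMBERS)
    return any(falsifies(l, visible) for l in LETTERS)   # hidden side is a letter

for card in ("D", "B", "3", "7"):
    print(card, "-> turn over" if worth_turning(card) else "-> uninformative")
# Prints: D and 7 are worth turning over; B and 3 are not.
```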

Wason's findings provided the first laboratory-controlled evidence of confirmation bias, but centuries of
informal observations already pointed strongly to its existence. In a landmark review, psychologist Raymond
Nickerson noted how confirmation bias dominated in the witchcraft trials of the middle ages. Many of these
proceedings were a foregone conclusion, seeking only to obtain evidence that confirmed the guilt of the
accused.

For instance, to test whether a person was a witch, the suspect would often be plunged into water with stones
tied to her feet. If she rose then she would be proven a witch and burned at the stake. If she drowned then she
was usually considered innocent or a witch of lesser power. Either way, being suspected of witchcraft was
tantamount to a death sentence within a legal framework that sought only to confirm accusations. Similar
biases are apparent in many aspects of modern life. Popular TV programs such as CSI fuel the impression that
forensic science is bias-free and infallible, but in reality the field is plagued by confirmation bias. Even at the
most highly regarded agencies in the world, forensic examiners can be biased toward interpreting evidence
that confirms existing suspicions. Doing so can lead to wrongful convictions, even when evidence is based on
harder data such as fingerprints and DNA tests. Confirmation bias also crops up in the world of science
communication. For many years it was assumed that the key to more effective public communication of science
was to fill the public's lack of knowledge with facts—the so-called deficit model. More recently, however, this idea
has been discredited because it fails to take into account the prior beliefs of the audience.

The extent to which we assimilate new information about popular issues such as climate change, vaccines, or
genetically modified foods is susceptible to a confirmation bias in which evidence that is consistent with our
preconceptions is favored, while evidence that flies in the face of them is ignored or attacked. Because of this
bias, simply handing people more facts doesn't lead to more rational beliefs. The same problem is reflected in
politics. In his landmark 2012 book, The Geek Manifesto, Mark Henderson laments the cherry-picking of evidence
by politicians in order to reinforce a predetermined agenda. The resulting "policy-based evidence" is a perfect
example of confirmation bias in practice and represents the antithesis of how science should be used in the
formulation of evidence-based policy.

If confirmation bias is so irrational and counterproductive, then why does it exist? Many different explanations
have been suggested based on cognitive or motivational factors. Some researchers have argued that it reflects a
fundamental limit of human cognition. According to this view, the fact that we have incomplete information
about the world forces us to rely on the memories that are most easily retrieved (the so-called availability heuristic),
and this reliance could fuel a bias toward what we think we already know. On the other hand, others have argued
that confirmation bias is the consequence of an innate "positive-test strategy"—a term coined in 1987 by
psychologists Joshua Klayman and Young-Won Ha. We already know that people find it easier to judge
whether a positive statement is true or false (e.g., "there are apples in the basket") compared to a negative
one ("there are no apples in the basket"). Because judgments of presence are easier than judgments of
absence, it could be that we prefer positive tests of reality over negative ones. By taking the easy road, this bias
toward positive thoughts could lead us to wrongly accept evidence that agrees positively with our prior beliefs.
Against this backdrop of explanations for why an irrational bias is so pervasive, anthropologists Hugo Mercier and
Dan Sperber have suggested that confirmation bias is in fact perfectly rational in a society where winning
arguments is more important than establishing truths.

Throughout our upbringing, we are taught to defend and justify the beliefs we hold, and less so to challenge
them. By interpreting new information according to our existing preconceptions we boost our self-confidence and
can argue more convincingly, which in turn increases our chances of being regarded as powerful and socially
persuasive. This observation leads us to an obvious proposition: If human society is constructed so as to reward
the act of winning rather than being correct, who would be surprised to find such incentives mirrored in
scientific practices?

Neophilia: When the Positive and New Trumps the Negative but True

The core of any research psychologist's career—and indeed many scientists in general—is the rate at which
they publish empirical articles in high-quality peer-reviewed journals. Since the peer-review process is
competitive (and sometimes extremely so), publishing in the most prominent journals equates to a form of
"winning" in the academic game of life. Journal editors and reviewers assess submitted manuscripts on many
grounds. They look for flaws in the experimental logic, the research methodology, and the analyses. They study the
introduction to determine whether the hypotheses are appropriately grounded in previous research. They
scrutinize the discussion to decide whether the paper's conclusions are justified by the evidence. But reviewers do
more than merely critique the rationale, methodology, and interpretation of a paper. They also study the results
themselves. How important are they? How exciting? How much have we learned from this study? Is it a
breakthrough? One of the central (and as we will see, lamentable) truths in psychology is that exciting positive
results are a key factor in publishing—and often a requirement.

The message to researchers is simple: if you want to win in academia, publish as many papers as possible in
which you provide positive, novel results.

What does it mean to find "positive" results? Positivity in this context doesn't mean that the results are uplifting or
good news—it refers to whether the researchers found a reliable difference in measurements, or a reliable
relationship, between two or more study variables. For example, suppose you wanted to test the effect of a
cognitive training intervention on the success of dieting in people trying to lose weight. First you conduct a literature
review, and, based on previous studies, you decide that boosting people's self-control might help. Armed with a
good understanding of existing work, you design a study that includes two groups. The experimental group perform
a computer task in which they are trained to respond to images of foods, but crucially, to refrain from responding
to images of particular junk foods. They perform this task every day for six weeks, and you measure how much
weight they lose by the end of the experiment. The control group does a similar task with the same images but
responds to all of them—and you measure weight loss in that group as well. The null hypothesis (called "H0") in
this case is that there should be no difference in weight loss—your training intervention has no effect on
whether people gain or lose weight. The alternative hypothesis (called "H1") is that the training intervention
should boost people's ability to refrain from eating junk foods, and so the amount of weight loss should be greater
in the treatment group compared with the control group. A positive result would be finding a statistically
significant difference in weight loss between the groups (or in technical terms, "rejecting H0"), and a negative
result would be failing to show any significant difference (or in other words, "failing to reject H0").
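
In code, the whole design boils down to a two-sample test. The Python sketch below uses simulated data (the group means, spread, and sample sizes are invented for illustration, not taken from any real trial) to show how the outcome gets labeled "positive" or "negative".

```python
# Illustrative sketch of the weight-loss design: simulated data, two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
training = rng.normal(loc=3.0, scale=2.5, size=40)   # kg lost, training group (assumed)
control  = rng.normal(loc=2.0, scale=2.5, size=40)   # kg lost, control group (assumed)

t_stat, p_value = stats.ttest_ind(training, control)  # two-sided, independent samples
if p_value < .05:
    print(f"p = {p_value:.3f}: reject H0 - a 'positive' result")
else:
    print(f"p = {p_value:.3f}: fail to reject H0 - a 'negative' result")
```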

Note how I use the term "failing." This language is key because, in our current academic culture, journals indeed
regard such outcomes as scientific failures. Regardless of the fact that the rationale and methods are identical in
each outcome, psychologists find negative results much harder to publish than positive results. This is because
positive results are regarded by journals as reflecting a greater degree of scientific advance and interest to readers.
As one journal editor said to me, "Some results are just more interesting and important than others. If I do a
randomized trial on a novel intervention based on a long-shot and find no effect that is not a great leap forward.
However, if the same study shows a huge benefit that is a more important finding."

This publication bias toward positive results also arises because of the nature of conventional statistical analyses in
psychology. Using standard methods developed by Neyman and Pearson, positive results reject H0 in favor of the
alternative hypothesis (H1). This statistical approach—called null hypothesis significance testing—estimates
the probability (p) of an effect of the same or greater size being obtained if the null hypothesis were true.
Crucially, it doesn't estimate the probability of the null hypothesis itself being true: p values estimate the
probability of a given effect (or a more extreme one) arising given the null hypothesis, rather than the probability of
a particular hypothesis given the effect. This means that while a statistically significant result (by
convention, p<.05) allows the researcher to reject H0, a statistically nonsignificant result (p>.05)
doesn't allow the researcher to accept H0. All the researcher can conclude from a statistically
nonsignificant outcome is that H0 might be true, or that the data might be insensitive.
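
This asymmetry is easy to see by simulation. In the Python sketch below (sample size and effect size are arbitrary illustrative choices), a nonsignificant result occurs about 95 percent of the time when H0 is true, but also most of the time when a modest true effect exists and the study is underpowered, so p > .05 on its own tells us very little.

```python
# Illustrative simulation: p > .05 is common both under H0 and under a modest,
# underpowered true effect, so it cannot establish H0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def share_nonsignificant(true_d, n=20, sims=5000):
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_d, 1.0, n)
        if stats.ttest_ind(a, b).pvalue > .05:
            hits += 1
    return hits / sims

print("P(p > .05 | H0 true)       ~", share_nonsignificant(0.0))   # around .95
print("P(p > .05 | true d = 0.4)  ~", share_nonsignificant(0.4))   # still large (low power)
```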

The interpretation of statistically nonsignificant effects is therefore inherently inconclusive. Consider the thought
process this creates in the minds of researchers. If we can't test directly whether there is no difference between
experimental conditions, then it makes little sense to design an experiment in which the null hypothesis would ever
be the focus of interest. Instead, psychologists are trained to design experiments in which findings of interest
would always be positive. This bias in experimental design, in turn, means that students in psychology enter their
research careers reciting the mantra "Never predict the null hypothesis." If researchers can never predict the null
hypothesis, and if positive results are considered more interesting to journals than negative results, then the
inevitable outcome is a bias in which the peer-reviewed literature is dominated by positive findings that reject H0
in favor of H1, and in which most of the negative or nonsignificant results remain unpublished. To ensure that they
keep winning in the academic game, researchers are thus pushed into finding positive results that agree with
their expectations—a mechanism that incentivizes and rewards confirmation bias.

All this might sound possible in theory, but is it true? Psychologists have known since the 1950s that journals are
predisposed toward publishing positive results, but, historically, it has been difficult to quantify how much
publication bias there really is in psychology. One of the most compelling analyses was reported in 2010 by
psychologist Daniele Fanelli from the University of Edinburgh. Fanelli reasoned, as above, that any domain of
the scientific literature that suffers from publication bias should be dominated by positive results that support
the stated hypothesis (H1). To test this idea, he collected a random sample of more than 2,000 published
journal articles from across the full spectrum of science, ranging from the space sciences to physics and
chemistry, through to biology, psychology, and psychiatry.

The results were striking.

Across all sciences, positive outcomes were more common than negative ones. Even for space science, which
published the highest percentage of negative findings, 70 percent of the sampled articles supported the stated
hypothesis. Crucially, this bias was highest in psychology, topping out at 91 percent. It is ironic that
psychology—the discipline that produced the first empirical evidence of confirmation bias—is at the same time
one of the most vulnerable to confirmation bias. The drive to publish positive results is a key cause of
publication bias, but it still explains only half the problem. The other half is the quest for novelty. To compete
for publication at many journals, articles must either adopt a novel methodology or produce a novel finding—and
preferably both. Most journals that publish psychological research judge the merit of manuscripts, in part,
according to novelty. Some even refer explicitly to novelty as a policy for publication. The journal Nature states
that to be considered for peer review, results must be "novel" and "arresting," while the journal Cortex notes
that empirical Research Reports must "report important and novel material." The journal Brain warns authors that
"some [manuscripts] are rejected without peer review owing to lack of novelty," and Cerebral Cortex goes one step
further, noting that even after peer review, "final acceptance of papers depends not just on technical merit, but also
on subjective ratings of novelty."

Within psychology proper, Psychological Science, a journal that claims to be the highest-ranked in psychology,
prioritizes papers that produce "breathtaking" findings! At this point, you might well ask: what's wrong with
novelty? After all, in order for something to be marked as discovered, surely it can't have been observed already (so
it must be a novel result), and isn't it also reasonable to assume that researchers seeking to produce novel results
might need to adopt new methods? In other words, by valuing novelty aren't journals simply valuing discovery?

The problem with this argument is the underlying assumption that every observation in psychological research
can be called a discovery—that every paper reports a clear and definitive fact. As with all scientific disciplines,
this is far from the truth. Most research findings in psychology are probabilistic rather than deterministic:
conventional statistical tests talk to us in terms of probabilities rather than proofs. This in turn means that no
single study and no one paper can lay claim to a discovery. Discovery depends wholly and without exception
on the extent to which the original results can be repeated or replicated by other scientists, and not just once
but over and over again.

For example, it would not be enough to report only once that a particular cognitive therapy was effective at
reducing depression; the result would need to be repeated many times in different groups of patients, and by
different groups of researchers, for it to be widely adopted as a public health intervention. Once a result has been
replicated a satisfactory number of times using the same experimental method, it can then be considered replicable
and, in combination with other replicable evidence, can contribute meaningfully to the theoretical or applied
framework in which it resides. Over time, this mass accumulation of replicable evidence within different fields can
allow theories to become accepted through consensus and in some cases can even become laws. In science,
prioritizing novelty hinders rather than helps discovery because it dismisses the value of direct (or close) replication.

As we have seen, journals are the gatekeepers to an academic career, so if they value findings that are positive
and novel, why would scientists ever attempt to replicate each other? Under a neophilic incentive structure,
direct replication is discarded as boring, uncreative, and lacking in intellectual prowess. Yet even in a research
system dominated by positive bias and neophilia, psychologists have retained some realization that reproducibility
matters. So, in place of unattractive direct replication, the community has reached for an alternative form of
validation in which one experiment can be said to replicate the key concept or theme of another by following a
different (novel) experimental method—a process known as conceptual replication. On its face, this redefinition
of replication appears to satisfy the need to validate previous findings while also preserving novelty.
Unfortunately, all it really does is introduce an entirely new and pernicious form of confirmation bias.

Replicating Concepts Instead of Experiments

In early 2012, a professor of psychology at Yale University named John Bargh launched a stinging public attack on
a group of researchers who failed to replicate one of his previous findings. The study in question, published by Bargh
and colleagues in 1996, reported that priming participants unconsciously to think about concepts related to elderly
people (e.g., words such as "retired," "wrinkle," and "old") caused them to walk more slowly when leaving the lab
at the end of the experiment. Based on these findings, Bargh claimed that people are remarkably susceptible
to automatic effects of being primed by social constructs. Bargh's paper was an instant hit and to date has
been cited more than 3,800 times. Within social psychology it spawned a whole generation of research on social
priming, which has since been applied in a variety of different contexts. Because of the impact the paper achieved,
it would be reasonable to expect that the central finding must have been replicated many times and confirmed as
being sound. Appearances, however, can be deceiving. Several researchers had reported failures to replicate
Bargh's original study, but few of these nonreplications have been published, owing to the fact that journals
(and reviewers) disapprove of negative findings and often refuse to publish direct replications.
One such attempted replication in 2008 by Hal Pashler and colleagues from the University of California San Diego
was never published in an academic journal and instead resides at an online repository called PsychFileDrawer.
Despite more than doubling the sample size reported in the original study, Pashler and his team found no
evidence of such priming effects—if anything they found the opposite result. Does this mean Bargh was wrong?
Not necessarily. As psychologist Dan Simons from the University of Illinois has noted, failing to replicate an
experimental effect does not necessarily mean the original finding was in error. Nonreplications can emerge by

• chance,
• can be due to subtle changes in experimental methods between studies, or
• can be caused by the poor methodology of the researchers attempting the replication.

Thus, nonreplications are themselves subject to the same tests of replicability as the studies they seek to replicate.
Nevertheless, the failed replication by Pashler and colleagues—themselves an experienced research team—raised
a question mark over the status of Bargh's original study and hinted at the existence of an invisible file drawer of
unpublished failed replications.

In 2012, another of these attempted replications came to light when Stephane Doyen and colleagues from the
University of Cambridge and Université Libre de Bruxelles also failed to replicate the elderly priming effect.
Their article appeared prominently in the peer-reviewed journal PLOS ONE, one of the few outlets worldwide that
explicitly renounces neophilia and publication bias. The ethos of PLOS ONE is to publish any methodologically
sound scientific research, regardless of subjective judgments as to its perceived importance or originality. In their
study, Doyen and colleagues not only failed to replicate Bargh's original finding but also provided an
alternative explanation for the original effect—rather than being due to a priming manipulation, it was the
experimenters themselves who unwittingly induced the participants to walk more slowly by behaving
differently or even revealing the hypothesis.

The response from Bargh was swift and contemptuous. In a highly publicized blogpost at psychologytoday.com
entitled "Nothing in Their Heads," he attacked not only Doyen and colleagues as "incompetent or ill-informed,"
but also science writer Ed Yong (who covered the story) for engaging in "superficial online science journalism," and
PLOS ONE as a journal that "quite obviously does not receive the usual high scientific journal standards of peer-review
scrutiny." Amid a widespread backlash against Bargh, his blogpost was swiftly (and silently) deleted but not
before igniting a fierce debate about the reliability of social priming research and the status of replication in
psychology more generally. Doyen's article, and the response it generated, didn't just question the authenticity of
the elderly priming effect; it also exposed a crucial disagreement about the definition of replication.

Some psychologists, including Bargh himself, claimed that the original 1996 study had been replicated at
length, while others claimed that it had never been replicated. How is this possible? The answer, it turned out,
was that different researchers were defining replication differently. Those who argued that the elderly priming
effect had never been replicated were referring to direct replications: studies that repeat the method of a previous
experiment as exactly as possible in order to reproduce the finding. At the time of writing, Bargh's central finding
has been directly replicated just twice, and in each case with only partial success.

1.In the first attempt, published six years after the original study, the researchers showed the same effect but only
in a subgroup of participants who scored high on self-consciousness.

2.In the second attempt, published another four years later, a different group of authors showed that priming
elderly concepts slowed walking only in participants who held positive attitudes about elderly people; those who
harbored negative attitudes showed the opposite effect.

Whether these partial replications are themselves replicable is unknown, but as we will see in chapter 2, hidden
flexibility in the choices researchers make when analyzing their data (particularly concerning subgroup
analyses) can produce spurious differences where none truly exist. In contrast, those who argued that the elderly
priming effect had been replicated many times were referring to the notion of "conceptual replication": the idea
that the principle of unconscious social priming demonstrated in Bargh's 1996 study has been extended and applied
in many different contexts.

In a later blog post at psychologytoday.com called "Priming Effects Replicate Just Fine, Thanks" Bargh referred to
some of these conceptual replications in a variety of social behaviors, including attitudes and stereotypes unrelated
to the elderly. The logic of "conceptual replication" is that if an experiment shows evidence for a particular
phenomenon, you can replicate it by using a different method that the experimenter believes measures the
same class of phenomenon. Psychologist Rolf Zwaan argues that conceptual replication has a legitimate role in
psychology (and indeed all sciences) to test the extent to which particular phenomena depend on specific
laboratory conditions, and to determine whether they can be generalized to new contexts.

The current academic culture, however, has gone further than merely valuing conceptual replication—it has
allowed it to usurp direct replication. As much as we all agree about the importance of converging evidence,
should we be seeking it out at the expense of knowing whether the phenomenon being generalized exists in the
first place? A reliance on conceptual replication is dangerous for three reasons.

The first is the problem of subjectivity. A conceptual replication can hold only if the different methods used in two
different studies are measuring the same phenomenon. For this to be the case, some evidence must exist that they
are. Even if we meet this standard, this raises the question of how similar the methods must be for a study to qualify
as being conceptually replicated. Who decides and by what criteria?

The second problem is that a reliance on conceptual replications risks findings becoming unreplicated in the future.
To illustrate how this could happen, suppose we have three researchers, Smith, Jones, and Brown, who publish
three scientific papers in sequence.

• Smith publishes the first paper, showing evidence for a particular phenomenon.

• Jones then uses a different method to show evidence for a phenomenon that appears similar to the one
that Smith discovered.

The psychological community decide that the similarity crosses some subjective threshold and so conclude
that Jones "conceptually replicates" Smith.

• Now enter Brown. Brown isn't convinced that Smith and Jones are measuring the same phenomenon and
suspects they are in fact describing different phenomena. Brown obtains evidence suggesting that this is
indeed the case. In this way, Smith's finding that was previously considered replicated by Jones now
assumes the bizarre status of becoming unreplicated.

Finally, conceptual replication fuels an obvious confirmation bias. When two studies draw similar conclusions
using different methods, the second study can be said to conceptually replicate the first. But what if the second
study draws a very different conclusion—would it be claimed to conceptually falsify the first study? Of course not.
Believers in the original finding would immediately (and correctly) point to the multitude of differences in
methodology to explain the different results. Conceptual replications thus force science down a one-way street
in which it is possible to confirm but never disconfirm previous findings.

Through a reliance on conceptual replication, psychology has found yet another way to become enslaved to
confirmation bias.

Reinventing History

So far we have seen how confirmation bias influences psychological science in two ways: through the pressure to
publish results that are novel and positive, and by ousting direct replication in favor of bias-prone conceptual
replication. A third, and especially insidious, manifestation of confirmation bias can be found in the
phenomenon of hindsight bias. Hindsight bias is a form of creeping determinism in which we fool ourselves (and
others) into believing that an observation was expected even though it actually came as a surprise. It may seem
extraordinary that any scientific discipline should be vulnerable to a fallacy that attempts to reinvent history.

Unfortunately, much psychological research seems to pay little heed to this aspect of the scientific method.
Since the hypothesis of an experiment is only rarely published in advance, researchers can covertly alter their
predictions after the data have been analyzed in the interests of narrative flair. In psychology this practice is
referred to as Hypothesizing After Results are Known (HARKing), a term coined in 1998 by psychologist
Norbert Kerr.

HARKing is a form of academic deception in which the experimental hypothesis of a study is altered after analyzing
the data in order to pretend that the authors predicted results that, in reality, were unexpected. By engaging in
HARKing, authors are able to present results that seem neat and consistent with (at least some) existing
research or their own previously published findings. This flexibility allows the research community to produce the
kind of clean and confirmatory papers that psychology journals prefer while also maintaining the illusion that
the research is hypothesis driven and thus consistent with the Hypothetico-Deductive method.

HARKing can take many forms, but one simple approach involves reversing the predictions after inspecting the
data. Suppose that a researcher formulates the hypothesis that, based on the associations we form across our
lifetime between the color red and various behavioral acts of stopping (e.g., traffic lights; stop signs; hazard signs),
people should become more cautious in a gambling task when the stimuli used are red rather than white. After
running the experiment, however, the researcher finds the opposite result: people gambled more when exposed
to red stimuli.

The correct approach here would be to report that the hypothesis was unsupported, admitting that additional
experiments may be required to understand how this unexpected result arose and its theoretical implications.
However, the researcher realizes that this conclusion may be difficult to publish without conducting those
additional experiments, and he or she also knows that nobody reviewing the paper would be aware that the
original hypothesis was unsupported. So, to create a more compelling narrative, the researcher returns to the
literature and searches for studies suggesting that being exposed to the color red can lead people to "see red,"
losing control and becoming more impulsive. Armed with a small number of cherry-picked findings, the
researcher ignores the original (better grounded) rationale and rewrites the hypothesis to predict that people
will actually gamble more when exposed to red stimuli. In the final published paper, the introduction section
is written with this post hoc hypothesis presented as a priori.

Just how prevalent is this kind of HARKing? Norbert Kerr's survey of 156 psychologists in 1998 suggested that about
40 percent of respondents had observed HARKing by other researchers; strikingly, the surveyed psychologists
also suspected that HARKing was about 20 percent more prevalent than the classic research method. A more recent
survey of 2,155 psychologists by Leslie John and colleagues estimated the true prevalence rate to be as high as 90
percent despite a self-admission rate of just 35 percent.

Remarkably, not all psychologists agree that HARKing is a problem. Nearly 25 years before suggesting the
existence of precognition, Daryl Bem claimed that if data are strong enough, then researchers are justified in
"subordinating or even ignoring [their] original hypotheses." In other words, Bem argued that it is legitimate to
subvert the Hypothetico-Deductive method, and to do so covertly, in order to preserve the narrative structure
of a scientific paper. Norbert Kerr and others have objected to this point of view, as well they might. First and
foremost, because HARKing relies on deception, it violates the fundamental ethical principle that research should
be reported honestly and completely. Deliberate HARKing may therefore lie on the same continuum of malpractice
as research fraud. Secondly, the act of deception in HARKing leads the reader to believe that an obtained
finding was more expected, and hence more reliable, than it truly is—this, in turn, risks distorting the scientific
record to place undue certainty in particular findings and theories. Finally, in cases where a post hoc hypothesis
is pitted against an alternative account that the author already knows was unsupported, HARKing creates the
illusion of competitive hypothesis testing. Since a HARKed hypothesis can, by definition, never be
disconfirmed, this contrived scenario further exacerbates confirmation bias.

The Sin of Hidden Flexibility

In 2008, British illusionist Derren Brown presented a TV program called The System in which he claimed he could
predict, with certainty, which horse would win at the racetrack. The show follows Khadisha, a member of the public,
as Brown provides her with tips on upcoming races. In each case the tips pay off, and after succeeding five times in
a row Khadisha decides to bet as much money as possible on a sixth and final race. The twist in the program is that
Brown has no system—Khadisha is benefiting from nothing more than chance. Unknown to her until after
placing her final bet, Brown initially recruited 7,776 members of the public and provided each of them with a
unique combination of potential winners. Participants with a losing horse were successively eliminated at each of
six stages, eventually leaving just one participant who had won every time—and that person just happened to be
Khadisha. By presenting the story from Khadisha's perspective, Brown created the illusion that her winning
streak was too unlikely to be random—and so must be caused by The System—when in fact it was explained
entirely by chance.

Unfortunately for science, the hidden flexibility that Brown used to generate false belief in The System is the
same mechanism that psychologists exploit to produce results that are attractive and easy to publish. Faced
with the career pressure to publish positive findings in the most prestigious and selective journals, it is now standard
practice for researchers to analyze complex data in many different ways and report only the most interesting
and statistically significant outcomes. Doing so deceives the audience into believing that such outcomes are
credible, rather than existing within an ocean of unreported negative or inconclusive findings. Any conclusions
drawn from such tests will, at best, overestimate the size of any real effect. At worst they could be
entirely false. By torturing numbers until they produce publishable outcomes, psychology commits our second
mortal sin: that of exploiting hidden analytic flexibility. Formally, hidden flexibility is one manifestation of
the "fallacy of incomplete evidence," which arises when we frame an argument without taking into
account the full set of information available. Although hidden flexibility is itself a form of research bias, its
deep and ubiquitous nature in psychology earns it a dedicated place in our hall of shame.

P-Hacking

As we saw earlier, the dominant approach for statistical analysis in psychological science is a set of techniques called
null hypothesis significance testing (NHST). NHST estimates the probability of an obtained positive effect, or one
greater, being observed in a set of data if the null hypothesis (H0) is true and no effect truly exists. Importantly,
the p value doesn't tell us the probability of H0 itself being true, and it doesn't indicate the size or reliability of
the obtained effect—instead what it tells us is how surprised we should be to obtain the current effect, or one
more extreme, if H0 were true. The smaller the p value, the greater our surprise would be and the more
confidently we can reject H0. Since the 1920s, the convention in psychology has been to require a p value of less
than .05 in order to categorically reject H0. This significance threshold is known as alpha (α)—the probability of
falsely declaring a positive effect when, in fact, there isn't one. Under NHST, a false positive or Type I error occurs
when we incorrectly reject a true H0. The α threshold thus indicates the maximum allowable probability of a Type I
error in order to reject H0 and conclude that a statistically significant effect is present. Why is α set to .05, you
might ask? The .05 convention is arbitrary, as noted by Ronald Fisher—one of the architects of NHST—nearly a
century ago:

"If one in twenty does not seem high enough odds, we may, if we prefer it, draw the line at one in fifty (the 2
per cent point), or one in a hundred (the 1 per cent point). Personally, I prefer to set a low standard of
significance at the 5 per cent point, and ignore entirely all results which fail to reach this level."

Setting the α threshold to .05 theoretically allows up to 1 in 20 false rejections of H0 across a set of independent
significance tests. Some have argued that this threshold is too liberal and leads to a scientific literature built on
weak findings that are unlikely to replicate. Furthermore, even if we believe that it is acceptable for 5 percent of
statistically significant results to be false positives, the truth is that exploiting analytic flexibility increases α even
more, increasing the actual rate of false positives. This flexibility arises because researchers make analytic
decisions after inspecting their data and are faced with many analysis options that can be considered defensible
yet produce slightly different p values.
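
A standard probability calculation (not taken from the text, but consistent with its argument) shows how quickly this kind of flexibility erodes the nominal error rate: if a researcher effectively gets k independent opportunities to obtain p < .05 when H0 is true, then

\Pr(\text{at least one false positive}) = 1 - (1 - 0.05)^{k}, \qquad \text{e.g., } k = 10 \Rightarrow 1 - 0.95^{10} \approx 0.40.

Defensible analysis options are rarely fully independent of one another, so the real inflation is usually smaller than this bound, but the direction of the effect is the same.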

For instance, given a distribution of reaction time values, authors have the option of excluding statistical outliers
(such as very slow responses) within each participant. They also have the option of excluding entire participants on
the same basis. If they decide to adopt either or both of these approaches, there are then many available methods
they could use, each of which could produce slightly different results. As well as being flexible, a key feature of such
decisions is that they are hidden and never published. The rules of engagement do not require authors to
specify which analytic decisions were a priori (confirmatory) and which were post hoc (exploratory)—in fact,
such transparency is likely to penalize authors competing for publication in the most prestigious journals. This
combination of culture and incentives inevitably leads to all analyses being portrayed as confirmatory and
hypothesis driven even where many were exploratory. In this way, authors can generate a product that is
attractive to journals while also maintaining the illusion (and possibly delusion) that they have adhered to the
hypothetico-deductive model of the scientific method. The decision space in which these exploratory analyses
reside is referred to as "researcher degrees of freedom." Beyond the exclusion of outliers, it can include
decisions such as which conditions to enter into a wider analysis of multiple factors, which covariates or
regressors to take into account, whether or not to collect additional participants, and even how to define the
dependent measure itself.
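
As a toy illustration of this point (not an example from the book; the thresholds, sample sizes, and rule names below are my own assumptions), the following Python sketch applies several individually defensible outlier rules to the same simulated reaction-time data, in which no true difference exists, and prints the p value that each rule yields:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two conditions with NO true difference (H0 is true): log-normally distributed RTs in ms.
cond_a = rng.lognormal(mean=6.4, sigma=0.3, size=40)
cond_b = rng.lognormal(mean=6.4, sigma=0.3, size=40)

def drop_beyond_sd(x, k):
    # Remove values more than k standard deviations from the mean.
    return x[np.abs(x - x.mean()) <= k * x.std()]

rules = {
    "no exclusion": lambda x: x,
    "beyond 2 SD removed": lambda x: drop_beyond_sd(x, 2.0),
    "beyond 2.5 SD removed": lambda x: drop_beyond_sd(x, 2.5),
    "beyond 3 SD removed": lambda x: drop_beyond_sd(x, 3.0),
    "RTs above 1500 ms removed": lambda x: x[x < 1500],
}

for name, rule in rules.items():
    result = stats.ttest_ind(rule(cond_a), rule(cond_b))
    print(f"{name:28s} p = {result.pvalue:.3f}")

# Each rule is defensible on its own; reporting only whichever rule happens to give
# the smallest p value, and never mentioning the others, is hidden analytic flexibility.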

In even the simplest experimental design, these pathways quickly branch out to form a complex decision tree
that a researcher can navigate either deliberately or unconsciously in order to generate statistically significant
effects. By selecting the most desirable outcomes, it is possible to reject H0 in almost any set of data—and by
combining selective reporting with HARKing (as described in chapter 1) it is possible to do so in favor of almost any
alternative hypothesis. Exploiting researcher degrees of freedom to generate statistical significance is known as
"p-hacking" and was brought to prominence in 2011 by Joe Simmons, Leif Nelson, and Uri Simonsohn from the
University of Pennsylvania and University of California Berkeley.

Through a combination of real experiments and simulations, Simmons and colleagues showed how selective
reporting of exploratory analyses can generate meaningless p values. In one such demonstration, the authors
simulated a simple experiment involving one independent variable (the intervention), two dependent
variables (behaviors being measured), and a single covariate (gender of the participant). The simulation was
configured so that there was no effect of the manipulation, that is, H0 was predetermined to be true. They then
simulated the outcome of the experiment 15,000 times and asked how often at least one statistically
significant effect was observed (p<.05). Given that H0 was true in this scenario, a nominal rate of 5 percent false
positives can be assumed at alpha = .05. The central question posed by Simmons and colleagues was what would
happen if they embedded hidden flexibility within the analysis decisions. In particular, they tested the effect of
analyzing either of the two dependent variables (reporting if a positive effect was obtained on either one),
including gender as a covariate or not, increasing the number of participants after analyzing the results, and
dropping one or more of the conditions.
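
The flavor of this exercise can be conveyed in a few lines of code. The sketch below (my own minimal re-creation of the spirit of the simulation, not the authors' actual code or parameters) implements just one of the four options, analyzing either of two correlated dependent variables and reporting whichever "works", and shows that even this single liberty noticeably inflates the false positive rate above the nominal 5 percent:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_group, alpha = 5000, 20, 0.05
false_positives = 0

for _ in range(n_sims):
    # Two groups measured on two correlated dependent variables; no true difference exists.
    cov = [[1.0, 0.5], [0.5, 1.0]]
    group1 = rng.multivariate_normal([0, 0], cov, size=n_per_group)
    group2 = rng.multivariate_normal([0, 0], cov, size=n_per_group)

    # "Flexible" researcher: test each DV separately and count a hit if either is significant.
    p_values = [stats.ttest_ind(group1[:, dv], group2[:, dv]).pvalue for dv in (0, 1)]
    if min(p_values) < alpha:
        false_positives += 1

print(f"False positive rate with two DVs and selective reporting: {false_positives / n_sims:.3f}")
# Adding the other liberties (optional covariates, optional stopping, dropping conditions)
# compounds the inflation further, which is how Simmons and colleagues reached 60.7 percent.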

Allowing maximal combinatorial flexibility between these four options increased the false positive rate from
a nominal 5 percent to an alarming 60.7 percent. As striking as this is, 60.7 percent is probably still
an underestimate of the true rate of false positives in many psychology experiments. Simmons and
colleagues didn't even include other common forms of hidden flexibility, such as variable criteria for excluding
outliers or conducting exploratory analyses within subgroups (e.g., males only or females only). Their
simulated experimental design was also relatively simple and produced only a limited range of researcher degrees of
freedom.
In contrast, many designs in psychology are more complicated and will include many more options. Using a
standard research design with four independent variables and one dependent variable, psychologist Dorothy
Bishop has shown that at least one statistically significant main effect or interaction can be expected by chance in
more than 50 percent of analyses—an order of magnitude higher than the conventional α threshold. Crucially, this
rate of false positives occurs even without exploiting the researcher degrees of freedom illustrated by
Simmons and colleagues. Thus, where p-hacking occurs in more complex designs it is likely to render the
obtained p values completely worthless. One key source of hidden flexibility in the simulations by Simmons
and colleagues was the option to add participants after inspecting the results.

There are all kinds of reasons why researchers peek at data before data collection is complete, but one central
motivation is efficiency: in an environment with limited resources it can often seem sensible to stop data collection
as soon as all-important statistical significance is either obtained or seems out of reach. This temptation to peek
and chase p<.05 is of course motivated by the fact that psychology journals typically require the main conclusions
of a paper to be underpinned by statistically significant results. If a critical statistical test returns p = .07, the
researcher knows that reviewers and editors will regard the result as weak and unconvincing, and that the paper
has little chance of being published in a competitive journal. Many researchers will therefore add participants in
an attempt to nudge the p value over the line, without reporting in the published paper that they did so.

This kind of behavior may seem rational within a publishing system where what is best for science conflicts with the
incentives that drive individual scientists. After all, if scientists are engaging in these practices then it is surely
because they believe they have no other choice in the race for jobs, grant funding, and professional esteem.
Unfortunately, however, chasing statistical significance by peeking and adding data completely undermines
the philosophy of NHST. A central but often overlooked requirement of NHST is that researchers prespecify a
stopping rule, which is the final sample size at which data collection must cease. Peeking at data prior to this
end point in order to decide whether to continue or to stop increases the chances of a false positive. This is
because NHST estimates the probability of the observed data (or more extreme) under the null hypothesis,
and since the aggregate data pattern can vary randomly as individual data points are added, repeated
hypothesis testing increases the odds that the data will, by chance alone, fall below the nominated alpha level
(See figure 2.1). This increase in false positives is similar to that obtained when testing multiple hypotheses
simultaneously.
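
A compact simulation makes the same point numerically. This is a hedged sketch of the general idea (the choice of a one-sample t-test, and peeking after every participant from the fifth up to a maximum of fifty, are my own parameters chosen to broadly mirror the setup described in figure 2.1, not a reproduction of the figure's exact code):

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, max_n, alpha = 5000, 50, 0.05
stopped_significant = 0

for _ in range(n_sims):
    data = rng.normal(0, 1, size=max_n)      # H0 is true: the population mean really is 0
    for n in range(5, max_n + 1):            # peek after every participant from n = 5 onward
        if stats.ttest_1samp(data[:n], 0).pvalue < alpha:
            stopped_significant += 1         # "significant" result found, so stop and report
            break

print(f"Proportion of null experiments declared significant: {stopped_significant / n_sims:.2f}")
# With a fixed sample of 50 and a single test this proportion would sit near .05;
# peeking after every participant pushes it far higher, in line with figure 2.1.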

Just how common is p-hacking? The lack of transparency in the research community makes a definitive answer
impossible to glean, but important clues can be found by returning to the 2012 study by Leslie John and colleagues
from chapter 1. Based on a survey of more than 2,000 American psychologists, they estimated that 100 percent
have, on at least one occasion, selectively excluded data after looking at the impact of doing so, and that 100
percent have collected more data for an experiment after seeing whether results were statistically significant. They
also estimate that 75 percent of psychologists have failed to report all conditions in an experiment, and that more
than 50 percent have stopped data collection after achieving a "desired result." These results indicate that, far
from being a rare practice, p-hacking in psychology may be the norm.

FIGURE 2.1. The peril of adopting a flexible stopping rule in null hypothesis significance testing. In the upper
panel, an experiment is simulated in which the null hypothesis (H0) is true and a statistical test is conducted after
each new participant up to a maximum sample size of 50. A researcher who strategically p-hacks would stop as
soon as p drops below .05 (dotted line). In this simulation, p crosses the significance threshold after collecting data
for 19 participants (dark grey symbols), despite the fact that there is no real effect to be discovered. In the lower
panel we see how the frequency of interim analyses influences the false positive rate, defined here as the
probability of finding a p value < .05 before reaching the maximum sample size. Each symbol in this plot is the
average false positive rate across 10,000 simulated experiments, based on a variable stopping rule when H0 is
true. If the researcher initially collects data for five participants and then reanalyzes after every subsequent set of
20 participants, the false positive rate is 0.12 (rightmost symbol), which is roughly twice the α threshold of .05
(dotted line). Moving leftward on the plot, as successive analyses become more frequent (i.e., peeking after fewer
and fewer participants), the false positive rate increases. If the researcher checks after every single participant
then it climbs to as high as 0.26, yielding a 1 in 4 chance that a positive result (p<.05) would be falsely declared
where none truly exists.

Peculiar Patterns of p

The survey by John and colleagues suggests that p-hacking is common in psychology, but can we detect more
objective evidence of its existence? One possible clue lies in the way p values are reported. If p-hacking is as
common as claimed, then it should distort the distribution of p values in published work. To illustrate why,
consider the following scenarios.

In one, a team of researchers collect data by adding one participant at a time and successively analyze the
results after each participant until statistical significance is obtained. Given this strategy, what p value would
you expect to see at the end of the experiment?

In another scenario, the team obtain a p value of .10 but don't have the option to collect additional data.
Instead, they try ten different methods for excluding statistical outliers. Most of these produce p values higher
than .05, but one reduces the p value to .049. They therefore select that option in the declared analysis and don't
report the other "failed" attempts.

Finally, consider a situation where p = .08 and the researchers have several degrees of freedom available, in
terms of characterizing the dependent variable—specifically, they are able to report the results either in terms of
response times or performance accuracy, or an integrated measure of both. After analyzing all these different
measures they find that the integrated measure "works best," revealing a p value of .037, whereas the
individual measures alone reveal only nonsignificant effects (p>.05).

Although each of these scenarios is different, they all share one thing: in each case the researcher is attempting
to push the p value just over the line. If this is your goal then it would make sense to stop p-hacking as soon as the
p value drops below .05—after all, why spend additional resources only to risk that an effect that is currently
publishable might "disappear" with the addition of more participants or by looking at the data in a different
way? By focusing on merely crossing the significance threshold, the outcome should be to create a cluster of p
values just below .05. A number of individual cases of such behavior have been alleged.

In a Science paper published in 2012, researchers presented evidence that religious beliefs could be reduced by
instructing people to complete a series of tasks that require rational, analytic thinking. Despite sample sizes across
four experiments ranging from 57 to 179, each experiment returned p values within a range of p = .03 to p =
.04. Critics have argued either that the authors knew precisely how many participants would be required in
each experiment to attain statistical significance or that they p-hacked, consciously or unconsciously, in order
to always narrowly reach it. There is, however, an entirely innocent explanation. Through no fault of the authors,
their paper could be one of many unbiased studies considered by Science, with the journal selectively publishing
the one that "struck gold" in finding a sequence of four statistically significant effects.

Where it is impossible to distinguish biased practices by researchers from publication bias by journals, the
authors naturally deserve the benefit of the doubt. Because of the difficulty in differentiating publication bias
from researcher bias, individual cases of p-hacking are difficult, if not impossible, to prove. However, if p-
hacking is the norm then such cases may accrue across the literature to produce an overall preponderance of
p values just below the significance threshold. In 2012, psychologists E. J. Masicampo and Daniel Lalande asked
this question for the first time by examining the distribution of 3,627 p values sampled from three of the most
prestigious psychology journals.

Overall they found that smaller p values were more likely to be published than larger ones, but they also discovered
that the number of p values just below .05 was about five times higher than expected. Masicampo and Lalande's
findings have since been replicated by Nathan Leggett and colleagues from the University of Adelaide.

Not only did they find the same spike in p values just below .05, but they also showed that the spike increased
between 1965 and 2005. The reason for growing numbers of "just-significant" results is not known for certain (and
has itself been robustly challenged), but if it is a genuine phenomenon then one possible explanation is the huge
advancement in computing technology and statistical software. Undertaking NHST in 1965 was cumbersome and
laborious (and often done by hand), which acted as a natural disincentive toward p-hacking. In contrast, modern
software packages such as SPSS and R can reanalyze data many ways in just seconds. It has been suggested that
the studies of the Masicampo team and the Leggett team reveal evidence of p-hacking on a massive scale across
thousands of studies, but is it possible to show such effects within more specific fields?

A tool developed by Simonsohn, Nelson, and Simmons called "p-curve" analysis promises to do just this. The
logic of p-curve is that the distribution of statistically significant p values within a set of studies reveals their
evidential value (see figure 2.2).

For unbiased (non-p-hacked) results where H0 is false, we should see more p values clustered toward the lower
end of the spectrum (e.g., p<.01) than immediately below the significance threshold (e.g., p values between
.04 to .05). This, in turn, should produce a distribution of p values between p = 0 and p = .05 that is positively skewed. In
contrast, when researchers engage in p-hacking we should see a clustering of p values that is greatest just
below .05, with fewer instances at lower values—thus the distribution of p values should be negatively skewed.
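
To make the intuition concrete, here is an illustrative Python sketch (my own toy simulation, not the authors' p-curve software; the effect size, sample sizes, and the use of optional stopping as a stand-in for p-hacking are all assumptions) comparing the significant p values produced by honest tests of a real effect with those produced by peeking on null data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def significant_ps_true_effect(n_sims=5000, n=30, d=0.5):
    # Honest one-shot tests of a real effect (population mean shifted by d standard deviations).
    ps = []
    for _ in range(n_sims):
        sample = rng.normal(d, 1, size=n)
        p = stats.ttest_1samp(sample, 0).pvalue
        if p < .05:
            ps.append(p)
    return np.array(ps)

def significant_ps_hacked_null(n_sims=5000, max_n=50):
    # H0 is true, but the researcher peeks after each participant and stops at the first p < .05.
    ps = []
    for _ in range(n_sims):
        sample = rng.normal(0, 1, size=max_n)
        for n in range(5, max_n + 1):
            p = stats.ttest_1samp(sample[:n], 0).pvalue
            if p < .05:
                ps.append(p)
                break
    return np.array(ps)

for label, ps in [("true effect", significant_ps_true_effect()),
                  ("p-hacked null", significant_ps_hacked_null())]:
    print(f"{label:14s} share of significant ps below .01: {np.mean(ps < .01):.2f}   "
          f"share between .04 and .05: {np.mean(ps > .04):.2f}")

# A real effect piles significant p values near zero (right skew); p-hacking piles them
# just under .05 (left skew). This difference in shape is the signature p-curve looks for.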

Although p-curve has attracted controversy, it is a promising addition to the existing array of tools to detect hidden
analytic flexibility.

The problem of p-hacking is not unique to psychology. Compared with typical behavioral experiments, functional
brain imaging includes far more researcher degrees of freedom. As the blogger Neuroskeptic has pointed out, the
decision space of even the simplest fMRI study can include hundreds of analysis options, providing ample room
for p-hacking. At the time of writing this book, no studies of the distribution of p values have yet been undertaken
for fMRI or electroencephalography (EEG), but indirect evidence suggests that p-hacking may be just as common
in these fields as in psychological science.

Josh Carp of the University of Michigan has reported that out of 241 randomly selected fMRI studies, 207
employed unique analysis pipelines; this implies that fMRI researchers have numerous defensible options at
their disposal and are making those analysis decisions after inspecting the data. As expected by a culture of p-
hacking, earlier work has shown that the test-retest reliability of fMRI is moderate to low, with an estimated rate
of false positives within the range of 10-40 percent. We will return to problems with unreliability in chapter 3, but
for now it is sufficient to know that p-hacking presents a serious risk to the validity of both psychology and
cognitive neuroscience. Institutionalized p-hacking damages the integrity of science and may be on the rise. If
virtually all psychologists engage in p-hacking (even unconsciously) at least some of the time, and if p-hacking
increases the rate of false positives to 50 percent or higher, then much of the psychological literature will be false
to some degree.

This situation is both harmful and, crucially, preventable. It prompts us to consider how the scientific record
would change if p-hacking wasn't the norm and challenges us to reflect on how such practices can be tolerated in
any community of scientists. Unfortunately, p-hacking appears to have crept up on the psychological community
and become perceived as a necessary evil—the price we must pay to publish in the most competitive journals.
Frustration with the practice is, however, growing. As Uri Simonsohn said in 2012 during a debate with fellow
psychologist Norbert Schwarz: “I don't know of anybody who runs a study, conducts one test, and publishes it no
matter what the p-value is. ... We are all p hackers, those of us who realize it want change.”

FIGURE 2.2. The logic of the p-curve tool developed by Uri Simonsohn and colleagues. Each plot shows a
hypothetical distribution of p values between 0 and .05. For example, the x-value of .05 corresponds to all p values
between .04 and .05, while .01 corresponds to all p values between 0 and .01. In the upper panel, the null
hypothesis (H0) is true, and there is no p-hacking; therefore p values in this plot are uniformly distributed. In the
middle panel, H0 is false, leading to a greater number of smaller p values than larger ones. The positive
(rightward) skew in this plot doesn't rule out the presence of p-hacking but does suggest that the sample of
p values is likely to contain evidential value. In the lower panel, H0 is true and more p values are observed closer
to .05. The negative (leftward) skew in this plot suggests the presence of p-hacking.

Ghost Hunting

Could a greater emphasis on replication help solve the problems created by p-hacking? In particular, if we take a
case where a p-hacked finding is later replicated, does the fact that the replication succeeded mean we don't need
to worry whether the original finding exploited researcher degrees of freedom?

If we assume that true discoveries will replicate more often than false discoveries then a coordinated program of
replication certainly has the potential to weed out p-hacked findings, provided that the procedures and analyses
of the original study and the replication are identical. However, the argument that replication neutralizes
p-hacking has two major shortcomings.
1.The first is that while direct (close) replication is vital, even a widespread and systematic replication initiative
wouldn't solve the problem that p-hacked studies already waste resources by creating blind alleys.

2.Second, as we saw earlier and will see again later, such a direct replication program simply doesn't exist in
psychology. Instead, the psychological community has come to rely on the more loosely defined "conceptual
replication" to validate previous findings, which satisfies the need for journals to publish novel and original results.
Within a system that depends on conceptual replication, researcher degrees of freedom can be just as easily
exploited to "replicate" a false discovery as to create one from scratch. A p-hacked conceptual replication of a
p-hacked study tells us very little about reality apart from our ability to deceive ourselves. The case of John
Bargh and the elderly priming effect provides an interesting case where researcher degrees of freedom may
have led to so-called phantom replication. Recall that at least two attempts to exactly replicate the original
elderly priming effect have failed.

By way of rebuttal, Bargh has argued that two other studies did successfully replicate the effect.

However, if we look closely at those studies, we find that in neither case was there an overall effect of elderly
priming—in one study the effect was statistically significant only once participants were divided into
subgroups of low or high self-consciousness; and in the other study the effect was only significant when
dividing participants into those with positive or negative attitudes toward elderly people. Furthermore, each
of these replications used different methods for handling statistical outliers, and each analysis included
covariates that were not part of the original elderly priming experiments. These differences hint at a potential
phantom replication. Driven by the confirmation bias to replicate the original (high-profile) elderly priming effect,
the researchers in these subsequent studies may have consciously or unconsciously exploited researcher degrees
of freedom to produce a successful replication and, in turn, a more easily marketable publication. Whether these
conceptual replications were contaminated by p-hacking cannot be known for certain, but we have good reason
to be suspicious.

Despite one-time prevalence estimates approaching 100 percent, researchers usually deny that they p-hack. In
Leslie John's survey in 2012, only about 60 percent of psychologists admitted to "Collecting more data after seeing
whether results were significant," whereas the prevalence estimate derived from this admission rate was
100 percent. Similarly, while ~30 percent admitted to "Failing to report all conditions" and ~40 percent admitted to
"Excluding data after looking at the impact of doing so," the estimated prevalence rates in each case were ~70
percent and ~100 percent, respectively. These figures needn't imply dishonesty. Researchers may sincerely deny p-hacking
yet still do it unconsciously by failing to remember and document all the analysis decisions made after
inspecting data. Some psychologists may even do it consciously but believe that such practices are acceptable
in the interests of data exploration and narrative exposition. Yet, regardless of whether researchers are p-
hacking consciously or unconsciously, the solution is the same. The only way to verify that studies are not p-
hacked is to show that the authors planned their methods and analysis before they analyzed the data—and
the only way to prove that is through study preregistration.

Unconscious Analytic "Tuning"

What do we mean exactly when we say p-hacking can happen unconsciously? Statisticians Andrew Gelman and Eric
Loken have suggested that subtle forms of p-hacking and HARKing can join forces to produce false discoveries. In
many cases, they argue, researchers may behave completely honestly, believing they are following best practice
while still exploiting researcher degrees of freedom.

To illustrate how, consider a scenario where a team of researchers design an experiment to test the a priori
hypothesis that listening to classical music improves attention span. After consulting the literature they decide
that a visual search task provides an ideal way of measuring attention. The researchers choose a version of this
task in which participants view a screen of objects and must search for a specific target object among distractors,
such as the letter "O" among many "Q"s. On each trial of the task, the participant judges as quickly as possible
whether the "O" is present or absent by pressing a button — half the time the "O" is present and half the time it is
absent. To vary the need for attention, the researchers also manipulate the number of distractors (Qs) between
three conditions:

• 4 distractors (low difficulty, i.e., the "O" pops out when it is present),
• 8 distractors (medium difficulty) and
• 16 distractors (high difficulty).

The key dependent variables are the reaction times and error rates in judging whether the letter "O" is present
or absent. Most studies report reaction times as the measure for this task and find that reaction times increase with
the number of distractors. Many studies also report error rates. The researchers decide to adopt a repeated
measures design in which each participant performs this task twice, once while listening to classical music and once
while listening to nonclassical music (their control condition).

They decide to test 20 participants. So far the researchers feel they have done everything right: they have a

• prespecified hypothesis,
• a task selected with a clear rationale, and
• a sample size that is on a par with previous studies on visual search.

Once the study is complete, the first signs of their analysis are encouraging:

1.They successfully replicate two effects that are typically observed in visual search tasks, namely that
participants are significantly slower and more error prone under conditions with more distractors (16 is more
difficult than 8 which, in turn, is more difficult than 4), and that they are significantly slower to judge when a target
is absent compared to when it is present. So far so good.

On this basis, the authors judge that the task successfully measured attention. But then it gets trickier.

2.The researchers find no statistically significant main effect of classical music on either reaction times or error
rates, which does not allow them to reject the null hypothesis. However they do find a significant interaction
between the type of music (classical, nonclassical) and the number of distractors (4, 8, 16) for error rates (p =
.01) but not for reaction times (p = .7). What this interaction means is that, for error rates, the effect of classical
music differed significantly between the different distractor conditions. Post hoc comparisons show that error
rates were significantly reduced when participants were exposed to classical music compared with control music,
in displays with 16 distractors (p = .01) but not in displays with 8 or 4 distractors (both p>.2).

In a separate analysis they also find that reaction times on target-absent trials (i.e., no "O" present) are significantly
faster when exposed to classical music compared with control music (p = .03). The same advantage isn't significant
for trials where the "O" was present (p = .55) or when target-present and target-absent trials are averaged
together (as shown by the lack of a significant main effect).

The researchers think carefully about their results. After doing some additional reading they learn that error rates
can sometimes provide more sensitive measures of attention than reaction times, which would explain why classical
music influenced only error rates. They also know that the "target absent" condition is more difficult for participants
and therefore perhaps a more sensitive measure of attentional capacity—that would also explain why classical
music boosted performance on trials without targets.

Finally, they are pleased to see that the benefit of classical music on error rates with 16 distractors goes in the
direction predicted by their hypothesis. Therefore, despite the fact that the main effect of classical music is
not statistically significant on either of the measures (reaction times or error rates), the researchers conclude
that the results support the hypothesis: classical music improves visual attention, particularly under
conditions of heightened task difficulty. In the introduction of their article they phrase their hypothesis as,
"We predicted that classical music would improve visual search performance. Since misidentification of visual
stimuli can provide a more sensitive measure of attention than reaction time (e.g., Smith 2000), such effects
may be expected to occur most clearly in error rates, especially under conditions of heightened attentional
load or task difficulty."

In the discussion section of the paper, the authors note that their hypothesis was supported and argue that their
results conceptually replicate a previous study, which showed that classical music can improve the ability to detect
typographic errors in printed text.

What, if anything, did the researchers do wrong in this scenario? Many psychologists would argue that their
behavior is impeccable. After all, they didn't engage in questionable practices such as adding participants until
statistical significance was obtained, selectively removing outliers, or testing the effect of various covariates on
statistical significance. Moreover, they had an a priori hypothesis, which they tested, and they employed a
manipulation check to confirm that their visual search task measured attention. However, as Gelman and Loken
point out, the situation isn't so simple—researcher degrees of freedom have still crept insidiously into their
conclusions.

1.The first problem is the lack of precision in the researchers' a priori hypothesis, which doesn't specify the
dependent variable that should show the effect of classical music (either reaction time or error rates, or both)
and doesn't state under what conditions that hypothesis would or would not be supported. By proposing such
a vague hypothesis, the researchers have allowed any one of many different outcomes to support their
expectations, ignoring the fact that doing so inflates the Type I error rate and invites confirmation bias.

Put differently, although they have an a priori scientific hypothesis, it is consistent with multiple statistical
hypotheses.

2.The second problem is that the researchers ignore the lack of a statistically significant main effect of the
intervention on either of the measures; instead they find a significant interaction between the main manipulation
(classical music vs. control music) and the number of distractors (4, 8, 16), but for error rates only. Since the
researchers had no specific hypothesis that the effect of classical music would increase at greater distractor set
sizes for error rates only, this result was unexpected. Yet by framing this analysis as though it was a priori, the
researchers tested more null hypotheses than were implied in their original (vague) hypothesis, which in turn
increases the chances of a false positive.

3.Third, the researchers engage in HARKing. Even though their general hypothesis was decided before
conducting the study, it was then refined based on the data and is presented in the introduction in its refined state.
This behavior is a subtle form of HARKing that conflates hypothesis testing with post hoc explanation of unexpected
results. Since the researchers did have an a priori hypothesis (although vague) they would no doubt deny that
they engaged in HARKing. Yet even if blurred in their own recollections, the fact is that they adjusted and refined
their hypothesis to appear consistent with unexpected results.

4.Finally, despite the fact that their findings are less definitive than advertised, the researchers treat them as
a conceptual replication of previous work—a corroboration of the general idea that exposure to classical music
improves attention. Interestingly, it is in this final stage of the process where the lack of precision in their original
hypothesis is most clearly apparent. This practice also highlights the inherent weakness of conceptual replication,
which risks constructing bodies of knowledge on weak units of evidence. Is this scenario dishonest? No. Is it
fraudulent? No. But does it reflect questionable research practices? Yes. Unconscious as it may be, the fact is that
the researchers in this scenario allowed imprecision and confirmation bias to distort the scientific record. Even
among honest scientists, researcher degrees of freedom pose a serious threat to discovery.

Biased Debugging

Sometimes hidden flexibility can be so enmeshed with confirmation bias that it becomes virtually invisible. In 2013,
Mark Stokes from the University of Oxford highlighted a situation where an analysis strategy that seems completely
sensible can lead to publication of false discoveries.

Suppose a researcher completes two experiments. Each experiment involves a different method to provide
convergent tests for an overarching theory. In each case the data analysis is complicated and requires the researcher
to write an automated script.

After checking through each of the two scripts for obvious mistakes, the researcher runs the analyses.

1.In one of the experiments the data support the hypothesis indicated by the theory.

2.In the second experiment, however, the results are inconsistent.

Puzzled, the researcher studies the analysis script for the second experiment and discovers a subtle but serious
error. Upon correcting the error, the results of the second experiment become consistent with
the first experiment. The author is delighted and concludes that the results of both experiments
provide convergent support for the theory in question.

What's wrong with this scenario? Shouldn't we applaud the researcher for being careful? Yes and no.

On the one hand, the researcher has caught a genuine error and prevented publication of a false conclusion. But
note that the researcher didn't bother to double-check the analysis of the first experiment because the results
in that case turned out as expected and—perhaps more importantly—as desired.

Only the second experiment attracted scrutiny because it ran counter to expectations, and running counter to
expectations was considered sufficient grounds to believe it was erroneous. Stokes argues that this kind of results-
led debugging—also termed "selective scrutiny"—threatens to magnify false discoveries substantially,
especially if it occurs across an entire field of research. And since researchers never report which parts of their
code were debugged or not (and rarely publish the code itself), biased debugging represents a particularly
insidious form of hidden analytic flexibility.

Are Research Psychologists Just Poorly Paid Lawyers?

The specter of bias and hidden analytic flexibility inevitably prompts us to ask: what is the job of a scientist?

Is it to accumulate evidence as dispassionately as possible and decide, on the weight of that evidence, what
conclusion to draw? Or is it our responsibility to advocate a particular viewpoint, seeking out evidence to
support that view? One is the job of a scientist; the other is the job of a lawyer. As psychologist John Johnson from
Pennsylvania State University said in a 2013 blog post at Psychology Today:

“Scientists are not supposed to begin with the goal of convincing others that a particular idea is true and then
assemble as much evidence as possible in favor of that idea. Scientists are supposed to be more like detectives
looking for clues to get to the bottom of what is actually going on. They are supposed to be willing to follow the
trail of clues, wherever that may lead them. They are supposed to be interested in searching for the critical data
that will help decide what is actually true, not just for data that supports a preconceived idea. Science is supposed
to be more like detective work than lawyering.”

Unfortunately, as we have seen so far, psychology falls short of meeting this standard. Whether conscious or
unconscious, the psychological community tortures data until the numbers say what we want them to say—indeed
what many psychologists, deep down, would admit we need them to say in the competition for high-impact
publications, jobs, and grant funding. This situation exposes a widening gulf between the needs of the scientists
and the needs of science. Until these needs are aligned in favor of science and the public who fund it, the needs of
scientists will always win—to our own detriment and to that of future generations.

Solutions to Hidden Flexibility

Hidden flexibility is a problem for any science where the act of discovery involves the accumulation of evidence.
This process is less certain in some sciences than in others. If your evidence is clearly one way or the other, such as
the discovery of a new fossil or galaxy, then inferences from statistics may be unnecessary. In such cases no analytic
flexibility is required, whether hidden or disclosed—so none takes place. It is perhaps this association between
statistical analysis and noisy evidence that prompted physicist Ernest Rutherford to allegedly once remark: "If your
experiment needs statistics, you ought to have done a better experiment." In many life sciences, including psychology,
discovery isn't a black-and-white issue; it is a matter of determining, from one experiment to the next, the
theoretical contribution made by various shades of gray. When psychologists set arbitrary criteria (p<.05) on the
precise shade of gray required to achieve publication—and hence career success—they also incentivize a host of
conscious and unconscious strategies to cross that threshold.

In the battle between science and storytelling, there is simply no competition: storytelling wins every time.
How can we get out of this mess? Chapter 8 will outline a manifesto for reform, many aspects of which are already
being adopted. The remainder of this chapter will summarize some of the methods we can use to counter hidden
flexibility.

Preregistration. The most thorough solution to p-hacking and other forms of hidden flexibility (including HARKing)
is to prespecify our hypotheses and primary analysis strategies before examining data. Preregistration ensures
that readers can distinguish the strategies that were independent of the data from those that were (or might have
been) data-led. This is not to suggest that data-led strategies are necessarily incorrect or misleading. Some of the
most remarkable advances in science have emerged from exploration, and there is nothing inherently wrong with
analytic flexibility. The problems arise when that flexibility is hidden from the reader, and possibly from the
researcher's own awareness. By unmasking this process, preregistration protects the outcome of science from our
own biases as human practitioners. Study preregistration has now been standard practice in clinical medicine
for decades, driven by concerns over the effects of hidden flexibility and publication bias on public health.

For basic science, these risks may be less immediate but they are no less serious. Hidden flexibility distorts the
scientific record, and, since basic research influences and feeds into more applied areas (including clinical science),
corruption of basic literature necessarily threatens any forward applications of discovery. Recent years have
witnessed a concerted push to highlight the benefits of preregistration in psychology. An increasing number of
journals are now offering preregistered article formats in which part of the peer review process happens before
researchers conduct experiments. This type of review ensures adherence to the hypothetico-deductive model of
the scientific method, and it also prevents publication bias. Resources such as the Open Science Framework also
provide the means for researchers to preregister their study protocols.

p-curve. The p-curve tool developed by Simonsohn and colleagues is useful for estimating the prevalence of p-
hacking in published literature. It achieves this by assuming that collections of studies dominated by p-hacking will
exhibit a concentration of p values that peaks just below .05. In contrast, an evidence base containing positive
results in which p-hacking is rare or absent will produce a positively skewed distribution where smaller p values
are more common than larger ones. Although this tool cannot diagnose p-hacking within individual studies, it
can tell us which fields within psychology suffer more from hidden flexibility. Having identified them, the
community can then take appropriate corrective action such as an intensive program of direct replication.
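
A toy simulation can illustrate the logic (this is only a sketch of the idea, not Simonsohn's actual p-curve tool; the effect size, sample size, and number of simulations are assumptions): compare the shape of the significant p values produced by a real effect with those produced by a null effect.

```python
# Toy illustration of the p-curve logic. Among *significant* results, a real
# effect produces mostly very small p values (right skew), whereas a null
# effect produces a flat curve; p-hacking would tilt the curve toward a
# pile-up just below .05, as described in the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def significant_ps(effect_size, n_per_group=30, n_sims=20_000):
    ps = []
    for _ in range(n_sims):
        a = rng.normal(effect_size, 1, n_per_group)
        b = rng.normal(0, 1, n_per_group)
        ps.append(stats.ttest_ind(a, b).pvalue)
    ps = np.array(ps)
    return ps[ps < .05]

bins = np.linspace(0, .05, 6)   # .00-.01, .01-.02, ..., .04-.05
print("real effect (d = 0.5):", np.histogram(significant_ps(0.5), bins)[0])
print("null effect (d = 0.0):", np.histogram(significant_ps(0.0), bins)[0])
```
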

Disclosure statements. In 2012, Joe Simmons and colleagues proposed a straightforward way to combat hidden
flexibility: simply ask the researchers. This approach assumes that most researchers

(a) are inherently honest and will not deliberately lie, and
(b) recognize that exploiting researcher degrees of freedom reduces the credibility of their own research.
Therefore, requiring researchers to state whether or not they engaged in questionable practices should act as a
disincentive to p-hacking, except for researchers who are either willing to admit to doing it (potentially suffering
loss of academic reputation) or are willing to lie (active fraud). The disclosure statements suggested by the
Simmons team would require authors to state in the methods section of submitted manuscripts how they made
certain decisions about the study design and analysis. These include whether the sample size was determined
before the study began and whether any experimental conditions or data were excluded. Their "21 word solution" (as
they call it) is: We report how we determined our sample size, all data exclusions (if any), all manipulations, and
all measures in the study.

Although elegant in their simplicity, disclosure statements have several limitations.

1.They can't catch forms of p-hacking that exploit defensible ambiguities in analysis decisions, such as testing
the effect of adding a covariate to the design or focusing the analysis on a particular subgroup (e.g., males
only) following exploration.

2.Furthermore, the greater transparency of methods, while laudable in its own right, doesn't stop the practice
of HARKing; note that researchers are not asked whether their a priori hypotheses were altered after inspecting
the data.

3.Finally, disclosure statements cannot catch unconscious exploitation of researcher degrees of freedom, such
as forgetting the full range of analyses undertaken or more subtle forms of HARKing (as proposed by Gelman
and Loken). Notwithstanding these limitations, disclosure statements are a worthy addition to the range of tools
for counteracting hidden analytic flexibility.

Data sharing. The general lack of data transparency is a major concern in psychology, and will be discussed in
detail in chapter 4. For now, it is sufficient to note that data sharing provides a natural antidote to some forms of p-
hacking—particularly those where statistical significance is based on post hoc analytic decisions such as different
methods for excluding outliers or focusing on specific subgroups. Publishing the raw data allows independent
scientists with no vested interest to test how robust the outcome is to alternative analysis pathways. If such
examinations were to reveal that the authors' published approach was the only one out of a much larger subset to
produce p<.05, the community would be justifiably skeptical of the study's conclusions. Even though relatively few
scientists scrutinize each other's raw data, the mere possibility that this could happen should act as a natural
deterrent to deliberate p-hacking.
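
A hypothetical re-analysis sketch shows what such an independent check might look like (the data and the exclusion rules below are invented): with the raw data in hand, anyone can rerun the key test under several defensible outlier-exclusion rules and see whether the conclusion survives.

```python
# Hypothetical re-analysis sketch: the data and the exclusion rules are made up.
# With shared raw data, independent scientists can test how robust a published
# result is to alternative, equally defensible analysis pathways.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(0.3, 1.0, 40)        # stand-in for the shared raw data
group_b = rng.normal(0.0, 1.0, 40)

def exclude_beyond(x, n_sd):
    """Drop observations more than n_sd standard deviations from the group mean."""
    if n_sd is None:
        return x
    return x[np.abs(x - x.mean()) <= n_sd * x.std(ddof=1)]

for rule in (None, 3.0, 2.5, 2.0):        # a few "defensible" exclusion thresholds
    a, b = exclude_beyond(group_a, rule), exclude_beyond(group_b, rule)
    p = stats.ttest_ind(a, b).pvalue
    label = "no exclusion" if rule is None else f"exclude beyond {rule} SD"
    print(f"{label:<24} p = {p:.3f}")
# If only one of these pathways yields p < .05, skepticism is warranted.
```
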

Solutions to allow "optional stopping." Leslie John's survey showed how a major source of p-hacking is violation
of stopping rules; that is, continuously adding participants to an experiment until the p value drops below the
significance threshold. When H0 is true, p values between 0 and 1 are all equally likely to occur, therefore with the
addition of enough participants a p value below .05 will eventually be found by chance. Psychologists often neglect
stopping rules because, in most studies, there is no strong motivation for selecting a particular sample size in
the first place. Fixed stopping rules present a problem for psychology because the size of the effect being
investigated is often small and poorly defined. Fortunately, there are two solutions that psychologists can use to
avoid violating stopping rules.

1.The first, highlighted by Michael Strube from Washington University (and more recently by Daniel Lakens),
allows researchers to use a variable stopping rule with NHST by lowering the alpha level in accordance with
how regularly the researcher peeks at the results. This correction is similar to more conventional corrections for
multiple comparisons.

2.The second approach is to adopt Bayesian hypothesis testing in place of NHST. Unlike NHST, which estimates
the probability of data at least as extreme as those observed under the null hypothesis, Bayesian tests estimate the relative probabilities of
the data under competing hypotheses. In chapter 3 we will see how this approach has numerous advantages over
more conventional NHST methods, including the ability to directly estimate the probability of H0 relative to H1.
Moreover, Bayesian tests allow researchers to sequentially add participants to an experiment until the weight
of evidence supports either H0 or H1. According to Bayesian statistical philosophy "there is nothing wrong with
gathering more data, examining these data, and then deciding whether or not to stop collecting new data—no
special corrections are needed." This flexibility is afforded by the likelihood principle, which holds that "[w]ithin the
framework of a statistical model, all of the information which the data provide concerning the relative merits of
two hypotheses is contained in the likelihood ratio of those hypotheses." In other words, the more data you have,
the more accurately you can conclude support for one hypothesis over another.
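
The sketch below illustrates both the problem and the spirit of the first solution (all numbers are invented, and the per-look correction shown is a crude Bonferroni-style split rather than the exact corrections Strube and Lakens describe): peeking at fixed intervals with an uncorrected alpha inflates the false positive rate under H0, while a lowered per-look alpha keeps it near or below five percent.

```python
# Sketch (illustrative numbers only): stopping as soon as p < .05 while
# repeatedly peeking at accumulating data inflates the false positive rate
# under H0. Lowering the alpha applied at each look brings the rate back down.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, start_n, step, max_n = 5_000, 10, 10, 60
n_looks = (max_n - start_n) // step + 1          # 6 looks at the growing data set

def false_positive_rate(alpha_per_look):
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(size=max_n)               # both "groups" are pure noise
        b = rng.normal(size=max_n)
        for n in range(start_n, max_n + 1, step):
            if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha_per_look:
                rejections += 1
                break                            # researcher stops and reports
    return rejections / n_sims

print(f"Peek every 10 participants, alpha = .05:     {false_positive_rate(0.05):.3f}")
print(f"Peek every 10 participants, alpha = .05 / {n_looks}: {false_positive_rate(0.05 / n_looks):.3f}")
```
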

Standardization of research practices. One reason p-hacking is so common is that arbitrary decisions are easy to
justify. Even within a simple experiment, there are dozens of different analytic pathways that a researcher can
take, all of which may have a precedent in the published literature and all of which may be considered
defensible. This ambiguity enables researchers to pick and choose the most desirable outcome from a
smorgasbord of statistical tests, reporting the one that "worked" as though it was the only analysis that was
attempted. One solution to this problem is to apply more constraints on the range of acceptable approaches,
applying institutional standards to such practices as outlier exclusion or the use of covariates. Scientists are
generally resistant to such changes, particularly when a clear best practice fails to stand out from the crowd. Such
standards can also be difficult to apply in emerging fields, such as functional brain imaging, owing to the rapid
developments in methodology and analysis strategies. However, where standardization isn't feasible and any one
of several arbitrary decisions could be defended, there is a strong argument that researchers should report all of them and
then summarize the robustness of the outcomes across all contingencies.

Moving beyond the moral argument. Is p-hacking a form of fraud? Whether questionable research practices such
as p-hacking and HARKing are fraudulent—or even on the same continuum as fraud—is controversial. Psychologist
Dave Nussbaum from the University of Chicago has argued that clearly fraudulent behavior, such as data
fabrication, is categorically different from questionable practices such as p-hacking because, with p-hacking,
the intent of the researcher cannot be known.

Nussbaum is right. As we have seen in this chapter, many cases of p-hacking and HARKing are likely to be
unconscious acts of self-deception. Yet in cases where researchers deliberately p-hack in order to achieve
statistical significance, Nussbaum agrees that such behavior lies on the same continuum as outright fraud. We
will return to the issue of fraud, but for now we can ask: does it matter if p-hacking is fraudulent? Whether the sin
of hidden flexibility is deliberate or an act of self-deception is a distraction from the goal of reforming science.
Regardless of what lies in the minds of researchers, the effects of p-hacking are clear, populating the literature
with post hoc hypotheses, false discoveries, and blind alleys. The solutions are also the same.
Bibliografie 5
Sally Satel; Scott Lilienfeld – Brainwashed. The Seductive Appeal of Mindless
Neuroscience, p. 9-13;
Losing our minds in the age of brain science

You've seen the headlines. This is your brain on love. Or God. Or envy. Or happiness. And they're reliably accompanied by
articles boasting pictures of color-drenched brains—scans
capturing Buddhist monks meditating, addicts craving cocaine,
and college sophomores choosing Coke over Pepsi. The
media—and even some neuroscientists, it seems—love to
invoke the neural foundations of human behavior to explain
everything from the Bernie Madoff financial fiasco to slavish
devotion to our iPhones, the sexual indiscretions of politicians,
conservatives' dismissal of global warming, and even an
obsession with self-tanning. Brains are big on campus, too.
Take a map of any major university, and you can trace the
march of neuroscience from research labs and medical
centers into schools of law and business and departments of
economics and philosophy. In recent years, neuroscience has
merged with a host of other disciplines, spawning such new
areas of study as neurolaw, neuroeconomics, neurophilosophy,
neuromarketing, and neurofinance. Add to this the birth of
neuroaesthetics, neurohistory, neuroliterature,
neuromusicology, neuropolitics, and neurotheology. The brain
has even wandered into such unlikely redoubts as English
departments, where professors debate whether scanning
subjects' brains as they read passages from Jane Austen novels
represents (a) a fertile inquiry into the power of literature or (b) a desperate attempt to inject novelty into a field
that has exhausted its romance with psychoanalysis and postmodernism.

Clearly, brains are hot. Once the largely exclusive province of neuroscientists and neurologists, the brain has now
entered the popular mainstream. As a newly minted cultural artifact, the brain is portrayed in paintings, sculptures,
and tapestries and put on display in museums and galleries. One science pundit noted, "If Warhol were around
today, he'd have a series of silkscreens dedicated to the cortex; the amygdala would hang alongside Marilyn
Monroe." The prospect of solving the deepest riddle humanity has ever contemplated—itself—by studying the brain
has captivated scholars and scientists for centuries. But never before has the brain so vigorously engaged the public
imagination.

The prime impetus behind this enthusiasm is a form of brain imaging called functional magnetic resonance
imaging (fMRI), an instrument that came of age a mere two decades ago, which measures brain activity and
converts it into the now-iconic vibrant images one sees in the science pages of the daily newspaper. As a tool for
exploring the biology of the mind, neuroimaging has given brain science a strong cultural presence. As one scientist
remarked, brain images are now "replacing Bohr's planetary atom as the symbol of science." With its implied
promise of decoding the brain, it is easy to see why brain imaging would beguile almost anyone interested in pulling
back the curtain on the mental lives of others: politicians hoping to manipulate voter attitudes, marketers tapping
the brain to learn what consumers really want to buy, agents of the law seeking an infallible lie detector, addiction
researchers trying to gauge the pull of temptations, psychologists and psychiatrists seeking the causes of mental
illness, and defense attorneys fighting to prove that their clients lack malign intent or even free will.

The problem is that brain imaging cannot do any of these things—at least not yet.

Author Tom Wolfe was characteristically prescient when he wrote of fMRI in 1996, just a few years after its
introduction, "Anyone who cares to get up early and catch a truly blinding twenty-first-century dawn will want to
keep an eye on it." Now we can't look away. Why the fixation? First, of course, there is the very subject of the scans:
the brain itself. More complex than any structure in the known cosmos, the brain is a masterwork of nature
endowed with cognitive powers that far outstrip the capacity of any silicon machine built to emulate it. Containing
roughly 86 billion brain cells, or neurons, each of which communicates with thousands of other neurons, the three-
pound universe cradled between our ears has more connections than there are stars in the Milky Way. How this
enormous neural edifice gives rise to subjective feelings is one of the greatest mysteries of science and philosophy.

Now combine this mystique with the simple fact that pictures—in this case, brain scans—are powerful. Of all
our senses, vision is the most developed. There are good evolutionary reasons for this arrangement: The major
threats to our ancestors were apprehended visually; so were their sources of food. Plausibly, the survival advantage
of vision gave rise to our reflexive bias for believing that the world is as we perceive it to be, an error that
psychologists and philosophers call naive realism. This misplaced faith in the trustworthiness of our perceptions is
the wellspring of two of history's most famously misguided theories: that the world is flat and that the sun revolves
around the earth. For thousands of years, people trusted their raw impressions of the heavens. Yet, as Galileo
understood all too well, our eyes can deceive us. He wrote in his Dialogues of 1632 that the Copernican model of
the heliocentric universe commits a "rape upon the senses"—it violates everything our eyes tell us.

Brain scan images are not what they seem either—or at least not what the media often depict them to be. They are
not photographs of the brain in action in real time. Scientists can't just look "in" the brain and see what it does.
Those beautiful color-dappled images are actually representations of
particular areas in the brain that are working the hardest—as measured by increased oxygen consumption—
when a subject performs a task such as reading a passage or reacting to stimuli, such as pictures of faces. The
powerful computer located within the scanning machine transforms changes in oxygen levels into the familiar
candy-colored splotches indicating the brain regions that become especially active during the subject's
performance. Even with well-informed inferences, the greatest challenge of imaging is that it is very difficult for
scientists to look at a fiery spot on a brain scan and conclude with certainty what is going on in the mind of the
person.

Neuroimaging is a young science, barely out of its infancy, really. In such a fledgling enterprise, the half-life of facts
can be especially brief. To regard research findings as settled wisdom is folly, especially when they emanate from a
technology whose implications are still poorly understood. As any good scientist knows, there will always be
questions to hone, theories to refine, and techniques to perfect. Nonetheless, scientific humility can readily give
way to exuberance. When it does, the media often seem to have a ringside seat at the spectacle. Several years ago,
as the 2008 presidential election season was gearing up, a team of neuroscientists from UCLA sought to solve the
riddle of the undecided, or swing, voter. They scanned the brains of swing voters as they reacted to photos and
video footage of the candidates. The researchers translated the resultant brain activity into the voters' unspoken
attitudes and, together with three political consultants from a Washington, D.C.–based firm called FKF Applied
Research, presented their findings in the New York Times in an op-ed titled "This Is Your Brain on Politics." There,
readers could view scans dotted with tangerine and neon yellow hot spots indicating regions that "lit up" when the
subjects were exposed to images of Hillary Clinton, Mitt Romney, John Edwards, and other candidates. Revealed in
these activity patterns, the authors claimed, were "some voter impressions on which this election may well turn."
Among those impressions was that two candidates had utterly failed to "engage" with swing voters. Who were
these unpopular politicians? John McCain and Barack Obama, the two eventual nominees for president.

Another much-circulated study published in 2008, "The Neural Correlates of Hate," came from neuroscientists at
University College London. The researchers asked subjects to bring in photos of people they hated—generally ex-
lovers, work rivals, or reviled politicians—as well as people about whom subjects felt neutrally. By comparing their
responses—that is, patterns of brain activation elicited by the hated face—with their reaction to the neutral photos,
the team claimed to identify the neurological correlates of intense hatred. Not surprisingly, much of the media
coverage attracted by the study flew under the headline: "'Hate Circuit' Found in Brain." One of the researchers,
Semir Zeki, told the press that brain scans could one day be used in court—for example, to assess whether a
murder suspect felt a strong hatred toward the victim.

Not so fast.

True, these data do reveal that certain parts of the brain become more active when people look at images of people
they hate and presumably feel contempt for them as they do so. The problem is that the illuminated areas on the
scan are activated by many other emotions, not just hate. There is no newly discovered collection of brain regions
that are wired together in such a way that they comprise the identifiable neural counterpart of hatred. University
press offices, too, are notorious for touting sensational details in their media-friendly releases: Here's a spot that
lights up when subjects think of God ("Religion center found!"), or researchers find a region for love ("Love found in
the brain"). Neuroscientists sometimes refer disparagingly to these studies as "blobology," their tongue-in-cheek
label for studies that show which brain areas become activated as subjects experience X or perform task Y.

To repeat: It's all too easy for the nonexpert to lose sight of the fact that fMRI and other brain-imaging
techniques do not literally read thoughts or feelings. By obtaining measures of brain oxygen levels, they show
which regions of the brain are more active when a person is thinking, feeling, or, say, reading or calculating. But it
is a rather daring leap to go from these patterns to drawing confident inferences about how people feel about
political candidates or paying taxes, or what they experience in the throes of love. Pop neuroscience makes an easy
target, we know. Yet we invoke it because these studies garner a disproportionate amount of media coverage and
shape public perception of what brain imaging can tell us. Skilled science journalists cringe when they read
accounts claiming that scans can capture the mind itself in action. Serious science writers take pains to describe
quality neuroscience research accurately. Indeed, an eddy of discontent is already forming. "Neuromania,"
"neurohubris," and "neurohype"—"neurobollocks," if you're a Brit—are just some of the labels that have been
brandished, sometimes by frustrated neuroscientists themselves. But in a world where university press releases
elbow one another for media attention, it's often the study with a buzzy storyline ("Men See Bikini-Clad Women as
Objects, Psychologists Say") that gets picked up and dumbed down.'

The problem with such mindless neuroscience is not neuroscience itself. The field is one of the great
intellectual achievements of modern science. Its instruments are remarkable. The goal of brain imaging is
enormously important and fascinating: to bridge the explanatory gap between the intangible mind and the
corporeal brain. But that relationship is extremely complex and incompletely understood. Therefore, it is
vulnerable to being oversold by the media, some overzealous scientists, and neuroentrepreneurs who tout facile
conclusions that reach far beyond what the current evidence warrants—fits of "premature extrapolation," as British
neuroskeptic Steven Poole calls them.

When it comes to brain scans, seeing may be believing, but it isn't necessarily understanding. Some of the
misapplications of neuroscience are amusing and essentially harmless. Take, for instance, the new trend of
neuromanagement books such as Your Brain and Business: The Neuroscience of Great Leaders, which advises
nervous CEOs "to be aware that anxiety centers in the brain connect to thinking centers, including the PFC [prefrontal
cortex] and ACC [anterior cingulate cortex]." The fad has, perhaps not surprisingly, infiltrated the parenting and
education markets, too. Parents and teachers are easy marks for "brain gyms," "brain-compatible education," and
"brain-based parenting," not to mention dozens of other unsubstantiated techniques. For the most part, these slick
enterprises merely dress up or repackage good advice with neuroscientific findings that add nothing to the overall
program. As one cognitive psychologist quipped, "Unable to persuade others about your viewpoint? Take a Neuro-
Prefix—influence grows or your money back!"

But reading too much into brain scans matters when real-world concerns hang in the balance. Consider the law.
When a person commits a crime, who is at fault: the perpetrator or his or her brain? Of course, this is a false choice.
If biology has taught us anything, it is that "my brain" versus "me" is a false distinction. Still, if biological roots
can be identified—and better yet, captured on a brain scan as juicy blotches of color—it is too easy for
nonprofessionals to assume that the behavior under scrutiny must be "biological" and therefore "hardwired,"
involuntary or uncontrollable. Criminal lawyers, not surprisingly, are increasingly drawing on brain images
supposedly showing a biological defect that "made" their clients commit murder. Looking to the future, some
neuroscientists envision a dramatic transformation of criminal law. David Eagleman, for one, welcomes a time
when "we may someday find that many types of bad behavior have a basic biological explanation [and] eventually
think about bad decision making in the same way we think about any physical process, such as diabetes or lung disease.”
As this comes to pass, he predicts, "more juries will place defendants on the not-blameworthy side of the line." But is
this the correct conclusion to draw from neuroscientific data? After all, if every behavior is eventually traced to
detectable correlates of brain activity, does this mean we can one day write off all troublesome behavior on a don't-
blame-me-blame-my-brain theory of crime? Will no one ever be judged responsible? Thinking through these
profoundly important questions turns on how we understand the relationship between the brain and the mind.

The mind cannot exist without the brain. Virtually all modern scientists, ourselves included, are "mind-body
monists": they believe that mind and brain are composed of the same material "stuff." All subjective experience,
from a frisson of fear to the sweetness of nostalgia, corresponds to physical events in the brain. Decapitation
proves this point handily: no functioning brain, no mind. But even though the mind is produced by the action of
neurons and brain circuits, the mind is not identical with the matter that produces it. There is nothing mystical or
spooky about this statement, nor does it imply an endorsement of mind-body "dualism," the dubious assertion that
mind and brain are composed of fundamentally different kinds of material. Instead, it means simply that one cannot use the
physical rules from the cellular level to completely predict activity at the psychological level.

By way of analogy, if you wanted to understand the text on this page, you could analyze the words by submitting
their contents to an inorganic chemist, who could ascertain the precise molecular composition of the ink. Yet no
amount of chemical analysis could help you understand what these words mean, let alone what they mean in the
context of other words on the page. Scientists have made great strides in reducing the organizational complexity
of the brain from the intact organ to its constituent neurons, the proteins they contain, genes, and so on. Using this
template, we can see how human thought and action unfold at a number of explanatory levels, working upward
from the most basic elements.

As a psychiatrist and a psychologist, we have followed the rise of popular neuroscience with mixed feelings. We're
delighted to see laypeople so interested in brain science, and we are excited by the promise of new
neurophysiological discoveries. Yet we're dismayed that much of the media diet consists of "vulgarized
neuroscience," as the science watchdog Neuroskeptic puts it, that offers facile and overly mechanistic explanations
for complicated behaviors. We were both in training when modern neuroimaging techniques made their debut. The
earliest major functional imaging technique (PET, or positron emission tomography) appeared in the mid-1980s.
Less than a decade later, the near wizardry of fMRI was unveiled and soon became a prominent instrument of
research in psychology and psychiatry. Indeed, expertise in imaging technology is becoming a sine qua non for
graduate students in many psychology programs, increasing their odds of obtaining federal research grants
and teaching posts and boosting the acceptance rates of their papers by top-flight journals. Many psychology
departments now make expertise in brain imaging a requirement for their new hires.

The brain is said to be the final scientific frontier, and rightly so, in our view. Yet in many quarters brain-based
explanations appear to be granted a kind of inherent superiority over all other ways of accounting for human
behavior. We call this assumption "neurocentrism"—the view that human experience and behavior can be best
explained from the predominant or even exclusive perspective of the brain. From this popular vantage point, the
study of the brain is somehow more "scientific" than the study of human motives, thoughts, feelings, and actions.
By making the hidden visible, brain imaging has been a spectacular boon to neurocentrism.

Bibliografie 6

Ben Goldacre – I Think You’ll Find It’s a Bit More Complicated Than That, p.56-58; 140-142;

Neuro-Realism

Guardian, 30 October 2010

When the BBC tells you, in a headline, that libido problems are in
the brain and not in the mind, then you might find yourself
wondering what the difference between the two is supposed to
be, and whether a science article can really be assuming – in 2010
– that readers buy into some strange form of Cartesian dualism,
in which the self is contained by a funny little spirit entity in
constant pneumatic connection with the corporeal realm.

But first let’s look at the experiment the BBC is reporting.

As far as we know (the study hasn’t yet been published, only presented at a conference) some researchers took seven women
with a ‘normal’ sex drive, and nineteen women diagnosed with
‘hypoactive sexual desire disorder’. The participants watched a series of erotic films in a scanner, and while they did
so, an MRI machine took images of blood flow in their brains. The women with a normal sex drive had an increased
flow of blood to some parts of their brain – some areas associated with emotion – while those with low libido did
not.

Dr Michael Diamond, one of the researchers, tells the Mail: ‘Being able to identify physiological changes, to me
provides significant evidence that it’s a true disorder as opposed to a societal construct.’ In the Metro he goes
further: ‘Researcher Dr Michael Diamond said the findings offer “significant evidence” that persistent low sex drive
– known as hypoactive sexual desire disorder (HSDD) – is a genuine physiological disorder and not made up.’

This strikes me as an unusual world view. All mental states must have physical correlates, if you believe that
the physical activity of the brain is what underlies our sensations, beliefs and experiences: so while different mental
states will certainly be associated with different physical states, that doesn’t tell you which caused which. If I do
not have the horn, you may well fail to see any increased activity in the part of my brain that lights up when I do
have the horn. That doesn’t tell you why I don’t have the horn: maybe I’ve got a lot on my plate, maybe I have a
physical problem in my brain, maybe I was raped last year. There could be any number of reasons.

Far stranger is the idea that a subjective experience must be shown to have a measurable physical correlate in
the brain before we can agree that the subjective experience is real, even for matters that are plainly subjective
and experiential.

Interestingly, the world view being advanced by these researchers and journalists is far from new: in fact, it’s part
of a whole series of recurring themes in popular misinterpretations of neuroscience, first described formally in a
2005 paper from Nature Reviews Neuroscience called ‘fMRI in the Public Eye’. To examine how fMRI brain-imaging
research was depicted in mainstream media, the researchers conducted a systematic search for every news story
about it over a twelve-year period, and then conducted content analysis to identify any recurring themes.

The most prevalent theme they identified was the idea that a brain-imaging experiment ‘can make a
phenomenon uncritically real, objective or effective in the eyes of the public’. They described this phenomenon
as ‘neuro-realism’, and the idea is best explained through their examples, which mirror these new claims about
libido perfectly.

The New York Times takes a similarly strange tack in a brain-imaging study on fear: ‘Now scientists say the feeling
is not only real, but they can show what happens in the brain to cause it.’

Many people find fatty food to be pleasurable, for the taste, the calories, and any number of other reasons. When
a brain-imaging study showed that the reward centres in the brain had increased blood flow after subjects in an
experiment ate high-fat foods, the Boston Globe explained: ‘Fat really does bring pleasure.’

They’re right, it does. But it’s a slightly strange world when a scan of blood flow in the brain is taken as vindication
of a subjective mental state, and a way to validate our experience of the world.

What If Academics Were as Dumb as Quacks with Statistics?

Guardian, 10 September 2011

We all like to laugh at quacks when they misuse basic statistics. But what if academics, en masse, make mistakes
that are equally foolish?

This week Sander Nieuwenhuis and colleagues publish a mighty torpedo in the journal Nature Neuroscience.
They’ve identified one direct, stark statistical error that is so widespread it appears in about half of all the
published papers surveyed from the academic neuroscience research literature.

To understand the scale of this problem, first we have to understand the statistical error they’ve identified.
This will take four hundred words. At the end, you will understand an important aspect of statistics better than half
the professional university academics currently publishing in the field of neuroscience.

Let’s say you’re working on some nerve cells, measuring the frequency with which they fire. When you drop a
particular chemical on them, they seem to fire more slowly. You’ve got some normal mice, and some mutant mice.
You want to see if their cells are differently affected by the chemical. So you measure the firing rate before and
after applying the chemical, both in the mutant mice, and in the normal mice.

When you drop the chemical on the mutant-mice nerve cells, their firing rate drops by, let’s say, 30 per cent.
With the number of mice you have (in your imaginary experiment), this difference is statistically significant, which
means it is unlikely to be due to chance. That’s a useful finding which you can maybe publish. When you drop the
chemical on the normal-mice nerve cells, there is a bit of a drop in firing rate, but not as much – let’s say the
drop is 15 per cent – and this smaller drop doesn’t reach statistical significance.

But here is the catch.

• You can say that there is a statistically significant effect for your chemical reducing the firing rate in the
mutant cells.
• And you can say there is no such statistically significant effect in the normal cells.
• But you cannot say that mutant cells and normal cells respond to the chemical differently. To say that,
you would have to do a third statistical test, specifically comparing the ‘difference in differences’, the
difference between the chemical-induced change in firing rate for the normal cells against the chemical-
induced change in the mutant cells.
Now, looking at the figures I’ve given you here (entirely made up, for our made-up experiment), it’s very likely that
this ‘difference in differences’ would not be statistically significant, because the responses to the chemical only
differ from each other by 15 per cent, and we saw earlier that a drop of 15 per cent on its own wasn’t enough
to achieve statistical significance.
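
A short sketch makes the distinction concrete (the firing-rate numbers are invented to echo the example above, and no specific p values are guaranteed): the first two tests ask whether each cell type's firing rate changed, while only the third asks whether the two cell types changed differently.

```python
# Sketch of the correct analysis (all numbers invented). Testing each group
# separately is not the same as testing whether the two groups differ; the
# 'difference in differences' requires its own test on the change scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 8                                              # cells per group (made up)
mutant_change = rng.normal(-30, 25, n)             # % change in firing rate after the chemical
normal_change = rng.normal(-15, 25, n)

p_mutant = stats.ttest_1samp(mutant_change, 0).pvalue         # did mutant cells change?
p_normal = stats.ttest_1samp(normal_change, 0).pvalue         # did normal cells change?
p_difference = stats.ttest_ind(mutant_change, normal_change).pvalue  # difference in differences

print(f"mutant cells changed?                      p = {p_mutant:.3f}")
print(f"normal cells changed?                      p = {p_normal:.3f}")
print(f"do the two cell types differ? (the claim)  p = {p_difference:.3f}")
```
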

But in exactly this situation, academics in neuroscience papers are routinely claiming that they have found a
difference in response, in every field imaginable, with all kinds of stimuli and interventions: comparing responses
in younger versus older participants; in patients against normal volunteers; in one task against another; between
different brain areas; and so on.

How often? Nieuwenhuis and colleagues looked at 513 papers published in five prestigious neuroscience journals
over two years. In half the 157 studies where this error could have been made, it was made. They broadened their
search to 120 cellular and molecular articles in Nature Neuroscience during 2009 and 2010: they found twenty-five
studies committing this statistical fallacy, and not one single paper analysed differences in effect sizes correctly.

These errors are appearing throughout the most prestigious journals in the field of neuroscience. How can we
explain that? Analysing data correctly, to identify a ‘difference in differences’, is a little tricksy, so thinking very
generously, we might suggest that researchers worry it’s too long-winded for a paper, or too difficult for readers.
Alternatively, perhaps less generously, we might decide it’s too tricky for the researchers themselves.

But the darkest thought of all is this: analysing a ‘difference in differences’ properly is much less likely to give you a
statistically significant result, and so it’s much less likely to produce the kind of positive finding you need to get
your study published, to get a point on your CV, to get claps at conferences, and to get a good feeling in your
belly. In all seriousness: I hope this error is only being driven by incompetence.

Bibliografie 7
Russ Poldrack – The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts, p.67-69; 30-32; 117-122;

Growing Pains

Whenever a new measurement tool emerges, the scientific community often struggles to understand how to work with
the data and what their limits are, and fMRI has been no
exception. In fact, the high profile of fMRI research has made
it an attractive target for researchers looking to criticize it.

One of the major challenges for the analysis of fMRI is the fact that we collect so many measurements at once. In
comparison to a psychology study where we might just
measure 5 to 10 different variables, in fMRI we regularly
collect data from more than 100,000 locations in the brain.
The unique challenges of dealing with such big data were
highlighted in one of the most amusing episodes in the
history of fMRI.

In 2009 I was a member of the program committee for the Organization for Human Brain Mapping, which is
responsible for vetting submissions to make sure that they meet the standards of the organization before accepting
them for presentation at the annual meeting. One of the criteria for rejecting a submission is if it is a joke, and one
particular submission was flagged for this reason by one of its reviewers. The title of the abstract was “Neural
Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument for Multiple
Comparisons Correction,” which certainly doesn’t sound like a very funny joke, but a closer reading of the
submission showed why the reviewers had been concerned:

“Subject. One mature Atlantic Salmon (Salmo salar) participated in the fMRI study. The salmon was approximately 18
inches long, weighed 3.8 lbs, and was not alive at the time of scanning.

Task. The task administered to the salmon involved completing an open-ended mentalizing task. The salmon was
shown a series of photographs depicting human individuals in social situations with a specified emotional valence. The
salmon was asked to determine what emotion the individual in the photo must have been experiencing.”

What the researchers, Craig Bennett and his colleagues, had done was to put a dead salmon in an MRI scanner,
present it with a “task,” and record fMRI data. They then analyzed the data in a particular way, and found that there
was apparently activation in the salmon’s brain in response to the task. The authors did not do this to demonstrate
some kind of afterlife mental capacity in the salmon; rather, they did it to prove a critical point about analyzing
fMRI data—one which had been known for many years, but had nonetheless been neglected by many
researchers in the field.

Remember that fMRI data consist of measurements from many small cubes (“voxels”) across the brain. In a
standard fMRI scan we would collect data from anywhere between 50,000 and 200,000 voxels. In order to
determine which parts of the brain respond to our task, we compute a statistic at each voxel, which quantifies how
much evidence there is that the voxel’s signal fluctuates in the way we would expect if it were actually
responding to the task. We then have to determine which regions show a strong enough response that they cannot
be explained by random variability, which we do using a statistical test. If the response in a voxel is strong enough
that we don’t think it can be explained by chance, then we call it a “statistically significant” response. In order to
determine this, we need to determine how willing we are to accept false positive results—that is, results that are
called statistically significant even though there is no actual signal in the data (known technically as “type I errors”).
There is also another kind of error that we can make, in which we fail to find a statistically significant result even
when there is a true effect in the voxel; we call this a “false negative” or “type II error.” These two types of statistical
errors exist in a delicate balance—holding all else equal, increasing our tolerance for false positives will
decrease the rate of false negatives, and vice versa.

The usual rate of false positives that we are willing to accept is five percent. If we use this threshold, then we will
make a false positive error on five percent of tests that we perform. If we are doing just a single test, then that seems
reasonable—19 out of every 20 times we should get it right. But what if we are doing thousands of statistical tests
at once, as we do when we analyze fMRI data? If we just use the standard five percent cutoff for each test, then
the number of errors that we expect to make is 0.05 multiplied by the number of tests, which means that across
100,000 voxels we are almost certain to make thousands of false positive errors, and this is in fact what
Bennett and his colleagues found. They wrote:

“Can we conclude from this data that the salmon is engaging in the perspective-taking task? Certainly not. What
we can determine is that random noise in the [fMRI] timeseries may yield spurious results if multiple comparisons
are not controlled for.”

Unfortunately, that part of their conclusion was often lost when the results were discussed in the media,
resulting in a misleading impression that fMRI data were untrustworthy.

In fact, neuroimaging researchers have understood this problem of “multiple comparisons” since the days of PET
imaging, and statisticians have developed many different ways to deal with it. The simplest (named after the
mathematician Carlo Bonferroni) is to divide the rate of false positives for each test (known as alpha) by the
number of tests. This controls the false positive rate, but is often overly conservative, meaning that the actual rate
of false positives will be less than the five percent rate. This is problematic because, as I mentioned before, there
is a seesaw relation between false positive and false negative rates, so an overly conservative test will also
cause a high number of false negative errors, meaning that researchers will fail to find true effects even when
they are present. However, there are a number of methods that have been developed that allow researchers to
control the level of false positives without being overly conservative. While it was common to see fMRI papers
published without appropriate statistical corrections in the early days of imaging, today nearly every paper
reporting fMRI results will use a method to correct for multiple comparisons.
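
A toy version of the same lesson can be run on pure noise (no real fMRI data are involved; the voxel and time-point counts are merely plausible): correlate each of 100,000 noise "voxels" with a noise "task" and count how many look "active" before and after a Bonferroni correction.

```python
# Toy version of the dead-salmon lesson: every "voxel" is pure noise, yet
# thousands pass the uncorrected p < .05 threshold; the Bonferroni correction
# (alpha divided by the number of tests) removes essentially all of them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
n_voxels, n_timepoints, alpha = 100_000, 100, 0.05

noise = rng.normal(size=(n_voxels, n_timepoints))      # noise "voxel" time courses
task = rng.normal(size=n_timepoints)                   # noise "task" regressor

# Pearson correlation of each voxel with the task, converted to a p value
noise_c = noise - noise.mean(axis=1, keepdims=True)
task_c = task - task.mean()
r = noise_c @ task_c / (np.linalg.norm(noise_c, axis=1) * np.linalg.norm(task_c))
t = r * np.sqrt((n_timepoints - 2) / (1 - r**2))
p = 2 * stats.t.sf(np.abs(t), df=n_timepoints - 2)

print(f"'Active' voxels at uncorrected p < .05:   {np.sum(p < alpha)}")              # roughly 5,000
print(f"'Active' voxels after Bonferroni (.05/N): {np.sum(p < alpha / n_voxels)}")   # almost always 0
```
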

What Can’t Neuroimaging Tell Us?

While fMRI has shown itself to be incredibly powerful, it has also been used in ways that go beyond what it can
actually tell us, which was illustrated well in an event from 2007. On November 11 of that year, an op-ed piece titled
“This Is Your Brain on Politics” was published in the New York Times. The authors, well-known neuroscientists and
political scientists, reported results from a study in which they used fMRI to measure brain activity while so-called
“swing voters” viewed video clips of candidates in the then-ongoing US presidential primaries.

Based on these data, they drew a number of broad conclusions about the state of the electorate, which were based
on the brain areas that were active while viewing the videos. One of the claims in the op-ed was that:

“Emotions about Hillary Clinton are mixed. Voters who rated Mrs. Clinton unfavorably on their questionnaire appeared
not entirely comfortable with their assessment. When viewing images of her, these voters exhibited significant activity
in the anterior cingulate cortex, an emotional center of the brain that is aroused when a person feels compelled to act
in two different ways but must choose one. It looked as if they were battling unacknowledged impulses to like Mrs.
Clinton. Subjects who rated her more favorably, in contrast, showed very little activity in this brain area when they
viewed pictures of her.”

Here was the verdict on Barack Obama:

“Mr. Obama was rated relatively high on the pre-scan questionnaire, yet both men and women exhibited less brain
activity while viewing the pre-video set of still pictures of Mr. Obama than they did while looking at any of the other
candidates. Among the male subjects, the video of Mr. Obama provoked increased activity in some regions of the brain
associated with positive feeling, but in women it elicited little change.”

As I read this piece, my blood began to boil. My research has focused on what kinds of things we can and cannot
learn from neuroimaging data, and one of the clearest conclusions to come from this work is that activity in a
particular region in the brain cannot tell us on its own whether a person is experiencing fear, reward, or any
other psychological state. In fact, when people claim that activation in a particular brain area signals something
like fear or reward, they are committing a basic logical fallacy, which is now referred to commonly as reverse
inference. My ultimate fear was that the kind of fast-and-loose interpretation of fMRI data seen in the New York
Times op-ed would lead readers to think erroneously that this kind of reasoning was acceptable, and would
also lead other scientists to ridicule our field.

What’s the problem with reverse inference? Take the example of a fever. If we see that our child has a fever, we
can’t really tell what particular disease he or she has, because there are so many different diseases that cause
a fever (flu, pneumonia, and bacterial infections, just to name a few). On the other hand, if we see a round red
rash with raised bumps, we can be fairly sure that it is caused by ringworm, because there are few other diseases
that cause such a specific symptom. When we are interpreting brain activation, we need to ask the analogous
question: How many different psychological processes could have caused the activation? If we knew, for
example, that mental conflict was the only thing that causes the anterior cingulate cortex to be active, then we
would be fairly safe in concluding from anterior cingulate activity that the person is experiencing conflict when
viewing images of Hillary Clinton. On the other hand, if many different things can cause the region to be active,
then we can’t safely draw that conclusion. Figure 1.5 shows an example of each of these two different cases. Work
that I published in 2006 showed that activity of individual brain regions was not very specific to different
psychological functions (that is, it’s more like a fever than a round rash), and thus that this kind of simple reverse
inference is problematic.

Figure 1.5. Can you infer cognitive function from areas of brain activation? If there was a one-to-one mapping
between brain areas and cognitive functions, as shown in the left panel, then reverse inference based on activation
in those areas would be possible—activation in the amygdala would imply fear, and activation in the ventromedial
prefrontal cortex would imply reward. However, the brain is actually organized more like the right panel—any
mental function involves a combination of many different brain regions that are combined in different ways to
support different mental functions.

The anterior cingulate cortex is a prime example of this. When we looked across many thousands of published
neuroimaging studies in a later study, we found that this area was active in about one-quarter of all those studies,
which involved many different types of cognitive tasks. This means that we cannot tell very much at all about
what a person is doing from the fact that the anterior cingulate cortex was active.
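
A back-of-envelope Bayes calculation shows why such a base rate undermines reverse inference (only the roughly one-quarter figure comes from the text above; the prior and the hit rate are assumptions made for illustration).

```python
# Back-of-envelope Bayes calculation. Only the ~25% base rate comes from the
# text; the other two numbers are illustrative assumptions.
p_conflict = 0.10            # assumed prior: conflict is the engaged process in 10% of tasks
p_acc_given_conflict = 0.80  # assumed: the ACC is usually active when conflict is present
p_acc = 0.25                 # from the text: the ACC is active in about 1 in 4 studies

p_conflict_given_acc = p_acc_given_conflict * p_conflict / p_acc
print(f"P(conflict | ACC active) = {p_conflict_given_acc:.2f}")   # = 0.32
# Seeing ACC activation moves us from a 10% prior to only about 32%,
# which is far from a confident reverse inference.
```
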

Is fMRI Reliable Enough for Real-World Decisions?

A final concern centers on whether the results from published fMRI studies are reliable enough to be used in ways
that will deeply impact people’s lives. Most people assume that when a scientist reports a research finding, it is
likely to be true, but this assumption has come under intense scrutiny, owing in large part to the work of a researcher
named John Ioannidis.

In 2005, Ioannidis was a researcher at a small university in Greece, having completed his medical training at Harvard
and Tufts and established himself as a medical researcher studying the treatment of HIV infection. Over time, his
interests had turned from clinical research to what he now calls “meta-research”—that is, the study of how scientific
research is done. For the previous decade he had become increasingly concerned about problems with how medical
research was done, which too often led ultimately to “medical reversals,” in which the field of medicine suddenly
decides that its practices have been all wrong.

During a trip to the small Greek island of Sikinos, he began to write the paper that would become his calling card,
titled “Why Most Published Research Findings Are False.” In this paper, Ioannidis outlined a theoretical model for
how scientific decisions are made and the factors that could lead them to be true or false. His analysis focused on a
statistical concept of the “positive predictive value,” which, as I mentioned earlier in the chapter, is the likelihood
that a positive result found by a researcher is true.

Let’s say that I am a medical researcher studying whether a new drug improves symptoms in people with multiple
sclerosis better than existing treatments. In the best case I would use a randomized controlled trial, in which
patients are randomly assigned to receive either the drug being tested or the standard treatment for the disease,
and their symptoms are measured and then compared statistically. Let’s further suppose that our analysis finds a
statistically significant difference between the new drug and the standard treatment. The positive predictive value
reflects how likely it is that this is a true positive rather than a false positive—that is, whether the new drug is
really better than the old drug, or whether we have made a statistical mistake. In his paper, Ioannidis outlined a
set of factors that can decrease the positive predictive value of a finding, and argued that in general the positive
predictive value of research findings is much lower than most of us would like to think.

Perhaps the most important of these factors is how much statistical power the study has. Power is a statistical
concept that refers to how likely we are to find a true effect if it really exists, which depends both on

1.how large the sample size is for the study and

2.how small the effect is that we are trying to study.

A very large effect can be found even with a relatively small sample—for example, smokers have an incidence of
lung cancer that is more than 20 times that of nonsmokers, so we don’t need to study huge populations to observe
a difference in the rates of lung cancer between smokers and nonsmokers. However, many of the effects that are
investigated in biomedical research are much smaller than this—for example, overweight individuals are 1.35 times
more likely to have heart disease compared with normal weight individuals. That is still a fairly important effect,
but it requires a much larger sample size to detect such an effect with confidence.

Until Ioannidis’s 2005 paper, most researchers had focused on how the power of a study is related to false
negatives—everyone realized that smaller studies are less likely to find an effect even when it truly exists. What
Ioannidis pointed out was that the power of a study also affects the positive predictive value. Think of it this way: if
researchers perform a study with zero power that means that they have no chance of finding a true positive effect
even if it exists. However, remember that there are always going to be a number of false positive results—how
many is determined by the false positive rate that we specified in our analysis (usually five percent). So, in
the case with zero power, we will have zero true positives and five percent false positives, meaning that the
positive predictive value—the proportion of all positive results that are true—will be zero percent! As the power
of the study increases, we start finding more true positives alongside our constant five percent of false positives, so
the positive predictive value goes up.
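Here is a minimal sketch of that logic; the prior probability that a tested effect is real is an added assumption, not something specified in the text.

```python
# A minimal sketch of the logic above. The prior probability that a tested effect
# is real is an added assumption (it is not specified in the text).

def positive_predictive_value(power, alpha=0.05, prior=0.5):
    """Proportion of all positive results that are true positives."""
    true_positives = power * prior          # real effects that we manage to detect
    false_positives = alpha * (1 - prior)   # null effects that cross the 5% threshold anyway
    return true_positives / (true_positives + false_positives)

for p in (0.0, 0.1, 0.5, 0.8):
    print(f"power = {p:.1f} -> PPV = {positive_predictive_value(p):.2f}")
# power = 0.0 -> PPV = 0.00 ... power = 0.8 -> PPV = 0.94 (with a 50/50 prior)
```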

Sometimes scientific papers can take a while to have their full effect, as if the world needs time to catch up with
them. When Ioannidis’s paper was published in the journal PLOS Medicine in 2005, there was some initial
controversy, but the paper did not have a broad impact immediately. However, by 2011 it was becoming
increasingly clear that Ioannidis’s claim that “most published research findings are false” might actually be correct,
at least within the field of social psychology. There were several intersecting causes for this concern. One was the
discovery that Diederik Stapel, a prominent social psychologist, had published a number of high-profile papers
based on data that he had faked. The fact that these claims had not been challenged by other researchers, despite
the apparent rumors that his results were not replicable, led to concern about the reliability of other research
in the field. Another source of concern arose around a paper published by the social psychologist Daryl Bem, which
had claimed to find scientific evidence for precognition (that is, the ability to see the future). Statisticians who dug
into Bem’s results realized that they had probably arisen not because he had truly found evidence for the
paranormal, but because he had tortured his data in a way that allowed him to find false positive results that
fit his hypothesis.

From this, the term “p-hacking” was born, referring to the fact that researchers can and sometimes will run many
different analyses in an attempt to find a statistically significant result. Uri Simonsohn and his colleagues, in a paper
with the provocative title “False-Positive Psychology,” showed that when researchers take advantage of the various
kinds of flexibility that are present in research (such as stopping the study as soon as one has found a statistically
significant effect), almost anything could be found to be statistically significant.
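A minimal simulation of this kind of flexibility (my own illustration, not the authors' code) shows how "peeking" and stopping as soon as p < .05 inflates the false-positive rate even when there is no true effect.

```python
# A minimal simulation of one form of p-hacking described above: "peeking" at the
# data and stopping as soon as p < .05 inflates the false-positive rate well beyond
# the nominal 5%, even though there is no true effect in these simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, false_positives = 2000, 0
for _ in range(n_sims):
    a = list(rng.normal(size=10))   # group 1, drawn from the same distribution...
    b = list(rng.normal(size=10))   # ...as group 2, so the true effect is zero
    for _ in range(10):             # test; if not significant, add 5 per group and test again
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1
            break
        a.extend(rng.normal(size=5))
        b.extend(rng.normal(size=5))
print(false_positives / n_sims)     # typically well above the nominal 0.05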

In their example, they showed that using these methods they could find statistical evidence for the nonsensical
conclusion that listening to a particular song (the Beatles’ “When I’m Sixty-Four”) made people more than a year
younger compared with a control song. The economists Leslie John, George Loewenstein, and Drazen Prelec
came up with a more general term for this kind of analytic trickery: “questionable research practices.” They
conducted a survey of a large number of psychologists (which I remember taking), and the results were striking:
many researchers admitted to research practices that were very likely to increase the prevalence of false
results. While almost everyone agreed that falsifying data was wrong and only about one percent of researchers
admitted to having done it, there were other practices that were surprisingly common amongst researchers, such
as deciding to collect more data based on whether the results were statistically significant (which more than half of
researchers admitted to) and reporting an unexpected finding as if it had been predicted all along (which about a
third admitted to).

The crisis came to a head around the inability of independent researchers to replicate a number of prominent
findings in social psychology. The social psychologist Brian Nosek put together a major effort to determine how
many of the reported results in the psychological literature are actually reproducible. He organized more than 250
other researchers to join him in attempting to replicate 100 published research studies, in an effort known as the
Reproducibility Project: Psychology (RPP for short). The results, published in 2015, were shocking: whereas 97% of
the original papers had reported statistically significant findings, only 35% of the replications found significant
effects. John Ioannidis’s predictions had been borne out, though this did not give him pleasure. As he told the
journalist Ed Yong: “The success rate is lower than I would have thought. … I feel bad to see that some of my
predictions have been validated. I wish they’d been proven wrong.”

If psychological research is as unreliable as the RPP suggests, then what about neuroimaging? Because
neuroimaging research is so much more expensive than psychological research (often costing well over $25,000 to
run a single fMRI study), an effort on par with the RPP would be almost impossible. There are a few reasons
to think that many of the conclusions from neuroimaging might be relatively reliable.

1.First, many of the very basic findings are visible in individual subjects assuming that enough data are
available; nearly every healthy human has a face-responsive area in the fusiform gyrus and a default mode network.

2.Second, studies that combine data across many different research studies (known as meta-analyses) show
very consistent patterns of activity in relation to specific mental processes such as language or social function and
many of these fit with other evidence such as the effects of brain lesions.

3.Finally, using large data sets such as the Human Connectome Project we can analyze how well the results overlap
across large groups of subjects, and we find that they are generally quite reliable.

At the same time, there are very good reasons to think that a substantial number of findings from
neuroimaging research may be false. A major reason is that many neuroimaging studies have very low statistical
power, which makes any positive results more likely to be false.

This first came to light in a paper published by Kate Button and colleagues (including John Ioannidis), titled “Power
Failure.” Button and her colleagues analyzed studies from across neuroscience, including both neuroimaging
studies (though not fMRI studies) and studies in rats. What they found was that these studies overall had terrible
statistical power; whereas we usually shoot for 80% power, meaning that we have an 80% chance of finding a true
effect if it exists, many studies in neuroscience had power of less than 10%. With a number of collaborators I did
a subsequent study of statistical power specifically for fMRI studies, and found that most of these studies were also
badly underpowered.

Where does this leave us? I think that we have to be very careful in our interpretation of published fMRI
research studies. There are a number of questions we have to ask about any particular study.

1.First, how large is the sample size? There are no hard and fast rules here, because the necessary sample size
depends on the size of the effect that is expected; very powerful effects (like activity in the motor cortex when
someone makes a movement) can be found with small samples, while weaker effects (such as those involving
correlations between brain and behavior across people) require much larger samples, usually 100 or more; the
tiny sample of 16 in our 2007 paper just doesn’t make the grade. Researchers can use a technique called “power
analysis” to find out how big a sample they need to find their effect of interest, and this should be the gold standard
for determining the sample size for a study (a minimal sketch of such a calculation follows after this list).

2.A second question is whether the analyses and hypotheses were planned before the study was performed,
and whether these plans were followed. One of the developments arising from the reproducibility crisis in
psychology is the idea that the methods of a study should be “preregistered”—that is, a description of the methods
should be deposited in a database where they will be available to anyone once the research is complete, so that we
can see that the methods were actually planned in advance as opposed to reflecting p-hacking or other
questionable practices. At one point this would have required mailing a letter to oneself, but now there are
websites that provide the ability to register and time stamp a hypothesis.

This idea has been used for clinical trials in medicine for more than a decade now, and while there are still
problems, it has helped improve the reliability of clinical trial research, mostly by reducing the number of
positive outcomes, some of which were presumably false positives. However, to paraphrase Helmuth von
Moltke, no analysis plan survives contact with the data—once we start analyzing the data we often realize that there
were issues that we had not envisioned when we first planned the study. One important point is that deviating from
the planned analysis is acceptable as long as one is transparent about the deviations and the reasons for them.
Ultimately we would also like to have a separate and independent data set that we can use to confirm the finding;
this is now the standard practice in some other fields of research, such as genetics.

3.Finally, we need to remember that science is a process for gaining understanding, not a body of knowledge.
We learn from our mistakes and we move forward, with the realization that all of our knowledge is tentative and
will likely be revised or overturned in the future. The willingness to change our mind and, indeed, our efforts to find
our shortcomings and address them are the hallmarks of science that drew me to it originally and give me continued
faith that it can help us better understand the world.
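As referenced in point 1 above, here is a minimal power-analysis sketch; the effect sizes (Cohen's d values) are assumptions chosen for illustration, not values taken from the text.

```python
# The power-analysis sketch referenced in point 1 above, under assumed effect
# sizes (Cohen's d values chosen for illustration, not taken from the text).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 0.5, 0.2):  # large, medium, small standardized effect
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: about {round(n_per_group)} subjects per group")
# A small effect (d = 0.2) needs roughly 400 subjects per group -- far beyond
# the 16-subject sample mentioned above.
```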

Bibliografie 8
Robert Newman – Neuropolis: A Brain
Science Survival Guide, p.17-20;

Chapter 2 - On Rafts Across the Sea of Okhotsk

We Are Our Brains is written by a man with a grudge against humanity on account of being called Dick Swaab. Dick
Swaab argues that people from Japan and Papua New Guinea struggle to tell the difference between fear and
surprise:

Japanese and New Guineans find it difficult to distinguish between a face expressing fear and a face expressing surprise.

A thought experiment seems in order. Let's say Yoko Ono is having her annual business meeting with Paul McCartney to
settle the Beatles estate. If, during the course of that
meeting, she finds herself struggling to decipher what
exactly Paul McCartney's facial expression might possibly
mean, then she is no different from the rest of us, who have
shared her perplexity ever since that day about a dozen years
ago, when McCartney went into his plastic surgeon's and said: 'I'm tired of expressing lots of different emotions, can
you give me just a rictus of mild surprise and vague curiosity?'

'Sure,' replied the surgeon. 'Do you want a hint of disingenuousness with that?'

'I don't think that'll be necessary do you?'

Paul McCartney seems to get by pretty well with just the one emotion on his face. In live performance, however, he
concedes that the rictus of mild surprise and vague curiosity has changed the emotional register of the songs. As
he told one interviewer:

If you take a song like 'Eleanor Rigby', when we did it with the Beatles it was always very much a song about pity and
compassion. Now, when I perform 'Eleanor Rigby' live, it's much more a song about mild surprise and vague curiosity.
Sort of, 'Ooh, I wonder where all those lonely people came from all of a sudden?'

The argument that the Japanese cannot tell fear from surprise contradicts one of the central tenets of human
evolutionary biology. “I have endeavoured to show in considerable detail,” wrote Darwin in The Expression of the
Emotions In Man and Animals:

“that all the chief expressions exhibited by man are the same throughout the world. This fact is interesting, as it affords
a new argument in favour of the several races being descended from a single parent-stock, which must have been
almost completely human in structure, and to a large extent in mind, before the period at which the races diverged from
each other.”

Everything significant about our species was already well in place 35,000 years ago when Paleolithic sailors rafted
across the Sea of Okhotsk to become the first humans to make landfall on the Japanese archipelago. If Dick Swaab
is going to take a sledgehammer to the Darwinian principle that all people everywhere express emotions in pretty
much the same way, then we might reasonably expect him to provide some evidence. I mean, that's a mainstay of
human biology. But Dick Swaab produces no evidence to support his claim. None. In fact, the atrocious allegation
that the entire Japanese nation suffers from a sort of autism is made in a book which offers no sources or footnotes
at all. The New Guineans are also supposed to be unable to do what is child's play for Africans, Europeans and
continental Asians, and tell fear from surprise. And Dick Swaab has worked out why. It's because: 'linguistic and
cultural environments [...] determine [...] how facial expressions are interpreted'. Over 800 different languages are
spoken in Papua New Guinea and West Papua, and they are not even from the same language families. The Ternate
spoken in West Papua is from a different language family to the Austronesian and Papuan languages spoken in Port
Moresby. Nowhere else on earth exhibits such linguistic diversity. Nowhere else on earth, therefore, is it less likely
that a common language could create a shared inability to read facial expressions. Dick Swaab literally could
not have chosen a worse example from the face of the earth than 'New Guineans' to support his argument. But
Dick Swaab is on a roll. Don't stop him now:

When surveying a scene, Chinese individuals, unlike Americans, don't focus on a single object at a time but look at it in
relation to its surroundings.

Last time I looked, the United States of America was a newish political state created from every race and nation
on earth. According to the US Census Bureau, more than a fifth of the population, over sixty million people, speak
a language other than English in the home. Americans are not a biological entity. They are not a linguistic one
either. There is no specifically American way of seeing, just as there is no Chinese way of seeing. The Chinese people
are not a Terracotta Army all facing one way, all seeing everything holistically the whole time. When NASA
astronaut Mae Jemison, of mixed East-Asian and African-American descent, looked out of the Space Shuttle
Endeavour's window did she see the big picture or the small? In her memoir Find Where The Wind Goes, Mae
Jemison wrote that 'science provides an understanding of a universal experience'. What is so terribly damaging
about Dick Swaab's parascience is precisely its denial of the universality of human experience.
Bibliografie 9
Peter Bandettini - fMRI, p.71-75;


Signal to Noise

The signal to noise and the functional contrast to noise are influenced by many variables. These include, among
other things,

1.voxel volume,

2.echo time (TE),

3.repetition time (TR),

4.flip angle,

5.receiver bandwidth,

6.field strength, and

7.RF coil used.

Not considering fMRI for a moment, the image signal to noise is increased with larger voxel volume, shorter echo
time, longer repetition time, narrow receiver bandwidth, higher field strength, and smaller RF coil.

In the context of fMRI, the functional contrast to noise is optimized with a voxel volume equaling the size of the
activated area, TE ≈ gray matter T2*, short TR (optimizing
samples per unit time), narrow receiver bandwidth, high field
strength, and smaller RF receiver coils. Smaller coils are now common in multi-channel arrays. These arrays
typically have sixteen to thirty-two RF coils.

Stability

Theoretically, the noise, if purely thermal in nature, should propagate similarly over space and across time. In fMRI
this is not at all the case since physiologic noise plays a large role over time-series data collection. For EPI, stability
is much more of an issue on the longer time scale. Flow and motion—both of which confound image quality—occur
with cardiac and respiratory cycles. Subject movement and scanner instabilities also contribute. As mentioned
already, single-shot acquisition such as EPI generally has better temporal stability than multi-shot techniques.

A temporal SNR limit of around 100 to 1 is determined mostly by physiologic noise, as the brain is constantly pulsating with the
heartbeat and breathing. Typically, it is optimal to adjust imaging parameters such that the image SNR matches
the temporal SNR. If image SNR is higher than the temporal SNR, the time series is considered to be dominated by
physiologic noise. Filtering out this noise has proven to be difficult, but in resting state fMRI, the noise can be used
for the resting state fluctuation and connectivity information it contains.
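A minimal sketch of the image-SNR versus temporal-SNR point made above; the synthetic voxel time series and all numbers are my own illustration, not values from the text.

```python
# A minimal sketch of the image-SNR vs. temporal-SNR comparison described above;
# the synthetic voxel time series and all numbers are illustrative assumptions.
import numpy as np

def temporal_snr(timeseries):
    """Temporal SNR: mean of the time series divided by its standard deviation over time."""
    return timeseries.mean() / timeseries.std()

rng = np.random.default_rng(42)
thermal = rng.normal(0, 5, size=500)                    # thermal (white) noise; image SNR ~ 1000/5 = 200
physio = 10 * np.sin(np.linspace(0, 60 * np.pi, 500))   # slow physiologic fluctuation
voxel = 1000 + thermal + physio
print(temporal_snr(voxel))   # ~115: physiologic noise caps temporal SNR below the image SNR
```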

One of the most important and challenging problems in fMRI development work is the elimination of
physiologic noise from the time series. If physiologic noise could be identified and effectively removed, then the
temporal signal to noise would only be limited by the RF coil sensitivity, allowing fMRI time-series SNR to
approach 1000 to 1, opening up a wide range of applications and new findings using BOLD contrast.
Image Quality

The most prevalent image quality issues are image warping and signal dropout. While books can be written
on this subject, the description here is kept to the essentials. Because much of image quality has to do with
magnetic field inhomogeneity, it’s useful to mention what is performed before each study to minimize this.
Magnetic field “shimming” is a procedure in which current is adjusted through “shim” coils within the bore of
the magnet that make small, spatially specific changes in the main magnetic field, targeting the specific areas
where there is magnetic field inhomogeneity. The current in the shim coils is iteratively
adjusted, typically using an algorithm rather than by hand, until the field inhomogeneities are reduced to a level
that is satisfactory. That said, shimming is far from perfect, and field inhomogeneities still are prevalent, having a
greater impact on image quality at higher field strengths and in particular on low-resolution, long readout-window
sequences like EPI.

Image warping is fundamentally caused by three things: B0 field inhomogeneities; gradient nonlinearities; and
in the case where extremely large gradients are applied such as with diffusion imaging, eddy currents. A nonlinear
gradient will cause nonlinearities in spatial encoding, causing the image to be distorted. This is primarily a problem
when using local, small gradient coils that have a small region of linearity that drops off rapidly at the edges of the
field of view. With the prevalence of whole-body gradient-coils for performing echo planar imaging, this problem
is not a major issue. If the B0 field is inhomogeneous, as is typically the situation with imperfect shimming
procedures—particularly at higher field strengths, the protons will be precessing at different frequencies than
expected in their particular location. This will cause image deformation in these areas of poor shim— particularly
with a long readout window or the long acquisition time of EPI.

The long readout window duration allows for more time for these “off-resonance” effects to be manifest. Solutions
to this include: obtaining a better B0 shim, mapping the B0 field to perform a correction based on this map, or, after
the image has been reconstructed, performing image warping to match it with a non-warped high resolution
structural image. Another viable solution is to reduce the readout window duration. This last solution is now
possible with SMASH and SENSE imaging as this allows the same resolution image to be obtained in a fraction of
the time, therefore producing images that suffer from much less warping.

Signal dropout is related to inhomogeneities in B0, typically at interfaces of tissues having different
susceptibilities. If within a voxel, because of the B0 inhomogeneities, protons are precessing at different
frequencies, their signals will cancel each other out. Several strategies exist for reducing this problem.

1.One is, again, to shim as well as possible at the desired area. Due to imperfect shimming procedures, this
solution helps but does not solve the signal dropout problem.

2.The second potential solution is to reduce the voxel size (increase the resolution), thereby having less
stratification of different frequencies within a voxel.

3.The third potential solution is to choose the slice orientation such that the smallest voxel dimension (in many
studies, the slice thickness is greater than the in-plane voxel dimension) is orientated perpendicular to the largest
B0 gradient or in the direction of the greatest inhomogeneity.

Lastly, gradient electronics and structural improvements have mitigated eddy currents for most imaging
applications. However, when the gradients are pushed extremely hard in the case of diffusion imaging, eddy
currents that transiently exist can occur during the readout window, distorting the images. To make the problem
worse, the distortions will depend on the directions in which the diffusion gradients are applied. In the case of
diffusion tensor imaging, where gradients are applied in many different directions, distortions will occur in many
different directions, causing the images to be out of alignment in many areas. Spatial correction procedures exist
to mitigate these issues, but they are not perfect.
This brings up a final important point. A common operation is to superimpose a functional image, obtained
with an EPI time series, on top of a structural image, obtained with multi-shot acquisition. Because these
sequences have different readout window widths, they will have different amounts of distortion—particularly
in areas of poor shim that have large off-resonance effects. There are two solutions. The first is to perform in
postprocessing a nonlinear image warping to better align the two images. This works for the most part; however,
when attempting to align structure at the level of layers or columns, it tends to break down. The second solution is
to use the EPI data for the structural underlay as well. For layer resolution work, this is essential as the EPI readout
window is exceedingly long and the alignment has to be precise to less than about 0.1 mm. The general principle to
take from this is that images with different readout window widths will have different levels of distortion that need
to be corrected if precise alignment is desired.

As with many of the topics discussed in this book, much more can be said, but the goal here is to introduce basic
concepts and terms in a clear, practical way. MRI is a complex method that lies at the interface of engineering,
physics, and human physiology. Development of all the technology mentioned in this chapter is still progressing—
with improvements in speed, sensitivity, resolution, interpretability of the signal, and even the type of physiologic
information obtained all coming about at a rapid rate.


Bibliografie 10
Russel A. Poldrack; Jeanette A.
Mumford; Thomas E. Nichols – Handbook
of Functional MRI Data Analysis, p.184;

Box 10.4.2 Circularity and “voodoo correlations”

In 2009, a paper ignited a firestorm in the fMRI community over ROI analysis methods. Originally titled
“Voodoo Correlations in Social Neuroscience,” the paper by Vul et al. (2009) accused a large number of studies of
engaging in circular data analysis practices that led to greatly inflated estimates of effect sizes.

The paper focused on findings of correlations between behavioral measures (such as personality tests) and
activation, but the point holds for any study that attempts to estimate the size of an activation effect using a region
of interest derived from the same data. Around the same time, Kriegeskorte et al. (2009) published a paper that more
generally outlined the problem of “circular analysis” in neuroscience (not limited to neuroimaging).

The circular analysis problem arises when one selects a subset of noisy variables (e.g., voxels) from an initial
analysis for further characterization. When a voxel exceeds the threshold (and is thus selected for further analysis),
this can be due either to signal or to noise. In the case in which there is no signal, the only voxels that will
exceed the threshold will be the ones that have a very strong positive noise value. If we then estimate the mean
intensity of only those voxels that exceed the threshold, they will necessarily have a large positive value; there is
no way that it could be otherwise, since they were already selected on the basis of exceeding the threshold. In the
case where there is both true signal and noise, the mean of the voxels that exceed threshold will be inflated by the
positive noise values, since those voxels with strong negative noise contributions will not reach threshold. Thus,
the mean effect size for voxels reaching threshold will over-estimate the true effect size.
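A minimal simulation of this circularity problem (pure noise, no true signal anywhere; the sizes and threshold are illustrative assumptions):

```python
# A minimal simulation of the circularity problem described above (pure noise,
# no true signal anywhere; sizes and threshold are illustrative).
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=(100_000, 20))           # 100,000 voxels x 20 subjects, true effect = 0
voxel_means = data.mean(axis=1)                           # per-voxel group effect estimate
threshold = 2.5 * data.std(axis=1, ddof=1) / np.sqrt(20)  # a rough per-voxel significance cutoff
selected = voxel_means > threshold                        # "activated" voxels chosen from the same data

print(round(voxel_means.mean(), 3))            # ~0.0: the true effect size
print(round(voxel_means[selected].mean(), 3))  # clearly > 0: the effect estimated only in selected voxels is inflated
```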

The review by Vul et al. (2009) suggested that a large number of studies in the social neuroscience literature had
used circular analysis methods to determine effect sizes, and that these estimates were thus inflated. Although
the published responses to that paper raised various issues, none disputed the prevalence or problematic nature of
circular analysis in the fMRI literature. Since the publication of these papers, the literature has become much more
sensitive to the issue of circularity, and such circular analyses are now generally considered unacceptable.
Cursul 4 - Neuroni și neurotransmițători
Bibliografie video

Toate pasajele albastre din text sunt linkuri pe care se poate da click (și, Doamne-ferește,
citi și în plus)
Neuronul:

https://www.youtube.com/watch?v=oSSBrlMFGzI – BrainFacts.Org – Creierul tău complex
https://www.youtube.com/watch?v=L82bDTBMGUU – Matthew Barry Jensen – Khan Academy – Tipuri de celule neurale
https://www.youtube.com/watch?v=Qka6Gj7b39M – David Eagleman – Neuron către neuron
https://www.youtube.com/watch?v=6qS83wD29PY - 2-Minute Neuroscience: Neuronul
https://www.youtube.com/watch?v=5V7RZwDpmXE – 2 Minute Neuroscience: Mielina
https://www.youtube.com/watch?v=W2hHt_PXe5o – 2 Minute Neuroscience: Potențialele de acțiune
https://www.youtube.com/watch?v=HzrJWTtw8VM – Neuro Transmission – Ce e un potențial de acțiune?
https://www.youtube.com/watch?v=8TG8QCkKpHI – Neuro Transmission – Ce e un potențial de repaus?
https://www.youtube.com/watch?v=mOgHC5G8LuI – Interactive Biology – Felul în care tecile de mielină grăbesc potențialul de acțiune
https://www.youtube.com/watch?v=_7_XH1CBzGw – Suzana Herculano-Houzel – Ce e așa special la creierul uman?
https://www.youtube.com/watch?v=zgBPwm7sZrA – Simon Whistler - De ce alcoolul nu omoară neuroni
https://www.youtube.com/watch?v=WUdAvmggg34 – Eric Kandel – Divizia pentru neurobiologie și comportament
https://www.youtube.com/watch?v=RuTPEqm1xfc – Bloomberg – Un microscop de dimensiuni reduse care-ți
poate citi mintea în timp real

Sinapsele:
https://www.youtube.com/watch?v=VitFvNvRIIY – Crash Course – Sinapsa
https://www.youtube.com/watch?v=fdhzNYJH7eA – Neuro Transmission – Ce e o sinapsă?
https://www.youtube.com/watch?v=0UbLfCNJb_0 – BrainFacts.Org – Felul în care experiența îți modifică creierul

https://www.youtube.com/watch?v=uVQXZudZd5s – Khan Academy - Plasticitate neuronală și excitare pe termen lung (LTP) (Donald Hebb. Sinapse hebbiene.)
https://www.youtube.com/watch?v=SIp_CTEfiR4 – FromBrainToSoma - Postulatele neurale ale lui Hebb

https://www.youtube.com/watch?v=q2LVaPhM0ec – Proiectul Blue Brain

https://www.youtube.com/watch?v=04Lick3RIwU – Dennis Charney – Neuroplasticitate și rezistența creierului

https://www.youtube.com/watch?v=Oin_hCC2WrE – Cristian Axenie - Neurotehnologii: de la neuroni la inteligența artificială

https://www.youtube.com/watch?v=gcK_5x2KsLA – Oolution Technologies - Rețele Neurale. O explicație simplă.

https://www.youtube.com/watch?v=0qVOUD76JOg - Mike Tyka – Arta rețelelor neurale.

https://www.youtube.com/watch?v=A9a2P30sDl4 – I-Han Chou – Optogenetică și îmbunătățirea funcțiilor creierului
https://www.youtube.com/watch?v=Nb07TLkJ3Ww – Edward Boyden – Optogenetică explicată

https://www.youtube.com/watch?v=yKPTuCoop8c - Giacomo Rizzolatti - Neuronii oglindă: de la maimuțe la oameni

Neurotransmițătorii:
https://www.youtube.com/watch?v=hGDvvUNU-cw – Brainfacts.Org – Cum comunică neuronii
https://www.youtube.com/watch?v=tPqI6ZgJgjY – BrainFacts.org – Cum procesează creierul tău informația
https://www.youtube.com/watch?v=W4N-7AlzK7s&t=3s – Crash Course – Mintea chimică
https://www.youtube.com/watch?v=RRfH4ixgJwg - Hoogenraad Lab – Mecanismele de bază ale neurotransmiterii

https://www.youtube.com/watch?v=WhowH0kb7n0 – 2-Minute Neuroscience: Sinapsele
https://www.youtube.com/watch?v=6uMcdpiV094 - Alpha Media – Creierul și neurotransmițătorii
https://www.youtube.com/watch?v=NXOXZ-kaSVI - 2-Minute Neuroscience: Receptorii

https://www.youtube.com/watch?v=FXYX_ksRwIk - Khan Academy - Tipuri de neurotransmițători
https://www.youtube.com/watch?v=QL51iPCovXo - 2-Minute Neuroscience: Dopamină
https://www.youtube.com/watch?v=bX0_AB9Sqt8 - 2-Minute Neuroscience: Serotonină

https://www.youtube.com/watch?v=bQIU2KDtHTI - 2-Minute Neuroscience: GABA


https://www.youtube.com/watch?v=1eaIcBmZRws - 2-Minute Neuroscience: Glutamat
https://www.youtube.com/watch?v=tLc9fQd58bg - 2-Minute Neuroscience: Oxitocină
https://www.youtube.com/watch?v=mVJjWYXS4JM – YJ Lin – AP Psychology – Influența drogurilor asupra
neurotransmițătorilor
https://www.youtube.com/watch?v=J8w_0sZ97Bc – Nadine Kabbani - De la sfinți la sociopați: dopamină și
decizii
https://www.youtube.com/watch?v=PkIc3QyzB2Q – Terry Jones – PET (Tomografie cu emisie de pozitron)
https://www.youtube.com/watch?v=08etW9Alxug – Terry Jones – PET și povara patologiilor psihiatrice
https://www.youtube.com/watch?v=mfmhFPhbkMM – Terry Jones – PET și sistemul dopaminergic de
recompensă

https://www.youtube.com/watch?v=ny39nuo_-Bs – Terry Jones – PET și cercetare serotoninergică

Celulele gliale:

https://www.youtube.com/watch?v=AwES6R1_9PM – 2 Minute Neuroscience – Celulele gliale

https://www.youtube.com/watch?v=L2t7eTYWnl0 – Neurotransmission – Ce sunt celulele microgliale?

https://www.youtube.com/watch?v=64MgiEDWyRg – AnatomyZone – Celulele gliale


https://www.youtube.com/watch?v=EsW4olW14iQ – Gary Simonds – Tumori din celule gliale

https://www.youtube.com/watch?v=Utaeaz-tD5s – Neuro Transmission – Ce sunt astrocitele?


https://www.youtube.com/watch?v=QWXvgM-V_2g – Institutul Salk – Astrocitele ajută creierul la memorie
https://www.youtube.com/watch?v=HOA-QW9xTa8 – Neuro Transmission – Ce sunt oligodendrocitele?

https://www.youtube.com/watch?v=4jLyzRKAs1k – Cell Press – Astrocitele – Piesa lipsă din puzzle-ul schizofreniei? (Abstract video pentru Papouin et al. - Septal Cholinergic Neuromodulation Tunes the Astrocyte-Dependent Gating of Hippocampal NMDA Receptors to Wakefulness)
localizaționism vs. holism vs. dualism vs. materialism
echipotențialism monism vs. idealism

glanda pineală. CUI PRODEST? Cine are câtă


MITURI putere politică în ce epocă și cum
frenologie
decide “adevărul științific”?

de la neurologie la neurobullshit (conflicte de interese și (in)accesibilitatea creierului –


neuropsihologie la referințe gratuite la neuroimagistică cutia craniană, bariera
psihologie cognitivă la irelevantă afirmațiilor făcute) hematoencefalică, LCR, religie,
neuroștiință cognitivă politică, epistemologie, finanțe

neuroanatomie organic vs. funcțional devine organic vs. psihanaliza ca dominantă a


localizaționistă vs. “psihologic” în absența accesibilității psihologiei clinice, abandonarea
neurofiziologie holistă neurofiziologiei (dualism de facto) neuroștiinței în psihiatrie

anatomo-clinic microscopie EEG PET CT RMN RMNf MEG tM/DCs

publish or perish conflicte de interese nedeclarate (financiare + faimă academică) neofilia jurnalelor

biasul împotriva infirmării ipotezelor industria profitabilă a formărilor în psihoterapie și psihologie clinică

bias-ul acordării granturilor de cercetare preferențial studiilor neofile care fac afirmații implauzibil de generale

publication bias p-hacking HARK-ing putere statistică obsesia ritualică pentru NHST

grade de libertate l-hacking (cherry picking prin citarea selectivă a literaturii care susține ipoteza)

confuzia corelație cauzalitate afirmații făcute direct în presă, nepublicate în jurnale

confirmation bias vs. falsificabilitate neurorealism naiv inferențe inverse absurde

corectarea pentru comparații multiple analiză circulară (voodoo correlations)


neuroni uni/bi/multipolari sinapse chimice neurotransmițători oxitocina, hormonul fericirii

pre/postsinaptici electrice agoniști antagoniști dopamina, molecula plăcerii

senzoriali (aferenți – bottom-up) excitatoare/inhibitoare inaccesibilitatea neurochimiei umane


motori (eferenți – top-down)
lichid cefalo-rahidian recaptare (reabsorbție)
neuronii oglindă – umanitate și autism
anti/psihotice/depresive/convulsivante somnifere
neurogeneză? celule gliale receptori
anxiolitice timostabilizatoare stimulante
Bibliografie 1
David J. Linden – Mintea ca întâmplare. Cum ne-a oferit evoluția creierului
iubirea, memoria, visele și pe Dumnezeu, p.38-43; 53-54;
Construind un creier cu componentele de ieri
Este deja un clişeu să fim uluiți de complexitatea microscopică a
creierului uman. Orice om de ştiinţă care vorbeşte despre acest subiect
aude inevitabil fantoma blândă, de unchi cumsecade, a lui Carl Sagan
şoptind: „Mi-li-aaarde şi mi-li-aaarde de mici celule cerebrale". Ei bine,
chiar este impresionant. Sunt o groază de celule acolo. Cele două
tipuri principale de celule ale creierului sunt: neuronii, responsabili
pentru semnalizarea electrică și/sau chimică rapidă (ocupația principală
a creierului) şi celulele gliale, importante pentru funcţiile de
mentenanţă şi care creează un mediu optim pentru neuroni (şi care, de
asemenea, participă direct în unele forme de semnalizare electrică).
Faimoasele cifre sunt: aproximativ 86 de miliarde (86.000.000.000)
de neuroni în creierul adultului uman şi aproximativ un trilion
(1.000.000.000.000) de celule gliale. Ca să ai o idee, dacă ai dori să-ţi
donezi neuronii tuturor oamenilor de pe planetă, fiecare ar primi cam
11. Neuronii nu sunt o dezvoltare recentă a evoluţiei. Sunt moi şi
aşadar nu s-au păstrat în fosile, deci nu ştim exact când au apărut primii
neuroni. Dar ştim că meduza modernă, viermii şi melcii au neuroni.
Alte animale moderne, cum ar fi bureţii de mare, nu au. Aşadar, cea mai
bună ipoteză a noastră este că au apărut cam în perioada când meduza şi rudele ei, un grup de animale numite
Cnidaria, au apărut, după dovezile fosile în era pre-cambriană, cam acum 600 de milioane de ani. Incredibil, dar cu
câteva excepţii, neuronii şi celulele gliale ale viermelui nu sunt diferite substanţial de cele din creierul nostru.
În acest capitol, sper să-ţi arăt că celulele creierului nostru au un design străvechi ce le face lente şi nedemne
de încredere şi care le limitează capacitatea de semnalizare. Neuronii prezintă o varietate de forme şi mărimi,
dar au anumite structuri în comun.
La fel ca toate celulele, neuronii sunt delimitaţi de un înveliș, membrana exterioară (numită de asemenea
„membrană plasmatică”). Toţi neuronii au un corp celular, ce conţine nucleul, depozitul instrucţiunilor genetice
codate în ADN. Corpul celular poate fi rotund, triunghiular sau fusiform şi poate varia de la 4 la 100 de microni (de
obicei, are 20 de microni). O modalitate probabil mai utilă de a gândi acest lucru este că cinci dintre aceste corpuri
celulare neuronale tipice pot fi plasate unul lângă altul în grosimea unui fir de păr uman tipic. Astfel, membranele
exterioare ale neuronilor şi celulelor gliale sunt incredibil de înghesuite, cu un spaţiu foarte mic între ele. Dendritele
(termen derivat din cuvântul grecesc pentru „copac") ies din corpul celular sub forma unor ramuri mari, îngustate
spre vârf, ce primesc semnale chimice de la neuronii vecini. Voi discuta în curând cum se întâmplă acest lucru.
Dendritele pot fi scurte sau lungi, fusiforme sau stufoase ori, rareori, complet absente. O privire la microscop ne
arată că unele sunt netede în timp ce altele sunt acoperite de formaţiuni micuţe numite spini dendritici. Neuronii
tipici au mai multe ramuri dendritice, dar au de asemenea o singură proeminenţă lungă şi subţire ce creşte din corpul
celular. Acesta este axonul şi este partea neuronală de transmitere a informaţiilor. Axonul, de obicei mai subţire
decât dendritele, nu se îngustează pe măsură ce se extinde din corpul celular. Din corpul celular creşte un singur
axon, dar deseori el se ramifică ulterior, uneori spre destinaţii foarte diferite. Axonii pot fi remarcabil de lungi:
unii se întind pe toată distanţa de la baza măduvei spinării spre degetele de la picioare (ceea ce face ca axonul cel
mai lung să aibă cam 1 m, pentru omul obişnuit, şi până la 3,65 m în cazul unei girafe).
Figura 2.2 Arhitectura sinaptică

În joncțiuni specializate, denumite sinapse, informația trece de la axonul unui neuron la dendrita (sau corpul
celular) neuronului următor (figura 2.2). În sinapse, capetele axonilor (numite terminații axonale) aproape ating
(fără a atinge propriu-zis) următorul neuron. Terminațiile axonale conțin multe vezicule sinaptice, mici mingiuțe
învelite într-o membrană.

Cel mai comun tip de veziculă sinaptică din creier este încărcat cu aproximativ 2000 de molecule dintr-un
compus special numit neurotransmițător. Între terminația axonală a unui neuron şi dendrita următorului neuron se
află un spațiu minuscul umplut cu apă sărată, numit fanta sinaptică. Prin minuscul vreau să zic extrem de
minuscul: cam 5000 de fante sinaptice ar încăpea în grosimea unui singur fir de păr uman. Fanta sinaptică este
locul unde veziculele sinaptice eliberează neurotransmițătorii pentru a semnaliza următorului neuron din lanț.
Sinapsele sunt cruciale pentru povestea noastră. Ele vor apărea în mod repetat pe măsură ce voi discuta tot — de
la memorie şi emoție la somn. Ar trebui aşadar să zăbovim acum un pic asupra lor. În primul rând, numărul
sinapselor din creier este uluitor. În medie, fiecare neuron primeşte 5000 de sinapse, adică locațiile unde terminațiile
axonilor altor neuroni realizează contactul (intervalul este de la 0 la 200.000 de sinapse). Cele mai multe sinapse
contactează dendritele, alte sinapse corpul celular, iar cele mai puține axonul. Multiplicând 5000 de sinapse per
neuron cu 86 de miliarde de neuroni, obținem o estimare a extraordinarului număr de sinapse din creier: aproximativ
400 de trilioane (400.000.000.000.000). Sinapsele sunt punctele-cheie de comutare între cele două forme de
semnalizare rapidă din creier: impulsurile chimice şi electrice. Semnalizarea electrică utilizează un impuls scurt şi
rapid, numit „potenţial de acţiune", ca unitate fundamentală de informaţie. Potenţialele de acţiune sunt semnale
electrice scurte care îşi au originea în baza axonică, locul unde se uneşte corpul celular cu axonul. Când potenţialul
de acţiune, după ce a traversat axonul, ajunge la terminaţiile axonale, declanşează o serie de reacţii chimice
ce determină o schimbare structurală dramatică (vezi figura 2.3). Veziculele sinaptice fuzionează cu membrana
exterioară a terminaţiei axonale, descărcându-şi conţinutul (moleculele neurotransmiţătorilor) în fanta sinaptică.
Aceste molecule neurotransmiţătoare străbat apoi fanta sinaptică, unde intră în contact cu proteine specializate,
numite receptori neurotransmiţători, inserate în membrana dendritei neuronului vecin. Receptorii convertesc din
nou semnalul chimic al neurotransmiţătorului în semnal electric. Semnalele electrice din receptorii activaţi pe
toată suprafaţa dendritei sunt canalizate spre corpul celular. Dacă ajung suficiente semnale electrice concomitente,
se declanşează un nou potenţial de acţiune şi acesta va fi transmis mai departe de-a lungul lanţului neuronilor.

Figura 2.3 Sinapsele, locurile-cheie din creier pentru convertirea semnalelor electrice în semnale chimice și apoi din
nou în semnale electrice
Este tentant să spunem că axonul este ca un fir electric izolat de cupru. Dar acest lucru ascunde una din
ineficiențele fundamentale ale neuronilor. Firul de cupru nu are nevoie să facă nimic ca să menţină
deplasarea semnalelor electrice: este total pasiv, este un bun conductor şi este bine izolat împotriva pierderii
exterioare de sarcină electrică. Drept consecință, semnalele electrice din firele de cupru se deplasează
aproape de viteza luminii, cu aproximativ 1.076.651.136 km/oră. De cealaltă parte, axonul utilizează mecanisme
moleculare cu părți mobile (canale ionice sensibile la voltaj ce se deschid şi se închid rapid) pentru a menține
impulsul pe măsură ce acesta se deplasează. Prin comparație, axonul este un conductor destul de slab.
Soluția salină din interiorul
axonului nu este nici pe departe un conductor la fel de bun cum este cuprul. Mai mult, membrana exterioară a
axonului este o izolaţie cam fisurată. Conducerea semnalelor electrice de-a lungul axonului este probabil cel mai
bine ilustrată printr-o analogie hidraulică. Firul de cupru izolat este ca o ţeavă de oţel (nu are scurgeri), cu un
diametru de 3 m (un flux central mare), în timp ce axonul este ca un furtun de grădină, cu diametrul de 25,4 mm (un
flux central mic), ce a fost străpuns de găuri micuţe de-a lungul lui (are scurgeri) pentru a-ţi permite să uzi un răsad
de flori. Această combinaţie de flux central mic şi scurgeri face ca apa să curgă încet prin furtunul de grădină.

În mod asemănător, fluxul curentului electric prin axon este limitat de asemenea de fluxul central mic şi scurgeri.
Drept consecinţă, semnalele electrice străbat axonul de obicei lent, cam cu 160 km/oră. Este, totuşi, un
interval destul de mare, unde axonii cei mai subţiri, neizolaţi, se târăsc cam cu 1,6 km/oră şi cei mai rapizi (axoni
groşi ori cei bine izolaţi de celulele gliale vecine) merg cu aproape 643 km/oră. Cu toate acestea, chiar şi cei mai
rapizi axoni, precum cei implicaţi în retragerea reflexă a degetului de pe o plită fierbinte, conduc semnalele
electrice cu mai puţin de o milionime din viteza firelor de cupru. O altă diferenţă între neuronii noştri şi aparatele
concepute de oameni, cum ar fi computerele (cu care neuronii sunt deseori comparaţi) implică intervalul temporal
al semnalelor lor. Tiparul declanşării impulsurilor este principalul fel în care neuronii codează şi transmit informaţia,
deci limitele temporale ale declanşării impulsurilor au o importanţă specială. Procesorul unui computer desktop
(circa 2006) poate performa 10 miliarde de operaţii pe secundă, dar un neuron tipic din creierul uman este
limitat la 400 de impulsuri pe secundă (deşi unii neuroni speciali, cum ar fi cei din sistemul auditiv ce codează
sunete de înaltă frecvenţă, pot genera până la 1200 de impulsuri pe secundă). În plus, cei mai mulţi neuroni nu
pot susţine asemenea rate înalte mult timp (mai mult de câteva secunde) înainte de a avea nevoie de repaus.

Cu asemenea limitări de viteză şi timp, pare uluitor că, într-adevăr, creierul poate face ceea ce face.

Bibliografie 2
Edward E. Smith; Susan Nolen-
Hoeksema; Barbara L. Fredrickson;
Geoffrey R. Loftus – Introducere în
psihologie, ediția a XIV-a, p.45-53;
233-234;
Capitolul 2 - Bazele biologice ale psihologiei

Neuronii – pietrele de temelie ale sistemului nervos


Unitatea de bază a sistemului nervos este neuronul,
o celulă specializată care transmite impulsuri
neuronale sau mesaje altor neuroni, glande sau
mușchi. Neuronii dețin secretul funcționării creierului
și sunt responsabili pentru existența conștiinței. Știm
ce rol joacă neuronii în transmiterea impulsurilor
nervoase și știm și cum funcționează unele circuite
neuronale, dar abia începem să descoperim rolul lor
complex în cadrul memoriei, emoției și gândirii.
Diferitele tipuri de neuroni din sistemul nervos
păstrează nişte caracteristici comune, în ciuda
diferenţelor uriaşe de mărime sau aspect (fig. 2.1). Din corpul celular sau soma pleacă un număr de ramuri scurte
numite dendrite (din cuvântul grec „dendron”, care înseamnă „copac"), care primesc impulsuri neuronale de la
neuronii adiacenţi. Axonul este un tub subţire care pleacă din soma şi transmite mesaje altor neuroni (muşchilor sau
glandelor). La capăt, axonul se ramifică într-un număr de mici ramuri care se termină prin nişte mici
umflături numite terminale sinaptice. Butonul terminal nu atinge, de fapt, neuronii adiacenţi. Există un mic spaţiu
între butonul terminal şi corpul celulei sau dendritele neuronului receptor. Această joncţiune se numeşte sinapsă, iar
distanţa în sine se numeşte fantă sinaptică. Când un impuls neuronal coboară prin axon şi ajunge la butonii
terminali, acesta declanşează secreţia unui neurotransmiţător, substanță chimică eliberată în fanta sinaptică cu
rolul de a stimula neuronul învecinat, transmiţând astfel impulsul de la un neuron la următorul. Axonii multor
neuroni formează sinapse cu dendritele și corpul celular al unui singur neuron (fig. 2.2). Deşi toţi neuronii au aceste
caracteristici generale, ei diferă foarte mult ca mărime și formă (fig. 2.3). Un neuron din măduva spinării poate avea
un axon de 1-1,3 metri plecând din măduva spinării şi ajungând până la muşchii degetului mare de la picior de
exemplu; un neuron din creier poate ocupa numai câteva miimi de centimetru. Neuronii sunt împărţiţi în trei
categorii după funcţia lor generală:

• neuroni senzitivi,
• neuroni motori şi
• neuroni de asociaţie sau interneuroni.
Neuronii senzitivi transmit impulsurile primite de receptori la sistemul nervos central. Receptorii sunt celule
specializate din organele de simț, muşchi, piele şi încheieturi, care detectează schimbările fizice sau chimice şi
traduc aceste evenimente în impulsuri care circulă prin neuronii senzitivi.
Neuronii motori transportă semnalele eferente de la creier sau măduva spinării la muşchi şi glande.

Neuronii de asociație sau intercalari primesc semnalele de la neuronii senzitivi şi trimit impulsurile altor
neuroni interni sau neuronilor motori. Neuronii intercalari există numai în creier, ochi şi măduva spinării. Un
nerv este un fascicul de axoni lungi care aparțin mai multor sute de neuroni. Un singur nerv poate conține atât
axoni ai neuronilor senzitivi, cât şi axoni ai neuronilor motori. Corpurile celulare ale neuronilor sunt, în general,
grupate impreună în tot sistemul nervos în creier și măduva spinării. Un grup format din mai multe corpuri
celulare neuronale se numeşte nucleu. Un grup de corpuri neuronale situat în afara creierului sau a măduvei
spinării se numeşte ganglion. În afară de neuroni, sistemul nervos are un mare număr de celule non-
neuronale, numite celule gliale, care sunt imprăştiate printre — şi deseori înconjoară — neuronii. Celulele
gliale sunt de nouă ori mai numeroase decât neuronii şi ocupă mai mult de jumătate din volumul creierului.
Numele de glia vine de la cuvântul grecesc „glue", care sugerează una dintre funcțiile lor — mai precis aceea de a
menține neuronii la locul lor. În plus, ei oferă neuronilor nutrienți şi par să se ocupe de toată „bucătăria" creierului,
strângând şi eliminând produşii reziduali şi fagocitând neuronii morți şi substanțele străine, întreținând astfel
capacitatea de semnalizare a neuronilor (Haydon, 2001). În acest fel, celulele gliale au grijă de neuroni aşa cum
antrenorul unei echipe de fotbal are grijă ca jucătorii săi să fie în cea mai bună formă fizică în timpul meciului.
Potențiale de acțiune
Informația circulă pe suprafața neuronului sub forma unui impuls neuronal numit potențial de acțiune — un
impuls electrochimic care pleacă din corpul celulei şi coboară până la terminațiile axonului. Fiecare potențial de
acțiune este rezultatul intrării şi ieşirii din celula nervoasă a unor molecule încărcate cu o sarcină electrică, cunoscute
sub numele de ioni. Pentru a înțelege cum apar potențialele de acțiune trebuie să realizăm faptul că neuronii sunt
de obicei foarte selectivi în privința ionilor care intră sau ies din celulă. Membrana celulară a neuronului este
semipermeabilă, ceea ce înseamnă că unii ioni pot trece cu uşurință prin membrana celulară în timp ce alții nu pot
trece decât când se deschid anumite căi speciale de acces în membrană. Aceste căi de acces, numite canale ionice,
sunt nişte molecule proteice de forma unor gogoşi, care formează nişte pori pe toată membrana celulei (fig. 2.4).
Aceste proteine reglează intrarea şi ieşirea ionilor — ca cei de sodiu (Na+), potasiu (K+), calciu (Ca++) şi clor (Cl-) — în
şi din neuron.
Figura 2.1 Diagramă schematică a neuronului. Săgețile indică direcția impulsurilor nervoase. Unii axoni sunt ramificați, iar ramurile se numesc "colaterale". Axonii multor neuroni sunt acoperiți cu o teacă izolantă de mielină, care ajută la creșterea vitezei de propagare a impulsului nervos.

Figura 2.2 Diferite forme și mărimi relative ale neuronilor. Axonul unui neuron din măduva spinării, care nu se vede în întregime în figură, poate atinge peste un metru.
Fiecare canal ionic este selectiv, permiţând unui singur tip de ion să treacă prin el când este deschis. Când un
neuron nu generează un potenţial de acţiune se spune despre el că este în stare de repaus. În stare de repaus,
membrana celulară nu este permeabilă pentru ionii de Na+ şi aceştia se găsesc în concentraţii mari în afara
neuronilor. În schimb, membrana este permeabilă faţă de ionii de K+, care tind să se concentreze în interiorul
neuronului. Într-un neuron în repaus, nişte structuri proteice separate, numite pompe ionice, ajută la menţinerea
acestei distribuţii inegale a ionilor între interiorul şi exteriorul celulei, pompându-i înăuntru sau în afara celulei. De
exemplu, pompele ionice transportă Na+ în afara neuronului, ori de câte ori acesta pătrunde în celulă, şi transportă
K+ înapoi în neuron, ori de câte ori acesta iese din el. În acest fel, neuronul aflat în repaus îşi menține o
concentrație înaltă de ioni de Na+ în exterior şi o concentrație redusă în interior.
Figura 2.3 Sinapse cu corpul celular al unui neuron. Mulți axoni diferiți, fiecare cu mai multe ramificații, fac sinapsă cu dendritele și corpul celular al unui singur neuron. Fiecare ramificație axonală se termină printr-o umflătură numită "buton terminal sinaptic", care conține neurotransmițători. Când este eliberat, neurotransmițătorul transmite impulsul nervos prin fanta sinaptică la dendritele sau corpul celular al celulei receptoare.
Efectul general al acestor canale şi pompe ionice este polarizarea membranei neuronului aflat în repaus, creând o
diferenţă de potenţial între interior şi exterior — în interior potenţialul celulei este mai negativ decât în exterior.
Potenţialul electric al unui neuron în stare de repaus este numit potenţialul de repaus al membranei şi
merge de la -50 la -100 milivolţi. Potenţialul de repaus al membranei unui neuron este similar cu încărcătura
electrică a unei baterii; atât neuronii, cât şi bateriile folosesc gradienți electrici pentru a depozita energia. Energia
bateriei este folosită pentru a furniza curent dispozitivelor electronice (ca telefoanele mobile), în timp ce energia
unui neuron este folosită pentru a transmite potenţiale de acţiune.
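O schiță minimală, pe baza ecuației Nernst (care nu apare în text; concentrațiile ionice folosite sunt valori ipotetice, tipice pentru un neuron de mamifer), arată de ce distribuția inegală a ionilor descrisă mai sus produce un potențial de repaus negativ, de ordinul zecilor de milivolți.

```python
# Schiță ilustrativă: potențialul de echilibru al unui ion, după ecuația Nernst.
# Ecuația și concentrațiile (în mM) nu provin din text; sunt valori tipice, ipotetice.
import math

def potential_nernst(c_exterior_mM, c_interior_mM, z=1, temperatura_K=310.0):
    """Potențialul de echilibru (în mV): E = (R*T)/(z*F) * ln([exterior]/[interior])."""
    R, F = 8.314, 96485.0
    return 1000.0 * (R * temperatura_K) / (z * F) * math.log(c_exterior_mM / c_interior_mM)

print(round(potential_nernst(5, 140)))    # K+:  ~ -89 mV (potasiul este concentrat în interior)
print(round(potential_nernst(145, 12)))   # Na+: ~ +67 mV (sodiul este concentrat în exterior)
```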
Când un neuron este stimulat de un curent de excitaţie, diferenţa de potenţial între interiorul şi exteriorul celulei se
reduce. Acest proces este numit depolarizare şi este efectul acţiunii neurotransmiţătorilor asupra receptorilor
dendritici. Dacă depolarizarea este suficient de mare, canalele ionilor de Na+ sensibile la voltaj aflate la suprafaţa
axonului se deschid pentru un interval foarte scurt şi ionii de Na+ intră în celulă. Sodiul inundă celula pentru că
sarcinile electrice opuse se atrag; sodiul are o sarcină electrică pozitivă, în timp ce interiorul celulei este încărcat
negativ. Acum, interiorul acestei zone a axonului devine pozitiv în raport cu exteriorul. Canalele învecinate de Na+
simt căderea de tensiune și se deschid, ducând la depolarizarea zonei adiacente din axon. Acest proces de
depolarizare care se repetă, propagându-se în jos pe toată lungimea axonului, este un potențial de acțiune. În
timp ce potențialul de acțiune coboară de pe axon, canalele de Na+ se închid în urma sa și diferitele pompe ionice
intră în acțiune pentru a readuce membrana celulară la starea de repaus.
Importanța canalelor de Na+ este dovedită de efectul anestezicelor locale (ca novocaina și xilocaina) care sunt
folosite în mod curent pentru a elimina sensibilitatea la nivelul gurii în timpul intervențiilor dentare. Aceşti agenţi
împiedică deschiderea canalelor de Na+, oprind astfel potenţialul de acţiune şi împiedicând semnalele de la
organele de simţ să ajungă la creier (Catterall, 2000).
Viteza potenţialului de acţiune care se propagă de-a lungul axonului poate merge de la 3 la 320 kilometri pe oră, în
funcţie de diametrul axonului; axonii mai mari au, în general, o propagare mai rapidă. Viteza poate fi afectată şi de
prezenţa sau absenţa tecii de mielină. Această teacă este formată din celule gliale specializate, care se înfăşoară în
jurul axonului, una lângă alta, lăsând mici spaţii între ele (vedeţi fig. 2.1) Aceste mici spaţii se numesc noduri
Ranvier. Izolarea asigurată de teaca de mielină permite o propagare saltatorie, în care impulsul nervos sare de
la un nod Ranvier la următorul. Acest lucru măreşte foarte mult viteza de propagare a potenţialelor de acţiune
spre capătul axonului. (Adjectivul saltatorie vine de la cuvântul latinesc saltare, care înseamnă „a sări". În
propagarea saltatorie, potenţialul de acţiune sare de la un nod Ranvier la următorul.) Teaca de mielină este prezentă
mai ales în locurile în care transmisia potenţialului de acţiune este critică (de exemplu, la axonii care stimulează
muşchii scheletici). În scleroza multiplă, o afecţiune în care primele simptome devin vizibile între 16 şi 30 de ani,
sistemul imun atacă şi distruge tecile de mielină ale corpului, producând severe disfuncţii nervilor motori.
Figura 2.4 Canale ionice. Substanțele chimice, cum ar fi sodiul, potasiul, calciul și clorul, traversează membrana celulară prin intermediul unor molecule proteice de forma unor gogoși, așa-numitele "canale ionice".
a) În timpul trecerii unui potențial de acțiune, porțile sodiului din membrana neuronală se deschid și ionii de sodiu pătrund în axon, aducând cu ei o sarcină electrică pozitivă.
b) După apariția unui potențial de acțiune într-un anumit punct pe axon, porțile sodiului se închid în punctul respectiv și se deschid în punctul următor de-a lungul axonului. Când porțile sodiului se închid, porțile potasiului se deschid și ionii de potasiu ies din axon, ducând cu ei o sarcină electrică pozitivă.
Un singur neuron generează un potenţial de acţiune, când excitaţia care ajunge la el prin multiple sinapse depăşeşte
un anumit prag. Neuronul generează un potenţial de acţiune într-o emisie unică şi rapidă, devenind apoi inactiv
timp de câteva miimi de secundă. Mărimea potenţialului de acţiune este constantă. Acesta nu poate fi declanşat
de un stimul decât dacă stimulul atinge un anumit prag minim. Astfel, în urma informaţiilor primite din sinapse,
neuronul emite sau nu un potenţial de acţiune. Dacă emite un potenţial de acţiune, acesta are întotdeauna aceeaşi
mărime. Această caracteristică a neuronului se numeşte principiul tot-sau-nimic.
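Principiul tot-sau-nimic și analogia cu semnalele binare din paragraful următor pot fi rezumate printr-o funcție simplă (o schiță ilustrativă, cu un prag și niște valori de intrare presupuse, nu un model fiziologic):

```python
# Schiță ilustrativă: neuronul însumează intrările sinaptice (pozitive = excitatorii,
# negative = inhibitorii) și emite un potențial de acțiune complet (1) doar dacă
# suma depășește pragul; altfel nu emite nimic (0). Valorile sunt presupuse.

def emite_potential(intrari_sinaptice: list[float], prag: float = 10.0) -> int:
    """Întoarce 1 (potențial complet) sau 0 (nimic), niciodată o valoare intermediară."""
    depolarizare_totala = sum(intrari_sinaptice)
    return 1 if depolarizare_totala >= prag else 0

if __name__ == "__main__":
    print(emite_potential([4.0, 3.5, 5.0, -1.0]))   # 11.5 >= 10  -> 1
    print(emite_potential([4.0, 3.5, -2.0]))        # 5.5  < 10   -> 0
```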
Vă puteţi imagina potenţialele neuronale de acţiune sub forma unor semnale binare (0 şi 1) pe care calculatoarele
le folosesc pentru a primi informaţii în cadrul unor programe. Neuronul emite întregul potenţial de acţiune (1) sau
nu emite nimic (0). Odată iniţiat, potenţialul de acţiune se propagă de-a lungul axonului până la terminaţiile
axonale. Aşa cum am spus mai devreme, neuronii nu realizează un contact direct în sinapse şi semnalul trebuie
să treacă printr-o mică fantă (fig. 2.6). Când un potenţial de acţiune coboară de-a lungul axonului şi ajunge la
terminaţiile sinaptice, acesta stimulează veziculele sinaptice din butonii terminali. Veziculele sinaptice sunt mici
structuri sferice care conţin neurotransmiţători. Când sunt stimulate, eliberează neurotransmiţătorii în sinapsă.
Neurotransmiţătorii difuzează din neuronul activ sau presinaptic în fanta sinaptică, pentru a se cupla cu receptorii,
care sunt proteine localizate în membrana dendritică a neuronului receptor sau postsinaptic.
Figura 2.6. Eliberarea neurotransmițătorilor în fanta sinaptică. Neurotransmițătorul este transportat către membrana presinaptică în veziculele sinaptice, care fuzionează cu membrana, eliberând conținutul lor în fanta sinaptică. Neurotransmițătorii difuzează în fantă și se combină cu moleculele receptoare din membrana postsinaptică.
Neurotransmiţătorii şi receptorii se potrivesc ca două piese de puzzle sau ca o cheie în broască. Această reacţie cheie-în-broască declanşează o schimbare directă în permeabilitatea canalelor ionice în neuronul receptor.
Unii neurotransmiţători au un efect excitator, care permite ionilor cu sarcini pozitive, ca Na+, să intre în celulă, depolarizând neuronul receptor şi creând în interiorul celulei o sarcină electrică pozitivă mai mare decât cea din exterior.
Alţi neurotransmiţători sunt inhibitori. Ei creează în interiorul neuronului receptor o sarcină electrică negativă mai mare decât cea din afara celulei (adică hiperpolarizează membrana celulei), fie permiţând ionilor cu sarcină pozitivă, ca de exemplu K+, să părăsească neuronul, fie lăsând ionii cu sarcină negativă, ca ionii de Cl–, să pătrundă în celulă. Pe scurt, efectul excitator creşte probabilitatea ca celula să genereze un potenţial de acţiune, iar efectul inhibitor descreşte probabilitatea apariţiei unui potenţial de acţiune.
Orice neuron poate primi neurotransmiţători de la cele câteva mii de sinapse pe care le are cu alţi neuroni. Unii dintre aceştia eliberează neurotransmiţători excitatori, iar alţii neurotransmiţători inhibitori. În funcţie de sincronizarea perioadelor de excitaţie şi a perioadelor refractare, axonii eliberează neurotransmiţătorii în momente diferite. Dacă, într-un anumit moment şi într-un anumit loc de pe membrana celulară, efectele excitatorii asupra neuronului receptor sunt mai mari decât efectele inhibitorii, se produce depolarizarea şi neuronul emite un potenţial de acţiune tot-sau-nimic. Dacă un neurotransmiţător a fost eliberat şi difuzează în fanta sinaptică, acţiunea sa trebuie să fie foarte rapidă, pentru a asigura menţinerea controlului.
Pentru unii neurotransmiţători, sinapsa este aproape imediat degajată prin reabsorbţie (recaptare), adică prin captarea neurotransmiţătorului de către terminaţiile sinaptice din care a fost eliberat. Reabsorbţia (recaptarea) opreşte acţiunea neurotransmiţătorului şi eliberează terminaţiile axonale de sarcina de a fabrica din nou substanţa respectivă. Pentru alţi neurotransmiţători, efectul este neutralizat prin degradare: reacţia enzimelor prezente în fanta sinaptică cu neurotransmiţătorul îl descompune chimic şi îl inactivează.
Figura 2.7 – O micrografie a unui neuron plin de sinapse.
Sumarul secțiunii
1. Neuronul este unitatea de bază a sistemului nervos.
2. Neuronii primesc semnale chimice pe anumite ramificații numite dendrite și transmit potențiale electrochimice printr-o prelungire de forma unui tub, numită axon.
3. Neurotransmițătorii chimici sunt eliberați în sinapsă și transportă mesaje de la un neuron la altul. Neurotransmițătorii acționează legându-se de proteine receptoare.
4. Când un neuron este suficient de depolarizat, generează un potențial de acțiune de tipul tot-sau-nimic. Acest potențial de acțiune se propagă pe axon și declanșează descărcarea unui alt neurotransmițător la nivelul butonilor terminali.
Detectorii trăsăturilor din cortex
Majoritatea caracteristicilor cunoscute sub numele de trăsături primitive ale percepţiei unui obiect vin din
cercetările realizate de biologi pe alte specii (de exemplu, pisici şi maimuţe), în care s-au folosit înregistrările
reacţiilor unei singure celule din cortexul vizual. Aceste cercetări analizează sensibilitatea anumitor neuroni
corticali la prezenţa anumitor stimuli în regiunile retinei asociate cu aceşti neuroni. O astfel de regiune retiniană
este numită câmp receptor. Aceste cercetări pe celule unice au fost iniţiate de David Hubel şi Torsten Wiesel
(1968), care au câştigat premiul Nobel, în 1981, pentru descoperirile lor despre activarea celulelor. Hubel şi Wiesel
au identificat trei tipuri de celule în cortexul vizual, care diferă în funcţie de trăsăturile la care se activează.
1. Celulele simple reacţionează când ochiul este expus la un stimul de forma unei linii (o dungă subţire sau o margine dreaptă între o regiune luminată şi una întunecată), cu o anumită orientare şi localizare în cadrul câmpului receptor. Reacţia cea mai amplă ca intensitate este obţinută pentru o bară verticală şi descreşte pe măsură ce orientarea se îndepărtează tot mai mult de poziţia optimă. Alte celule simple sunt adaptate la alte poziţii şi orientări.
2. O celulă complexă reacţionează la o linie sau la o margine cu o anumită orientare, dar nu necesită prezenţa stimulului într-un anumit loc din câmpul ei receptor. Ea reacţionează continuu în timp ce stimulul se mişcă în câmpul ei receptor.
3. Celulele hipercomplexe necesită nu numai o anumită orientare a stimulului, ci și o anumită mărime. Dacă un
stimul depăşeşte lungimea optimă, reacţia descreşte şi poate dispărea cu totul. De la cercetările iniţiale realizate
de Hubel şi Wiesel, oamenii de ştiinţă au descoperit că celulele reacţionează şi la alte caracteristici ale formei,
în afara marginilor sau a liniilor. De exemplu, există celule hipercomplexe care reacţionează la colţuri sau
unghiuri de o anumită lungime (DeValois şi DeValois, 1980; Shapley şi Lennie, 1985). Toate aceste celule se
numesc detectori de trăsături. Pentru că marginile, liniile, colţurile şi unghiurile la care reacţionează aceşti detectori
pot fi folosite pentru a aproxima multe forme, detectorii de trăsături pot fi consideraţi componentele de bază ale
percepţiei formei. După cum vom vedea mai târziu, această ipoteză pare, totuşi, mai veridică pentru formele
simple, ca literele, decât pentru cele complexe, ca mesele sau tigrii.
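Selectivitatea la orientare a unei celule simple este adesea aproximată, în manuale, printr-o curbă de acord în formă de clopot. Schiţa de mai jos este doar o astfel de idealizare (orientarea preferată, răspunsul maxim şi lăţimea curbei sunt parametri presupuşi, nu date Hubel şi Wiesel):

```python
import math

# Curbă de acord gaussiană (model idealizat, ilustrativ): rata de descărcare scade
# pe măsură ce orientarea stimulului se îndepărtează de orientarea preferată.
# Pentru simplitate ignorăm caracterul circular al orientării (0° ≡ 180°).

def raspuns_celula_simpla(orientare_stimul: float,
                          orientare_preferata: float = 90.0,  # bară verticală (presupus)
                          raspuns_maxim: float = 50.0,        # impulsuri/s (presupus)
                          latime: float = 20.0) -> float:     # lăţimea curbei, în grade
    delta = orientare_stimul - orientare_preferata
    return raspuns_maxim * math.exp(-(delta ** 2) / (2 * latime ** 2))

if __name__ == "__main__":
    for unghi in (90, 70, 45, 0):
        print(f"{unghi:>3} grade: {raspuns_celula_simpla(unghi):5.1f} impulsuri/s")
```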
Bibliografie 3
Charles Watson; Matthew Kirkcaldie; George Paxinos - The Brain. An
Introduction to Functional Neuroanatomy, p.2-5;
Neurons and their connections
Neurons receive stimuli on their sensitive filaments
called dendrites. The membrane covering each
dendrite has many tiny channels which control the flow
of positive and negative ions across the membrane.
Some of these ion channels are sensitive to chemical or
physical stimuli, and can cause changes in the electrical
charge on the membrane. If enough of these small
membrane voltage changes happen at the same time,
they trigger an action potential. When an action
potential is triggered, this sharp, clear signal is
transmitted along the axon. At its far end, the axon
breaks up into a number of terminal branches.
Specialized swellings on these branches are called axon
terminals or terminal boutons. An action potential
causes the boutons to release chemical signaling
compounds called neurotransmitters.
These neurotransmitters connect in a key-and-lock fashion with receptors on the target cell. Receptors are protein structures that are shaped to receive specific transmitters. The combination of the neurotransmitters with the receptors can open ion channels or trigger other changes. Neurotransmitter receptors are mostly found on the dendrites of other neurons, but they are also found on the surface of muscle cells, glial cells, or gland cells. The close physical pairing between an axon terminal and a concentration of receptors on another cell is called a synapse.
Synapses
Neurons are able to receive and integrate information from multiple stimuli, and can send messages to distant
regions of the nervous system. The activity of a single neuron in the central nervous system may be influenced by
tens of thousands of synaptic inputs. In the cerebellum up to a quarter of a million synapses are connected to a
single Purkinje cell. The axon that carries information away from the neuron may branch into hundreds or even
thousands of terminals. Conversely, there are neurons dedicated to a single input, like a photoreceptor in the eye.
Synapses are just one of the influences on neuronal firing. Other influences include the activity of nearby glia, the
composition of extra-cellular fluid, and the presence of circulating hormones. Direct electrical links also exist
between the membranes of some neurons. To further complicate the picture, many of these types of
communication can be two-way, so that an axon terminal that synapses on a dendrite might also be influenced by
feedback from the dendrite. Because receptors often have effects on internal biochemistry and gene expression,
the communication between cells in the nervous system is rich
and diverse, much more so than simplistic models of neurons
as integrators of their inputs would suggest.
Synapses can be changed. Even in the adult brain, synapses may be discarded and new ones formed according to need. This process of tuning and revising connections is termed plasticity.
Plasticity is the basis of developmental maturation, learning,
memory, recovery from injury, and indeed every functional
adaptation of the nervous system. All regions of the nervous
system are plastic to some extent, but the extent of plasticity
varies a great deal. Generally, the juvenile nervous system is
more plastic than the adult, and the cerebral cortex is more
functionally plastic than the brainstem and spinal cord. Most
communication between neurons occurs by the release of
chemical neurotransmitters from axon terminals and the
detection of these chemicals on the surface of cell membrane.
Synapses are the spaces in which this communication takes
place. They may be small and relatively open to the
extracellular space, or large and enclosed, but they share
common structural features.
The neuron or sensory receptor that secretes a
neurotransmitter is called the presynaptic cell, and the
corresponding receptors are in the membrane of the
postsynaptic cell. Synapses are activated by action
potentials that travel to the axon terminals. The action
potential causes calcium ions to enter the terminals, and this in
turn activates cellular machinery to deliver vesicles full of
neurotransmitter to the presynaptic membrane. The number
and size of vesicles delivered, as well as the way in which they
release their contents, vary greatly with cell type.
Neurotransmitter molecules are released from the presynaptic
membrane and diffuse rapidly across the nanometer-sized
fluid space of the synaptic cleft. The neurotransmitter binds to
and activates a receptor on the postsynaptic membrane.
Receptor binding is a dynamic process, with transmitters
occupying active sites for only a fraction of the time they are
present in the cleft. Their time for activation is limited by the
action of enzymes, which break down the transmitter. In other
situations the transmitters are pumped away by specialized
transport systems on glia or presynaptic cell membranes. This all happens within milliseconds. Vision, for example, is signaled by interruptions
in the steady stream of neurotransmitter flowing from the visual receptors. Retinal glia take up the
neurotransmitter so fast that light flickering tens of times per second is easily noticeable. Chemical
communication between neurons and their targets can also occur outside synapses. For example, in the
autonomic nervous system it is common for receptors to be activated by neurotransmitter leaking from nearby
axons, or by substances circulating in the blood.
Receptors
Neurotransmitter release can have two kinds of effects on the post-synaptic cell, depending on the number and
type of receptors in its membrane.
1. The first is ionotropic, in which a receptor allows ions to pass through the membrane when activated.
2. The second is metabotropic, in which a receptor triggers internal biochemical signaling when activated.
Most neurotransmitters can activate specific receptors of both kinds. The post-synaptic cell can change its response by altering the number and type of receptors on the post-synaptic membrane.
Ionotropic receptors that are linked to sodium channels will excite a neuron and make it more likely to fire, whereas
those linked to chloride channels tend to suppress firing by returning membrane potential to its resting state. The
resulting changes in membrane potential are called post-synaptic potentials. Post-synaptic potentials can be
either excitatory (EPSP) or inhibitory (IPSP), depending on whether they make the neuron more or less likely
to fire. Dozens of different synapses may be activated in close proximity to each other, opposing or reinforcing each
other's effects in a mix of competing influences. These effects spread passively across the membrane, but the
effects diminish with distance. Whether or not the charge on the cell membrane reaches the threshold and triggers
an action potential is determined by the mix of influences felt at the trigger zone, usually located where the axon
leaves the cell body at the axon hillock. This means that the influence of a synapse varies according to how close it
is to the axon hillock. For example, the chandelier cells of the cerebral cortex activate ionotropic chloride channels
located directly on the axon hillock, so their effect is to instantly cancel whatever activity might be pushing the
postsynaptic neuron towards triggering an action potential. This enables them to function as a silencer of axonal
outputs.
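As a rough illustration of this passive spread and distance-dependent attenuation, here is a toy sketch (illustrative only; the length constant, threshold, and input values are assumed, not taken from the text): post-synaptic potentials are attenuated exponentially with distance from the axon hillock and summed at the trigger zone.

```python
import math
from dataclasses import dataclass

# Toy model (assumed parameters): EPSPs/IPSPs spread passively, attenuate with
# distance from the axon hillock, and are summed at the trigger zone.

@dataclass
class SynapticInput:
    amplitude_mv: float   # positive = EPSP, negative = IPSP
    distance_um: float    # distance from the axon hillock, micrometres

def potential_at_hillock(inputs: list[SynapticInput],
                         resting_mv: float = -70.0,
                         length_constant_um: float = 200.0) -> float:
    """Resting potential plus passively attenuated post-synaptic potentials."""
    return resting_mv + sum(
        syn.amplitude_mv * math.exp(-syn.distance_um / length_constant_um)
        for syn in inputs
    )

def fires(inputs: list[SynapticInput], threshold_mv: float = -55.0) -> bool:
    """True if the mixed influence at the trigger zone crosses threshold."""
    return potential_at_hillock(inputs) >= threshold_mv

if __name__ == "__main__":
    mix = [SynapticInput(+8, 50), SynapticInput(+6, 300), SynapticInput(-5, 10)]
    print(round(potential_at_hillock(mix), 1), fires(mix))   # about -67.2, False
```

A chandelier-cell-like input placed at distance_um = 0 with a strong negative amplitude would dominate the sum, which is the intuition behind the "silencer" role described above.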
Metabotropic receptors activate internal signaling molecules to produce a wide variety of effects on
postsynaptic cells. These effects are typically longer-lasting than the brief gushes of ions through ionotropic
receptors. Some metabotropic receptors cause opening or closing of ion channels by means of internal messenger
molecules, while others change the expression of genes and the production of proteins. Another type of
metabotropic receptor releases calcium from intracellular stores, sparking oscillating cycles of enzyme activity or
causing changes to the structure of the synapse itself. It is not unusual for a synapse to have both ionotropic and
metabotropic receptors.
Gap junctions are small openings in the cell membrane that connect neurons together, allowing ions and
therefore electric currents to pass from one neuron to another with no physical barrier. Because of this, changes
in membrane potential in one cell can have an instant influence on connected cells. In some cases, the gap junction
system is combined with chemical synaptic communication.
Certain large networks of inhibitory neurons in the brain are gap-junction coupled, and also make inhibitory
synapses on each other. In this situation, an action potential in one neuron causes a brief simultaneous electrical
excitation in neurons linked to it by gap junctions, which is followed by a synaptic inhibition. The timing of these
two actions causes specific frequencies of activity to spread more easily between neurons, and the network may
end up firing in synchrony. This large-scale coordination is thought to be important in the activity of large areas of
cerebral cortex, and may even contribute to the neuronal basis of attention.
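A minimal sketch of the electrical-coupling idea (a toy two-cell model with assumed time constants and coupling strength, not a model of any specific circuit): gap-junction current pulls the membrane potentials of coupled cells toward each other, so a perturbation in one cell is immediately felt by the other.

```python
# Toy simulation (assumed parameters): two leaky cells coupled by a gap junction.
# Each step, every cell decays toward rest and is pulled toward its partner.

REST_MV = -70.0   # resting potential (assumed)
TAU = 10.0        # membrane time constant, ms (assumed)
G_GAP = 0.2       # gap-junction coupling strength, 1/ms (assumed)
DT = 0.1          # integration step, ms

def step(v1: float, v2: float) -> tuple[float, float]:
    dv1 = (-(v1 - REST_MV) / TAU + G_GAP * (v2 - v1)) * DT
    dv2 = (-(v2 - REST_MV) / TAU + G_GAP * (v1 - v2)) * DT
    return v1 + dv1, v2 + dv2

if __name__ == "__main__":
    v1, v2 = -50.0, -70.0           # cell 1 briefly depolarized, cell 2 at rest
    for _ in range(50):             # 5 ms of simulation
        v1, v2 = step(v1, v2)
    print(round(v1, 1), round(v2, 1))  # the two potentials converge toward each other
```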
Glia were long regarded as the simple glue filling in the space between neurons, but they are now recognized as
important functional partners to neurons. While neurons tend to communicate with fast, specific signals, glial
activity in the same region may take the form of gradual shifts in electrical excitability. Glial impulses spread
through the tissue at a slower rate and alter neuronal activity, neurotransmitter kinetics, metabolic activity, blood
flow, and the extracellular environment. Since glia and neurons are in contact across almost every part of their
membranes, they interact and influence each other in a tight, reciprocal relationship, which is not completely
understood.
Astrocytes
The principal glial type in the central nervous system is the astrocyte –a general-purpose cell whose functions
include buffering the cellular environment, tending synapses, metabolic regulation, signaling and communication,
and governing capillary flow and traffic between neural tissue and the bloodstream. Individual astrocytes fill non-
overlapping domains with fine, fluffy processes surrounding virtually every square micron of neuronal membranes
and capillary walls. A single astrocyte interacts with dozens of neurons, regulating their environment and
metabolism, and communicating back and forth to regulate synapses and neuron excitability. Astrocytes also
provide neurons with pre-processed food to meet their energy demands. This relationship, combined with their
ability to alter the size of brain capillaries, allows them to change blood flow according to neuronal energy use.
Apart from meeting metabolic demands efficiently, these changes are the basis of functional magnetic resonance
imaging (See Chapter 11).
Oligodendrocytes
These cells manufacture myelin, the multiple layers of fatty cell membrane that encircle many axons. The myelin
sheath is not continuous; there is a small gap between the sheath made by one oligodendrocyte and the next. This
gap is called a node. Myelin sheaths speed up signaling along the axon by allowing action potentials to jump from
one node to the next. Not a lot is known about the other functions of oligodendrocytes. However, they have
recently been shown to respond to synaptic events with post-synaptic potentials and action-potential-like spikes,
further blurring the line between the functions of neurons and glia.
Bibliografie 4
Leon Dănăilă; Mihai Golu – Tratat de neuropsihologie, volumul 1, p.37-46; 59-60;
63-64; 71-73;
STRUCTURA MICROSCOPICĂ ŞI BIOCHIMIA
SISTEMULUI NERVOS CENTRAL (SNC) AL
OMULUI
Introducere
Sistemul nervos este unic în ceea ce priveşte imensa gamă de
mecanisme de comandă-control de care dispune. El primeşte
milioane de biţi de informaţie de la diverse organe
senzoriale, pe care le prelucrează şi le integrează pentru a
determina răspunsul adecvat din partea organismului.
Structura generală a sistemului nervos include trei componente principale:
a) un compartiment senzorial (receptorii senzoriali),
b) un compartiment motor (efectorii) şi
c) un compartiment asociativ-integrativ.
Compartimentul senzorial ne arată că majoritatea
activităţilor sistemului nervos este iniţiată de experienţe
senzoriale provenite de la receptori (vizuali, auditivi, tactili,
etc). De la aceştia, informaţiile ajung la sistemul nervos
central prin nervii spinali sau cranieni şi sunt conduse către
multiple arii senzoriale primare din: măduva spinării, substanţa reticulată (bulbară, pontină, mezencefalică),
cerebel, talamus şi cortexul cerebral. Ulterior, semnalele sunt transmise zonelor de asociaţie ale sistemului
nervos, pentru a fi prelucrate, astfel încât răspunsul motor ce trebuie dat să fie cât mai adecvat. Experienţa
senzorială poate produce o reacţie imediată sau poate fi memorizată pentru un timp de ordinul minutelor,
săptămânilor sau chiar anilor putând apoi ajuta la condiţionarea reacţiilor organismului într-un moment viitor.
Totuşi, peste 99% din informaţiile senzoriale sunt eliminate de creier ca fiind nesemnificative sau
neimportante. După selectarea informaţiei senzoriale importante, aceasta este canalizată spre regiunile motorii
cerebrale corespunzătoare pentru a produce răspunsul dorit. Canalizarea informaţiei defineşte funcţia reglatoare a
sistemului nervos. Astfel, dacă o persoană pune mâna pe un obiect fierbinte, aceasta este retrasă imediat. Gestul
respectiv, împreună cu alte răspunsuri asociate (ţipătul de durere, lăcrimarea, îndepărtarea întregului corp de
obiectul respectiv), reprezintă punerea în funcţie numai a unei mici porţiuni din sistemul motor al organismului.
Compartimentul motor are un rol important în controlul diverselor activităţi ale organismului prin contracţia
muşchilor scheletici din întregul corp, prin contracţia muşchilor netezi din organele interne şi prin secreţia glandelor
endo şi exocrine. Muşchii şi glandele au primit denumirea de efectori, deoarece desfăşoară funcţii dictate de
semnalele nervoase de comandă. Axul motor al sistemului nervos pentru controlul muşchilor striaţi este format din
următoarele sectoare: măduva spinării, substanţa reticulată (bulbară, pontină, mezencefalică), ganglionii
bazali, cerebelul şi cortexul cerebral. Fiecare din sectoarele enumerate joacă rolul său specific în controlul
mişcărilor corpului, nivelurile nervoase inferioare fiind implicate în primul rând în răspunsul automat al organismului
la stimulii senzoriali. Nivelurile nervoase superioare sunt implicate în mişcările voluntare controlate de procesele de
gândire. Paralel cu acesta, acţionează un alt sistem motor care controlează musculatura netedă şi glandele, numit
sistem nervos autonom. Totuşi, numai o mică parte din informaţiile senzoriale importante provoacă un răspuns
motor imediat. Majoritatea informaţiilor sunt stocate pentru controlul ulterior al activităţilor motorii şi pentru
utilizarea sa în procesul de gândire. Informaţiile stocate în sistemul nervos participă la mecanismele de prelucrare
ulterioară, prin faptul că gândirea compară noile experienţe senzoriale cu cele memorate. Memoria ajută la
selectarea noilor informaţii senzoriale importante şi la canalizarea lor spre arii de stocare adecvate în vederea
utilizării ulterioare sau spre arii motorii pentru a produce răspunsuri din partea organismului (Guyton, 1987).
Psihoneurologia se interesează nu numai de anatomia şi fiziologia SNC, ci şi de legăturile care se stabilesc între
fiziologia organismului în general şi comportamentul său. Fiziologia încorporează totalitatea sistemelor
fiziologice din corp - sistemul nervos, sistemul endocrin (tiroidian, pituitar, adrenal etc.), sistemul exocrin
(glandele sudoripare, glandele lacrimale etc.), sistemul gastrointestinal, sistemul circulator, sistemul muscular
etc. Deşi interesul predominant al psihoneurologiei este axat pe creier şi pe restul sistemului nervos, în capitolele
următoare vom vorbi şi despre aceste sisteme. Pentru înţelegerea adecvată a raportului psihic-creier se impune
efectuarea unei analize la nivel anatomic global care să evidenţieze structura şi funcţiile neuronului, modul de
grupare a elementelor neuronale de-a lungul nevraxului şi relevarea marilor conglomerate neuronale cu importanţă
deosebită în plan psihocomportamental.
Anatomia şi fiziologia clasică a SNC s-au constituit şi s-au dezvoltat făcând abstracţie aproape total de
implicarea diferitelor structuri şi procese neurofiziologice în producerea vieţii şi activităţii psihice. Aceasta a
atras după sine axarea analizei psihologice în plan fenomenologic şi privarea acesteia de una dintre cele mai
importante surse de argumentare obiectivă, ştiinţifică. De-abia în ultimele decenii s-a produs o schimbare de
optică de natură să stabilească o relaţie de comunicare şi cooperare între cele trei ştiinţe, care facilitează înţelegerea
adecvată a specificului raportului psihic-creier. Unitatea structurală de bază a sistemului nervos este neuronul.
El este o celulă vie şi are aceeaşi schemă structurală cu cea a SN.
Neuronul
Toate celulele corpului sunt constituite după acelaşi tipar: prezintă un nucleu care conţine cromozomi şi material genetic cu catene de acid dezoxiribonucleic (ADN) şi acid ribonucleic (ARN), o citoplasmă gelatinoasă cu mitocondrii pentru producerea energiei şi ribozomi pentru sinteza proteică. Fiecare celulă din corp conţine un număr identic de cromozomi, care la om este de 46. Pe lângă faptul că reprezintă caracteristicile unui organism şi controlează transmiterea caracterelor de la o generaţie la alta, ADN-ul dirijează însăşi celula care-l conţine. În ciuda faptului
că celulele au acelaşi pattern de bază, ele se specializează în decursul dezvoltării organismului, în mod inexplicabil.
Dacă ne referim la sistemul nervos observăm că acesta are o structură de tip discret, discontinuu. Din punct de
vedere histologic, el se compune din elemente celulare, cărora le revin 75% din masa totală a ţesutului nervos (35%
celule neuronale şi 40% celule nevroglice), substanţă intermediară necelulară (lichid extracelular cu elemente
macromoleculare) care reprezintă circa 15%, şi reţeaua de vase sanguine care ocupă circa 10%. Fireşte, aceste
proporţii variază semnificativ în funcţie de natura ţesutului nervos (substanţă albă sau substanţă cenuşie, segmente
centrale etc.).
Structura neuronului
Deşi neuronii variază considerabil ca formă şi mărime, ei au caracteristici comune. Un neuron tipic are patru regiuni
morfologice:
• corpul celular, compus din nucleu şi perikarion,
• dendrite,
• axon şi
• terminaţii presinaptice.
Fiecare din aceste părţi are o anumită funcţie. Nucleul este cel mai important organit celular, dar după completa
dezvoltare a sistemului nervos acest nucleu nu suferă mitoze şi în consecinţă neuronul matur este incapabil de
reproducere. În corpul neural au loc procesele de analiză-sinteză a informației. Din această cauză neuronul este
asemănat cu un microsistem logistic capabil de a efectua operații de comparație, discriminare, clasificare, bazate
pe criterii de ordin pragmatic, semantic şi sintactic. Natura şi conținutul transformărilor efectuate depind de
specializarea funcțională a neuronilor (senzorială, motorie sau de asociație). Prin urmare, celulele din sistemul
nervos sunt mai variate decât în oricare altă parte a organismului. Diversitatea citologică este rezultatul procesului
de dezvoltare biologică numit diferenţiere. Fiecare tip de celulă sintetizează numai anumite macromolecule (enzime, proteine structurale, constituenţi membranari şi produşi de secreţie). Dar nu toţi constituenţii neuronului
sunt specializați. Multe molecule sunt comune tuturor celulelor corpului, unele sunt caracteristice tuturor
neuronilor, altele unei mari clase de neuroni şi, în sfârşit, unele molecule sunt caracteristice numai unui număr mic
de celule nervoase. Cea mai importantă funcție a corpului celulei neuronale este aceea de a sintetiza
macromolecule. Ca şi în celelalte celule, în neuroni informația genetică de encodare proteică şi aparatul complex
de sintetizare sunt conținute în ADN-ul celulei. Apoi, neuronii au mitocondrii şi enzime atât pentru biosinteza
moleculelor mici, cât şi pentru metabolismul intermediar - căi majore care transformă carbohidrații şi alte substanțe
în energie de consum. Deoarece celulele nervoase sunt excitabile, au unii constituenți membranari comuni cu alte
țesuturi excitabile şi mulți componenți înalt specializati care aparțin numai unei anume clase de celule nervoase
specifice. Astfel, numai anumiți neuroni conțin una sau alta din substantele transmițătoare, canale ionice speciale,
mecanisme de transport membranar sau receptori pentru neurotransmitători.
Deci, pentru clarificarea funcției neurale este necesară identificarea şi caracterizarea acestor molecule generale şi
neuronale specifice (Schwartz & Kandel, 1991).
Dendritele şi axonul.
La nivelul corpilor celulari ai celor mai multi neuroni întâlnim două feluri de fibre nervoase, denumite
• dendrite şi
• axoni.
Un neuron are de obicei mai multe dendrite şi un singur axon. La majoritatea neuronilor, dendritele sunt relativ
scurte şi mult arborizate. Aceste fibre, împreună cu membrana corpului celular, reprezintă principala suprafaţă receptivă a neuronului, care comunică cu fibrele provenite de la alţi neuroni. Axonul ia naştere dintr-o uşoară ridicătură a corpului celular, fiind prelungirea centrifugă cea mai lungă a neuronului. Aproape de terminaţie, axonul
se divide în ramuri fine care au nişte umflături specializate denumite terminale presinaptice. Acestea reprezintă
elementele transmiţătoare ale neuronului. Cu ajutorul terminaţiilor sale, un neuron transmite informația cu privire
la propria sa activitate suprafeței receptive (dendrite sau corp celular) a altor neuroni.
Punctul de contact dintre doi neuroni se numeşte sinapsă.
• Celulele care transmit informația în afară poartă denumirea de celule presinaptice,
• iar cele care primesc informația se numesc celule postsinaptice.
Spațiul care separă celulele presinaptice de cele postsinaptice este denumit fantă sinaptică. Grupurile de fibre
mielinizate apar albe, iar masa acestora formează substanța albă a sistemului nervos. Fibrele nemielinizate şi corpul
celulelor neuronale corespund substantei cenuşii a sistemului nervos. Important de reținut este faptul că neuronul
este o celulă unică a cărei citoplasmă umple axonii şi dendritele. Numărul neuronilor este de aproximativ 86 de
miliarde, din care aproximativ 16 miliarde se află la nivelul cortexului cerebral. Când neuronii sunt lipsiți de
oxigen, ei suferă o serie de modificări structurale ireversibile, care le modifică forma şi le ratatinează nucleul.
Fenomenul poartă denumirea de modificare celulară ischemică, care în timp duce la dezintegrarea celulelor
afectate. Deficiența de oxigen poate proveni din lipsa fluxului sanguin (ischemie), din diminuarea concentrației
oxigenului sanguin sau din blocarea respirației aerobice de către unele toxine.
Clasificarea neuronilor.
Pe baza diferențelor structurale, a numărului şi formei proceselor neuronale care iau naştere din corpul celular,
neuronii sunt clasificati în trei grupe majore: unipolari, bipolari şi multipolari.
1) Neuronii unipolari au un singur proces primar (o singură fibră nervoasă) care se extinde de la corpul celulei şi
care poate da naştere la mai multe ramuri. La scurtă distantă de corpul celular această fibră se poate divide în
două ramuri: un ram este conectat cu o parte din periferia corpului servind drept dendrită, iar celălalt intră în creier
sau în măduva spinării servind drept axon. Aceste celule predomină atât la nevertebrate cât şi în masa unor
ganglioni localizati în afara creierului şi măduvei spinării, ganglioni care aparțin sistemului nervos autonom.
2) Neuronii bipolari au o somă ovoidă care dă naştere la două procese (fibre nervoase): unul periferic sau dendrită
care transportă informatia de la periferie şi un altul central sau axon, care duce informatia către SNC. Aceşti
neuroni se găsesc la nivelul părţilor specializate ale ochilor, nasului şi urechii. Unele celule bipolare localizate la nivelul ganglionilor spinali au rolul de a transporta informaţia tactilă, dureroasă şi de presiune.
3) Neuronii multipolari predomină la nivelul sistemului nervos al vertebratelor. Aceste celule au un singur axon şi
una sau mai multe ramuri dendritice care emerg din toate părtile corpului celular. Cei mai mulți neuroni al căror
corp se află în creier şi măduva spinării sunt de acest tip. Numărul şi extinderea proceselor dendritice corelează cu
numărul contactelor sinaptice făcute cu alți neuroni. O celulă spinală motorie, ale cărei dendrite sunt moderate ca
număr şi extindere, are circa 10.000 de contacte - 2000 pe corpul celular şi 8000 pe dendrite. Marea arborizație
dendritică a unei celule Purkinje din cerebel primeşte aproximativ 150.000 de contacte.
Pe baza diferenţelor funcţionale, neuronii cerebrali pot fi clasificaţi în trei grupe majore: senzoriali, motori şi
interneuroni.
1) Neuronii senzoriali sau aferenți sunt aceia care transportă impulsurile nervoase de la periferia corpului către
creier sau măduva spinării. Aceşti neuroni pot avea uneori, în vârful dendritelor terminaţii receptoare specializate,
iar alteori au dendrite strâns asociate cu celule receptoare localizate în piele sau în diferite organe senzoriale.
Stimularea terminaţiilor receptoare sau a celulelor receptoare se produce în prezența modificărilor care apar
în interiorul sau în afara corpului. Impulsurile transmise de-a lungul fibrelor neuronilor senzoriali ajung la creier
sau la măduva spinării, unde sunt prelucrate de către alti neuroni. Neuronii senzoriali sunt specializati în
receptarea informatiei emise de sursele din afara sistemului nervos central, în prelucrarea ei şi în elaborarea în final
a unui model adecvat al unei însuşiri sau al stimulului în ansamblu. Gruparea acestor neuroni la diferite niveluri ale
nevraxului, împreună cu căile nervoase alcătuite din terminaţiile lor dendritice şi axonice, formează marile sisteme
ale sintezei aferente. Trebuie remarcat faptul că există o deosebire între neuronii aferenți sau primari şi cei
senzoriali. Termenul de aferent este aplicat tuturor informațiilor conştiente şi inconştiente care de la periferie
ajung la SNC. Termenul de senzorial este aplicat numai inputurilor aferente care generează în creier percepții
conştiente.
2) Neuronii motori sau eferenți transmit impulsurile nervoase în afara creierului sau măduvei spinării, la efectorii
musculari sau glandulari. Când un impuls motor ajunge la un muşchi, acesta se contractă; când impulsul nervos
ajunge la o glandă, aceasta eliberează secreție. Prin urmare neuronii motorii sunt specializati în elaborarea
mesajelor de comandă şi a răspunsurilor la stimulii din mediul intern sau extern al organismului. Gruparea lor
ierarhică formează marile sisteme ale sintezei eferente.
3) Interneuronii (neuronii intercalari sau de asociaţie) se află în interiorul creierului sau al măduvei spinării. Ei formează legături între neuronii senzoriali şi cei motori. Funcţia interneuronilor este aceea de a transmite impulsuri
dintr-o parte a creierului sau măduvei spinării la cealaltă. Datorită acestui fapt, ei pot direcționa impulsurile
senzoriale primite către centrii corespunzători pentru prelucrare şi interpretare. Alte impulsuri sosite sunt
transferate neuronilor motorii. Interneuronii reprezintă clasa cea mai numeroasă de celule din sistemul nervos
care nu au specificitate senzorială sau motorie. Neuronii de asociație se deosebesc după lungimea axonului. Cei
cu axoni lungi (celule de tip Golgi I) transmit informația la distanțe mari, de la o regiune cerebrală la alta, căpătând
numele de interneuroni de proiecție. Interneuronii cu axoni scurți (celule Golgi de tip II) prelucrează informația în
interiorul unei regiuni cerebrale specifice fiind numiți interneuroni locali. Gruparea neuronilor asociativi formează
zonele de asociație sau integrative ale SNC. Pe măsura trecerii de la un segment inferior la altul superior, ponderea
neuronilor asociativi și implicit a zonelor de asociație crește semnificativ. Astfel, la nivelul scoarței cerebrale zonele
de asociație reprezintă aproximativ 2/3 din suprafața totală.
Celulele nevrogliale
Corpul celulelor nervoase şi axonii sunt înconjuraţi de celule gliale. Acestea se află într-o proporţie de 10 până la 50
de ori mai mare decât neuronii din SNC al vertebratelor. Se presupune că celulele gliale nu sunt esenţiale pentru
prelucrarea informaţiei, dar au numeroase alte roluri:
a) Ele servesc ca elemente de suport, dând fermitatea şi structura creierului. De asemenea, ele separă unele de altele diferite grupe neuronale.
b) Două tipuri de celule gliale, oligodendrocitele din SNC şi celulele lui Schwann din SNP (sistemul nervos periferic), formează mielină, pătură izolatoare care acoperă axonii cei mai mari.
c) Unele celule gliale sunt gunoiere, deoarece înlătură celulele neuronale care mor din diferite cauze. Alte celule gliale au funcţii nutritive pentru celulele nervoase.
La nivelul SN al vertebratelor deosebim două categorii de celule gliale: microglia şi macroglia.
Microglia derivă din macrofage, dar embriologic şi fiziologic nu are nicio legătură cu alte tipuri de celule ale sistemului nervos. Celulele microgliale sunt fagocite şi se mobilizează după traumatisme, infecţii şi alte boli.
Macroglia se compune din trei tipuri celulare predominante: astrocitele, oligodendrocitele şi celulele Schwann.
Sinapsa
Neuronii nu sunt interconectaţi fizic între ei. Dacă ar fi aşa, atunci potenţialul de acţiune s-ar propaga de la un neuron la altul sau de-a lungul căilor nervoase la întâmplare. Din această cauză, joncţiunile dintre neuroni, ca şi cele dintre ei, pe de o parte, şi elementele receptoare şi executorii, pe de altă parte, se realizează prin intermediul unor formaţiuni şi mecanisme complexe pe care Foster şi Sherrington (1897) le-au denumit sinapse. La nivelul sinapselor, neuronii nu se află în contact direct, deoarece aici există un gol denumit fantă sinaptică (synaptic cleft). Pentru ca un impuls să continue drumul de-a lungul unei căi nervoase, el trebuie să traverseze acest spaţiu. Cercetările histologice au arătat că, la locul de contact, terminaţiile celulare prezintă proeminenţe care au forme diferite (picioruş, inel, buton, bulb, varicozităţi etc.). Toate aceste structuri sunt cunoscute sub numele generic de butoni sinaptici. Diametrul unui buton este de 0,5–2 microni, iar suprafaţa de contact are 2–4 microni pătraţi. În general, sinapsa reprezintă o barieră faţă de potenţialul de acţiune care se propagă către terminalul axonal sau presinaptic.
Structural, sinapsa are următoarele elemente:
• o membrană presinaptică, cu vezicule caracteristice, denumite vezicule presinaptice ale lui De Robertis şi Bennett;
• o membrană postsinaptică, situată exact în faţa terminaţiei presinaptice, şi
• despicătura sau fanta sinaptică (spaţiul cuprins între cele două membrane).
Cu ajutorul microscopului electronic s-au evidenţiat în fiecare buton terminal două structuri interne importante pentru funcţia excitatorie sau inhibitorie a sinapsei: veziculele sinaptice şi mitocondriile. Veziculele sinaptice conţin substanţe neurotransmiţătoare care, odată eliberate în fanta sinaptică, excită sau inhibă neuronul postsinaptic. Mitocondriile furnizează adenozin trifosfat (ATP), care ulterior asigură energia sintezei noilor cantităţi de neurotransmiţător. După natură, terminaţiile sinaptice pot fi:
• axosomatice (în măduvă şi în ganglionii spinali);
• axodendritice (în scoarţa cerebrală);
• dendrodendritice şi
• axo-axonale.
După efectul produs la nivelul neuronului receptor distingem:
• sinapse excitatoare şi
• sinapse inhibitoare.
La acestea se adaugă:
• sinapsele receptoare (senzoriale), prin care se realizează trecerea informaţiei de la nivelul celulelor senzoriale periferice în structurile neuronale specifice sistemelor sintezei aferente, şi
• sinapsele efectoare (vegetative sau motorii), prin intermediul cărora se transmit semnalele de comandă de la centrii sintezei eferente la organele executorii de răspuns (glande sau muşchi).
După mecanismul de transfer al excitaţiei de la nivelul neuronului emitent la cel al neuronului receptor, se presupune existenţa unor sinapse cu transmisie chimică şi a altora cu transmisie electrică sau combinată. Aproape toate sinapsele folosite pentru transmiterea de semnale în sistemul nervos central sunt sinapse chimice. La aceste sinapse, primul neuron secretă o substanţă chimică numită neurotransmiţător, care la rândul său acţionează asupra unei proteine receptor din membrana neuronului următor, excitându-l, inhibându-l sau variindu-i sensibilitatea. Fiecare neuron motor primeşte două până la şase contacte de la un neuron senzitiv, dar fiecare neuron senzitiv contactează 500 până la 1000 de neuroni motori. Neurotransmiţătorul
utilizat de celulele senzoriale primare nu a fost încă identificat cu certitudine, dar se presupune că este aminoacidul L-glutamat. Pe suprafaţa somei şi a dendritelor neuronului motor tipic din cornul anterior al măduvei spinării se află un număr imens, de până la 100.000 de mici butoni, denumiţi terminaţii presinaptice.
Transmiterea sinaptică.
La nivelul sistemului nervos, informația se transmite printr-o succesiune de neuroni sub formă de impuls nervos.
Cu ocazia transmiterii interneuronale apare posibilitatea blocării unui impuls în momentul transmiterii sale de la un neuron la altul, a schimbării unui impuls singular în stimuli repetitivi sau a integrării impulsurilor provenite de la mai mulţi neuroni, cu transmiterea în continuare a unui semnal mai complex. Toate aceste mecanisme aparţin funcţiilor
sinaptice ale neuronilor.
• La nivelul unui neuron presinaptic, impulsul trece de la dendrita la corpul său celular, apoi la axon și la
terminațiile sale.
• După traversarea sinapsei, impulsul parcurge dendrita și corpul celular al unui neuron postsinaptic.
Prin urmare, sinapsele transmit întotdeauna dinspre neuronul care secretă transmițătorul (neuron presinaptic),
spre neuronul asupra căruia acționează transmițătorul (neuron postsinaptic). Acesta este principiul conducerii
într-un singur sens prin sinapsele chimice. El face ca semnalul să fie îndreptat către un scop specific, în vederea executării cu exactitate a miliardelor de miliarde de funcţii motorii, senzoriale, de memorie, de gândire etc.
Transmiterea sinaptică electrică și chimică.
Ramon y Cajal (1894) a fost primul care a făcut descrierea histologică a punctelor de contact dintre neuroni, pe care ulterior Foster şi Sherrington (1897) le-au denumit sinapse. Loewi şi Navratil (1926) au demonstrat că acetilcolina mediază transmiterea de la nervul vag la inimă. Acest fapt a provocat, în jurul anilor '30, dezbateri considerabile cu privire la mecanismele de transmitere sinaptică de la nivelul joncţiunii nerv-muşchi şi de la nivelul creierului. Şcoala fiziologică condusă de Eccles (1957) argumenta că transmiterea sinaptică este electrică, iar potenţialul de acţiune circulă în mod pasiv de la neuronul presinaptic la cel postsinaptic. Şcoala farmacologică condusă de Dale (1935) argumenta că transmiterea se face pe cale chimică, cu ajutorul unui mediator chimic (o substanţă transmiţătoare) care, eliberat de neuronul presinaptic, dă naştere unui flux de curent în celula postsinaptică. Ulterior, între anii 1950 şi 1960, a devenit clar că nu toate sinapsele utilizează acelaşi mecanism. Astfel, Fatt şi Katz (1951), Eccles (1957), Furshpan şi Potter (1959) au dovedit existenţa ambelor modalităţi de transmitere. La nivelul SN, informaţia este transferată de la un neuron la altul prin două tipuri de transmisie sinaptică: electrică şi chimică. Cu toate că majoritatea sinapselor utilizează transmiţători chimici, unele dintre ele operează numai pe cale pur electrică. S-a arătat cu ajutorul microscopului electronic că sinapsele chimice şi electrice au morfologii
diferite.
Inputurile primite de neuroni pot fi excitatoare sau inhibitoare
Neurotransmiţătorii care cresc permeabilitatea membranei la ionii de sodiu şi generează impulsuri nervoase sunt excitatori. În categoria acestor substanţe intră serotonina, dopamina şi norepinefrina.
Neurotransmiţătorii care diminuează permeabilitatea membranei la ionii de sodiu şi cresc pragul de stimulare sunt inhibitori, deoarece micşorează şansa ca un impuls nervos să fie transferat la un neuron învecinat. În rândul acestor neurotransmiţători intră aminoacizii GABA şi glicina.
Un număr de droguri interferează cu acțiunea normală a diferiților neurotransmițători. Astfel, LSD contracarează
funcția serotoninei, cocaina întărește efectele norepinefrinei prin împiedicarea inactivării sale normale, iar
amfetamina accelerează eliberarea excesivă de dopamină. Unele droguri anxiolitice, de tipul diazepamului
(Valium) par a-și exercita efectul prin creșterea eficacității neurotransmițătorului inhibitor GABA. Cafeina,
teina și teobromina aflate în cafea, ceai și respectiv cacao cresc excitabilitatea, prin micșorarea pragului de
excitabilitate al neuronilor. Stricnina, unul dintre cei mai cunoscuţi agenţi de creştere a excitabilităţii neuronale, acţionează prin inhibarea acţiunii unora dintre mediatorii inhibitori, mai ales a glicinei din măduva spinării. În consecinţă, efectul mediatorilor excitatori este amplificat, iar neuronii devin atât de excitabili încât descarcă rapid şi repetitiv, provocând spasme musculare tonice severe.
Majoritatea anestezicelor cresc pragul membranar de excitabilitate și reduc transmiterea sinaptică în multe
puncte din sistemul nervos prin schimbarea caracteristicilor fizice ale membranelor neuronale. Numărul diferitelor
tipuri de receptori pentru neurotransmițători este foarte mare, dar ei aparțin unui numar mic de familii. De
exemplu, multe subtipuri de receptori pentru acetilcolină, glutamat, GABA, glicina și serotonină localizate pe
diferiţi neuroni se deosebesc numai prin câteva proprietăţi subtile. Cu această largă varietate de receptori este posibilă găsirea unei generaţii noi de medicamente psihoactive care să acţioneze selectiv asupra unui set specific de neuroni, pentru înlăturarea bolilor mintale care devastează atât de multă lume. 1% din populaţia umană are schizofrenie şi un alt procent de 1% suferă de tulburări bipolare (Alberts şi colab., 1998).
Inhibiția postsinaptică
Fenomenele electrice din cursul inhibiției neuronale sunt diferite de cele ale excitației. Sinapsele inhibitoare
deschid canalele de potasiu sau de clor în locul celor de sodiu, permițând trecerea cu uşurință a acestor ioni.
Potențialul Nernst pentru potasiu este de -86 mV, iar cel pentru clor este de -70 mV, ambele fiind mai mici decât cei -65 mV prezenți în interiorul membranei neuronale în repaus. Deschiderea canalelor de potasiu şi ieşirea acestuia din celulă accentuează negativitatea membranei. Deschiderea canalelor de clor şi pătrunderea acestor sarcini negative în celulă face potențialul membranar mai negativ. Această creştere a gradului de negativitate
intracelulară se numeşte hiperpolarizare. În acest context, neuronul este inhibat deoarece potențialul de
membrană se află acum mai
departe de potențialul prag de excitabilitate. Această creştere a negativității peste nivelul potențialului membranar
de repaus normal se numeşte potențial postsinaptic inhibitor.
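Pentru context, valorile de mai sus rezultă din ecuaţia lui Nernst. Ca ilustrare (cu concentraţii ionice tipice, presupuse aici, nu preluate din text), pentru K+ la 37 °C:

$$E_{K^+} \;=\; \frac{RT}{zF}\,\ln\frac{[K^+]_{ext}}{[K^+]_{int}} \;\approx\; 61{,}5\ \mathrm{mV}\cdot\log_{10}\!\frac{5\ \mathrm{mM}}{140\ \mathrm{mM}} \;\approx\; -89\ \mathrm{mV},$$

valoare apropiată de cei -86 mV menţionaţi în text.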
Inhibiția presinaptică
Şi în terminaţiile presinaptice se poate produce uneori un tip de inhibiţie, înainte ca semnalele să ajungă la nivelul sinapsei. Această inhibiţie presinaptică se datorează unor sinapse „presinaptice" aflate pe fibra nervoasă terminală, înainte de contactul său cu neuronul următor (Guyton, 1987). Activarea acestor sinapse ar reduce capacitatea de deschidere a canalelor de calciu ale terminaţiei şi deci excitaţia neuronală. Se presupune că, în astfel de situaţii, sinapsele presinaptice ar elibera un mediator care blochează canalele de calciu. Conform altei teorii, mediatorul ar inhiba deschiderea canalelor de sodiu, micşorând potenţialul de acţiune al terminaţiei. Cum canalele de calciu, activate de voltaj, sunt extrem de sensibile la diferenţa de potenţial, orice scădere a potenţialului de acţiune reduce serios intrarea calciului. În acest mod, fibrele nervoase adiacente se inhibă reciproc, diminuând astfel împrăştierea semnalului de la o fibră la alta.
Codificarea informației
Informația din orice sistem nervos este încorporată şi codificată în întregime prin frecvența şi patternul impulsurilor
nervoase. Sistemele senzoriale şi motorii, percepția, memoria, cogniția, gândirea, emoția, personalitatea, etc.
sunt reprezentate fiecare de anumite tipare, impulsuri nervoase şi de structuri proprii ale SNC. Acest fapt poate fi
demonstrat cu ajutorul electrozilor implantati în neuroni singulari sau în grupe de neuroni cerebrali, cărora le
aplicăm un curent electric foarte slab. Astfel, după efectuarea anesteziei locale şi a unui orificiu cranian, electrozii
pot fi introduşi în creierul diferiților pacienți. Când curenții stimulatori se situează în limitele parametrilor
activităţii fiziologice a SN, atunci putem obţine mimarea funcţiei normale a unei suprafeţe cerebrale. Dat fiind faptul că se acţionează pe pacienţi conştienţi, aceştia pot relata în mod veridic experienţa subiectivă pe care o resimt. De-a lungul anilor s-a demonstrat că prin stimularea electrică a unei anumite zone cerebrale pot fi reproduse senzaţii vizuale, senzaţii auditive, vocalizări automate, senzaţii de disconfort sau euforie, episoade de agresiune, reamintiri, mişcări ale membrelor sau ale unor grupuri individuale de muşchi, tahicardie, bradicardie, creşterea sau scăderea presiunii sanguine etc. Din cercetările efectuate rezultă că noi posedăm un creier cu o complexitate inimaginabilă. Fiecare aspect comportamental sau stare fizică îşi are esenţa în patternurile potenţialelor de acţiune.
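O ilustrare simplă a ideii de codare prin frecvenţă (un exemplu didactic, cu momente de descărcare inventate, nu date reale): rata de descărcare se poate estima numărând impulsurile dintr-o fereastră de timp.

```python
# Exemplu didactic: estimarea ratei de descărcare (impulsuri/secundă) dintr-o
# listă de momente la care neuronul a emis potențiale de acțiune. Momentele
# de mai jos sunt inventate, doar pentru ilustrare.

def rata_de_descarcare(momente_impulsuri_s: list[float],
                       start_s: float, durata_s: float) -> float:
    """Numără impulsurile din fereastra [start, start + durata) și împarte la durată."""
    in_fereastra = [t for t in momente_impulsuri_s if start_s <= t < start_s + durata_s]
    return len(in_fereastra) / durata_s

if __name__ == "__main__":
    impulsuri = [0.01, 0.05, 0.12, 0.30, 0.31, 0.33, 0.34, 0.36, 0.80]
    print(rata_de_descarcare(impulsuri, 0.0, 0.2))   # 3 impulsuri / 0.2 s = 15.0 Hz
    print(rata_de_descarcare(impulsuri, 0.3, 0.2))   # 5 impulsuri / 0.2 s = 25.0 Hz
```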
Tipuri de nervi
O fibră nervoasă este o extensie a unui neuron, iar un nerv este format din grupuri sau mănunchiuri de fibre
nervoase reunite prin pături de țesut conjunctiv. Ca și fibrele nervoase, nervii care conduc impulsurile în măduva
spinării sau în creier sunt numiți nervi senzitivi, iar cei care transmit impulsurile la mușchi sau glande sunt numiți
nervi motori.
Conexiunile sinaptice ne permit să gândim, să acționăm și să ne reamintim
La nivelul unei sinapse chimice, terminalul nervos al celulei presinaptice convertește semnalul electric într-un
semnal chimic, iar celula postsinaptică convertește din nou semnalul chimic în unul electric. Valoarea sinapselor
chimice devine clară atunci cand sunt considerate în contextul funcționării sistemului nervos cu enorma sa rețea
neuronală interconectată prin numeroase căi ce efectuează computații complexe, înmagazinări de memorie și
planuri de acțiune.
Bibliografie 5
Andrei C. Miu; Adrian I. Olteanu – Neuroștiințe. De la mecanisme moleculare și
celulare la comportament și evoluție. Vol.I: Dezvoltarea sistemului nervos, p.
288-291;
Implicațiile mielinizării pentru funcționarea sistemului nervos
Funcția generică a mielinizării este de a creşte viteza impulsurilor nervoase. Această modificare funcțională este semnificativă din punct de vedere evolutiv, întrucât viteza de conducere a impulsului nervos permite, în principiu, o viteză de procesare mai mare, acesta fiind un avantaj al mamiferelor, în general, şi, dintr-o altă perspectivă, un indicator tot mai discutat în cadrul testelor de inteligență. Mai mult, mielinizarea este un proces mai ales postnatal.
În acest context, vom explora în continuare modalitățile activității
electrice prin care creierul codează informația. Este de notat că,
atât timp cât explorăm potențiale electrice evocate pe care le
numim "cognitive", pentru că se modifică odată cu modificarea
semnificației cognitive a stimulului care, altfel, rămâne identic din
punct de vedere senzorial (pentru detalii, vezi: Halgren, 1992, pp.
204- 211; Olteanu, Lupu & Miu, 2001, pp. 208-210), nu este radical
să spunem că orice comportament este elaborat şi codat prin
modificări specifice ale activității electrice a creierului. Această
activitate electrică se poate descompune în diverse tipuri de
semnale, care reprezintă semnele care pot forma suficient de multe
combinații pentru a spune că activitatea electrică este codul în care creierul elaborează cogniția. Modul în care
creierul elaborează ceea ce numim generic cogniție şi coordonează comportamentul este conceptualizat
tradițional în două feluri.
1. O primă teorie are la bază activitatea unității morfo-funcționale a sistemului nervos, neuronul. Această
teorie presupune că, între neuroni, comunicarea este posibilă prin succesiuni de impulsuri nervoase care, la
nivelul sinapselor, sunt transformate în analogul lor chimic. Această din urmă ipoteză dispune de suport
experimental întrucât s-a observat că unii neuroni eliberează cuante de neurotransmițător proporțional cu
frecvența şi amplitudinea potențialelor de acțiune (vezi: Ruppersberg & Herlitze, 1996). Mesajele se consideră că
sunt codate prin intervalele dintre aceste impulsuri.
2. O altă teorie a semnalizării neuronale are la bază activitatea electrică a populațiilor de neuroni. Descărcările
neuronilor, ponderate după numeroase reguli (tipul sinapsei, excitator sau inhibitor, istoria descărcărilor unei
celule ş.a.), descriu o activitate electrică reprezentativă pentru populații locale. Şi această teorie dispune de
suport experimental. S-a arătat, de pildă, că neuronii care formează hărți funcționale ca, de pildă, cele susținute
de coloanele de dominanță oculară sau benzile de izofrecvență, care ştim că respectă reguli de organizare
funcțională cum ar fi retinotopia sau tonotopia, conțin neuroni care au descărcări relativ omogene, sugerând că
aceştia codează un anumit aspect al experienței senzoriale. Un indice al acestei activități electrice a unei populații de neuroni este potențialul de vârf al populației, care este corelat temporal şi ca magnitudine cu descărcările neuronilor. Acesta reflectă numărul de neuroni care descarcă şi sincronicitatea cu care aceştia descarcă (Andersen, Bliss & Skrede, 1971).
Activitatea electrică care susţine semnalizarea dintre neuroni trebuie să ia în considerare nu numai potenţialele de vârf, ci şi alte modificări electrice care se întâmplă într-o populaţie de neuroni. Aceste „semne", deci componente ale activităţii electrice a creierului, aşa cum le putem înregistra cu mijloacele actuale, includ şapte tipuri cunoscute de modificări electrice (Bullock, 1997).
1. Primul tip este potenţialul intracelular, adică potenţialul înregistrat la nivelul unei singure celule, reprezentativ pentru starea electrică a unui singur compartiment al neuronului, fie corp celular, dendrită, axon, terminal sau sinapsă. Aceste modificări electrice foarte locale la nivelul unei celule au fost sugerate de o teorie propusă de Robert Gesell în 1940, care susţinea că există un gradient de repaus între polurile dendritic şi axonal ale neuronului.
2. Alt tip este reprezentat de potenţialele unice de vârf extracelulare, care sunt înregistrate la 50-100 μm de neuronii activi, care descarcă impulsuri.
3. Activitatea electrică multiunitară, alt tip de semn electric, este înregistrată pe un perimetru de 100-200 μm faţă de câţiva neuroni care au activitate rapidă (frecvenţă >1 Hz), cu potenţiale de vârf.
4. S-a înregistrat şi un tip de activitate electrică tot multiunitară, tot rapidă şi cu potenţiale de vârf, dar cu amplitudini atât de mici încât nu se poate estima activitatea câtor neuroni reflectă.
5. Fluctuaţiile rapide (40-200 Hz), graduale, fără potenţiale de vârf, reflectă potenţiale postsinaptice sau alte evenimente electrice gradate, de la nivelul dendritelor, corpului celular sau terminalului axonic, şi sunt un alt tip de semn electric. Acestea reprezintă o mică parte din modificările care sunt indicate de electroencefalogramă (EEG), faţă de
6. potenţialele lente (0.1-40 Hz), care reprezintă mare parte din spectrul EEG la vertebrate.
7. Ultima categorie de semne electrice cunoscută sunt potenţialele infralente, care sunt determinate de fluctuaţii ale potenţialului de repaus, care sunt mai lungi de 10 secunde şi care pot dura chiar minute.
Am menţionat toate aceste forme ale activităţii electrice pentru a sublinia două aspecte.
1. În primul rând, este verosimil să presupunem că, cel puţin prin diversitatea acestor semne electrice şi numărul imens de combinaţii pe care le pot susţine la nivelul unei populaţii neuronale, activitatea electrică a neuronilor poate fi codul în care creierul elaborează comportamentul.
2. În al doilea rând, motivul pentru care am optat să plasăm acest paragraf aici este că modificări ale conductibilităţii
neuronilor prin procese cum ar fi mielinizarea determină maturarea semnalizării în sistemul nervos. De aici, am
putea specula prin a spune că aceste procese susţin şi modificări comportamentale asociate dezvoltării. Pe de
altă parte, mielinizarea presupune un număr foarte mare de variabile electrice, chimice şi mecanice care o pot
influenţa. Fiind un proces postnatal, şi modificările din mediu (de ex., dietă, leziuni nervoase) pot afecta
mielinizarea, ceea ce ne permite să susţinem că este probabil ca nu toate creierele să fie mielinizate la fel.
Mielinizarea şi diversitatea activităţii electrice sunt două dintre motivele pentru a considera că creierele unor indivizi
diferiţi sunt semnificativ diferite.
Rezumat
Maturarea circuitelor nervoase depinde de izolarea electrică a acestora, astfel încât viteza de conducere a
semnalului nervos să crească. Acest proces de dezvoltare se numeşte mielinizare şi se extinde dintr-o perioadă
prenatală timpurie până spre pubertate. Neuronii din sistemele nervoase periferic şi central sunt mielinizaţi de celule diferite: celulele Schwann şi, respectiv, oligodendrocitele. Mielina secretată de aceste celule se diferenţiază de alte membrane celulare prin raportul dintre lipide şi proteine. Fazele diferenţierii celulelor secretoare de
mielină sunt controlate şi prin expresia limitată spaţio-temporal a unor factori de transcripţie, care semnalizează
trecerea de la o fază a mielinizării la alta. Mielinizarea se bazează pe semnalizarea dintre axon şi celula secretoare
de mielină, programul de mielinizare fiind dependent de semnale de la axon, iar axonul suferind unele modificări
ultrastructurale ca urmare a contactului cu celula secretoare de mielină.
Bibliografie 6
Bryan Kolb; Ian Q. Whishaw - Fundamentals of Human Neuropsychology, 7th
Edition, p.244;
Using a hand, a trunk, or a beak to obtain food and placing it in the mouth to eat is a widespread action among animals. Placing a hand in the mouth is also among the earliest motor actions that develop in human infants. Developing fetuses suck their thumbs, and after birth babies put every object they grasp into their mouths. In the course of their studies with monkeys, Umilta and colleagues (2001) recorded the activity of an exemplar neuron that discharged when a monkey reached for a food item to place in its mouth for eating. As shown in the accompanying figure, this neuron discharged in much the same way when the monkey observed an experimenter reach for a block (top panel), and it discharged as robustly when the experimenter reached for a target hidden behind a screen (center panel).

It was not the movement of the experimenter's hand per se that excited the neuron, because it did not discharge robustly when the monkey observed the experimenter make a reaching movement in the absence of a target (bottom panel). What turns the mirror neuron on is the act of
obtaining the goal, a conclusion supported by many other experiments. Mirror neurons are equally excited when a
tool rather than a hand is used and are more excited if the goal is valuable, for example, can be eaten. Tools extend
the hand's function and likely were first used to enhance food acquisition. Many evolutionary theories propose
that verbal language may have developed from hand gestures used to signal goal attainment, especially food
goals. The close linkage between mirror neurons and goal attainment may explain the flexibility of our skilled
movements. If we cannot obtain a goal with a hand reach, we can substitute the mouth, a foot, or a neuro-prosthetic
BCI. The discovery of mirror neurons has led to wide-ranging speculation about their role in consciousness and
neurological and psychiatric diseases (Thomas, 2012). A conservative view is that mirror neurons function simply
to represent the goal of motor action. More speculative views propose that they represent our understanding
of actions.
Bibliografie 7
Christian Jarrett – Great Myths of the Brain, p.154-160;
5. Myths About the Physical Structure of the Brain

Myth #25: Mirror Neurons Make Us Human (and Broken Mirror Neurons Cause Autism)

Back in the 1990s Italian neuroscientists identified cells in the brains of monkeys with an unusual response pattern. These neurons in the premotor cortex were activated when the monkeys performed a given action and, mirror-like, when they saw another individual perform that same movement. It was a fortuitous discovery. Giacomo Rizzolatti and his colleagues at the University of Parma were using electrodes to record the activity of individual cells in macaque monkey forebrains. They wanted to find out how the monkeys use visual information about objects to help guide their movements. It was only when one of the research team, Leonardo Fogassi, happened one day to reach for the monkeys’ raisins, that the mirror neurons were spotted. These cells were activated by the sight of Fogassi’s reaching movement and when the monkeys made that reaching movement themselves. Since then, the precise function and influence of these neurons has become perhaps the most hyped topic in neuroscience.
The Myth

In 2000, Vilayanur Ramachandran, the charismatic University of California at San Diego neuroscientist, made a bold
prediction: “mirror neurons will do for psychology what DNA did for biology: they will provide a unifying framework
and help explain a host of mental abilities that have hitherto remained mysterious and inaccessible to experiments.”

Ramachandran is at the forefront of a frenzy of excitement that has followed these cells ever since their discovery.
They’ve come to represent in many people’s eyes all that makes us human (ironic given they were actually
discovered in monkeys). Above all, many see them as the neural essence of human empathy, as the very
neurobiological font of human culture. Part of Ramachandran’s appeal as a scientific communicator is his infectious
passion for neuroscience. Perhaps, in those early heady years after mirror neurons were discovered, he was just
getting a little carried away? Not at all. For his 2011 book aimed at the general public, The Tell-Tale Brain,
Ramachandran took his claims about mirror neurons even further. In a chapter titled “The Neurons that Shaped
Civilization,” he claims that mirror neurons underlie empathy; allow us to imitate other people; that they
accelerated the evolution of the brain; that they help to explain a neuropsychological condition called anosognosia
(in which the patient denies his or her paralysis or other disability); that they help explain the origin of language;
and most impressively of all, that they prompted the great leap forward in human culture that happened about 60
000 years ago, including the emergence of more sophisticated art and tool use. “We could say mirror neurons served
the same role in early hominid evolution as the Internet, Wikipedia, and blogging do today,” he concludes. “Once the
cascade was set in motion, there was no turning back from the path to humanity.” In 2006, with co-author Lindsay
Oberman, he made a similar claim more pithily: “Mirror neurons allowed humans to reach for the stars, instead of
mere peanuts.”

Ramachandran is not alone in bestowing such achievements on these brain cells. Writing for The Times in 2009
about our interest in the lives of celebrities, the eminent philosopher A. C. Grayling traced it all back to those mirror
neurons. “We have a great gift for empathy”, he wrote. “This is a biologically evolved capacity, as shown by the
function of ‘mirror neurons.’” In the same newspaper in 2012, Eva Simpson wrote on why people were so moved
when tennis champion Andy Murray broke down in tears. “Crying is like yawning”, she said, “blame mirror neurons,
brain cells that make us react in the same way as someone we’re watching (emphasis added).” In a New York Times
article in 2007, about one man’s heroic actions to save another who’d fallen onto train tracks, those cells featured
again: “people have ‘mirror neurons’” Cara Buckley wrote, “which make them feel what someone else is experiencing,
be it joy or distress”. Science stories about the function of mirror neurons often come with profound headlines that
also support Ramachandran’s claims: “Cells that read minds”, was the title of a New York Times piece in 2006; “The
mind’s mirror” introduced an article in the monthly magazine of the American Psychological Association in 2005.

But the hype is loudest in the tabloids. Try searching for “mirror neurons” on the Daily Mail website. To take one
example, the paper ran an article in 2013 that claimed the most popular romantic films are distinguished by the fact
they activate our mirror neurons. Another claimed that it’s thanks to mirror neurons that hospital patients benefit
from having visitors. In fact, there is no scientific research that directly backs either of these claims, both of which
represent reductionism gone mad.

A brief search on Twitter also shows how far the concept of powerful empathy-giving mirror neurons has
spread into popular consciousness. “‘Mirror neurons’ are responsible for us cringing whenever we see someone
get seriously hurt” the @WoWFactz feed announced to its 398 000 followers with misleading confidence in
2013. “Mirror neurons are so powerful that we are even able to mirror or echo each other’s intentions,” claimed self-
help author Caroline Leaf in a tweet the same year.

If mirror neurons grant us the ability to empathize with others, it only follows that attention should be drawn to
these cells in attempts to explain why certain people struggle to take the perspective of others – such as can happen
in autism. Lo and behold the “broken mirror hypothesis.” Here’s the prolific neuroscience author Rita Carter in the
Daily Mail in 2009: “Autistic people often lack empathy and have been found to show less mirror-neuron activity.” It’s
a theory to which Ramachandran subscribes. After attributing the great leap forward in human culture to these
cells, he claims in his 2011 book: “the main cause of autism is a disturbed mirror neuron system.” You won’t be
surprised to hear that questionable autism interventions have risen on the back of these claims, including
synchronized dance therapy and playing with robot pets. Mirror neurons have also been used by some researchers
to explain smokers’ compulsions (their mirror neurons are allegedly activated too strongly by others’ smoking-
related actions); as part of a neurobiological explanation for homosexuality (gay people’s mirror neurons are
allegedly activated more by the sight of same-sex aroused genitalia); and to explain many other varieties of human
behavior. According to New Scientist, these clever cells even “control [the] erection response to porn”!

The Reality

There’s no doubt that mirror neurons have fascinating properties, but many of the claims made about them are
wildly overblown and speculative. Before focusing on specifics, it’s worth pointing out that the very use of the
term mirror neurons can be confusing. As we heard, the label was originally applied to cells in motor parts of the
monkey brain that showed mirror-like sensory properties. Since then, cells with these response patterns have been
found in other motor parts of the brain too, including in the parietal cortex at the back of the brain. These days
experts talk of a distributed “mirror neuron system.”

The term “mirror neurons” also conceals a complex mix of cell types. Some motor cells only show mirror like
responses when a monkey (most of this research is with monkeys) sees a live performer in front of them; other cells
are also responsive to movements seen on video. Some mirror neurons appear to be fussy – they only respond to a
very specific type of action; others are less specific and respond to a far broader range of observed movements.

There are even some mirror neurons that are activated by the sound of a particular movement. Others show mirror
suppression – that is, their activity is reduced during action observation. Another study found evidence in monkeys
of touch-sensitive neurons that respond to the sight of another animal being touched in the same location
(Ramachandran calls these “Gandhi cells” because he says they dissolve the barriers between human beings).

Also, there are reports of entire brain systems having mirror-like properties. For example, the brain’s so-called pain
matrix, which processes our own pain, is also activated when we see another person in pain. In this case experts
often talk of “mirror mechanisms,” rather than mirror neurons per se, although the distinction isn’t always clear.
So, not only are there different types of mirror neuron in motor parts of the brain (varying in the extent and manner
of their mirror-like properties), there are networks of non-motor neurons across the brain with various degrees of
mirror-like properties. To say that mirror neurons explain empathy is therefore rather vague, and not a whole
lot more meaningful than saying that the brain explains empathy.

Let’s zoom in on the ubiquitous idea that mirror neurons “cause” us to feel other people’s emotions. It can be traced
back to the original context in which mirror neurons were discovered – the motor cells in the front of the monkey
brain that responded to the sight of another person performing an action. This led to the suggestion, endorsed by
many experts in the field including Vittorio Gallese and Marco Iacoboni, that mirror neurons play a causal role in
allowing us to understand the goals behind other people’s actions. By representing other people’s actions in the
movement pathways of our own brain, so the reasoning goes, these cells provide us with an instant simulation of
their intentions – a highly effective foundation for empathy. It’s a simple and seductive idea. What the newspaper
reporters (and over-enthusiastic neuroscientists like Ramachandran) don’t tell you is just how controversial it
is.

The biggest and most obvious problem for anyone advocating the idea that mirror neurons play a central role in our
ability to understand other people’s actions is that we are quite clearly capable of understanding actions that we
are not able to perform ourselves. A non-player tennis fan who has never even held a racket doesn’t sit baffled as
Roger Federer swings his way to another victory. They understand fully what his aims are, even though they can’t
simulate his actions with their own racket-swinging motor cells. Similarly, we understand flying, slithering, coiling,
and any number of other creaturely movements, even if we don’t have the necessary flying or other cells to simulate
the act. From the medical literature there are also numerous examples of comprehension surviving after
damage to motor networks – people who can understand speech, though they can’t produce it; others who
recognize facial expressions, though their own facial movements are compromised. Perhaps most awkward of
all, there’s evidence that mirror neuron activity is greater when we view actions that are less familiar – such as
a meaningless gesture – as compared with gestures that are imbued with cultural meaning, such as the victory
sign.

Mirror neuron advocates generally accept that action understanding is possible without corresponding mirror
neuron activity, but they say mirror neurons bring an extra layer of depth to understanding. In a journal debate
published in 2011 in Perspectives in Psychological Science, Marco Iacoboni insists that mirror neurons are important
for action understanding, and he endorses the idea that they somehow allow “an understanding from within”
whatever that means. Critics in the field believe otherwise. Gregory Hickok at the University of California Irvine
thinks the function of mirror neurons is not about understanding others’ actions per se (something we can clearly
do without mirror neurons), but about using others’ actions in the process of making our own choice of how to
act. Seen this way, mirror neuron activity is just as likely a consequence of action understanding as a cause.

What about the grand claims that mirror neurons played a central role in accelerating human social and
cultural evolution by making us empathize with each other? Troublesome findings here center on the fact that
mirror neurons appear to acquire their properties through experience. Research by Cecelia Heyes and others has
shown that learning experiences can reverse, abolish, or create mirror-like properties in motor cells. In one pertinent
study, Heyes and her co-workers had participants respond to the sight of another person’s index finger movement
by always moving their own little finger. This training led to a change in their usual mirror activity. Twenty-four
hours later, the sight of another person’s index finger movement triggered more excitatory activity in the
participants’ own little finger muscle than in their own index finger muscle. This is the opposite to what happened
pre-training and shows how easily experience shapes mirror-like activity in the brain. The reason these findings
are awkward for the bold claims about mirror neurons is that they imply experience affects mirror neuron activity
as much as mirror neuron activity affects the way we process the world. In other words, it can’t reasonably be
claimed that mirror neurons made us imitate and empathize with each other, if the way we choose to behave
instead dictates the way our mirror neurons work. On their role in cultural evolution, Heyes says mirror neurons are
affected just as much, if not more, by cultural practices, such as dancing and music, compared with how much they
likely influenced these practices. Contradicting Ramachandran and others, she wrote in 2011: “mirror mechanisms
are not at one end of a causal arrow between biology and culture – they owe at least as much as they lend to
cultural processes.”

More evidence that mirror neurons are not always positioned at the start of a causal path comes from studies
showing how the activity of these cells is modulated by such factors as the monkey’s angle of view, the reward value
of the observed movement, and the overall goal of a seen movement, such as whether it is intended to grasp an
object or place it in the mouth. These findings are significant because they show how mirror neurons are not merely
activated by incoming sensory information, but also by formulations developed elsewhere in the brain about the
meaning of what is being observed. Of course these results don’t detract from the fascination of mirror neurons,
but they do show how these cells are embedded in a complex network of brain activity. They are as much
influenced by our perception and understanding as the cause of it.

Finally, what about the suggestion that mirror neurons play a role in autism? It’s here that the hype about mirror
neurons is probably the least justified. As Morton Ann Gernsbacher put it in 2011: “Perhaps no other application of
mirror neuron hypothesizing has been characterized by as much speculation as that of the relation between mirror
neurons and the autistic phenotype.” Gernsbacher goes on to review the relevant literature, including numerous
findings showing that most people with autism have no problem understanding other people’s actions (contrary to
the broken mirror hypothesis) and that they show normal imitation abilities and reflexes. What’s more, for a
review paper published in 2013, Antonia Hamilton at the University of Nottingham assessed the results from 25
relevant studies that used a range of methods including fMRI, EEG, and eye-tracking to investigate mirror neuron
functioning in people with autism. She concluded, “there is little evidence for a global dysfunction of the mirror
system in autism”, adding: “Interventions based on this theory are unlikely to be helpful.”

Motor cells that respond to the sight of other people moving are intriguing, there’s no doubt about that. It’s likely
they play a role in important social cognitions, like empathy and understanding other people’s intentions. But to
claim that they make us empathic, and to raise them up as neuroscience’s holy grail, as the ultimate brain-
based root of humanity, is hype. The evidence I’ve mentioned is admittedly somewhat selective. I’ve tried to
counteract the hyperbole and show just how much debate and doubt surrounds exactly what these cells are and
what they do. In fact, it’s worth finishing this section by noting that the very existence of mirror neurons in the
human brain has only been confirmed tentatively.

The first direct evidence for mirror neurons came from the recording of individual cells in the brains of epileptic
patients in a paper published as recently as 2010. Roy Mukamel and his colleagues identified a subset of cells at the
front of the brain that responded to the sight of various facial expressions and head gestures, and to the execution
of those same gestures (but not to words describing those expressions and gestures). However, bear in mind
that the previous year, another human study using fMRI found no evidence that cells in postulated mirror neuron
areas showed signs of adaptation. That is, they didn’t exhibit reduced activity to continued stimulation after
executing and then viewing (or vice versa) the same hand movements, which is what you’d expect them to do if
they had mirror properties. “There is no meaningful evidence for … a mirror neuron system in humans,” the lead
author Alfonso Caramazza at Harvard University told me at the time. His comments are a reminder of where we
are at in the study of these cells. We are still trying to confirm whether they exist in humans, and if so, where they
are and what exactly they do.

Mirror neurons are fascinating but, for now at least, they aren’t the answer to what makes us human.

Bibliografie 8
Edward E. Smith; Susan Nolen-Hoeksema; Barbara L. Fredrickson; Geoffrey R. Loftus – Introducere în psihologie, ediția a XIV-a, p.53-56;
Neurotransmiţătorii

Peste 70 de neurotransmiţători diferiţi au fost deja identificaţi, iar alţii aşteaptă încă să fie descoperiţi. Unii neurotransmiţători se pot cupla cu mai multe tipuri de receptori, producând efecte diferite la receptori diferiţi. De exemplu, glutamatul poate activa cel puţin 10 tipuri de molecule receptoare, permiţând neuronilor să reacţioneze diferit la acelaşi neurotransmiţător (Madden, 2002). Unii neurotransmiţători sunt excitatori pentru anumiţi receptori şi inhibitori pentru alţii, deoarece moleculele receptoare sunt diferite. În acest capitol evident că nu putem discuta toţi neurotransmiţătorii prezenţi în sistemul nervos. În schimb, ne vom concentra asupra unui grup mai important care influenţează comportamentul.
ACETILCOLINA. Acetilcolina este prezentă în multe sinapse, de la un capăt la altul al sistemului nervos. Are, în
general, o acţiune excitatoare, dar poate acţiona şi ca inhibitor, în funcţie de tipul de molecule receptoare
prezente în membrana neuronului receptor. Acetilcolina este prezentă în cantităţi mari într-o arie a
prozencefalului numită hipocamp, care joacă un rol cheie în înregistrarea amintirilor recente (Eichenbaum,
2000). Acest neurotransmiţător are un rol cheie în boala Alzheimer, o tulburare cu efect devastator, care afectează
multe persoane vârstnice alterând memoria şi alte funcţii cognitive. Neuronii din prozencefal care produc
acetilcolină tind să degenereze la pacienţii cu Alzheimer şi produc cantităţi din ce în ce mai mici din această
substanţă. Cu cât se produce mai puţină acetilcolină, cu atât pierderea memoriei este mai severă. Acetilcolina
este eliberată în fiecare sinapsă în care terminaţiile neuronale ajung la o fibră a musculaturii scheletice. Acetilcolina
este condusă către nişte structuri mici, numite plăci motorii, situate pe celulele musculare. Plăcile motorii sunt
acoperite cu molecule receptoare care, în momentul în care sunt activate de acetilcolină, declanşează o reacţie
moleculară în lanţ în interiorul celulelor musculare, determinând contracţia muşchiului.
Unele medicamente care afectează acţiunea acetilcolinei pot induce o paralizie musculară. De exemplu, toxina
botulinică, secretată de bacteriile prezente în alimentele prost conservate, blochează eliberarea acetilcolinei în
sinapsa neuro-musculară şi poate cauza moartea prin paralizia muşchilor respiratori. Unele gaze neuroparalitice,
produse pentru scopuri militare, ca şi o mare parte din pesticide induc paralizia prin distrugerea enzimei care
descompune acetilcolina după ce neuronul a generat impulsul nervos. Când procesul de degradare a acetilcolinei
nu mai are loc, se produce o acumulare necontrolată de acetilcolină în sistemul nervos şi transmisia sinaptică
normală devine imposibilă.
NOREPINEFRINA. Norepinefrina este un neurotransmiţător din clasa monoaminelor. Este produsă, în principal,
de neuronii din trunchiul cerebral. Cocaina şi amfetaminele prelungesc acţiunea norepinefrinei, încetinindu-i
reabsorbţia. Datorită acestei întârzieri a reabsorbţiei, neuronii receptori rămân activaţi pentru perioade mai lungi
de timp, ceea ce conferă efect de stimulare psihologică acestor droguri. Spre deosebire de cocaină, litiul grăbeşte
reabsorbţia norepinefrinei, ducând persoana la depresie. Orice substanţă care reduce sau măreşte cantitatea de
norepinefrină din creier este corelată cu o schimbare echivalentă a dispoziţiei individului.
DOPAMINA. Dopamina este şi ea o monoamină şi are o structură chimică foarte apropiată de cea a norepinefrinei.
Eliberarea dopaminei în anumite zone ale creierului duce la o puternică senzaţie de plăcere; cercetările actuale
investighează rolul dopaminei în instalarea dependenţelor. Un nivel prea mare de dopamină în anumite arii ale
cortexului poate duce la schizofrenie, iar unul prea scăzut în altele poate duce la apariţia bolii Parkinson.
Medicamentele folosite în tratamentul schizofreniei, cum sunt clorpromazina sau clozapina, blochează receptorii
dopaminei. Un efect contrar îl are L-dopa, un medicament prescris în mod curent în tratamentul bolii Parkinson.
Acesta măreşte cantitatea de dopamină din creier.

SEROTONINA. Serotonina este o altă monoamină. Ca şi norepinefrina, serotonina joacă un rol important în
reglarea dispoziţiei. De exemplu, nivelurile scăzute de serotonină au fost asociate cu depresia. Substanţele care
inhibă reabsorbţia serotoninei sunt anti-depresivele. Acestea măresc nivelurile de serotonină în creier, blocând
reabsorbţia (recaptarea) ei în neuroni. Cipralex, Prozac, Zoloft şi Paxil, medicamente prescrise în mod obişnuit în
tratamentul depresiei, sunt inhibitori ai recaptării (reabsorbţiei) serotoninei. Deoarece serotonina are şi un rol
important în reglarea somnului şi a apetitului, ea este folosită în tratamentul tulburării comportamentului alimentar
numită bulimie. Interesant este că drogul halucinogen dietilamida acidului lisergic (LSD — lysergic acid
diethylamide) îşi face simţite efectele prin legarea de receptorii serotoninei din creier.

GLUTAMATUL. Glutamatul, un aminoacid cu efect excitant, este neurotransmiţătorul cel mai des întâlnit în
neuronii din sistemul nervos. Glutamatul excită membrana neurală prin depolarizarea neuronului pe care este
eliberat. Dintre cele trei tipuri mari de receptori ai glutamatului, se consideră că NMDA este cel care afectează cel
mai mult învăţarea şi memoria. Numele lui vine de la N-metil-D-aspartat, substanţa chimică folosită pentru
detectarea neurotransmiţătorului. Neuronii din hipocamp (o zonă situată foarte aproape de centrul creierului) sunt
foarte bogaţi în receptori NMDA şi această zonă pare să joace un rol critic în engramarea informaţiilor noi.
Perturbări în echilibrul glutamatului au fost detectate în schizofrenie.

GABA. Un alt aminoacid şi neurotransmiţător important este acidul gama-aminobutiric (GABA). Această
substanţă este unul dintre cei mai importanţi neuroinhibitori; de fapt, cea mai mare parte a sinapselor din creier
folosesc GABA. Picrotoxina, care blochează receptorii GABA, produce convulsii deoarece mişcările musculare nu
pot fi controlate de creier fără influenţa inhibitoare a GABA. Efectul tranchilizant al unor medicamente folosite în
controlul anxietăţii, benzodiazepinele, este rezultatul acţiunii inhibitorii a GABA. Funcţiile acestor
neurotransmiţători sunt prezentate pe scurt în tabelul sinoptic următor.

Neurotransmițătorii și funcțiile lor

NEUROTRANSMIȚĂTOR – FUNCȚIE
ACETILCOLINĂ – Participă la mecanismele memoriei și atenției; scăderea nivelului de acetilcolină este asociată cu instalarea bolii Alzheimer. Participă și la transmiterea semnalelor de la nerv la mușchi.
NOREPINEFRINĂ (NORADRENALINĂ) – Substanțele psihostimulante măresc cantitatea de norepinefrină prezentă în sistemul nervos. Nivelurile scăzute duc la depresie la unii indivizi.
DOPAMINĂ – Mediază efectele gratificărilor naturale (mâncare și sex, de exemplu) și ale drogurilor; este implicată în motivație și în halucinațiile din schizofrenie.
SEROTONINĂ – Importantă în menținerea dispoziției și a comportamentului social. Medicamentele care ameliorează depresia și anxietatea cresc nivelurile de serotonină în sinapse, dar nu funcționează la toți pacienții.
GLUTAMAT – Unul dintre cei mai importanți neurotransmițători excitatori din creier. Implicat în mecanismele învățării și memoriei.
GABA – Unul dintre cei mai importanți neurotransmițători inhibitori din creier. Medicamentele care reduc anxietatea măresc activitatea GABA.

Sumarul secţiunii

1.Printre cei mai importanți neurotransmițători se numără: acetilcolina, norepinefrina, dopamina, serotonina,
acidul gama-aminobutiric (GABA) și glutamatul.
2.Neurotransmițătorii au asupra neuronilor un efect excitant sau unul inhibitor, în funcție de tipul de receptor postsinaptic cu care se cuplează.

Bibliografie 9
David J. Linden – Mintea ca întâmplare. Cum ne-a oferit evoluția creierului iubirea, memoria, visele și pe Dumnezeu, p.57-62;
În practică, decizia dacă un neuron declanşează sau nu un impuls este determinată de acţiunea simultană a mai multor sinapse, acţiunile excitatorii şi inhibitorii cumulându-se pentru a produce efectul total.

Amintiți-vă că neuronul mediu din creier primeşte 5000 de sinapse. Din acestea, cam 4500 vor fi excitatorii şi 500 inhibitorii. Deşi este probabil ca doar un mic număr de neuroni să fie activi la un moment dat, acţiunea scurtă a unei singure sinapse excitatorii nu-i va conduce pe cei mai mulţi dintre ei spre declanşarea unui impuls nervos, ci va fi necesară acţiunea simultană a 5 până la 20 de sinapse (ori chiar mai multe, pentru unii neuroni).
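Raționamentul de însumare de mai sus poate fi redat printr-o schiță minimală în Python; unitățile (+1/-1 pe sinapsă) și pragul ales (10 intrări simultane, în intervalul 5-20 amintit în text) sunt presupuneri pur ilustrative, nu un model biofizic.

```python
def declanseaza_impuls(excitatorii_active: int, inhibitorii_active: int,
                       prag_intrari_simultane: int = 10) -> bool:
    """Insumare liniara simplificata: fiecare sinapsa excitatorie activa
    contribuie cu +1, fiecare sinapsa inhibitorie cu -1 (unitati arbitrare,
    alese doar pentru ilustrare); pragul de ~10 intrari simultane este in
    intervalul 5-20 mentionat in text."""
    efect_net = excitatorii_active - inhibitorii_active
    return efect_net >= prag_intrari_simultane

print(declanseaza_impuls(1, 0))    # False: o singura sinapsa nu este suficienta
print(declanseaza_impuls(12, 3))   # False: efect net 9, sub prag
print(declanseaza_impuls(15, 2))   # True:  efect net 13, peste prag
```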
Glutamatul şi GABA sunt neurotransmiţători cu acțiune rapidă: când se leagă de receptori, schimbările electrice pe care le produc se petrec în câteva milisecunde. Ei sunt cei mai dominanți neurotransmițători rapizi din creier, dar mai există şi alți neurotransmițători rapizi. Glicina este un neurotransmițător inhibitor ce acționează ca GABA: deschide un canal ionic asociat cu un receptor pentru a permite ionilor de clor să pătrundă şi să inhibe neuronul postsinaptic. Otrava numită stricnină, ce apare deseori în romanele polițiste, blochează receptorii glicinei şi previne activarea lor. Alt exemplu este acetilcolina, un transmițător excitator care, ca şi glutamatul, deschide un canal ionic ce permite atât pătrunderea sodiului, cât şi ieşirea potasiului. Ea acționează în unele părți ale creierului, ca şi în sinapsele dintre neuroni şi muşchi. Curara, otrava sud-americană folosită la săgeți, blochează acest receptor. Animalele străpunse de o săgeată otrăvită cu curara devin total inerte deoarece comenzile venite de la nervi nu mai reuşesc să activeze contracția musculară. În plus față de neurotransmițătorii rapizi, cum ar fi glutamatul, GABA, glicina şi acetilcolina, mai sunt şi alți neurotransmițători ce acționează mai lent. Aceşti neurotransmițători se leagă de o clasă diferită de receptori. În loc să deschidă canale ionice, ei activează procese biochimice din interiorul neuronilor. Aceste evenimente biochimice produc schimbări ce debutează lent, dar sunt de durată lungă: în mod tipic de la 200 de milisecunde la 10 secunde. Mulți din aceşti neurotransmițători cu acțiune lentă nu produc un efect electric direct: potențialul membranei nu se schimbă în direcția pozitivă sau negativă după ce aceştia se leagă de receptorul lor. Mai exact, ei schimbă proprietățile electrice ale celulei la un nivel detectabil numai atunci când intervine şi acțiunea neurotransmițătorilor rapizi.

De exemplu, neurotransmițătorul cu acțiune lentă numit noradrenalină poate schimba voltajul la care poate fi declanşat un impuls, de la nivelul normal de -60 mV la -65 mV. Într-un neuron în stare de repaus nu va exista nicio schimbare după eliberarea noradrenalinei, dar când acest neuron primeşte un input sinaptic rapid, aceasta va apărea. Dacă sinapsele vor elibera glutamat asupra acestui neuron şi acest lucru schimbă potențialul membranei din starea de repaus de la -70 mV la -65 mV, acest eveniment va avea ca rezultat declanşarea unui impuls. Aceeaşi acțiune a glutamatului în absența noradrenalinei nu va reuşi să declanşeze un impuls. În termeni biochimici, am spune că noradrenalina exercită o acțiune modulatoare asupra generării impulsului: nu determină direct generarea, dar schimbă proprietățile generării impulsului produs de către alți neurotransmițători. Concluzia este că neurotransmițătorii rapizi sunt buni pentru transmiterea unei anumite categorii de informație ce necesită semnale rapide, în timp ce neurotransmițătorii lenți sunt mai buni la setarea tonului şi amplitudinii generale.
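Exemplul numeric de mai sus poate fi verificat printr-o schiță minimală; valorile (-60, -65, -70 mV) sunt exact cele din text, iar restul (numele de variabile, convenția de comparație) este doar o alegere ilustrativă.

```python
PRAG_FARA_NORADRENALINA = -60.0  # mV, valoarea din text
PRAG_CU_NORADRENALINA = -65.0    # mV, pragul coborat prin modulare
POTENTIAL_DE_REPAUS = -70.0      # mV

def genereaza_impuls(potential_membranar_mv: float, prag_mv: float) -> bool:
    """Impulsul porneste daca depolarizarea atinge pragul
    (valori mai putin negative decat pragul)."""
    return potential_membranar_mv >= prag_mv

# glutamatul depolarizeaza membrana de la -70 mV la -65 mV
potential_dupa_glutamat = POTENTIAL_DE_REPAUS + 5.0  # -65 mV
print(genereaza_impuls(potential_dupa_glutamat, PRAG_FARA_NORADRENALINA))  # False: -65 nu atinge -60
print(genereaza_impuls(potential_dupa_glutamat, PRAG_CU_NORADRENALINA))    # True: pragul coborat este atins
```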

Când neurotransmițătorii sunt eliberați în fanta sinaptică, ei într-un final se dispersează, ajungându-se la o
concentrație scăzută. Mai devreme, am evocat imaginea unei picături de vin roşu ce cade într-un pahar plin cu apă
şi care, într-un final, va transforma conținutul paharului într-un roz foarte pal. Acest lucru ar fi în regulă dacă
neurotransmițătorii ar fi eliberați o singură dată. Dar, de-a lungul timpului, dacă se eliberează în mod repetat
molecule neurotransmițătoare, devine necesară existența unui mecanism care să curețe neurotransmițătorul
din lichidul cefalorahidian ce înconjoară celulele creierului înainte ca acesta să atingă concentrații periculos de
mari (activarea continuă a receptorilor neurotransmițători poate ucide deseori neuronii). În termenii imaginii
noastre cu paharul, dacă picurăm vin în mod repetat, paharul ar obține în final o nuanță uniformă de roz, apoi roşu.
În esență, când vine vorba de curățenia de după eliberarea neurotransmițătorilor, cineva trebuie „să ducă gunoiul".
Pentru unii neurotransmițători, există soluția get-beget americană: arde gunoiul în curtea din față. De exemplu,
acetilcolina este distrusă în fanta sinaptică de o enzimă construită special în acest scop. Mulți alți
neurotransmițători beneficiază de tratamentul european: sunt reciclați, proces care se numește recaptare.
Moleculele de glutamat, prin acțiunile unor proteine transportoare specializate din membrana externă, sunt luate
în celulele gliale, unde suportă o procesare biochimică înainte de a fi trimise în neuroni pentru a fi reutilizate. Cei
mai mulți din neurotransmițătorii lenți, cum ar fi dopamina şi noradrenalina, sunt duşi înapoi în terminațiile
axonale, unde pot fi „reîmpachetați" în vezicule şi reutilizați. În mod interesant GABA pare să meargă în două
direcții: este
luată atât de terminațiile axonale, cât şi de celulele gliale. Unii transportori de neurotransmițători sunt ținte
excelente pentru anumite medicamente psihoactive (cum ar fi antidepresivul Prozac şi rudele lui) deoarece blocarea
lor determină menținerea neurotransmițătorilor în sinapse şi obținerea unor concentrații mai mari.
Toată informația din creierul tău, de la senzația dată de mirosul unei flori la comenzile care îți mişcă brațul
pentru a lovi tacul de biliard şi la visul în care mergeai dezbrăcat(ă) la şcoală, este codată prin generarea
impulsurilor într-un ocean de neuroni, dens interconectați prin sinapse. Acum că am dobândit o înțelegere de
ansamblu a semnalizării electrice în creier, haide să ne gândim la provocările cu care trebuie să se confrunte creierul
în timp ce încearcă să creeze funcționarea mentală utilizând subansambluri nu prea grozave.
Prima provocare este limitarea ratei de generare a impulsului, cauzată de timpul necesar canalelor de sodiu şi potasiu sensibile la voltaj pentru a se deschide şi închide. Drept rezultat, neuronii individuali sunt în mod tipic limitați la o rată maximă de generare în jurul a 400 de impulsuri pe secundă (în comparație cu cele 100 de miliarde de operații pe secundă pe care le poate efectua un computer desktop modern).

A doua provocare este că axonii sunt conductori electrici lenţi şi cu scurgeri, ce propagă în mod tipic impulsul
la viteza relativ mică de 160 km/oră (comparativ cu semnalele electrice dintr-un aparat creat de om, ce se
deplasează aproape cu 1.079.252.848 km/oră).
A treia provocare este că odată ce impulsul a ajuns în terminația axonală, există o mare probabilitate (în medie cam de 70%) ca întreaga excursie să fi fost degeaba şi să nu fie eliberat niciun neurotransmițător. Ce afacere
de toată jena! Aceste constrângeri ar fi putut fi tolerabile pentru problemele simple rezolvate de sistemul nervos al
unui vierme sau al unei meduze, dar pentru creierul uman, limitările impuse de funcționarea neuronală electrică
(străveche) sunt considerabile.
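Plecând de la rata medie de eșec de ~70% menționată mai sus și presupunând (simplificare pur ilustrativă) că eșecurile sunt independente între sinapse, schița minimală de mai jos arată cum convergența mai multor sinapse compensează nefiabilitatea fiecăreia.

```python
P_ESEC_ELIBERARE = 0.70  # probabilitatea medie de esec mentionata in text

def p_cel_putin_o_eliberare(numar_sinapse_active: int) -> float:
    """Probabilitatea ca macar o sinapsa activa sa elibereze neurotransmitator,
    presupunand (simplificare ilustrativa) ca esecurile sunt independente."""
    return 1 - P_ESEC_ELIBERARE ** numar_sinapse_active

for n in (1, 5, 10, 20):
    print(n, round(p_cel_putin_o_eliberare(n), 3))
# 1 -> 0.3, 5 -> 0.832, 10 -> 0.972, 20 -> 0.999
```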

Cum reuşeşte creierul să creeze funcționarea mentală umană cu neuroni ce sunt subansambluri atât de prăpădite? Mai la obiect, date fiind comparațiile de mai sus, cum se face că creierul nostru poate îndeplini cu ușurință anumite sarcini ce încurcă în mod obişnuit computerele electronice, de exemplu, a recunoaşte instantaneu că imaginea unui Rottweiler pozat din față şi alta a unui pudel pitic pozat din spate ar trebui amândouă clasificate
sub eticheta de „câine"? Aceasta este o întrebare profundă, centrală pentru neurobiologie şi al cărei răspuns nu
este la îndemână.

Cu toate acestea, o explicație mai generală pare să fie următoarea: Neuronii individuali sunt procesoare
îngrozitor de lente, ineficiente şi nedemne de încredere. Dar creierul este o aglomerare de 86 de miliarde de
asemenea procesoare, interconectate masiv prin 500 de trilioane de sinapse. Drept rezultat, creierul poate
rezolva probleme dificile utilizând procesarea simultană şi integrarea subsecventă a multor neuroni. Creierul este
o încropeală în care un număr enorm de procesoare interconectate pot funcţiona impresionant, chiar dacă fiecare
procesor individual prezintă limitări severe. În plus, în timp ce diagrama generală de interconectare a creierului este
înscrisă în codul genetic, interconectarea de fineţe este ghidată de tipare de activitate ce permit experienţei să
modeleze forţa şi tiparul conexiunilor sinaptice, proces numit plasticitate sinaptică (sau neuroplasticitate, pe care
o voi discuta în capitolele 3 şi 5). Arhitectura paralelă şi masiv interconectată a creierului, în combinaţie cu
capacitatea lui de rearanjare fină a conexiunilor, este cea care permite creierului să construiască un mecanism
atât de impresionant din subansambluri atât de prăpădite.
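Ca verificare rapidă de ordin de mărime, cifrele citate (86 de miliarde de neuroni, 500 de trilioane de sinapse) sunt consistente cu cele aproximativ 5000 de sinapse pe neuron menționate mai devreme; schița de mai jos este doar acest calcul.

```python
NEURONI = 86e9     # 86 de miliarde, cifra din text
SINAPSE = 500e12   # 500 de trilioane, cifra din text

sinapse_per_neuron = SINAPSE / NEURONI
print(round(sinapse_per_neuron))  # ~5814, acelasi ordin de marime cu ~5000 de sinapse/neuron
```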
Bibliografie 10
Leon Dănăilă; Mihai Golu – Tratat de neuropsihologie, volumul 1, p.93-94; 112;
Substanțe neurotransmițătoare
În sistemul nervos sunt produse numeroase substanţe
neurotransmiţătoare, fiecare neuron eliberând unul sau chiar
mai multe din acestea. Neurotransmiţătorii sunt împărţiţi în
două grupuri.
1.Din primul grup fac parte mediatorii cu moleculă mică şi
acţiune rapidă.
2.Al doilea grup cuprinde un mare număr de neuropeptide
cu dimensiune moleculară mult mai mare şi cu acțiune mult
mai lentă.
Neurotransmiţătorii din primul grup sunt implicaţi în
majoritatea răspunsurilor prompte ale sistemului nervos, cum
ar fi transmiterea semnalelor senzoriale spre centrii nervoşi şi
a semnalelor motorii înapoi în muşchi. Pe de altă parte,
neuropeptidele produc acţiuni prelungite, de tipul modificării
pe termen lung a numărului receptorilor, închiderii de durată a
canalelor ionice, sau a schimbării pe termen lung a numărului
sinapselor.
În sfera neurotransmiţătorilor putem include:
1.acetilcolina care stimulează excitaţia muşchilor scheletici și
este implicată în procesele atenționale;
2.un grup de substanţe numite monoamine (epinefrina, norepinefrina, dopamina şi serotonina), provenite din
molecule de aminoacizi modificate, implicate în menținerea dispoziției emoționale și a cogniției sociale,
3.numeroşi aminoacizi (glicina, acidul glutamic, acidul aspartic şi acidul gama-aminobutiric - GABA) şi
4.un mare grup de peptide, fiecare din ele constând dintr-un lanţ relativ scurt de aminoacizi.
Aceste substanțe sunt sintetizate de obicei în citoplasma butonilor sinaptici şi înmagazinate în veziculele sinaptice.
După eliberare, unii neurotransmițători sunt distruşi de enzimele prezente în despicătura sinaptică, iar alții sunt
transportați din nou în butonul sinaptic sau sunt preluați de către celulele nevrogliale și duși la neuronii din
vecinătate.
Concluzii
Din datele cunoscute în prezent reiese că unele substanțe acționează ca neurotransmițători sinaptici, în timp ce
altele sunt neuromodulatori. Ultimii dirijează subtilitățile neurotransmiterii, iar informația vehiculată este codată
și convertită în semnale electrice care se transmit de-a lungul axonului până la terminalul nervos. La acest nivel,
semnalele electrice sunt preluate de unul sau mai mulți mesageri chimici care traversează fanta sinaptică. Ca și
ARN-ul și ADN-ul, niciunul dintre acești mesageri chimici nu transmite numai o singură informație: unii neurotransmițători au și alte funcții celulare, participând pe cale metabolică la constituirea unor căi biochimice sau polimerizând aminoacizii în proteine. Glutamatul și GABA acționează ca substrat în metabolismul
intermediar, iar ATP reprezintă factorul principal al energiei metabolice. Prezența neurotransmițătorilor
presupune și existența receptorilor caracteristici fiecăruia dintre aceștia. Odată neurotransmițătorul ajuns în
spațiul sinaptic, acesta difuzează către membrana postsinaptică, unde se combină cu receptorul specific.
Bibliografie 11
Oliver von Bohlen und Halbach; Rolf Dermietzel – Neurotransmitters and
Neuromodulators: Handbook of Receptors and Biological Effects, p.3-5; 16;
Neuroactive Substances

A variety of biologically active substances, as well as metabolic intermediates, are capable of inducing neurotransmitter or neuromodulator effects. A large diversity of neuroactive substances regarding their metabolic origin exists. The molecular spectrum of neuroactive substances ranges from ordinary intermediates of amino acid metabolism, like glutamate and GABA, to highly effective peptides, proteohormones and corticoids.

Recent evidence indicates that neuronal messengers convey information in a complex sense entailing a variety of processes. These include: reciprocal influence on the synthesis of functionally linked neuronal messengers; induction of different temporal patterns in terms of short-term and long-term effects; shaping of network topology including synaptic plasticity during long-term potentiation. Chemical neurotransmission is not restricted to central nervous synapses but occurs in peripheral tissues as well, including neuromuscular and neuroglandular junctions.

Neuroactive Substances
Although functional overlap between neurotransmitters and neuromodulators is quite common, this classification
has proven useful for practical purposes.
Neurotransmitters
Neurotransmitters are the most common class of chemical messengers in the nervous system. A neuroactive
substance has to fulfill certain criteria before it can be classified as a neurotransmitter (Werman, 1966).
1.It must be of neuronal origin and accumulate in presynaptic terminals, from where it is released upon
depolarization.
2.The released neurotransmitter must induce postsynaptic effects upon its target cell, which are mediated by
neurotransmitter-specific receptors.
3.The substance must be metabolically inactivated or cleared from the synaptic cleft by re-uptake
mechanisms.
4.Experimental application of the substance to nervous tissue must produce effects comparable to those
induced by the naturally occurring neurotransmitter.
A neuroactive substance has to meet all of the above criteria to justify its classification as a neurotransmitter.
Based on their chemical nature, neurotransmitters can be subdivided into two major groups: biogenic amines and
small amino acids.
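As a compact restatement, the four Werman criteria above can be encoded as a simple checklist; this is a minimal sketch under the assumption that each criterion is a yes/no judgment, and the class and field names are purely illustrative, not a published API.

```python
from dataclasses import dataclass

@dataclass
class NeuroactiveSubstance:
    """Werman's (1966) four criteria quoted above, encoded as boolean fields."""
    neuronal_origin_and_presynaptic_release: bool
    receptor_mediated_postsynaptic_effect: bool
    inactivation_or_reuptake_from_cleft: bool
    applied_substance_mimics_natural_effect: bool

    def is_neurotransmitter(self) -> bool:
        # all four criteria must be met for the classification to apply
        return all((self.neuronal_origin_and_presynaptic_release,
                    self.receptor_mediated_postsynaptic_effect,
                    self.inactivation_or_reuptake_from_cleft,
                    self.applied_substance_mimics_natural_effect))

# a "putative neurotransmitter" that fails one criterion is not classified as such
candidate = NeuroactiveSubstance(True, True, False, True)
print(candidate.is_neurotransmitter())  # False
```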
Neuromodulators
In contrast to neurotransmitters, neuromodulators can be divided into several subclasses. The largest subclass is
composed of neuropeptides. Additional neuromodulators are provided by some neurobiologically active gaseous
substances and some derivatives of fatty acid metabolism. Neuropeptides are synthesized by neurons and released
from their presynaptic terminals, as is the case for neurotransmitters. Like neurotransmitters, some of the
neuropeptides act at postsynaptic sites, but since they do not meet all of the above criteria they are not classified
as such. These neuropeptides are frequently labeled “putative neurotransmitters” (for example: endorphins). Other
neuropeptides are released by neurons, but show no effects on neuronal activity (e.g. follicle-stimulating hormone
(FSH) produced in gonadotrophs of the anterior pituitary). These neuropeptides target to tissues in the periphery
of the body and can therefore be classified as neurohormones. Consequently, not all neuropeptides function as
neuromodulators. The fact that peptides are synthesized in neurons and are able to induce specific effects via
neuronal receptors led de Wied (1987) to formulate the neuropeptide postulate; in essence, this states that peptides
which are of neuronal origin and exert effects on neuronal activities are classified as neuropeptides.
The Blood–Brain Barrier
Neuroactive substances are not exclusively expressed in the central nervous system, but are also common in
peripheral tissues. One prominent example is represented by the gastro-intestinal peptides which possess a
chimeric function as enteric hormones and neuromodulators. This dual function requires mechanisms that restrict
or facilitate the entry of neuroactive substances from the periphery into the central nervous system. The highly
regulated system for controlled exchange of neurobiologically relevant substances resides at the border between
the general blood circulation and the brain parenchyma in the form of the blood–brain barrier. The existence of a
functional barrier between the blood and the brain was first demonstrated by Paul Ehrlich at the beginning of the
20th century. He observed that, when injected into the circulation, dyes like methylene blue stained
the parenchyma of most organs of the body but not the brain. Injection of dyes into the cerebrospinal
fluid, however, led to a staining of the brain, but not the body. These experiments were the first demonstration
that a
barrier between the blood and the brain exists and that this barrier blocks all free transport, regardless of the
direction from which the barrier is approached by the substance.
In general, substances can cross the blood–brain barrier by four different pathways:
• penetration via pores and transcytosis;
• transmembrane diffusion;
• carrier-mediated mechanisms and transporters;
• retrograde neuronal transport, thus by-passing the blood–brain barrier.
Some chemical properties enable substances to cross the blood–brain barrier. The properties which affect
permeability include
• lipid solubility,
• molecular weight and
• the ability to form electro-neutral complexes.
The most convenient route for molecules to cross the blood–brain barrier is by making use of specific receptor-
mediated mechanisms or by transporters. In order to do this, the substance has first to bind to a receptor on the
endothelium; and second, the formed ligand–receptor complex has to be internalized and transferred via
endosomes into the endothelial cytoplasm. Finally, the ligand–receptor complex has to be degraded and the ligand
can then be released by exocytosis on the opposite side of the barrier.
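The permeability-related properties listed above can be sketched as a rough heuristic; note that the 500 Da molecular-weight cut-off used below is a commonly cited rule of thumb assumed for illustration, not a figure given in this excerpt, and the example substances are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Substance:
    name: str
    lipid_soluble: bool           # the three permeability-related properties named above
    molecular_weight_da: float
    electro_neutral: bool

def plausible_entry_route(s: Substance) -> str:
    """Rough heuristic mapping the listed properties onto the entry routes above.
    The 500 Da cut-off is an assumption of this sketch, not a figure from the text."""
    if s.lipid_soluble and s.electro_neutral and s.molecular_weight_da < 500:
        return "transmembrane diffusion"
    return "carrier-/receptor-mediated transport (or no free passage at all)"

print(plausible_entry_route(Substance("small lipophilic molecule", True, 180, True)))
print(plausible_entry_route(Substance("large neuropeptide", False, 4000, False)))
```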

Bibliografie 12
Mihai Ioan Botez – Neuropsihologie clinică
și neurologia comportamentului, p.145-146;
Neurochimia comportamentului
Robert Lalonde
Prezentul capitol are drept scop să lege principalele sisteme
neurochimice de diferite forme de comportament. Categoriile principale ale neurotransmiţătorilor (substanţe permiţând transmiterea sinaptică între neuroni) includ:
1.aminele biogene,
2.acizii aminaţi şi
3.neuropeptidele.
Rolul acestor neurotransmiţători este discutat în contextul
bolilor neurodegenerative şi simptomelor psihiatrice şi la
subiecţii normali sub efectul psihotropelor. Este bine chiar de
la început să se descrie următorii termeni care sunt necesari
pentru înţelegerea completă a neurochimiei transmisiei
sinaptice: neurotransmiţător, neuromodulator, receptor, agonist și antagonist.
1.Un neurotransmiţător este o substanţă chimică permiţând comunicarea între neuroni. Această moleculă este
eliberată în capul terminal al axonului, depăşeşte spaţiul intersinaptic şi se leagă de un

2.receptor (proteină) situat pe membrana postsinaptică.


TABELUL 8-1
Principalii neurotransmițători, agoniști și antagoniști

NEUROTRANSMIȚĂTOR – AGONIȘTI – ANTAGONIȘTI
ACETILCOLINĂ – muscarină, nicotină – scopolamină, atropină
DOPAMINĂ – apomorfină – clorpromazină, haloperidol, pimozidă
NOREPINEFRINĂ (NORADRENALINĂ) – alfa1: fenilefrină; alfa2: clonidină; beta1: dobutamină; beta2: zinterol – alfa1: prazosină; alfa2: yohimbină; beta1: practolol; beta2: propranolol
SEROTONINĂ – quipazină, buspironă – metisergid, ketanserină
GABA – GABA-A: muscimol; GABA-B: baclofen – GABA-A: bicuculină; GABA-B: faclofen
GLUTAMAT – NMDA, quisqualat – ketamină, dizocilpină

3.Un neuromodulator este o substanţă chimică ce măreşte sau diminuează efectul neurotransmiţătorului.
Receptorul posedă o configuraţie spaţială complementară cu cea a neurotransmiţătorului, acesta din urmă acţionând ca o cheie
într-o broască.

4.Un agonist mimează acţiunea neurotransmiţătorului, favorizând fie activarea, fie inhibarea neuronului.

5.Un antagonist blochează efectul neurotransmiţătorului. În sinteza neurotransmiţătorului există unul sau mai
mulţi precursori metabolici, ale căror conversii prin mijlocirea unor reacţii chimice ajung la neurotransmiţător.
Neurotransmiţătorul e catabolizat, consecutiv, în substanţe
inerte. Aceste reacţii chimice sunt facilitate fie prin enzime de
sinteză, fie prin enzime catabolice. Enzimele acţionează ca nişte
catalizatori, mărind viteza reacţiilor chimice de o manieră
unidirecţională. Exemple ale principalilor neurotransmiţători,
agonişti şi antagonişti, sunt prezentate în tabelul 8-1.

Bibliografie 13
Edward E. Smith; Susan Nolen-Hoeksema; Barbara L. Fredrickson; Geoffrey R. Loftus – Introducere în psihologie, ediția a XIV-a, p.865-867; 872; 69;
Bazele neurologice ale comportamentului
Terapii biologice
Abordarea biologică a comportamentului anormal presupune
că tulburările mentale, ca şi bolile fizice, sunt cauzate de
disfuncţii biochimice sau psihologice ale creierului. Terapiile biologice includ folosirea medicamentelor şi şocurile
electroconvulsive.
Medicaţia psihotropă
De departe terapia biologică cu cel mai mare succes este folosirea medicamentelor pentru modificarea
dispoziţiei sau comportamentului. Descoperirea, la începutul anilor'50, a medicamentelor care atenuau unele din
simptomele schizofreniei a reprezentat un progres major în tratamentul indivizilor cu manifestări severe ale bolii.
Pacienţii extrem de agitaţi nu au mai trebuit restricţionaţi fizic în cămăşile de forţă, iar pacienţi care îşi
petrecuseră cea mai mare parte a timpului halucinând şi având un comportament bizar au devenit mai sensibili
şi funcţionali. Ca urmare, saloanele de psihiatrie au devenit mai uşor de condus şi pacienţii au putut fi externaţi mai
repede. Câţiva ani mai târziu, descoperirea medicamentelor care puteau atenua depresia severă a avut un efect benefic similar asupra organizării tratamentului şi populaţiei spitalelor.
Tabelul 2 – Principalele tipuri de medicație psihotropă utilizate în clinică

CLASĂ DE MEDICAȚIE – EFECT TERAPEUTIC – MOD DE ACȚIUNE
ANTIPSIHOTICE (haloperidol, clorpromazină, risperidonă etc.) – Reducerea simptomatologiei psihotice (halucinații și delir) – Blocarea receptorilor dopaminergici
ANTIDEPRESIVE (fluoxetină, paroxetină, (es)citalopram) – Reducerea simptomatologiei depresive – Mărirea concentrației sinaptice de serotonină și norepinefrină
LITIU – Timostabilizator pentru simptomatologia bipolară – Încă neclar
ANTICONVULSIVANTE (acid valproic, carbamazepină, gabapentin etc.) – Tratamentul crizelor convulsive și efect timostabilizator – Blocarea canalelor de sodiu sau creșterea activității GABA
ANXIOLITICE (alprazolam, diazepam, clonazepam, temazepam etc.) – Reducerea simptomatologiei anxietății – Reducerea activității SNC
STIMULANTE (metilfenidat, amfetamină etc.) – Mărirea capacității de concentrare – Neclar (posibil creșterea nivelurilor dopaminergice)

MEDICAȚIA ANTIPSIHOTICĂ
Primul tip de medicație care atenuează simptomatologia psihotică a schizofreniei, clorpromazina, a fost descoperit întâmplător în anii ’50 și aparținea clasei fenotiazinelor.
Aceste medicamente au fost denumite inițial tranchilizante majore, dar acest termen este inadecvat în realitate,
deoarece medicamentele nu acționează asupra sistemului nervos în acelaşi mod ca barbituricele sau anxioliticele.
Ele pot provoca o oarecare somnolență sau letargie, dar nu induc somn profund, nici chiar la doze mari. De
asemenea, rareori provoacă o senzație plăcută, uşor euforică, când sunt asociate cu doze mici de medicamente
anxiolitice. De fapt, efectele psihologice ale medicamentelor antipsihotice, când sunt administrate indivizilor
normali, sunt, de regulă, neplăcute. Rareori se face abuz de aceste medicamente.
În capitolul 15, am discutat teoria conform căreia schizofrenia este provocată de o activitate excesivă a
neurotransmițătorului dopamină. Medicamentele antipsihotice blochează receptorii dopaminei. Pentru că
moleculele din medicație au o structură similară cu cea a moleculelor de dopamină, ele se leagă de receptorii
postsinaptici ai neuronilor dopaminergici, prin aceasta blocând accesul dopaminei la receptorii ei. (medicamentul
în sine nu activează receptorii). O singură sinapsă are multe molecule receptoare. Dacă toate sunt blocate,
transmisia prin sinapsa respectivă va eşua. Dacă numai unele dintre ele sunt blocate, transmisia va fi atenuată. Intensitatea efectului clinic al unui medicament antipsihotic este direct legată de capacitatea acestuia de a concura pentru receptorii dopaminergici.
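Relația descrisă mai sus (blocare totală = transmisie eșuată, blocare parțială = transmisie atenuată) poate fi redată printr-o schiță liniară minimală; proporționalitatea exactă este o presupunere pur didactică, nu un model farmacologic validat.

```python
def transmisie_relativa(fractie_receptori_blocati: float) -> float:
    """Model liniar, pur didactic: transmisia scade proportional cu fractia
    de receptori dopaminergici blocati de antipsihotic si esueaza complet
    cand toti receptorii sunt blocati."""
    if not 0.0 <= fractie_receptori_blocati <= 1.0:
        raise ValueError("fractia trebuie sa fie intre 0 si 1")
    return 1.0 - fractie_receptori_blocati

print(transmisie_relativa(0.0))  # 1.0 - transmisie intacta
print(transmisie_relativa(0.6))  # 0.4 - transmisie atenuata
print(transmisie_relativa(1.0))  # 0.0 - transmisia esueaza
```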
Medicamentele antipsihotice sunt eficiente în diminuarea halucinațiilor şi a confuziei şi în restabilirea proceselor
gândirii raționale. Aceste medicamente nu vindecă schizofrenia şi cei mai mulți pacienți trebuie să continue să
folosească medicamentul şi în afara spitalului. Multe dintre simptomele caracteristice schizofreniei —
insensibilitatea emoţională, izolarea, dificultăţile de menţinere a atenţiei — rămân. Cu toate acestea,
medicamentele antipsihotice scurtează durata de timp în care pacientul trebuie să rămână internat şi previn
recidivele.
Din nefericire, medicamentele antipsihotice nu ajută toţi bolnavii cu schizofrenie. În plus, medicamentele au
efecte secundare neplăcute — uscăciune a gurii, înceţoşare a vederii, dificultate de concentrare — care determină
mulţi pacienţi să îşi întrerupă medicaţia. Unul dintre cele mai serioase efecte secundare este o dereglare neurologică
cunoscută sub numele de dischinezie tardivă, care implică mişcări involuntare ale limbii, feţei, gurii sau
maxilarului. Pacienţii cu această afecţiune pot, involuntar, să îşi plescăie buzele, să facă zgomote de supt, să scoată
limba, să îşi umfle obrajii sau să facă alte mişcări bizare, iar şi iar. Dischinezia tardivă este frecvent ireversibilă şi
poate apărea la peste 20% dintre indivizii care folosesc medicamente antipsihotice pe perioade lungi de timp (Morgenstern, Glazer şi Doucette, 1993).
La finalul anilor ’70 au fost dezvoltate antipsihoticele atipice, o clasă de medicație la fel de eficientă în reducerea
simptomatologiei psihotice, despre care inițial s-a crezut că ar putea provoca mai puține efecte secundare. Printre
aceste medicamente se numără clozapina şi risperidona. Ele par să acţioneze legându-se de un alt tip de receptori
dopaminergici decât celelalte medicamente, deşi influenţează, de asemenea, alţi câţiva neurotransmiţători,
inclusiv serotonina. Deși inițial promovate ca producând mai puține efecte secundare decât antipsihoticele de primă
generație (în special haloperidolul), optimismul legat de subiect a fost, din păcate, hazardat, puține dintre tipurile
de medicație din clasa antipsihoticelor atipice fiind cu adevărat mai eficiente sau producând mai puține efecte
secundare, acestea având o eterogenitate clinică prea ridicată pentru a fi, în realitate, cuprinse în aceeași clasă
(Leucht și colab, 2009, 2013).
MEDICAȚIA ANTIDEPRESIVĂ
Medicamentele antidepresive ajută la restabilirea dispoziţiei indivizilor cu depresie. Aceste medicamente par să ofere energie prin creşterea disponibilităţii a doi neurotransmiţători (norepinefrina şi serotonina), ale căror niveluri sunt deficitare în unele cazuri de depresie. Medicamentele antidepresive acţionează în diverse moduri pentru a creşte nivelurile neurotransmiţătorilor (vezi şi schiţa de calcul de după listă):
1. Inhibitorii de monoamin-oxidază (MAOi) blochează activitatea unei enzime care poate neutraliza atât norepinefrina, cât şi serotonina, mărind astfel în creier concentraţia acestor doi neurotransmiţători.
2. Antidepresivele triciclice previn reabsorbţia serotoninei şi a norepinefrinei, prelungind astfel acţiunea neurotransmiţătorului. (Amintiţi-vă că reabsorbţia este procesul prin care neurotransmiţătorii sunt recaptaţi în terminaţiile nervoase care i-au eliberat.)
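O schiță de calcul pur ilustrativă a intuiției din spatele ambelor mecanisme (nu un model farmacocinetic validat; rata de eliberare și constantele de eliminare de mai jos sunt valori ipotetice): la echilibru, concentrația sinaptică a neurotransmițătorului crește dacă scade fie rata recaptării (triciclice), fie rata degradării enzimatice (MAOi).

# Schiță ilustrativă: concentrația sinaptică la echilibru, cu eliberare constantă
# și eliminare prin recaptare + degradare enzimatică (MAO); toate valorile sunt ipotetice.
def concentratie_echilibru(rata_eliberare=100.0, k_recaptare=0.8, k_mao=0.2):
    # la echilibru: eliberare = (k_recaptare + k_mao) * concentrație
    return rata_eliberare / (k_recaptare + k_mao)

bazal = concentratie_echilibru()
cu_triciclic = concentratie_echilibru(k_recaptare=0.4)  # recaptare parțial blocată
cu_maoi = concentratie_echilibru(k_mao=0.05)            # degradare enzimatică inhibată
print(f"bazal: {bazal:.0f} | triciclic: {cu_triciclic:.0f} | MAOi: {cu_maoi:.0f} (unități arbitrare)")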

Ambele clase de medicamente s-au dovedit eficiente în reducerea depresiei. Ca şi medicamentele antipsihotice,
antidepresivele pot produce câteva efecte secundare nedorite. Cele mai frecvente sunt: uscăciunea gurii, vederea
înceţoşată, constipaţia şi retenţia de urină. Ele pot, de asemenea, să provoace hipotensiune arterială ortostatică
severă, la fel ca şi modificări ale ritmului şi frecvenţei cardiace. O supradoză de antidepresive triciclice poate fi
fatală, un motiv serios de îngrijorare, căci un pacient depresiv poate deveni sinucigaş. Inhibitorii MAO pot
interacţiona cu anumite alimente, ca brânza, ciocolata şi vinul roşu, pentru a crea probleme cardiace severe.
Cercetarea pentru descoperirea de medicamente care să fie mai eficiente, să aibă mai puţine efecte secundare şi o
acţiune mai rapidă, s-a intensificat în ultimii 20 de ani. Rezultatul este că apar pe piaţă medicamente noi aproape
zilnic. Inhibitorii reabsorbţiei serotoninei măresc selectiv nivelurile serotoninei, blocându-i reabsorbţia. Câteva
exemple sunt: Prozac (fluoxetină), paroxetina şi Cipralex (escitalopram).

Medicația și mai recentă mărește disponibilitatea atât a serotoninei, cât şi a norepinefrinei (ca, de exemplu,
Remeron). Pe lângă reducerea depresiei, aceste medicamente s-au dovedit de ajutor în tratamentul tulburărilor
anxioase, inclusiv tulburarea obsesiv-compulsivă şi tulburarea de panică (Schatzberg, 2000). Ele tind să producă mai
puţine efecte secundare decât alte antidepresive, cu toate că pot provoca inhibiţia organismului, greaţă şi diaree,
vertij şi nervozitate. Indivizii cu tulburări bipolare iau frecvent medicaţie antidepresivă pentru a-şi controla depresia,
dar trebuie să ia şi alte medicamente, pentru controlul maniei.

Litiul reduce fluctuaţiile extreme ale dispoziţiei şi readuce individul la o stare emoţională mai aproape de normal.
Pare să facă acest lucru prin stabilizarea unui număr de sisteme de neurotransmiţători, printre care se numără
serotonina şi dopamina, de asemenea putând stabiliza şi nivelurile neurotransmiţătorului glutamat (Hollon şi colab.,
2002). Indivizii cu tulburare bipolară care iau litiu trebuie să îl ia chiar şi atunci când nu suferă de manie acută. În caz
contrar, circa 80% vor cădea în noi episoade de manie sau depresie (Maj, Pirozzi, Magliano şi Bartoli, 1998). Din
nefericire, doar circa 30 până la 50% dintre indivizii cu tulburări bipolare răspund la litiu (Bowden, 2000). În plus,
acesta poate avea efecte secundare severe, care includ: dureri abdominale, greaţă, vomă, diaree, tremurături şi
ticuri (Jamison, 1995). Pacienţii se plâng de vedere înceţoşată şi de probleme cu concentrarea şi atenţia, care le
afectează capacitatea de muncă. Litiul poate cauza disfuncţii renale şi o formă de diabet, iar dacă este luat de femei în primul trimestru de sarcină, poate provoca malformaţii congenitale.

Medicaţia anticonvulsivantă (carbamazepina şi acidul valproic) este frecvent folosită acum în tratarea tulburărilor bipolare. Aceste medicamente pot avea eficienţă înaltă în reducerea simptomelor maniei severe şi acute, dar nu par să fie la fel de eficiente ca litiul în tratamentul pe termen lung al tulburărilor bipolare. Efectele secundare ale anticonvulsivantelor includ: vertij, greaţă şi somnolenţă. Medicamentele antipsihotice pot fi, de asemenea, prescrise
indivizilor care suferă de manie severă (Tohen şi colab., 2001).

MEDICAȚIA ANXIOLITICĂ

Câteva medicamente folosite în mod tradiţional pentru a trata anxietatea aparţin clasei benzodiazepinelor. Ele sunt
cunoscute sub numele comun de tranchilizante şi sunt comercializate sub diverse denumiri comerciale, ca:
Diazepam, Anxiar (Lorazepam) şi Xanax (Alprazolam).

Medicamentele anxiolitice reduc încordarea şi produc somnolenţă. La fel ca alcoolul şi barbituricele, ele reduc
activitatea sistemului nervos central. Aceste medicamente sunt folosite, de asemenea, pentru a trata tulburările
de anxietate, sindromul de sevraj la alcool şi tulburările organice legate de stres. De exemplu, în tratamentul unei
fobii, medicaţia anxiolitică poate fi combinată cu desensibilizarea sistematică, pentru a ajuta individul să se relaxeze
când se confruntă cu situaţia de care se teme. Deşi tranchilizantele pot fi folositoare pe termen scurt, beneficiile lor
globale sunt discutabile şi ele sunt, în mod clar, supra-prescrise şi greşit folosite. Până de curând (înainte ca
unele riscuri să devină evidente), Valium şi Librium erau două din cele mai des prescrise medicamente în Statele
Unite (Julien, 1992). Folosirea pe termen lung a tranchilizantelor poate duce la dependenţă fizică.

Deşi tranchilizantele nu dau dependenţă la fel de mult ca barbituricele, în cazul folosirii repetate ele duc la toleranţă,
iar individul manifestă simptome severe de sevraj la întreruperea folosirii medicamentului. În plus, tranchilizantele
slăbesc capacitatea de concentrare, inclusiv capacitatea de a conduce un autovehicul, şi pot produce moartea dacă se
combină cu alcoolul. În ultimii ani, cercetătorii au descoperit că anumite medicamente antidepresive reduc şi
simptomele de anxietate. Acest lucru este valabil îndeosebi pentru inhibitorii recaptării serotoninei, discutaţi
anterior. Aceste medicamente pot reduce anxietatea, la fel ca şi depresia, pentru că acţionează asupra
dereglărilor biochimice care sunt comune ambelor boli.

STIMULANTELE

Medicamentele stimulante sunt folosite pentru a trata problemele de atenţie ale copiilor cu tulburare de deficit
de atenţie și hiperactivitate (ADHD). Unul dintre stimulantele cele mai des folosite are denumirea comercială
Concerta. Cu toate că poate părea ciudat să dai un stimulant unui copil hiperactiv, între 60 şi 90% dintre copiii cu ADHD răspund la aceste medicamente prin reducerea comportamentului disruptiv şi creşterea atenţiei
(Gadow, 1992). Medicamentele stimulante pot acţiona prin creşterea nivelurilor dopaminergice în sinapsele
creierului. Folosirea Concerta este un subiect controversat, deoarece unele şcoli şi unii medici s-au grăbit să
diagnosticheze ADHD la copiii şcolari şi să le prescrie acestora medicație stimulantă (Hinshaw, 1994).
Medicamentele stimulante au efecte secundare semnificative, incluzând: insomnie, cefalee, ticuri şi greaţă (Gadow,
1991, 1992). Trebuie diagnosticată cu certitudine prezenţa ADHD la copii înainte de a le prescrie medicamente
stimulante.

În concluzie, tratamentul medicamentos a redus severitatea unor tipuri de tulburări mentale. Mulţi indivizi care, altfel, ar fi necesitat spitalizare, pot fi integraţi în comunitate cu ajutorul acestor medicamente. Pe de altă parte, există limite ale aplicabilităţii tratamentului medicamentos. Toate substanţele terapeutice pot produce efecte secundare nedorite. Mulţi indivizi cu probleme medicale, ca şi femeile gravide sau care alăptează, adeseori nu pot lua medicamente psihoactive. În plus, mulţi psihologi consideră că aceste medicamente ameliorează simptomele fără a-i cere pacientului să îşi confrunte problemele personale care ar putea contribui la tulburarea lui sau ar putea să fi fost provocate de această tulburare (ca, de exemplu, problemele conjugale cauzate de comportamentele unei persoane cu tulburare maniacală).

TRATAMENTUL ELECTROCONVULSIVANT

În terapia electroconvulsivă (ECT — Electroconvulsive Therapy), cunoscută, de asemenea, ca terapia prin electroşoc, se aplică creierului un curent electric slab, pentru a produce un atac similar convulsiei epileptice. ECT a
fost un tratament popular între 1940 şi 1960, înainte ca medicamentele antipsihotice şi antidepresive să fie
disponibile. Astăzi se foloseşte mai ales în cazurile de depresie severă, când pacientul nu a răspuns la
tratamentul medicamentos. ECT a fost subiectul multor controverse. Într-o anumită perioadă era folosită fără discriminare în spitalele de boli mentale, pentru a trata tulburări precum alcoolismul şi schizofrenia, pentru care nu dădea rezultate benefice. Înainte de dezvoltarea procedurilor mai rafinate, ECT era o experienţă înspăimântătoare pentru pacient, care era adeseori treaz până când curentul electric declanşa atacul şi producea pierderea momentană a cunoştinţei. Pacientul suferea frecvent de confuzie şi pierderi de memorie după aceea. Uneori, intensitatea spasmelor musculare care însoţeau atacul cerebral provoca leziuni fizice.

Astăzi, ECT este mult mai sigură. Pacientului i se administrează un anestezic cu durată scurtă de acţiune și i se
injectează un miorelaxant. Un curent electric de scurtă durată şi foarte slab se aplică pe scalp, la nivelul tâmplei
aflate pe partea emisferei cerebrale nedominante. Se administrează curentul minim necesar producerii unui atac
cerebral, deoarece acesta, şi nu electricitatea, constituie factorul terapeutic. Miorelaxantul previne spasmele
musculare convulsive. Individul se trezeşte în câteva minute şi nu îşi aminteşte nimic despre tratament. Se
administrează, de obicei, patru până la şase tratamente, pe parcursul unei perioade de câteva săptămâni.

Cel mai problematic efect secundar al ECT este pierderea memoriei. Unii pacienţi raportează pierderi mnezice faţă
de evenimentele întâmplate cu până la şase luni înainte de ECT, ca şi deteriorarea capacităţii de a reţine informaţii
noi timp de o lună sau două după tratament. Totuşi, dacă se folosesc doze foarte mici de electricitate (cantitatea
este calibrată cu atenţie pentru fiecare pacient pentru a nu depăşi cantitatea necesară producerii atacului) şi se
administrează numai emisferei nedominante a creierului, problemele de memorie sunt minime (Lerer și colab.,
1995). Nimeni nu ştie cum reduc depresia atacurile induse electric. Atacurile cerebrale produc o eliberare masivă de
norepinefrină şi serotonină şi deficitul acestor neurotransmiţători poate fi un factor important în unele cazuri de
depresie. În prezent, cercetătorii încearcă să determine similarităţile şi deosebirile dintre ECT şi medicamentele antidepresive, în ceea ce priveşte felul în care acestea afectează neurotransmiţătorii. Indiferent cum ar acţiona, ECT
este eficientă în eliminarea stărilor de depresie severă, imobilizantă, având o acţiune mai rapidă decât terapia
medicamentoasă.
Bibliografie 14
Rudi A.J.O. Dierckx; Andreas Otte; Erik F.J. de Vries; Aren van Waarde; Johan
A. den Boer – PET and SPECT in Psychiatry, p.v; 4-8; 11;
Foreword

The progress achieved in this area has been staggering. The two images below, obtained half a century apart, speak a thousand words: on the left, a 1969 blood–brain barrier scan in cerebral lymphoma, and on the right, a PET/MRI study in frontotemporal dementia (Institute of Nuclear Medicine, 2012).

Part 1: Basics

Neuroimaging in Psychiatric Drug Development and Radioligand Development for New Targets

Akihiro Takano, Christer Halldin, and Lars Farde

Drug development requires considerable investments of time and money. Since the technique of binding assay was introduced in the late 1950s (Yalow and Berson, 1959),
numerous compounds have been selected based on in vitro affinity data, evaluated in preclinical models and
subsequently tested for efficacy in psychiatric diseases such as schizophrenia and mood disorder. However, as the
pathophysiology of psychiatric diseases has not been fully understood, the industrial drug projects have had an
evident element of "trial and error”. Lack of or insufficient efficacy is thus a major reason for attrition and adds to
failure for safety reasons (Arrowsmith 2011 a, b). In some drug projects, the failure may be related to difficulties
with dose finding. In other words, the doses used in preclinical and clinical trials were too low or too high. The
fundamental question is thus whether the drug failed due to suboptimal brain exposure and target
engagement of the drug or whether the target was invalid.

Positron emission tomography (PET) is an imaging modality by which it is possible to measure physiological
and biochemical markers in the brain by using appropriate radioligands. Most PET radioligands are labeled with
radionuclides having a short half-life such as C-11 (half-life, 20.4 min) or F-18 (109.8 min). Following the successful
introduction of PET for neuroreceptor imaging in the 1980s (Farde et al. 1986), the technique has been widely used
to visualize and quantify drug target sites, mainly neuroreceptors, enzymes, and transporters in the human brain in
vivo. In this chapter, we will focus on the major applications of PET in drug discovery and development. The need
for development of novel radioligands for new targets will be given particular attention.
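As a rough numerical illustration of why these half-lives matter in practice (the only inputs are the half-lives quoted above; the 90-minute window is an arbitrary example, not a figure from this chapter):

# Illustrative sketch: exponential decay of the two radionuclides mentioned above.
import math

def remaining_fraction(t_min, half_life_min):
    """Fraction of the initial activity left after t_min minutes."""
    return math.exp(-math.log(2) * t_min / half_life_min)

for label, half_life in (("C-11", 20.4), ("F-18", 109.8)):
    print(f"{label}: {remaining_fraction(90, half_life):.1%} of the initial activity remains after 90 min")

Under this simple decay law, only a few percent of a C-11 batch survives 90 minutes, which is one reason C-11 radioligands are usually synthesized close to the scanner, whereas F-18 ligands tolerate longer handling and transport.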

1.2.2 PET Receptor Occupancy to Demonstrate Target Engagement and in Relation to Pharmacodynamics
A number of PET radioligands have been developed for several key targets related to neurotransmission (Table 1.1
and Fig. 1.1) (Halldin et al. 2001). Using these tools, it is possible to map and quantify the in vivo distribution of the
target neuroreceptors or transporters. Details on quantification of the radioligand binding are described elsewhere
in this textbook (Chapter 2).

Table 1.1 Representative PET radioligands for neurotransmitter systems

Neurotransmitter system (target): representative PET radioligand(s)
Dopamine (D1 receptor): [11C]SCH23390, [11C]NNC112
Dopamine (D2 receptor): [11C]raclopride, [11C]FLB457
Dopamine (transporter, DAT): [11C]PE2I, [18F]FEPE2I
Serotonin, 5-HT (5-HT1A receptor): [11C]WAY-100635
Serotonin, 5-HT (5-HT1B receptor): [11C]AZ10419369
Serotonin, 5-HT (5-HT2A receptor): [11C]MDL100907
Serotonin, 5-HT (transporter, SERT): [11C]MADAM, [11C]DASB
GABA-benzodiazepine site: [11C]Flumazenil, [18F]Flumazenil, [11C]Ro15-4513
Norepinephrine (transporter, NET): [18F]FMeNER-D2
Cannabinoid (CB1 receptor): [11C]MePPEP, [18F]FMPEP-d2

The change of radioligand binding between baseline and after drug administration is used to calculate the drug
occupancy at the target neuroreceptor, transporter, or enzyme (Figs. 1.2 and 1.3). PET determination of
receptor occupancy has been most extensively applied for antipsychotic drug binding to the dopamine D2
receptor (Farde et al. 1988). The relationship between in vivo dopamine D2 receptor occupancy and
antipsychotic drug effect was established early. Occupancy of more than 65-70% of dopamine D2 receptors is required to obtain antipsychotic efficacy, but at more than 80% occupancy there is a high risk
for extrapyramidal symptoms (Farde et al. 1986; Kapur et al. 2000). The atypical antipsychotic clozapine
is an exception since this drug has antipsychotic effect at lower dopamine D2 occupancy (Farde et al.
1992; Nordström et al. 1995). The PET occupancy approach has now become widely applied to drug
development and extended to several other targets including the serotonin and noradrenaline
neurotransmission systems and enzymes such as monoamine oxidase B (Meyer et al. 2004; Hirvonen et al.
2009; Sekine et al. 2010). The target occupancy by a new candidate drug is usually estimated for different
doses, so that the curvilinear relationship between dose/plasma level and occupancy can be established. This
key information will help efficient dose setting in phase II and III studies by avoiding doses that are too low or
too high. For new targets, the relationship between target occupancy and clinical efficacy or side effects may be
insufficiently understood. In such cases, the relationship between occupancy and pharmacodynamics can only be
established after phase II and III studies when clinical data becomes available. A recent successful example of the
occupancy approach is [11C]AZ10419369, a PET radioligand for the serotonin 5HT1B receptor subtype.
This radioligand was developed in a collaboration between Karolinska Institute and AstraZeneca and has been
used for the occupancy measurement by AZD3783, a candidate
drug for treatment of depression (Pierson et al. 2008; Varnäs et al. 2011). The occupancy estimations were
first performed in nonhuman primates (NHP) and later in human subjects.
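A minimal sketch of the two quantities described above, assuming the simplified convention that occupancy is the relative reduction of the binding potential (occupancy = 1 - BP_drug/BP_baseline) and that the dose-occupancy relationship follows a single-site hyperbolic (Emax-type) curve; the ED50 and binding-potential values below are hypothetical, not data from the cited studies.

# Illustrative sketch (hypothetical numbers): occupancy from PET binding potentials
# and a hyperbolic dose-occupancy curve, as described in the text above.
def occupancy_from_bp(bp_baseline, bp_drug):
    # relative reduction of the binding potential under drug
    return 1.0 - bp_drug / bp_baseline

def occupancy_from_dose(dose_mg, ed50_mg=2.0, occ_max=1.0):
    # single-site (Emax-type) curvilinear dose-occupancy relationship
    return occ_max * dose_mg / (dose_mg + ed50_mg)

# hypothetical scan pair: binding potential falls from 3.0 (baseline) to 0.9 under drug
print(f"measured occupancy: {occupancy_from_bp(3.0, 0.9):.0%}")

# screening hypothetical doses against the 65-80% D2 window discussed above
for dose in (2, 4, 6, 10):
    occ = occupancy_from_dose(dose)
    verdict = "inside the 65-80% window" if 0.65 <= occ <= 0.80 else "outside the window"
    print(f"dose {dose:2d} mg -> occupancy {occ:.0%} ({verdict})")

With these made-up numbers, only the intermediate doses fall inside the 65-80% D2 occupancy window mentioned above, which is exactly the kind of dose-setting information the occupancy approach is meant to provide for phase II and III studies.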

The relationship between the dose and 5HT1B occupancy by AZD3783 was similar between nonhuman primates
and human subjects (Varnäs et al. 2011). Despite the value demonstrated for nonhuman primate studies of
AZD3783 to predict binding in the human brain, some caution must be exercised whenever making such
predictions for new drug targets.

1.2.3 Pathophysiology Biomarkers for Diagnosis or Efficacy Studies

For most psychiatric disorders, there are no generally accepted biomarkers, in spite of considerable efforts to
reveal the pathophysiologies. The recent progress in neuroimaging of psychiatric disorders will be discussed in
detail in other sections of this textbook.

A general approach applied in drug development is to use PET to measure physiological parameters such as
cerebral blood flow or brain glucose metabolism using [15O]H2O or [18F]FDG. Change in cerebral blood flow or
brain glucose metabolism during drug treatment can thereby be detected, which indirectly serves to confirm a drug
effect in the brain. The combined study of occupancy at a biochemical marker and a physiological biomarker has
a promising potential to further confirm target engagement but has so far been utilized in a few studies only
(Halldin et al. 2001). In a back-translational approach, animal models for psychiatric disorders can be
investigated using micro-PET (Higuchi et al. 2010; Klunk et al. 2004). As the animal does not have to
be sacrificed after each PET measurement, longitudinal evaluation of chronic administration of the
candidate drugs can be performed. Such translational approaches have potential to validate animal models
in relation to the pathophysiology and clinical treatment of psychiatric disorders. In the field of neurology,
amyloid imaging in Alzheimer disease (AD) has been successful (Klunk et al. 2004; Rinne et al. 2010; Jack et al.
2011; Cselényi et al. 2012; Gelosa and Brooks 2012; Mathis et al. 2012). Building on the historical observation of
amyloid deposits in AD brains postmortem, the reference radioligand [11C]PIB was developed to allow for in vivo
imaging (Klunk et al. 2004). The PET radioligands have initiated a series of studies on the pathophysiology and
clinical diagnosis of AD (Jack et al. 2011). In addition, PET imaging of amyloid deposits in AD has shown potential
to detect effects of drug treatments aimed at reducing the amyloid plaque burden (Rinne et al. 2010). There is
however no established biomarker for the diagnosis of psychiatric disorders based on pathological
evaluation postmortem. Despite that, there might be some potential to develop imaging biomarkers.
Recently, imaging of the translocator protein (TSPO), a marker of microglial activation, has shown higher binding in schizophrenia (van Berckel et al. 2008; Doorduin et al. 2009), as well as a correlation between TSPO binding and clinical symptoms of schizophrenia (Takano et al. 2010). Such new imaging markers could also be useful as efficacy biomarkers in psychiatric drug development.

As discussed above, these PET approaches can provide unique information to facilitate drug
development. However, the success of the PET study depends on the development of appropriate PET
radioligands. As shown in Table 1.1, the availability of PET radioligands is not yet sufficient. As the list of
candidates for drug development has expanded and diversified, the need for novel PET radioligand
development for new targets becomes critical. PET measures the regional radioactivity concentration without
being able to distinguish the chemical forms or environments in which the radioactivity resides. For a clearly
interpretable signal, it is therefore necessary that radiometabolites do not contribute to specific binding.
Thus, radioligands should preferably be resistant to rapid metabolism over the period of data acquisition.
Furthermore, radiometabolites should not be taken up in the target area. This requirement may have important
consequences concerning the elaboration of a radiolabeling strategy. In fact, the position of the radiolabel within a molecule might be crucial for the in vivo usefulness of a radioligand, as a major reason why radioligands prove unusable is that their radiometabolites also penetrate the blood–brain barrier (BBB).
Bibliografie 15
Robert Sapolsky – Behave. Biologia Ființelor umane în ipostazele lor cele mai
bune și cele mai rele, p.132-143;
Oxitocina și vasopresina, un vis pentru comercianți
Dacă ideea secțiunii precedente a fost că testosteronul a primit
un verdict nedrept de aspru, ideea care urmează este că oxitocina
se bucură de o nemeritată bună reputație. Conform științei
convenționale, oxitocina face ca organismele să fie mai puțin
agresive, mai sociabile, mai încrezătoare și mai empatice. Indivizii
tratați cu oxitocină devin parteneri mai fideli și părinți mai grijulii.
Prin administrarea oxitocinei, șoarecii de laborator devin mai
darnici și mai buni ascultători, iar musculițele de oțet cântă
precum Joan Baez. Firește că lucrurile sunt mai complicate și că
oxitocina are și o instructivă fațetă respingătoare.

Elemente de bază

Din punct de vedere chimic, oxitocina și vasopresina sunt niște hormoni asemănători: secvențele de ADN care constituie genele lor sunt similare, iar cele două gene se găsesc una în apropierea celeilalte, pe același cromozom. A existat o singură genă ancestrală care, acum câteva sute de milioane de ani, a fost accidental “duplicată” în genom, iar secvențele de ADN din cele două copii ale genei s-au modificat independent, evoluând în două gene strâns înrudite (rămâneți pe recepție pentru mai multe amănunte în Capitolul 8). Această duplicare a genelor a survenit odată cu apariția mamiferelor, iar alte vertebrate posedă numai gena ancestrală, numită vasotocină, care, din punct de vedere structural, se situează între cei doi hormoni distincți ai mamiferelor.

Pentru neurobiologii din secolul 20, oxitocina și vasopresina erau destul de plicticoase. Sunt produse de neuronii hipotalamici, care își proiectează axonii către hipofiza posterioară. Acolo sunt eliberate în circulația sangvină,
prin aceasta dobândind statut de hormoni și neavând niciodată vreo legătură cu creierul. Oxitocina stimulează
contracția uterină în timpul travaliului și alăptarea postnatală. Vasopresina (cunoscută și sub numele de hormon
antidiuretic) reglează retenția apei în rinichi. Și, reflectând structurile lor similare, fiecare produce versiuni atenuate
ale efectelor celeilalte. Sfârșitul poveștii.

Ce au remarcat neurobiologii

Lucrurile au devenit interesante odată cu descoperirea faptului că acei neuroni hipotalamici care produc oxitocina
și vasopresina trimit, de asemenea, proiecții în tot creierul, inclusiv spre tegmentul ventral legat de dopamină și
spre nucleul accumbens, hipocamp, corpul amigdalian și cortexul frontal, toate fiind regiuni cu numeroși
receptori ai hormonilor. În plus, s-a dovedit că oxitocina și vasopresina sunt sintetizate și secretate și în alte regiuni
din creier. Acești doi hormoni periferici, clasici și plicticoși, afectează de fapt funcționarea creierului și
comportamentul. Au început prin urmare să fie numiți neuropeptide – mesageri neuroactivi cu structură peptidică
– un mod extravagant de a spune că sunt niște mici proteine (și, ca să evit să scriu la nesfârșit oxitocină și
vasopresină, le voi denumi neuropeptide, însă rețineți că există și alte neuropeptide).
Primele descoperiri legate de efectele lor comportamentale aveau sens. Oxitocina pregăteşte corpul unei femele
de mamifer pentru naştere şi lactație; logic, ea facilitează, de asemenea, comportamentul matern. Când naşte o
femelă de şoarece, creierul amplifică producția de oxitocină grație unui circuit hipotalamic, având funcții accentuat
diferite la femele şi la masculi. În plus, tegmentul ventral îşi intensifică sensibilitatea față de neuropeptide, sporind numărul receptorilor de oxitocină. Injectați oxitocină în creierul unei şoricioaice virgine şi aceasta va acționa matern - cocoloşind, giugiulind şi lingând puişorii. Blocați acțiunea oxitocinei la o femelă de rozător cu pui şi
comportamentele ei materne, inclusiv alăptatul, vor înceta.

Oxitocina acționează în sistemul olfactiv, ajutând o proaspătă mămică să învețe mirosul progeniturilor sale. În acest
timp, vasopresina are efecte similare, dar mai atenuate. Curând au fost investigate şi alte specii. Oxitocina le
permite oilor să învețe mirosul mieilor lor şi facilitează tendința femelelor de maimuțe de a curăța blănița puilor. Pulverizați oxitocină în nasul unei femei (un mod de a trece neuropeptida de bariera hematoencefalică, ca să ajungă în creier) şi se va simți mai atrasă de bebeluşi. În plus, în cazul femeilor cu variații genetice care produc niveluri mai înalte de oxitocină sau mai mulți receptori ai neuropeptidei se înregistrează frecvențe medii mai mari ale unor
gesturi precum atingerea sugarilor ori privirea galeşă sincronizată cu a lor. Aşadar, oxitocina joacă un rol central în
alăptatul mamiferelor femele, în dorința lor de a-şi alăpta puii şi în a-şi aminti care este puiul lor. Pe urmă intră în
scenă masculii, vasopresina jucând un rol în comportamentul patern. Când naşte o femelă de rozător, în tot corpul,
inclusiv în creierul tatălui de lângă ea, cresc nivelurile vasopresinei şi ale receptorilor acestui hormon. Printre
maimuțe, tații experimentați au mai multe dendrite ale neuronilor din cortexul frontal care conțin receptori de
vasopresină. În plus, administrarea de vasopresină accentuează comportamentul patern. Un avertisment etologic
totuşi: procesul survine doar la acele specii în care masculii sunt paterni (de exemplu, şoarecii de prerie şi
maimuțele marmosete).

Cu zeci de milioane de ani în urmă, unele rozătoare şi specii de primate au dezvoltat independent în decursul
evoluției perechea monogamă odată cu neuropeptidele cu rol central în acest fenomen. Printre maimuțele
marmosete şi titi, ambele monogame, oxitocina întăreşte legătura dintre parteneri, accentuând preferința
individului de a se cuibări lângă partener, şi nu lângă o maimuță străină. După care a urmat un studiu cu rezultate
stânjenitor de asemănătoare cu ceea ce ştim despre şablonul cuplurilor umane. Printre maimuțele monogame
tamarin, dese descâlciri ale blănii şi frecvente contacte fizice indică niveluri înalte de oxitocină la femele. Care era
predictorul unor niveluri înalte de oxitocină la masculi? Sex din belşug.

Frumoasele cercetări de pionierat efectuate de Thomas Insel de la National Institute of Mental Health, Larry Young
de la Emory University şi Sue Carter de la University of Illinois au făcut dintr-o specie de şoareci de câmp poate cea
mai bine cunoscută specie de rozătoare de pe pământ. Majoritatea şoarecilor de câmp (cum sunt cei montani, de
exemplu) sunt poligami. Din contra, în onoarea lui Garrison Keillor, şoarecii de prerie formează perechi monogame
pe viață. Fireşte, nu este chiar aşa - în vreme ce practică „monogamia socială" cu relația lor permanentă, masculii
nu se pot lăuda cu o „monogamie sexuală" perfectă, căci toți mai calcă şi pe de lături. Cu toate acestea, şoarecii de prerie sunt mai monogami decât alți şoareci de câmp, făcându-i pe Insel, Young şi Carter să încerce a găsi o
explicație a fenomenului.

• Prima descoperire: sexul emite oxitocină şi vasopresină în nucleul accumbens al şoarecilor de câmp, atât
la femele, cât şi la masculi. Teorie evidentă: şoarecii de prerie emit în timpul sexului mai mulți hormoni
decât cei poligami, ceea ce le aduce recompensa unui plus de excitație şi încurajează indivizii să rămână
alături de partenerii lor. Dar şoarecii de prerie nu secretă mai multe neuropeptide decât şoarecii de
munte. În schimb, ei posedă în nucleul accumbens mai mulți receptori ai hormonilor decât cei poligami.
În plus, masculii de prerie cu o variație a genei receptorilor de vasopresină care produce mai mulți
receptori în nucleul accumbens erau mai ataşați de partenerele lor.
• Pe urmă, cercetătorii au efectuat două studii impresionante.
o În primul rând, în urma unor operații de inginerie genetică, în creierul masculilor de munte a fost
reprodusă versiunea receptorilor de vasopresină din creierul şoarecilor de prerie şi masculii astfel
tratați s-au despăducheat şi s-au giugiulit cu femelele care le erau familiare (dar nu cu unele
necunoscute).

o Pe urmă, savanții au făcut, tot prin inginerie genetică, să existe în creierul şoarecilor de munte mai
mulţi receptori de vasopresină în nucleul accumbens; masculii au devenit mult mai ataşaţi de nişte
femele individuale. Ce se poate spune despre genele receptorilor de vasopresină la alte specii?
În comparaţie cu cimpanzeii, maimuțele bonobo posedă o variație asociată cu mai mulți receptori şi cu legături sociale mult mai strânse între femele şi masculi (deşi, în contrast cu şoarecii de prerie,
maimuţele bonobo sunt oricum ați vrea să le spuneți, dar nu monogame).

Dar oamenii? Sunt greu de studiat, fiindcă aceste neuropeptide nu se pot măsura în regiuni minuscule din
creierul uman; în schimb trebuie examinate nivelurile lor din circulația sangvină, o măsurare cât se poate de
indirectă. Totuşi, se constată că aceste neuropeptide joacă un rol în formarea cuplurilor umane. În primul rând,
nivelurile de oxitocină din sânge sunt înalte la scurt timp după ce partenerii s-au îndrăgostit. Mai departe, cu cât
nivelurile sunt mai înalte, cu atât afecțiunea fizică este mai intensă, comportamentele sunt mai sincronizate, relația
este mai durabilă şi fericiții chestionați evaluează mai pozitiv relația de cuplu.

Și mai interesante au fost studiile în care subiecților li s-a administrat intranazal oxitocină (sau altă substanță
vaporizată în cazul grupului de control). Într-un studiu amuzant, li se cere cuplurilor să discute despre un conflict
între parteneri; pulverizați oxitocină în nările lor şi observatorii vor aprecia că ei comunică mai pozitiv şi secretă mai
puțini hormoni de stres. Un alt studiu sugerează că oxitocina consolidează inconştient cuplul. Voluntari bărbați
heterosexuali, cărora li s-a pulverizat sau nu oxitocină în cavitatea nazală, au interacționat cu o cercetătoare
atrăgătoare, în timp ce rezolvau o sarcină absurdă. În rândul bărbaților care aveau o relație stabilă, oxitocina a mărit
distanța dintre ei şi cercetătoare în medie cu 10 până la centimetri. În cazul celibatarilor, niciun efect. (De ce
oxitocina nu i-a făcut să se apropie? Cercetătorii au menționat că distanța dintre ei şi cercetătoare era deja la limita
inferioară a decenței.) Dacă experimentul era condus de un bărbat, niciun efect. În plus, oxitocina i-a făcut pe
bărbații care aveau o relație stabilă să petreacă mai puțin timp privind fotografiile unor femei atrăgătoare. Aspect
important, oxitocina nu i-a făcut pe bărbați să subaprecieze frumusețea acelor femei; pur şi simplu, acestea le-au
stârnit mai puțin interesul. Oxitocina şi vasopresina facilitează legătura dintre părinți şi copii, pe lângă aceea dintre
partenerii cuplului.

Şi acum ceva cu adevărat încântător despre o invenție recentă a evoluției. Cândva, în ultimii cincizeci de mii de ani
(adică mai puțin de 0,1% din timpul scurs de când există oxitocina), în creierul oamenilor şi în cel al lupilor domesticiți
s-a format evolutiv o nouă reacție față de oxitocină; când interacționează un câine şi stăpânul său (dar nu un străin),
amândoi secretă oxitocină. Cu cât durează mai mult schimbul de priviri dintre om şi animal, cu atât nivelul oxitocinei
creşte mai mult. Dați câinilor oxitocină şi ei vor sta mai mult timp cu privirea pe stăpânii lor, ceea ce ridică nivelurile
de oxitocină ale oamenilor. Iată cum un hormon care a evoluat ca să întărească legătura dintre mamă şi nou-născut
joacă un rol în această bizară formă fără precedent de legătură între specii. În concordanță cu efectele sale asupra
ataşamentului, oxitocina inhibă corpul amigdalian central, atenuează frica şi anxietatea şi activează sistemul nervos parasimpatic („calm şi vegetativ"). În plus, oamenii care posedă o variație a genei receptorilor de oxitocină,
asociată cu o mai mare sensibilitate parentală, au, de asemenea, o reacție cardiovasculară mai atenuată când sunt
luați prin surprindere.
După cum spune Sue Carter, expunerea la oxitocină este „o metaforă fiziologică a sentimentului de siguranță". Mai
departe, oxitocina reduce agresivitatea rozătoarelor, iar şoarecii al căror sistem de reglare a oxitocinei a fost
eliminat (eliminând gena pentru oxitocină sau gena pentru receptorii acesteia) sunt anormal de agresivi.

Alte studii au arătat că oamenii apreciază chipurile celorlalți ca fiind mai demne de încredere şi sunt mai încrezători
în jocurile economice când li se administrează oxitocină (hormonul nu are niciun efect dacă subiectul crede că joacă
împotriva unui computer, ceea ce arată că fenomenul ține de comportamentul social). Această sporită încredere în
ceilalți este interesantă. În mod normal, dacă celălalt jucător se comportă duplicitar, în rundele următoare subiecții
sunt mai puțin dispuşi să aibă încredere în el; din contră, investitorii tratați cu oxitocină nu şi-au modificat
comportamentul în acest fel. Într-o formulare ştiinţifică, „oxitocina i-a imunizat pe investitori împotriva aversiunii față de trădare"; într-o exprimare caustică, oxitocina îi face pe oameni nişte nătărăi iraționali; vorbind în termeni mai
angelici, oxitocina îi face pe oameni să întoarcă şi celălalt obraz.

Au ieşit la iveală şi alte efecte prosociale ale oxitocinei. Aceasta îi face pe oameni mai apți să detecteze figurile
fericite (nu şi chipurile furioase, amenințătoare sau neutre), precum şi cuvintele care au conotații sociale pozitive
(nu şi negative) când acestea le sunt prezentate foarte rapid. În plus, oxitocina îi face pe oameni mai darnici. Indivizii
cu versiunea genei receptorilor de oxitocină care este asociată cu o mai mare sensibilitate parentală au fost evaluați
de observatori ca fiind mai prosociali (în timpul unei discuții despre o perioadă de suferințe personale), precum şi
mai sensibili față de validarea socială. Neuropeptida i-a făcut pe oameni mai reactivi față de întărirea socială,
ameliorându-le performanța în rezolvarea unei sarcini când răspunsurile corecte sau greşite erau însoțite de un
zâmbet sau, respectiv, de o privire încruntată (în vreme ce nu s-a observat niciun efect când răspunsurile corecte sau
greşite erau însoțite de diferite luminițe colorate). Așadar, oxitocina stimulează comportamentul prosocial și este
emisă când avem experiența comportamentului prosocial (fiind tratați cu încredere într-un joc, primind o atingere
călduroasă și așa mai departe). Cu alte cuvinte, o buclă de feedback afectuoasă și calină.

Evident, oxitocina și vasopresina sunt cei mai grozavi hormoni din univers. Turnați-le în rețeaua de alimentare cu
apă și oamenii vor fi mai darnici, mai încrezători și mai empatici. Vom fi niște părinți mai buni și vom face
dragoste, nu război (deși mai ales amor platonic, de vreme ce oamenii care au relații stabile vor ține la mare
distanță pe oricine altcineva în afară de partenerii lor). Lucrul cel mai bun dintre toate este că vom cumpăra tot
felul de rahaturi inutile, dând crezare reclamelor din magazine de îndată ce sistemul de ventilație a început să
pulverizeze oxitocină.

Okay, a venit vremea să ne calmăm puțin.

Prosociabilitate versus sociabilitate

Sunt oxitocina și vasopresina legate de prosociabilitate sau de competența socială? Ne fac acești hormoni să
vedem pretutindeni chipuri fericite ori să devenim interesați să culegem informații sociale exacte despre figurile celorlalți? Ultima variantă nu este neapărat prosocială; la urma urmei, având informații exacte despre emoțiile
oamenilor, îi putem manipula mai ușor.

Școala Neuropeptidelor Cool susține ideea prosociabilității omniprezente. Dar neuropeptidele stimulează, de
asemenea, interesul social și competența socială. Ele îi determină pe oameni să privească mai îndelung în ochii
celorlalți, ceea ce sporește precizia cu care le descifrează emoțiile. În plus, vasopresina stimulează activitatea în
joncțiunea temporoparietală, profund implicată în teoria minții, atunci când oamenii au de rezolvat un test de
recunoaștere socială. Oxitocina sporește precizia estimării gândurilor celorlalți, cu o diferență de gen – femeile
devin tot mai abile în detectarea relațiilor de rudenie, în vreme ce bărbații devin tot mai abili în detectarea
figurilor de dominație. În plus, tot oxitocina sporește precizia amintirii figurilor și a expresiilor emoționale, iar
oamenii cu variația genetică a receptorului de oxitocină care le induce “sensibilitatea parentală” sunt deosebit
de abili în
estimarea emoțiilor. Tot astfel, hormonii le facilitează rozătoarelor învățarea mirosului specific al unui individ, dar
nu și învățarea unor mirosuri nesociale.

Cercetările de neuroimagistică arată că aceste neuropeptide sunt legate atât de componenta socială, cât și de
prosociabilitate. De exemplu, variații ale unei gene legate de semnalizarea oxitocinei sunt asociate cu diferite grade
de activare a zonei fusiforme faciale atunci când se observă figurile celorlalți.

Astfel de descoperiri sugerează că anomaliile acestor neuropeptide sporesc riscul unor tulburări de sociabilitate
deficientă, mai exact al tulburărilor din spectrul autist, pacienții cu aceste diagnostice prezentând reacții diminuate ale ariei fusiforme faciale atunci când privesc figuri umane. Este remarcabil că tulburările spectrului autist au fost legate
de variații ale genelor oxitocinei și vasopresinei, de mecanisme nongenetice de dezactivare a genei receptorului de
oxitocină și de un număr redus de receptori ai oxitocinei. În plus, neuropeptidele îmbunătățesc abilitățile sociale ale
unor indivizi cu tulburări de spectru autist, spre exemplu, stimulând contactul vizual.

Așadar, uneori oxitocina și vasopresina ne fac mai prosociali, dar uneori ne fac culegători mai avizi de informații
sociale precise. Cu toate acestea, există o atracție sporită față de chipurile fericite, întrucât acuratețea este mai
mare în cazul emoțiilor pozitive.

A venit vremea pentru mai multe complicații.

Efectele contingente ale oxitocinei și vasopresinei

Un factor deja menționat este genul: oxitocina amplifică diferite aspecte ale competenței sociale la bărbați și la
femei. În plus, efectele calmante ale oxitocinei asupra corpului amigdalian sunt mai accentuate la bărbați decât la
femei. În mod previzibil, neuronii care produc aceste neuropeptide sunt reglați atât de estrogen, cât și de
testosteron.

Un efect contingent realmente interesant este faptul că oxitocina amplifică generozitatea, dar numai la oamenii
care erau deja generoși. Este reflectarea în oglindă a faptului că testosteronul amplifică agresivitatea doar la
indivizii cu predispoziții agresive. Hormonii acționează rareori în afara contextului individului și al mediului său.

În sfârșit, un studiu fascinant prezintă contingențele culturale ale acțiunilor oxitocinei. În perioade de stres,
americanii caută sprijin emoțional (de exemplu, vorbind cu un prieten despre problemele lor) mai des decât oamenii
din estul Asiei. Într-un studiu, au fost identificate la niște subiecți americani și coreeni variații genetice ale
receptorilor oxitocinei. În situații lipsite de stres, nici profilul cultural, nici variația receptorului nu au afectat
comportamentul de căutare a susținerii emoționale. În perioade stresante, nevoia de sprijin s-a manifestat în rândul
subiecților cu variația receptorului care este asociată cu o mai mare sensibilitate față de feedback-ul social și față de aprobare - dar numai în cazul americanilor (inclusiv al americanilor de origine coreeană). Ce face oxitocina pentru comportamentul de căutare a sprijinului emoțional? Depinde de starea subiectului – dacă este sau nu stresat. Și
de variația genetică a receptorilor săi de oxitocină. Și de cultura subiectului. Mai multe despre subiect în capitolele
8 și 9.

Și partea urâtă a acestor neuropeptide

După cum am văzut, oxitocina și vasopresina atenuează agresivitatea rozătoarelor femele. Face excepție
agresivitatea cu care își apără puii, pe care neuropeptida o amplifică prin intermediul efectelor produse în corpul
amigdalian central (implicat în frica instinctivă).

Fenomenul concordă cu proprietatea acestor neuropeptide de a intensifica atitudinea maternă, inclusiv gestul
matern de a mârâi către agresorii potențiali ai puilor. Tot astfel, vasopresina intensifică agresivitatea paternă a
șoarecilor de prerie. Această descoperire este asociată cu o contingență suplimentară familiară. Cu cât șoarecele de prerie este mai agresiv, cu atât se atenuează mai puțin agresivitatea lui după blocarea sistemului său de vasopresină – exact așa cum, în cazul testosteronului, odată cu acumularea de experiență, agresivitatea este menținută de învățarea socială mai degrabă decât de un hormon / o neuropeptidă. În plus, vasopresina
sporește agresivitatea îndeosebi la rozătoarele masculi care sunt deja agresivi – un alt efect biologic care
depinde de contextul individual și social.

Iar acum ne îndreptăm realmente spre încheierea viziunii noastre despre aceste neuropeptide minunate. În primul
rând, să revenim la faptul că oxitocina sporește încrederea și cooperarea într-un joc economic – dar nu și dacă
celălalt jucător este anonim și se află în altă încăpere. Când subiectul joacă împotriva unor necunoscuți, oxitocina
diminuează cooperarea, amplifică invidia când jocul merge prost și intensifică bucuria lacomă când jocul merge bine.

În sfârșit, frumoasele studii efectuate de Carsten de Dreu de la Universitatea din Amsterdam au arătat cât de
neprietenoasă și de contondentă poate fi oxitocina. În primul studiu, niște bărbați au format două echipe. Fiecare
subiect a decis câți dintre banii lui să depună într-un fond comun, la care au contribuit toți ceilalți coechipieri. Ca de
obicei, oxitocina i-a sporit generozitatea. Pe urmă, participanții au jucat dilema prizonierului cu cineva din cealaltă
echipă. Când mizele financiare erau mari, subiecții fiind motivați, oxitocina a sporit probabilitatea de a-l trăda
preventiv pe celălalt jucător. Astfel, oxitocina te face mai prosocial în relațiile cu cei ca tine (coechipierii), dar în mod
spontan devii mârșav față de ceilalți, care constituie o amenințare. După cum subliniază de Dreu, poate că oxitocina
a evoluat ca să dezvolte competența socială necesară pentru a ne spori abilitatea de a-l identifica pe acela care este
unul dintre noi.

În cel de-al doilea studiu efectuat de De Dreu, un grup de studenți olandezi au dat un test de asocieri implicite pentru depistarea unor preconcepții părtinitoare inconștiente, iar oxitocina a exagerat prejudecățile față de două grupuri externe care cuprindeau persoane din Orientul Mijlociu sau germani. A urmat o a doua parte a studiului, cea cu adevărat relevantă. Subiecților li s-a cerut să răspundă la o variantă a dilemei tramvaiului, în care trebuiau să decidă dacă este corect să ucizi o singură persoană ca să salvezi alte cinci. Conform scenariului, numele individului
sacrificat era tipic olandez (Dirk sau Peter), tipic german (Markus
sau Helmut) sau tipic islamic (Ahmed sau Youssef). Cei cinci
indivizi aflați în pericol care trebuiau salvați nu aveau nume. Este
remarcabil că oxitocina a redus probabilitatea ca subiecții să îl
sacrifice pe bunul Dirk sau Peter în comparație cu Helmut sau
Ahmed.

Oxitocina, hormonul drăgăstos, ne face mai prosociali față de „Noi”, dar și mult mai răi față de „Ei”. Nu este o prosociabilitate
generică. Este etnocentrism și xenofobie. Cu alte cuvinte,
acțiunile acestor neuropeptide depind în mod dramatic de
context – cine ești, în ce mediu trăiești și cine este persoana cu
care interacționezi. După cum vom vedea, principiul se aplică și
reglării genelor relevante pentru aceste neuropeptide.

Bibliografie 16
Christian Jarrett – Great Myths of the Brain,
p.306-307;
The Pleasure Chemical and Cuddle Hormone
“Beer Makes Brain Release Pleasure Chemical Dopamine” announced a headline in the Huffington Post in 2013. The
article was about a brain imaging study that found small sips of alcohol-free beer were sufficient to increase
dopamine activity – an effect described by the journalist as “a pleasurable response in the brain.” Similar news articles
appear almost weekly. People’s favorite music evokes the same feelings as good food or drugs, claimed The
Guardian in 2011, because it releases “the brain’s reward chemical dopamine.”

In reality, most neuroscientists today agree that to describe the neurotransmitter as a “reward chemical” is a huge
distortion and oversimplification. Dopamine is involved in many functions, including the control of movement,
and it’s found in many different brain pathways, only some of which are related to reward. Dopamine is released
not just when we score a high from food, sex, or money, but also when we fail to get a reward we were expecting.
And there’s research that found increased dopamine activity in so-called “reward pathways” when distressed
bereaved people looked at pictures of a lost relative. This dopamine activity went hand in hand with participants’
reports of sadness and yearning, certainly not pleasure in the usual sense of the word.

A more accurate characterization of dopamine is to say that it is important for motivation and finding salience in
the world. Indeed, schizophrenia is associated with excess dopamine activity and one theory proposes that many
of the illness’s problematic symptoms, such as paranoia, are related to reading too much importance and meaning
into largely random situations.

Another brain chemical that’s acquired its own mythology is oxytocin, produced by the hypothalamus, and
nicknamed variously as the “cuddle hormone,” “love molecule,” or “moral molecule.” It’s released in extra doses when
we hug (hence one of its nicknames) and have sex; and giving people more of the hormone via a nasal spray has
been linked with a range of positive emotions from trust and empathy to generosity. No wonder it’s been subject
to such hype – the io9 website even ran a feature: “10 Reasons Why Oxytocin Is The Most Amazing Molecule In The
World.” Some excitable scientists are already speculating that the hormone could form the basis of a treatment for
a range of psychological problems from shyness to personality disorder.

The trouble is, as with dopamine, the truth about oxytocin is far more nuanced than the media’s “hug hormone”
epithet would suggest. One paper showed that extra oxytocin can increase feelings of envy. Another found that
the effects varied from person to person – people with a healthy attachment to their mother felt even more
positively toward her when given extra oxytocin; by contrast, giving oxytocin to people with an anxious style of
attachment led them to view their mothers less positively. Still further research has shown the hormone can
increase trust toward people we know well, but reduce it toward outsiders. And here’s one more – a 2014 study
found that oxytocin can increase an aggressive person’s inclination to be violent toward their partner. It’s clear
there’s a lot more we’ve yet to learn about the cuddle hormone. As usual the true picture is more complicated, but
ultimately more fascinating than the media myths would have us believe.
