
Question: Recent advances in neuroscience allow psychologists and neuroscientists to record, analyse and decode the neural activity of the human brain. Such techniques can reveal an individual's covert mental states, independent of overt behaviour, raising important ethical questions. These mind-reading techniques are capable of affecting individuals' sense of privacy, autonomy and identity. Critically discuss the ethical problems associated with these new research possibilities and present ways in which the psychological community can effectively address such concerns.

Ethics are fundamental in any area of science. They ensure that the aims of psychological research are directed towards goals that benefit society and are pursued with good intentions (Jacobson, 1995). Ethics in neuroscience have only recently come to prominence, most likely as a result of the advanced technologies now used to reveal individuals' mental states. This raises ethical issues which this essay will discuss, with particular focus on the individual's privacy, autonomy and identity.

There has recently been much research into lie-detection. Several studies provide evidence that particular brain areas and neural processes are involved in lying. Bles and Haynes (2008), summarising Spence et al.'s (2004) studies, concluded that lying overall appears to require more cognitive effort. Neuroimaging techniques such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) can be used to detect lies, as they reveal increased blood flow and oxygenation, which in turn indicate increased brain activity. This could reflect the extra cognitive effort that lying requires, as suggested by Spence et al. (2004). Carrion, Keenan and Sebanz (2010) examined event-related brain potentials (ERPs) while participants spoke a lie or held an intention to deceive; they found that lying was associated with a medial frontal negative deflection (the N450).
Starkstein and Robinson (1997) found increased activity in the ventrolateral and dorsolateral prefrontal cortex, which was further supported in Phan et al.'s (2005) study, where participants pressed yes and no buttons in response to true and deceptive answers.

At present, such research does not seem to be actively infringing on individuals' privacy. Neuroimaging is at most able to detect simple mental states at a general level, and it would be ludicrous to presume these techniques can actually read people's minds. On the other hand, future developments in technology might make mind-reading possible. This would require strict ethical guidelines to be implemented, or the techniques might have to be prohibited if such high-level technology were reached. There is already research investigating brain activity relating to words and images (Mitchell, Shinkareva, Carlson, Chang, Malave, Mason & Just, 2008). If this line of research were to advance further, it would certainly allow a deeper insight into the individual's mind, and despite the brilliance of such an achievement, it would nonetheless be a direct breach of one's mental privacy.

Nevertheless, there are potential benefits. Some of these research paradigms have been implemented in mock court trials, and the idea of efficiently apprehending guilty perpetrators is certainly appealing. Through electroencephalography (EEG) performed during the guilty knowledge task (GKT), we can assess whether an individual has knowledge of a crime. It may be ethically acceptable to have a guilty perpetrator undergo a lie-detection procedure, but this would be unfair to an innocent person, which presents an ethical conflict. Furthermore, the techniques themselves have flaws that lead to inaccuracies, making them unreliable for real-world use, e.g. in courts of law. Courts decide the outcomes of people's lives, and this example emphasises that neuroscientists have not only an ethical but also a social responsibility when gathering data: to ensure accuracy, reliability and validity. Another, less obvious flaw is that more than one brain area appears to be associated with lying, i.e. the medial frontal negative deflection (N450) and the ventrolateral and dorsolateral prefrontal cortex. This suggests it is not appropriate to convict people of crimes simply on the basis of activity in certain brain areas. This is unsurprising, as the brain is very complex and a single function can involve many regions. In the paradigm where participants were asked to deliberately deceive, the extra effort may have resulted from having to think of a lie relatively quickly and thus under pressure; in the real world, people have more time to mentally prepare and formulate an undetectable lie. These criticisms show that the decoding carried out by current neuroimaging techniques is clearly limited and does not truly breach an individual's sense of mental privacy. The picture may differ for future possibilities, but these do not seem imminent. We should ask whether it is worth invading people's mental privacy for something such as lie-detection. Even if it were for a good cause, such as application in criminal justice systems, there will always be room for miscalculation or error; lie-detection techniques may only be as good as the humans who operate them. We need to consider whether the benefits of researching deception compensate for the costs of overstepping ethical boundaries. Furthermore, just because a lie-detector may pick up a lie, it does not necessarily reveal the truth. Commercialisation raises further concerns: neuromarketing violates the ethical principles of autonomy and privacy, and there are already companies that aim to exploit lie-detecting technology, e.g. NoLieMRI (Bles & Haynes, 2008).
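The logic of the EEG-based guilty knowledge test mentioned above can be illustrated with a toy simulation. The sketch below is purely illustrative and uses fabricated data: it generates noisy ERP epochs in which crime-relevant "probe" items evoke a larger P300-like positivity than matched irrelevant items, then applies a simple amplitude threshold. All numbers are invented for the example; it does not describe any published protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EEG epochs (trials x timepoints), 1 ms resolution, 0-800 ms post-stimulus.
n_trials, n_times = 40, 800

# Assume a guilty participant shows an enhanced P300-like positivity (~300-500 ms)
# to crime-relevant "probe" items relative to matched irrelevant items.
p300 = np.zeros(n_times)
p300[300:500] = 5.0  # simulated effect, in microvolts

probe_epochs = rng.normal(0.0, 2.0, (n_trials, n_times)) + p300
irrelevant_epochs = rng.normal(0.0, 2.0, (n_trials, n_times))

# Average across trials, then compare mean amplitudes in the P300 window.
window = slice(300, 500)
probe_amp = probe_epochs.mean(axis=0)[window].mean()
irrelevant_amp = irrelevant_epochs.mean(axis=0)[window].mean()

# Toy decision rule: flag "crime knowledge present" when the probe-minus-irrelevant
# difference exceeds a preset threshold (real protocols use statistical tests).
difference = probe_amp - irrelevant_amp
knowledge_detected = difference > 1.0
print(f"probe - irrelevant amplitude: {difference:.2f} microvolts")
```

Even in this idealised form, the decision rests on a threshold chosen in advance, which is precisely where the risk to an innocent suspect arises: noise, individual variability or familiarity with crime details from media coverage could push the difference over the line.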
If at some point in the future lie-detection becomes highly accurate and reliable, it will raise serious ethical issues of privacy, as it would allow access to individuals' mental states, which is unacceptable; the psychological community should prohibit such commercialisation. We need to recognise that lying and deception are part of human nature, and the thought of incorporating a lie-detector into everyday life is sure to make the public apprehensive. This raises the ethical issue of autonomy: people should feel secure and not mentally threatened. Neuroimaging techniques are undoubtedly a product of advanced technology and by all means should be appreciated; it is the misuse of such techniques where the ethical issues arise (Illes & Bird, 2006). This sort of misapplication is demonstrated in neuromarketing, for example, where brain imaging is used to reveal consumers' desires and motivations towards certain products (Farah, 2005). The ethical issue of autonomy is relevant here, as personal information about consumers' preferences is used for marketers' gain. This sort of exploitation misuses neuroscientific techniques, whose primary purpose should be to collect data that benefits society. To deal with such ethical issues, fully informed consent should be obtained from individuals. This would ensure that the individual is at least aware that their mental privacy may be infringed, and it also gives the participant more control, as they are able to make their own decision. Obtaining informed consent could likewise be required for lie-detectors if they were commercialised. In addition, lie-detectors should only be used in difficult criminal cases, as it would be very expensive for every trial to involve lie-detection. Advances in neuroscience have also challenged the mind-brain problem, which poses not only a philosophical question but raises the ethical issues of identity and autonomy.
With the use of fMRI, PET and other neuroimaging techniques, there is much evidence to suggest that our mental states are a result of our neurobiological states. In other words, neuroscience has the potential to "explain how the brain can produce human thought, feeling and action" (Farah, 2005, p. 38). Libet (1985, cited in Brysbaert & Rastle, 2009) discovered that electrical signals arose in the brain well before participants had even consciously planned an action. We would like to think that our consciousness instructs the brain to make a movement, but the evidence suggests otherwise. It was also found that motor-related areas of participants' brains activated while they read motor-related words such as "kick", "lick" and "pick" (Pulvermuller, 2005, cited in Brysbaert & Rastle, 2009). The joint activation of a word's meaning and its matching motor movement is perhaps part of why humans are capable of rich conscious experience. This further indicates that the mind and brain are not really separate entities. The implications are heavy, as they suggest we lack free will; this forces us to reconsider our identity, the nature of consciousness and the responsibility for our actions (Chan & Harris, 2011).

Such a dilemma needs to be handled carefully. The best approach is to ensure the public understands the limitations of neuroscientific evidence. Several experiments designed to record brain activity during conscious decisions have been criticised; for example, the readiness potential found in the supplementary motor area (SMA) only signifies the later stages of motor planning (Soon, Brass, Heinze & Haynes, 2008). Therefore the claim that brain activity precedes our conscious decisions is not strong or necessarily right, and further research is needed to investigate the other neural processes involved in making a decision. It is also important that neuroscientists and other psychologists take the time to present their findings to the public appropriately. One can argue that a lack of free will is not really what Libet et al. (1983) and Soon et al. (2008) reveal in their experiments: their research focuses mainly on motor movement and does not take into account how we make long-term decisions, e.g. what job we want to do, or what our ambitions and goals will be. Surely free will is more relevant to these decisions than to planning motor movements.
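The timing argument at the heart of Libet's paradigm can be made concrete with a small numerical sketch. The code below is illustrative only: it fabricates an averaged readiness potential beginning roughly 550 ms before movement and compares its estimated onset with a reported conscious-intention time of roughly 200 ms before movement, both figures being approximations of Libet's published averages rather than real data.

```python
import numpy as np

# Timeline in ms relative to movement onset at t = 0.
t = np.arange(-1000, 1, 1)

# Simulated trial-averaged readiness potential: a slow negative ramp beginning
# ~550 ms before movement (approximating Libet's averages), plus residual noise.
rng = np.random.default_rng(1)
rp = np.where(t > -550, (t + 550) * -0.01, 0.0)
averaged_eeg = rp + rng.normal(0.0, 0.05, t.size)

# Estimate RP onset as the first time the signal crosses a negativity threshold.
threshold = -0.5
onset_ms = t[np.argmax(averaged_eeg < threshold)]

# Libet's participants reported the conscious urge ("W") ~200 ms before movement.
reported_w_ms = -200
print(f"estimated RP onset: {onset_ms} ms; reported intention: {reported_w_ms} ms")
```

Even taking the arithmetic at face value, the criticism above stands: the measured signal indexes motor preparation, not deliberation, so the several-hundred-millisecond gap says little about free will in long-term decision-making.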
From this view, the ethical issues of identity and autonomy no longer seem to be a major problem, or at least the public should be made aware of these limitations so that they feel less anxious.

Neuroscientific techniques have demonstrated that they can record, decode and analyse neural activity, and that they have the capacity to reveal covert mental states. This encourages numerous research possibilities, which would no doubt raise further ethical issues; for now, however, neuroimaging techniques are not so advanced that the ethical issues of identity, privacy and autonomy are a major problem. It is clear these will become a main concern if such techniques improve and expose mental states on a deeper level. Even though mind-reading techniques sound implausible, there is potential. Davatzikos et al. (2005) used multivariate techniques and fMRI to train a pattern-classification algorithm that could differentiate between truthful and deceptive responses on the basis of distinct brain-activity patterns, achieving 90% accuracy when the patterns were identified in new participants. If this accuracy increases further, such a trained classifier could be used in legal systems. We need to realise that as these techniques grow in accuracy and reliability, these ethical issues will become more prominent. In spite of this, future research should not be hindered, as there is potential benefit to be gained. For example, people with psychopathic personality disorder have been shown to have abnormalities in the prefrontal and limbic systems (Blair, 2004), so neuroscience may eventually establish a biological basis of personality. If research in this area were to develop further, it might become possible to identify dangerous criminals such as psychopaths ahead of time and prevent future crimes, although this would no doubt increase the ethical issues involved.
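The pattern-classification approach described above can be sketched in a few lines. The example below is a minimal stand-in, not Davatzikos et al.'s actual method: it uses synthetic "voxel" data in which deceptive trials carry a small extra signal in a hypothetical subset of lie-related voxels, and a simple nearest-centroid classifier in place of the multivariate classifiers used in the real studies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated fMRI activity patterns: trials x voxels. Deceptive responses add a
# small signal in a subset of hypothetical "lie-related" voxels (synthetic data).
n_per_class, n_voxels = 100, 50
signal = np.zeros(n_voxels)
signal[:10] = 0.8  # invented effect size for the illustration

truthful = rng.normal(0.0, 1.0, (n_per_class, n_voxels))
deceptive = rng.normal(0.0, 1.0, (n_per_class, n_voxels)) + signal

X = np.vstack([truthful, deceptive])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = truth, 1 = lie

# Hold out a test set, then classify each test pattern by its nearest class
# centroid computed from the training set.
idx = rng.permutation(len(y))
train, test = idx[:150], idx[150:]
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
predictions = dists.argmin(axis=1)
accuracy = (predictions == y[test]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is that even a crude classifier performs well above chance on clean, well-separated data; the essay's caution applies exactly here, since real brain data are far noisier, and an accuracy well below 100% is very different from the standard of proof a court requires.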
The social implications are substantial, and it is certain people will feel strongly against arresting or detaining someone for something they have not even committed, no matter what neuroimaging evidence may suggest. The bottom line appears to be that there must be a balance between the potential benefits of research and the costs of breaking ethical principles. Compromises may need to be made, and the design of future studies should aim to minimise possible ethical issues. As neuroscientific techniques advance, new ethical guidelines may have to be drawn up. Organisations such as the US Congressional Office of Technology Assessment (OTA) and UNESCO's International Bioethics Committee (IBC) should convene to determine whether new regulations are required (Illes & Bird, 2006). Projects with great potential benefits that nevertheless risk breaching ethical principles should have an ethics committee examine the aims and rationale of the research. More importantly, researchers should state what they will do with their findings and how they intend to apply them in the real world.

In contrast, it can be argued from a more sceptical or pessimistic perspective that technology can only go so far, and therefore we may never need to worry about the ethical issues of privacy, autonomy and identity being breached. Reading the contents of people's minds would be difficult, as the brain is very complex and the technique of content-based classification requires the mapping between contents and patterns of activity to be tailored to each individual participant (Bles & Haynes, 2008). Since the number of possible cognitive states is infinite, real mind-reading would require "an infinite amount of labelled data sets and time to train the classifier" (Bles & Haynes, 2008, p. 89). A technique capable of such a feat is highly unlikely to exist; however, current technology could perhaps advance to the point of identifying recognition, emotional states and other feelings, allowing fairly accurate inferences that might be considered almost mind-reading. If we reach that situation, ethical considerations are sure to be a main priority.

References

Blair, R. J. (2004). The roles of orbital frontal cortex in the modulation of antisocial behaviour. Brain and Cognition, 55, 198–208.

Bles, M., & Haynes, J. D. (2008). Detecting concealed information using brain-imaging technology. Neurocase, 14(1), 82–92.

Brysbaert, M., & Rastle, K. (2009). Historical and Conceptual Issues in Psychology. London, England: Prentice Hall, Pearson.

Carrion, R., Keenan, J., & Sebanz, N. (2010). A truth that's told with bad intent: An ERP study of deception. Cognition, 114(1), 105–110.

Chan, S., & Harris, J. (2011). Neuroethics. In Royal Society, Brain Waves Module 1: Neuroscience, society and policy (pp. 77–86).

Farah, M. (2005). Neuroethics: The practical and the philosophical. Trends in Cognitive Sciences, 9(1), 34–40.

Haynes, J. D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading hidden intentions in the human brain. Current Biology, 17(4), 323–328.

Illes, J., & Bird, S. J. (2006). Neuroethics: A modern context for ethics in neuroscience. Trends in Neurosciences, 29(9), 511–517.

Jacobson, M. (1995). Foundations of Neuroscience. New York and London: Plenum Press.

Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activities (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106, 623–642.

Mitchell, T. M., Shinkareva, S. V., Carlson, A., Chang, K.-M., Malave, V. L., Mason, R. A., & Just, M. A. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320, 1191–1195.

Soon, C. S., Brass, M., Heinze, H., & Haynes, J. D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11(5), 543–545.

Spence, S. A., Hunter, M. D., Farrow, T. F., Green, R. D., Leung, D. H., Hughes, C. J., et al. (2004). A cognitive neurobiological account of deception: Evidence from functional neuroimaging. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 359(1451), 1755–1762.
