This post is a part of our Bioethics in the News series
By Laura Cabrera, PhD
Imagine that you are returning from your holidays and are suddenly detained at the airport for carrying drugs in your suitcase. You remember that a friend asked you to bring a small package back with you to deliver to his family. Was it reckless of you to accept the request to help your friend? The situation would be different if your friend had told you that the package contained drugs and you had accepted it anyway. In the first case, you did not intend to cross the border with drugs; in the second, you knew what you were doing.
A crucial factor influencing prison sentences is criminal intent: whether you carried out an action in a state of knowledge or merely a state of recklessness. Knowing actors are considered guilty to a greater degree, and thus punished more harshly, than reckless actors; yet for the most part we rely on the human ability of jurors to infer the real intentions behind a person’s actions or words. But are there more “objective” ways to distinguish between different criminal intentions? A recent news article in The Guardian discussed a neuroimaging study (Vilares et al., 2017) examining the tantalizing possibility of using a brain scan to distinguish between these two types of criminal intent.
The prospect of using brain scans to bypass the peripheral nervous system and get at the seat of thoughts, intentions, and knowledge is not new (Haynes and Rees, 2005). The media often portrays the use of brain imaging to uncover the neural correlates of preferences, morals, or intentions as “mind reading.” But should we be worried about the use of neurotechnologies for this purpose? A focus of neuroethics is to determine the real nature of the threats involved and to evaluate the ethical implications, many of which could have wide-ranging legal and social ramifications. Neurolaw, another field at the intersection of neuroscience and law, aims to better understand human behavior through insights from neuroscientific research and to incorporate those insights into legal studies.
In the study covered in The Guardian’s article, researchers scanned the brains of forty subjects using functional magnetic resonance imaging (fMRI). The participants’ task was to decide whether to carry a suitcase with “valuable content” across a border. Researchers varied both the probability that the suitcase contained contraband and the amount of information available to participants about the risk of being searched at customs (e.g., how many checkpoints they might have to pass through). Using a new machine learning method, the researchers looked for multiregional brain activity patterns that could collectively predict “culpable” mental states. They reported that with this method they could predict with “high accuracy” which participants knowingly broke the law versus those who simply took a risk. However, the high predictive ability depended strongly on the amount of risk information available to the individuals.

This raises a number of questions, starting with what is actually involved in criminal intent. Intention involves complex mental activity, including an interpersonal relation and often a moral dimension. For practical purposes the law has a number of categories to classify different degrees of intent (Shen et al., 2011), but to what extent is the difference between knowledge and recklessness reflected in distinct brain activity? It is one thing to say that brain scanners can correlate certain behaviors with a certain neural basis, and quite another to interpret such a correlation as the cause of the behavior.
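To make the “machine learning method” less abstract for readers unfamiliar with it, here is a minimal illustrative sketch of the general decoding approach: a classifier is trained on multivoxel activity patterns and evaluated by cross-validation. This is not the study’s actual pipeline; the data are simulated, and the signal strength, voxel counts, and classifier choice are all hypothetical stand-ins.

```python
# Illustrative sketch of MVPA-style decoding (NOT the Vilares et al.
# pipeline): classify two simulated "mental state" groups from
# multivoxel activity patterns using cross-validated accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_voxels = 40, 200  # hypothetical sizes

# Simulated fMRI features: the "knowing" group gets a small mean
# shift on a subset of voxels relative to the "reckless" group.
reckless = rng.normal(0.0, 1.0, size=(n_subjects // 2, n_voxels))
knowing = rng.normal(0.0, 1.0, size=(n_subjects // 2, n_voxels))
knowing[:, :40] += 0.8  # hypothetical signal in 40 voxels

X = np.vstack([reckless, knowing])
y = np.array([0] * (n_subjects // 2) + [1] * (n_subjects // 2))

# 5-fold cross-validation: train on 4 folds, test on the held-out one.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

The point of the sketch is that “high accuracy” is always relative to how the problem is set up: change the amount of signal in the simulated voxels (here, the 0.8 shift) and the decoding accuracy changes accordingly, which mirrors the study’s finding that predictive ability depended on the risk information available to participants.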
Moreover, in the study intent was measured only while a potential criminal activity was being committed, so it is uncertain whether a person’s mental state at the time of a past crime could be recreated. In addition, these were mock crimes, so one can question the validity of assessing the criminal intentions of someone who is lying in a brain scanner. For a more reliable and ecologically valid reading we would need devices that monitor our brains 24/7, so that the brains of individuals suspected of an offense could be measured while they commit the act. This takes us to a key issue: mental privacy. The mere thought of someone accessing our minds in order to disclose attributes we might not want others to know is deeply troubling. Imagine, then, the privacy issues that would arise from a device that could monitor a wider range of our daily mental lives. In this regard, proponents of cognitive liberty (Sententia, 2013) argue that the right of a person to liberty, autonomy, and privacy over their own intellect sits at the core of what it means to be a free person. Yet for others, the thought of a device monitoring and perhaps even recording our mental lives is not such a disturbing possibility. Those of you who watch the British television series Black Mirror might recall the episode with the memory implant (“The Entire History of You”), which portrays some of the issues that could arise if such a device were available.
There are also various technological issues to consider, such as sensitivity, spatial precision, and false positives (Roskies, 2015). An accuracy rate that is high enough for scientific purposes is not necessarily adequate for forensic or civil purposes, particularly in cases where an individual’s civil liberties, and with them their autonomy, might be at stake.
Finally, while the study appeared to support current legal classifications, it is far from certain how different brain states influence people’s behavior. That is, how do we separate intentions from actual actions? We might have an intention to do something that is against the law, yet never act on it.
Considering all these points, it is clear that brain scans will not be replacing juries anytime soon. Of course, future advances might make worries about mind-reading and constant monitoring of our mental lives more pressing. For now, it is not a bad idea to start engaging in the discussion of the broader ethical and societal impact of neuroscientific research and neurotechnologies on the law and beyond.
Laura Cabrera, PhD, is an Assistant Professor in the Center for Ethics and Humanities in the Life Sciences and the Department of Translational Science & Molecular Medicine at Michigan State University.
Join the discussion! Your comments and responses to this commentary are welcomed. The author will respond to all comments made by Thursday, April 20, 2017. With your participation, we hope to create discussions rich with insights from diverse perspectives.
- Haynes, J.-D. and Rees, G. 2005. Predicting the stream of consciousness from activity in human visual cortex. Current Biology, 15(14): 1301–1307. http://dx.doi.org/10.1016/j.cub.2005.06.026
- Roskies, A.L. 2015. Mind reading, lie detection, and privacy. In Clausen, J. and Levy, N. (eds.) Handbook of Neuroethics. Netherlands: Springer, pp. 679–695.
- Sample, I. 2017. Brain scans can spot criminals, scientists say. The Guardian. https://www.theguardian.com/science/2017/mar/13/brain-scans-can-spot-criminals-scientists-say
- Sententia, W. 2013. Freedom by design: Transhumanist values and cognitive liberty. In More, M. and Vita-More, N. (eds.) The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. John Wiley & Sons, pp. 355–360.
- Shen, F.X., Hoffman, M.B., Jones, O.D., et al. 2011. Sorting guilty minds. NYU Law Review, 86(5): 1306–1360.
- Vilares, I., Wesley, M.J., Ahn, W., et al. 2017. Predicting the knowledge–recklessness distinction in the human brain. PNAS, 114(12): 3222–3227.