Who “owns” the healthcare data about you?


The 2020-2021 Bioethics Public Seminar Series continues next month on March 24. You are invited to join us virtually to learn about artificial intelligence and healthcare data ownership. Our seminars are free to attend and open to all individuals.

Healthcare Artificial Intelligence Needs Patient Data: Who “Owns” the Data About You?

Adam M. Alessio, PhD

Event Flyer
Zoom registration: bit.ly/bioethics-alessio

Artificial intelligence (AI) is increasingly used in modern medicine to improve diagnostics, therapy selection, and more. These computer algorithms are developed, trained, and tested with our patient medical data. Certainly beyond the healthcare space, many companies—from Facebook to Amazon to your local pub—are using our consumer data. This is data about you, but is it your data? What rights do you have versus the owners of the data? Does medical data used for the benefit of future patients deserve different treatment than consumer data? This lecture will explore examples of AI and an evolving view of data ownership and stewardship in medicine.


Join us for Dr. Alessio’s online lecture on Wednesday, March 24, 2021 from noon until 1 pm ET.

Adam M. Alessio, PhD, is a professor in the departments of Computational Mathematics, Science, and Engineering (CMSE), Biomedical Engineering (BME), and Radiology. He earned a PhD in Electrical Engineering at the University of Notre Dame and then joined the University of Washington faculty where he was a Professor in the Department of Radiology until 2018. He moved to MSU to be part of the new CMSE and BME departments and the Institute for Quantitative Health Science and Engineering. His research is focused on non-invasive quantification of disease through Artificial Intelligence-inspired algorithms. Dr. Alessio’s research group solves clinically motivated research problems at the intersection of imaging and medical decision-making. He is the author of over 100 publications, holds 6 patents, and has grant funding from the National Institutes of Health and the medical imaging industry to advance non-invasive cardiac, cancer, and pediatric imaging. Dr. Alessio is also the administrative director of the new Bachelor of Science in Data Science at MSU and is looking for partners in the development of a data ethics curriculum at MSU.

Can’t make it? All webinars are recorded! Visit our archive of recorded lectures. To receive reminders before each webinar, please subscribe to our mailing list.

Listen: Considering Consciousness in Neuroscience Research

No Easy Answers in Bioethics Episode 19

What can neuroscience tell us about human consciousness, the developing brains of babies, or lab-grown brain-like tissue? How do we define “consciousness” when it is a complex, much-debated topic? In this episode, Michigan State University researchers Dr. Laura Cabrera, Assistant Professor in the Center for Ethics and Humanities in the Life Sciences, and Dr. Mark Reimers, Associate Professor in the Neuroscience Program, discuss the many layers of consciousness. Examining recent research on lab-grown brain organoids, they explore the moral and ethical considerations of such research, including how future technologies could challenge our definitions of consciousness and moral agency. They also distinguish consciousness from intelligence, including artificial intelligence.

Ways to Listen

This episode was produced and edited by Liz McDaniel in the Center for Ethics. Music: “While We Walk (2004)” by Antony Raijekov via Free Music Archive, licensed under an Attribution-NonCommercial-ShareAlike License. Full episode transcript available.

About: No Easy Answers in Bioethics is a podcast series from the Center for Ethics and Humanities in the Life Sciences in the Michigan State University College of Human Medicine. Each month Center for Ethics faculty and their collaborators discuss their ongoing work and research across many areas of bioethics. Episodes are hosted by H-Net: Humanities and Social Sciences Online.

Should we trust giant tech companies and entrepreneurs with reading our brains?

This post is a part of our Bioethics in the News series.

By Laura Cabrera, PhD

The search for a brain device capable of capturing recordings from thousands of neurons has been a primary goal of the government-sponsored BRAIN Initiative. Succeeding would require flexible electrode materials, miniaturized electronics, and fully wireless interaction. Yet this past summer it was the corporately funded Facebook and Elon Musk’s Neuralink that stepped forward with announcements about their respective technological investments to access and read the human brain.

Image description: A black and white graphic of a person’s head with an electric plug extending out of the brain and back of the head. Image source: Gordon Johnson from Pixabay

Elon Musk, the eccentric technology entrepreneur and CEO of Tesla and SpaceX, made a big announcement at the California Academy of Sciences. This time it was not about commercial space travel or plans to revolutionize city driving. Instead, Musk presented advances on a product under development at his company Neuralink: a sophisticated neural implant that aims to record the activity of thousands of neurons in the brain and to write signals back into the brain to provide sensory feedback. Musk mentioned that this technology would be available to humans as early as next year.

Mark Zuckerberg’s Facebook is also funding brain research to develop a non-invasive wearable device that would allow people to type by simply imagining that they are talking. The company plans to demonstrate a prototype system by the end of the year.

These two corporate announcements raise important questions. Should we be concerned about the introduction of brain devices that have the capacity to read thousands of neurons and then send signals to our brains? The initial goal for both products is medical, to help paralyzed individuals use their thoughts to control a computer or smartphone, or in the case of Facebook to help those with disabling speech impairments. However, these products also are considered to be of interest to healthy individuals who might wish to “interact with today’s VR systems and tomorrow’s AR glasses.” Musk shared his vision to enable humans to “merge” with Artificial Intelligence (AI), enhancing them to reach superhuman intelligence levels.

Time will tell whether these grand visions, which currently veer into science fiction, will be matched by scientific progress. If they ultimately deliver on their promise, these products could change the lives of those affected by paralysis and other physical disabilities. Yet, if embraced by healthy individuals, such technologies could radically transform what it means to be human. There are, of course, sound reasons to remain skeptical that they will be ready for human use anytime soon. First, there are safety issues to consider when implanting electrodes in the brain, including damage to the vasculature surrounding the implant as well as the tissue response around the device. And that is what is currently known about brain-computer interfaces with only a couple of electrode channels; consider what might happen with thousands of electrodes. There remain simply too many unknowns to endorse this intervention for human use in the next year or so. There are also salient issues regarding brain data collection, storage, and use, including concerns about privacy and ownership.

Image description: a black and grey illustration of a brain in two halves, one resembling a computer motherboard, the other containing abstract swirls and circles. Image source: Seanbatty from Pixabay

Beyond these concerns, we have to think about what happens when such developments are spearheaded by private companies. Privately funded development is at odds with the slow, careful approach to innovation that most medical advances rely upon, where human research subject regulations and safety measures are clear. It is the “move fast and break things” pace that energizes start-up companies and Silicon Valley entrepreneurs. The big swings at the heart of these entrepreneurial tech companies also bring considerable risks, and when it comes to sophisticated brain interfaces, the stakes are quite high. These products bring to mind scenarios from Black Mirror, a program that prompts a host of modern anxieties about technology. On one hand, the possibility of having a brain implant that allows hands-free device interaction seems exciting; on the other, consider the level of information we would then be giving to these companies. It is one thing to track how individuals react to a social media post by clicking whether they “like” it or not, or by how many times it has been shared. It is another thing altogether to capture which parts of the brain are being activated without us having clicked anything. Can those companies be trusted with a direct window to our thoughts, especially when they have a questionable track record on transparency and accountability? Consider how long it took Facebook to start addressing the use of customers’ personal information. It remains unclear just how much financial support Facebook is providing to its academic partners, or whether volunteers are aware of Facebook’s involvement in funding the related research.

The U.S. Food and Drug Administration as well as academic partners to these enterprises may act as a moderating force on the tech industry, yet recent examples suggest that those kinds of checks and balances oftentimes fail. Thus, when we hear about developments by companies such as Facebook and Neuralink trying to access the thoughts in our brains, we need to hold on to a healthy skepticism and continue to pose important challenging questions.

Laura Cabrera, PhD, is an Assistant Professor in the Center for Ethics and Humanities in the Life Sciences and the Department of Translational Neuroscience at Michigan State University.

Join the discussion! Your comments and responses to this commentary are welcomed. The author will respond to all comments made by Thursday, September 26, 2019. With your participation, we hope to create discussions rich with insights from diverse perspectives.

You must provide your name and email address to leave a comment. Your email address will not be made public.

More Bioethics in the News from Dr. Cabrera:
Should we improve our memory with direct brain stimulation?
Can brain scans spot criminal intent?
Forgetting about fear: A neuroethics perspective

Click through to view references

International Neuroethics Society on “Neuroethics at 15”

The International Neuroethics Society (INS) Emerging Issues Task Force has a new article in AJOB Neuroscience on “Neuroethics at 15: The Current and Future Environment for Neuroethics.” Center Assistant Professor Dr. Laura Cabrera is a member of the task force, which advises INS by providing expertise, analysis, and guidance for a number of different audiences.

Abstract: Neuroethics research and scholarship intersect with dynamic academic disciplines in science, engineering, and the humanities. On the occasion of the 15th anniversary of the formation of the International Neuroethics Society, we identify current and future topics for neuroethics and discuss the many social and political challenges that emerge from the converging dynamics of neurotechnologies and artificial intelligence. We also highlight the need for a global, transdisciplinary, and integrated community of researchers to address the challenges that are precipitated by this rapid sociotechnological transformation.

The full text is available online via Taylor & Francis Online (MSU Library or other institutional access may be required to view this article).

Can Big Data and AI Improve End-of-Life Care?

This post is a part of our Bioethics in the News series.

By Tom Tomlinson, PhD

A recently reported study claims to more accurately predict how much longer patients will live. Researchers at Stanford University assigned a neural network computer the task of training itself to develop an artificial intelligence model that would predict whether a patient would die within 3-12 months of any given date. The computer trained on the EMR records of 177,011 Stanford patients, 12,587 of whom had a recorded date of death. The model was validated and tested on another 44,273 patient records. You can find the wonky details here.

The model can predict with 90% accuracy whether a patient will die within the window.
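
To make the shape of such a model concrete, here is a deliberately simplified sketch. It is not the Stanford model: it trains an ordinary logistic-regression classifier on synthetic, hypothetical EHR-style features, and every feature name, coefficient, and number below is invented for illustration only.

```python
# A minimal, purely illustrative sketch of a "death within 3-12 months"
# classifier. This is NOT the Stanford model: the features, numbers, and
# labels below are all synthetic and hypothetical, and a simple logistic
# regression stands in for their neural network.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 5000

# Hypothetical EHR-derived features: age, admissions in the past year,
# number of distinct diagnosis codes, days since last discharge.
X = np.column_stack([
    rng.normal(70, 12, n_patients),
    rng.poisson(2, n_patients),
    rng.poisson(8, n_patients),
    rng.exponential(90, n_patients),
])

# Synthetic outcome: 1 means the patient died within the 3-12 month window.
logits = 0.04 * (X[:, 0] - 70) + 0.5 * X[:, 1] - 0.01 * X[:, 3] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy can look flattering when deaths are rare, so AUC is shown as well.
print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
print(f"AUC:      {roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]):.2f}")
```

Whatever the internals, the essential output is the same: a probability, for each patient, of dying within the window. Everything that follows turns on how that number gets used.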

Now this is a lot better than individual physicians typically do. It’s not just that such predictions are fraught with uncertainty, given how many complex, interacting factors are at work that only a computer can handle. If uncertainty were the only factor, one would expect physicians’ prognostic errors to be randomly distributed. But they are not. Clinicians overwhelmingly err on the optimistic side, so the pessimists among them turn out to be right more often.

The study takes accurately predicting death to be a straightforwardly useful thing. It gives patients, families and clinicians more reliable, trustworthy information that is of momentous significance, better informing critical questions. Will I be around for my birthday? Is it time to get palliative or hospice care involved?

The study’s authors are particularly hopeful that the use of this model will prompt more timely use of palliative care services, and discourage overuse of chemotherapy, hospitalization, and admission to intensive care units in the last months of life—all well-documented problems in the care of terminally ill people, especially those dying of cancer. So this is a potentially very significant use of “big data” AI research methods to address major challenges in end-of-life care.

But making real progress toward these goals will take a lot more than this model can deliver.

Image description: A graphic on a blue gradient background shows the silhouette of an individual in the foreground containing colorful computer motherboard graphics. In the background are silhouettes of twelve more individuals standing in a line and containing black and white computer motherboard graphics. Image source: Maziani Sabudin/Flickr Creative Commons.

The first question is how it could inform decisions about what to do next. The limitation here is that the model uses events from my medical history occurring prior to the time it’s asked to predict my survival. Perhaps the decision I’m facing is whether to go for another round of chemotherapy for metastatic cancer; or whether instead to enter a Phase 3 clinical trial for a new therapeutic agent. The question (one might think) is what each option will add to my life expectancy.

Now if the training database had some number of patients who took that particular chemotherapy option, then that factor would have somehow been accounted for when the computer built the model. Assuming the model reliably predicted the mortality of those earlier patients, all we’d need to do is add that factor to my medical record as a hypothetical, run the model again, and see whether the prognosis changed.

But is there something about the chemotherapy being offered that is different from the regimens on which the computer trained? Then the model will not be able to assess the significance of that difference for the patient’s survival. Obviously, this limitation will be even more radical for the experimental treatment option. So in the individual case, the model’s helpfulness in making prospective treatment decisions could be quite limited. It would have to be supplemented, or even supplanted, by old-fashioned clinical judgment or alternative algorithmic prognostic tools.

This may be one reason the study authors imagine a different use: identify patients with 3-12 months life expectancy and refer them for a palliative care consultation. The idea is to push against the tendency already noted for physicians to wait too long in making palliative care or hospice referrals. Assuming the model is running all the time in the background, it could trigger an alert to the attending physician, or even an automatic palliative care referral for all those the model flagged.

Now, in my ethics consultation experience, getting an appropriate palliative care or hospice referral only one month preceding death would be a stunning accomplishment, let alone three months prior. But the key word here is “appropriate,” since the need for palliative care is not dictated by life-expectancy alone, but more importantly, by symptoms. Not every patient with a projected life expectancy between 3 and 12 months will be suffering from symptoms requiring palliative care expertise to manage. Automatic referrals requiring palliative care evaluations could overwhelm thinly-staffed palliative care services, drawing time and resources away from patients in greater need.

Part of the problem here is the imprecision of the model, and the effects this may have on patient and provider acceptance of the results. A 90% chance of death within 3-12 months sounds ominous, but it leaves plenty of wiggle-room for unrealistic optimism: lots of patients will be confident that they are going to fall at the further end of that range, or that they will be among the 10% of cases the model got wrong altogether. And it’s not just patients who will be so affected. Their treating physicians will also be reluctant to conclude that there is nothing left to do, and that everything they did to the patient before has been in vain. Patients aren’t the only ones prone to denial.

And the nature of the AI-driven prognosis will make it more difficult to respond to patient skepticism with an explanation anyone can understand. As the authors point out, all we really know is that the model can predict within some range of probability. We don’t know why or how it’s done so. The best we can do is remove a feature of interest from the data (e.g., time since diagnosis), rerun the model, and see what effect it has on the probability for the patient’s prognosis. But the model offers no reasons to explain why there was a change, or why it was of any particular magnitude. The workings of Artificial Intelligence, in other words, are not always intelligible. Acceptable explanations will still be left to the clinician and their patient.
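
To make that probing procedure concrete, the hedged sketch below continues the toy model from the earlier snippet (reusing its hypothetical model, X_train, and X_test objects). It “removes” one feature at a time by replacing it with the training-set mean, reruns the model, and reports how the predicted probability moves.

```python
# Hedged sketch of the "remove a feature and rerun" probe described above.
# It continues the toy model from the earlier snippet (model, X_train, X_test),
# so every feature name and value here is hypothetical, not the Stanford model's.
feature_names = ["age", "admissions_past_year", "dx_code_count", "days_since_discharge"]

def ablation_effect(model, x_patient, X_train, feature_idx):
    """Predicted risk before and after neutralizing one feature.

    "Removing" the feature is simulated by replacing its value with the
    training-set mean, so the model sees an uninformative value there.
    """
    baseline = model.predict_proba(x_patient.reshape(1, -1))[0, 1]
    x_ablated = x_patient.copy()
    x_ablated[feature_idx] = X_train[:, feature_idx].mean()
    ablated = model.predict_proba(x_ablated.reshape(1, -1))[0, 1]
    return baseline, ablated

x_patient = X_test[0]  # one held-out patient from the earlier sketch
for i, name in enumerate(feature_names):
    base, abl = ablation_effect(model, x_patient, X_train, i)
    print(f"{name}: {base:.2f} -> {abl:.2f} (change {abl - base:+.2f})")
```

Even then, the output only says how much the probability shifted when a feature was blanked out; it offers no clinical narrative a patient could act on, which is exactly the explanatory gap described above.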

Tom Tomlinson, PhD, is Director and Professor in the Center for Ethics and Humanities in the Life Sciences, College of Human Medicine, and Professor in the Department of Philosophy at Michigan State University.

Join the discussion! Your comments and responses to this commentary are welcomed. The author will respond to all comments made by Thursday, March 8, 2018. With your participation, we hope to create discussions rich with insights from diverse perspectives.

You must provide your name and email address to leave a comment. Your email address will not be made public.

Click through to view references

Inductive Risks, Inferences, and the Role of Values in Disorders of Consciousness

Assistant Professor Dr. Laura Cabrera has a new article in the current issue of AJOB Neuroscience (Volume 7, Issue 1). Dr. Cabrera’s open peer commentary, “Inductive Risks, Inferences, and the Role of Values in Disorders of Consciousness,” is available online through Taylor & Francis.