The Promises and Perils of Using Collective Data to Monitor COVID-19


This post is a part of our Bioethics in the News series.

By Laura Cabrera, PhD

In a state of public health emergency, such as the one brought on by COVID-19, different countries have invoked extra powers to help mitigate the public health threat. These special powers would under normal circumstances be considered infringements on our liberty and privacy. A recent Wired article reported that big tech companies like Google and Facebook are in discussions with the White House about sharing collective data on people's movements during the current pandemic. For example, phone location data or private social media posts could be used to track whether people are remaining at home and keeping a safe distance to stem the outbreak, and to measure the effectiveness of calls for social distancing. In the U.S., the government would generally need a user's permission or a court order to acquire that type of user data from Google and Facebook. But as mentioned above, the government has broader powers in an emergency.

Obtaining this data could help governments prepare for the coming weeks of this public health emergency. For example, smartphone location data analysis from the New York Times has shed light on the disparities in which groups can afford to stay home and limit their exposure to the coronavirus. This is certainly useful for better understanding the spread of the disease in different areas and across different socioeconomic groups. Facebook is working with Chapman University and other collaborators to develop maps that show how people are moving between areas that are hotspots of COVID-19 cases and areas that are not; such maps could be useful in understanding the spread of the disease. As announced in a news release this month, Apple and Google have launched a joint effort to help governments and health agencies reduce the spread of the virus, using application programming interfaces and operating system-level technology to enable "contact tracing."
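The "contact tracing" idea mentioned above can be illustrated with a toy sketch of a decentralized exposure-notification protocol: phones broadcast short-lived random tokens and remember the tokens they hear, so exposure can later be checked locally rather than in a central registry of who met whom. This is a simplified illustration under assumed names, not Apple and Google's actual API; the `Device` class and the functions here are invented for the example.

```python
import secrets

class Device:
    """Toy model of a phone in a decentralized contact-tracing scheme."""

    def __init__(self):
        self.broadcast_tokens = []  # tokens this device has sent out
        self.heard_tokens = set()   # tokens observed from nearby devices

    def new_token(self):
        # A fresh random token; real schemes rotate these every few minutes
        # so that a single device cannot be followed over time.
        token = secrets.token_hex(16)
        self.broadcast_tokens.append(token)
        return token

def encounter(a, b):
    """Two devices near each other exchange their current tokens."""
    a.heard_tokens.add(b.new_token())
    b.heard_tokens.add(a.new_token())

def exposure_check(device, published_infected_tokens):
    """When a user tests positive, their broadcast tokens are published;
    every other device checks locally whether it ever heard any of them."""
    return bool(device.heard_tokens & set(published_infected_tokens))

alice, bob, carol = Device(), Device(), Device()
encounter(alice, bob)  # Alice and Bob were near each other; Carol met no one
print(exposure_check(bob, alice.broadcast_tokens))    # Bob was exposed
print(exposure_check(carol, alice.broadcast_tokens))  # Carol was not
```

The privacy-relevant design choice in schemes of this shape is that any central server only ever sees random tokens from people who test positive; the record of who encountered whom never leaves the phones.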

Image description: an illustrated image of a pink brain coming out of the top of a person’s head; a magnifying glass with a brown handle, silver rim, and blue lens is above the brain as if looking into it.  The background is light green. Image source: artwork from vecteezy.com.

While this sounds promising, one of the main obstacles has to do with concerns over the privacy of users whose data might be handed over by the companies. It would be unprecedented for the government to openly mine user movement data on this scale. Compounding the issue, many more people now rely on digital tools to work or attend classes remotely, as well as to stay connected with family and friends, which makes the data gathered richer in both amount and type. However, as pointed out in a New York Times editorial, we should not sacrifice our privacy as a result of this pandemic.

Another relevant concern related to the use of collective data is government surveillance. For example, the use of mobile data to track the movements of individual coronavirus patients in China or South Korea can be seen as a more controversial use of the collected data.

It is certain that during this challenging time, data sharing and collaboration between academia, governments, civil society, and the private sector are key to monitoring, understanding, and helping mitigate this pandemic. However, without rules for how companies should anonymize the data, and without clear limits on the type of data they can collect and on how the data could be used and kept secure by researchers and governments, the perils might be greater than the promises. Furthermore, we need a clear path for what happens after all of this is over. For example, people should be given the option to delete user profiles they created as part of new work and school arrangements.

Given past scandals around privacy and transparency surrounding these big tech companies (in addition to the several scandals with the current government administration), it is hard to trust that the idea would be to only gather aggregate trends, and that they would not collect any identifying information about users, or track people over long periods beyond the scope of the pandemic.

Civil groups and academics have discussed the need to protect civil liberties and public trust, arguing for the need to identify best practices to maintain responsible data collection, processing, and use at a global scale.

The following are some of the key ideas that have been discussed:

  • In a public health emergency like the one we are living through, some privacy intrusions might be warranted, but they need to be proportionate. For example, it would not be proportionate to gather 10 years of travel history on every individual for a disease with a roughly two-week incubation period.
  • This type of partnership between government and big tech companies needs a clear expiration date, as there is a risk that improper surveillance could come with continued data gathering after the crisis is over. Given historical precedents in which life-saving programs adopted in a state of emergency have continued after the emergency was resolved, we as a society need to be very cautious to ensure that such extraordinary measures do not become permanent fixtures in the landscape of government intrusions into daily life.
  • The collection of data should be based on science, and without bias based on nationality, ethnicity, religion, or race (unlike bias present in other government containment efforts of the past).
  • There is a need to be transparent with the public about any government use of “big tech data” and provide detailed information on items such as the information being gathered, the retention period, tools used, and the ways in which these guide public health decisions.
  • Finally, if the government seeks to limit a person’s rights based on the data gathered, the person should have the opportunity to challenge those conclusions and limits.

A few weeks ago the European Data Protection Board issued a statement on the importance of protecting personal data when used in the fight against COVID-19. The statement highlighted specific articles in the General Data Protection Regulation legislation. For example, Article 9 mentions that processing of personal data “for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health” is allowed, provided such processing is proportionate to the aims pursued. In the U.S. we are far from having such a framework to start discussing data collection, sharing, and use under the current circumstances.

There is no doubt about the potential public health benefits of analyzing such data: for example, identifying individuals who have traveled to hotspot areas, or tracing and isolating the contacts of those infected. However, without a clear framework for how the companies collecting digital data will address privacy and surveillance concerns, we should be cautious about granting access to other areas of our lives, access that would also be shared with governments. Without due caution, not only will public trust continue to be undermined, but people will also be less likely to follow public health advice or recommendations, leading to even worse public health consequences.

Laura Cabrera, PhD, is an Assistant Professor in the Center for Ethics and Humanities in the Life Sciences and the Department of Translational Neuroscience at Michigan State University.

Join the discussion! Your comments and responses to this commentary are welcomed. The author will respond to all comments made by Thursday, May 7, 2020. With your participation, we hope to create discussions rich with insights from diverse perspectives.

Article narration by Liz McDaniel, Communications Assistant, Center for Ethics.


More Bioethics in the News from Dr. Cabrera: Should we trust giant tech companies and entrepreneurs with reading our brains?; Should we improve our memory with direct brain stimulation?; Can brain scans spot criminal intent?; Forgetting about fear: A neuroethics perspective


Should we trust giant tech companies and entrepreneurs with reading our brains?

This post is a part of our Bioethics in the News series.

By Laura Cabrera, PhD

The search for a brain device capable of capturing recordings from thousands of neurons has been a primary goal of the government-sponsored BRAIN Initiative. Succeeding would require developing flexible electrode materials, miniaturizing the electronics, and achieving fully wireless interaction. Yet this past summer it was the corporately funded Facebook and Elon Musk's Neuralink that stepped forward with announcements regarding their respective technological investments in accessing and reading our human brains.

Image description: A black and white graphic of a person’s head with an electric plug extending out of the brain and back of the head. Image source: Gordon Johnson from Pixabay

Elon Musk, the eccentric technology entrepreneur and CEO of Tesla and SpaceX, made a big announcement while at the California Academy of Sciences. This time it was not about commercial space travel or plans to revolutionize city driving. Instead, Musk presented advances on a product under development at his company Neuralink: a sophisticated neural implant that aims to record the activity of thousands of neurons in the brain, and to write signals back into the brain to provide sensory feedback. Musk mentioned that this technology would be available to humans as early as next year.

Mark Zuckerberg’s Facebook is also funding brain research to develop a non-invasive wearable device that would allow people to type by simply imagining that they are talking. The company plans to demonstrate a prototype system by the end of the year.

These two corporate announcements raise important questions. Should we be concerned about the introduction of brain devices that have the capacity to read thousands of neurons and then send signals to our brains? The initial goal for both products is medical, to help paralyzed individuals use their thoughts to control a computer or smartphone, or in the case of Facebook to help those with disabling speech impairments. However, these products also are considered to be of interest to healthy individuals who might wish to “interact with today’s VR systems and tomorrow’s AR glasses.” Musk shared his vision to enable humans to “merge” with Artificial Intelligence (AI), enhancing them to reach superhuman intelligence levels.

Time will tell whether these grand visions, which currently veer into science fiction, will be matched by scientific progress. However, if they ultimately deliver on their promise, the products could change the lives of those affected by paralysis and other physical disabilities. Yet if embraced by healthy individuals, such technologies could radically transform what it means to be human. There are of course sound reasons to remain skeptical that they will be ready for human use any time soon. First, there are safety issues to consider when implanting electrodes in the brain, including damage to the vasculature surrounding the implant as well as the tissue response around the device. And that is what is currently known about inserting brain-computer interfaces with only a couple of electrode channels; consider what might happen with thousands of electrodes. There remain simply too many unknowns to endorse this intervention for human use in the next year or so. There are also salient issues regarding brain data collection, storage, and use, including concerns connected to privacy and ownership.

Image description: a black and grey illustration of a brain in two halves, one resembling a computer motherboard, the other containing abstract swirls and circles. Image source: Seanbatty from Pixabay

Beyond these concerns, we have to think about what happens when such developments are spearheaded by private companies. Privately funded development is at odds with the slow, careful approach to innovation that most medical developments rely upon, where human research subject regulations and safety measures are clear. It is the "move fast and break things" pace that energizes start-up companies and Silicon Valley entrepreneurs. The big swings at the heart of these entrepreneurial tech companies also bring considerable risks, and when it comes to sophisticated brain interfaces, the stakes are quite high. These products bring to mind scenarios from Black Mirror, a program that prompts a host of modern anxieties about technology. On one hand, the possibility of having a brain implant that allows hands-free device interaction seems exciting, but consider the level of information we would then be giving to these companies. It is one thing to track how individuals react to a social media post by clicking whether they "like" it or not, or by how many times it has been shared. It is another thing altogether to capture which parts of the brain are being activated without us having clicked anything. Can those companies be trusted with a direct window into our thoughts, especially when they have a questionable track record when it comes to transparency and accountability? Consider how long it took for Facebook to start addressing the misuse of customers' personal information. It remains unclear just how much financial support Facebook is providing to its academic partners, or whether volunteers are aware of Facebook's involvement in funding the research.

The U.S. Food and Drug Administration as well as academic partners to these enterprises may act as a moderating force on the tech industry, yet recent examples suggest that those kinds of checks and balances oftentimes fail. Thus, when we hear about developments by companies such as Facebook and Neuralink trying to access the thoughts in our brains, we need to hold on to a healthy skepticism and continue to pose important challenging questions.

Laura Cabrera, PhD, is an Assistant Professor in the Center for Ethics and Humanities in the Life Sciences and the Department of Translational Neuroscience at Michigan State University.

Join the discussion! Your comments and responses to this commentary are welcomed. The author will respond to all comments made by Thursday, September 26, 2019. With your participation, we hope to create discussions rich with insights from diverse perspectives.


More Bioethics in the News from Dr. Cabrera: Should we improve our memory with direct brain stimulation?; Can brain scans spot criminal intent?; Forgetting about fear: A neuroethics perspective


New commentary from Dr. Tomlinson in ‘American Journal of Bioethics’

Center Professor Dr. Tom Tomlinson has a new commentary in the September 2018 American Journal of Bioethics. In "Getting Off the Leash," Dr. Tomlinson discusses "digital medicines" and what such technologies mean for patient privacy. He asks questions such as: do patients have a right to deceive their clinicians? Why would patients be deceptive?

The full text is available via Taylor & Francis Online (MSU Library or other institutional access may be required to view this article).

Worried about your privacy? Your genome isn’t the biggest threat.

This post is a part of our Bioethics in the News series.

By Tom Tomlinson, PhD

It was good news to learn last month that the “Golden State Killer” had at last been identified and apprehended. A very evil man gets what he deserves, and his victims and their families get some justice.

The story of how he was found, however, raised concerns in some quarters. The police had a good DNA sample from the crime scenes, which with other evidence supported the conclusion that the crimes were committed by the same person. But whose DNA was that? Answering that question took some clever detective work. Police uploaded the DNA files to a public genealogy website, GEDmatch, which soon reported other users of GEDmatch who were probably related to the killer. More ordinary police work did the rest.

Most of the concern was over the fact that the police submitted the DNA under a pseudonym, in order to make investigative use of a database whose members had signed up and provided their DNA only for genealogical purposes.

Image description: a black and white photo of the panopticon inside of Kilmainham Gaol in Dublin, Ireland. Image source: Craig Sefton/Flickr Creative Commons.

My interest in this story, however, is the way it both feeds and undermines a common narrative about our DNA—that it is uniquely identifying, and that therefore any uses of our DNA pose special threats to our privacy. As The New York Times expressed this idea, “it is beginning to dawn on consumers that even their most intimate digital data—their genetic profiles—may be passed around in ways they never intended.”

It’s true that a sample of DNA belongs uniquely to a particular individual. But the same is true of a fingerprint, a Social Security number, or an iris. More importantly, by themselves none of these pieces of information reveals who that unique individual is.

As the Golden State Killer story illustrates, it’s only when put in the context of other information that any of these admittedly unique markers becomes identifying. If the GEDmatch database contained nothing but genetic profiles, you could determine which genomes the killer was related to. But you’d have no idea who those genomes belonged to, and you’d be no closer to finding the killer.

Although an individual genome can't by itself be identifying, it can provide a link that ties together different information sources which include that genome. It can then be that collection that points to an individual, or narrows the list of possibilities to increase the odds of identification, and with them the threats to privacy. Imagine the state police maintain a database of forensic DNA linked to records of criminal convictions, and provide that database to criminologists, stripped of any names or other direct identifiers. Imagine as well that a hospital provides researchers with DNA from its patients along with their de-identified medical records (which can include patients' age, race, the first three digits of their ZIP code, and other demographic information).

If we put those together we can do some interesting research: use the DNA link to identify those who both committed various crimes and had a psychiatric history, so we can compare them to convicted felons without a psychiatric history.

But now it may take very little additional information to identify someone in that combined database and invade their privacy. If I'm a researcher (or hacker) who knows that my 56-year-old neighbor was convicted of assault, I can now also find out whether he has a record of psychiatric illness—and a lot more besides. What he had thought private is no longer so.

The point of this somewhat fanciful example is that as more information is collected about us, from more sources, the threats to our privacy will increase, even if what’s contained in individual sources offers little or no chance of identification.
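The somewhat fanciful example above can be made concrete with a short sketch. The records, field names, and DNA profile IDs below are all invented for illustration: two databases that are each "de-identified" on their own become identifying once joined on the shared DNA key and combined with a scrap of outside knowledge.

```python
# Hypothetical de-identified police database: DNA profile IDs linked to
# convictions, with no names or direct identifiers.
police_db = [
    {"dna": "P-1001", "conviction": "assault", "age": 56, "zip3": "488"},
    {"dna": "P-1002", "conviction": "fraud",   "age": 34, "zip3": "490"},
]

# Hypothetical de-identified hospital database: the same DNA profile IDs
# linked to medical records (again no names, only demographics).
hospital_db = [
    {"dna": "P-1001", "psych_history": True,  "age": 56, "zip3": "488"},
    {"dna": "P-1002", "psych_history": False, "age": 34, "zip3": "490"},
]

def merge_on_dna(a, b):
    """Join two 'anonymous' databases on the shared DNA profile key."""
    b_index = {rec["dna"]: rec for rec in b}
    return [{**rec, **b_index[rec["dna"]]} for rec in a if rec["dna"] in b_index]

def reidentify(merged, known_age, known_conviction):
    """An attacker who knows a neighbor's age and conviction can now read
    off that person's medical history from the merged records."""
    return [r for r in merged
            if r["age"] == known_age and r["conviction"] == known_conviction]

merged = merge_on_dna(police_db, hospital_db)
hits = reidentify(merged, known_age=56, known_conviction="assault")
# The single matching record now links the 56-year-old neighbor convicted
# of assault to a psychiatric history, though neither database had a name.
```

Neither database alone names anyone; it is the merge plus a little outside knowledge that does the identifying, which is exactly why regulatory frameworks that assess each dataset in isolation struggle with "big data" research.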

For this reason, the prospect of merging various data sources for "big data" health research will challenge the current research regulatory framework. Under both the current and the new rules (which haven't yet gone into effect), the distinction between identifiable and non-identifiable research subjects is critical. Research using information that can be linked to an individual's identity requires that person's consent. To avoid this requirement, research data must be "de-identified." De-identification is the regulatory backbone on which much of the current "big data" research relies. It allows the appropriation of patient medical records and specimens for use in research without consent, and it provides the regulatory basis for uploading the data collected in NIH-supported research into a large NIH-sponsored database, the database of Genotypes and Phenotypes (dbGaP), an upload that most NIH-supported genomic studies are required to make. Data from dbGaP can then be used by other researchers to address other research questions.

The possibilities for merging such "de-identified" databases for research purposes will only increase, extending even to facial recognition databases being collected online and on the street. As the mergers increase, it will become more difficult to claim that the people represented in those databases remain non-identifiable. As Lynch and Meyer point out in the Hastings Center Report, at that point there will be two choices. We can require that all such research obtain at least broad consent, which will have to be reaffirmed every time a person's data is used in new contexts that make identification possible. Or we will have to fundamentally reassess whether privacy can play any role at all in our research ethics, as the very idea of "privacy" evaporates in the panopticon of everyday surveillance.

Tom Tomlinson, PhD, is Director and Professor in the Center for Ethics and Humanities in the Life Sciences in the College of Human Medicine, and Professor in the Department of Philosophy at Michigan State University.

Join the discussion! Your comments and responses to this commentary are welcomed. The author will respond to all comments made by Thursday, July 12, 2018. With your participation, we hope to create discussions rich with insights from diverse perspectives.



Would you donate to a biobank?

Center Director Dr. Tom Tomlinson and Raymond G. De Vries, Co-Director of the Center for Bioethics and Social Sciences in Medicine at University of Michigan, have co-authored the article "Americans want a say in what happens to their donated blood and tissue in biobanks." The authors discuss biobank donations, precision medicine, genetics, privacy, and consent.

The last time you went to a hospital, you probably had to fill out forms listing the medications you are taking and updating your emergency contacts. You also might have been asked a question about what is to be done with “excess tissues or specimens” that may be removed during diagnosis or treatment. Are you willing to donate these leftover bits of yourself (stripped of your name, of course) for medical research?

If you are inclined to answer, “Sure, why not?” you will join the majority of Americans who would agree to donate, allowing your leftovers, such as blood or unused bits from biopsies or even embryos, to be sent to a “biobank” that collects specimens and related medical information from donors.

But what, exactly, will be done with your donation? Can the biobank guarantee that information about your genetic destiny will not find its way to insurance companies or future employers? Could, for example, a pharmaceutical company use it to develop and patent a new drug that will be sold back to you at an exorbitant price?

These questions may soon become a lot more real for many of us.

Read the full piece at The Conversation.

Related reading: “Understanding the Public’s Reservations about Broad Consent and Study-By-Study Consent for Donations to a Biobank: Results of a National Survey” published July 14, 2016 in PLoS ONE, an open-access peer-reviewed journal. Authors: Raymond Gene De Vries, Tom Tomlinson, Hyungjin Myra Kim, Chris Krenz, Diana Haggerty, Kerry A. Ryan, Scott Y. H. Kim.

The face of Zika: women and privacy in the Zika epidemic

This post is a part of our Bioethics in the News series.

By Monica List, DVM, MA

A quick online search for “Zika” reveals two kinds of images, those of vectors and those of victims. Images of Aedes sp. mosquitoes, vectors of the Zika, Dengue, and Chikungunya viruses, dominate the virtual landscape, followed closely by images of infants and children with microcephaly, as well as women; some pregnant, some caring for children with Zika-related congenital conditions. This abundance of images of children and mothers affected by Zika is not surprising, as they are, in fact, the populations most seriously harmed by the virus. There is now enough evidence to infer that prenatal Zika virus infection can cause microcephaly and other brain anomalies in the developing fetus (Rasmussen et al 2016). Embedded in public health communiques and media coverage, these publicly available images are undoubtedly an important part of the narrative of the Zika epidemic; they provide a face and a human dimension to a risk often perceived as distant and abstract by residents of latitudes currently unaffected by the virus. In this sense, the images perform an important role; they humanize, contextualize, and help raise awareness of a serious disease that is expected to continue to spread (CDC, 2016). Regardless of the many ways in which these images can improve our understanding of how Zika is affecting the lives of people worldwide, and may eventually affect ours, we must exercise caution in our use of them. These images are not just representations of bodies, they are instances of a person’s life, and furthermore, often present facets of that life that are considered private to some extent, such as pregnancy and illness.

Adult, male mosquitoes are inspected by an IAEA technician at the Agency’s Insect Pest Control Laboratory in Seibersdorf, Austria. Image source: Dean Calma / IAEA via Flickr.

While privacy is a central topic in bioethics, the kind of privacy at stake in this case may not fit neatly into bioethical definitions and guidelines. Forms of privacy relevant to bioethics include informational privacy, physical privacy, decisional privacy, proprietary privacy, and relational privacy (Beauchamp and Childress 2013). It is plausible to say that media representations of these mothers and children involve most of these forms, but also other aspects that may fall outside of these definitions. A BBC News report on babies born with microcephaly attributed to in-utero Zika infection in Pernambuco, Brazil, not only shares private information such as details on the health and pregnancies of the featured mothers, but also allows us a very close look at their private lives: their homes, bedrooms, and even the intimate suffering caused by disease. Even if subjects have consented to the use of their likenesses and stories, is this an encroachment on privacy, and if so, one that would fall within the purview of bioethics?

In bioethics, discussions of privacy generally focus on situations involving patients or subjects in medical care or research settings. Beyond these settings, safeguards such as Institutional Review Board approval are in place for most other research done in an academic setting or for academic purposes, but the same does not apply to non-academic research, including journalistic investigation. While journalism has its own set of ethical principles and guidelines, these tend to be more flexible regarding notions of privacy; the sharing of information that might be considered private in a healthcare setting is allowed in the media under the freedom of press principles. A common protection available to subjects in both healthcare and media settings is informed consent; in both cases one of the main roles of informed consent is to protect subjects from harm, acting on the principle of respect for autonomy (Beauchamp and Childress, 2013). Considerations of autonomy and harm are central to ethical dilemmas in journalism, but they tend to be open to interpretation, arguably more so than in bioethics (Richards, 2009). Furthermore, this interpretation is influenced by where loyalties lie in each case; for healthcare practitioners and researchers, the patient or subject is the priority, while journalists have a prima facie duty to the public interest (Canadian Association of Journalists, Ethics Advisory Committee, 2014). Another issue with consent in this case is that while consent may change over time, it would be practically impossible to completely remove the availability of images and stories that have already been made public, especially with the growing use of the internet and digital communications. One can assume that even if the content is eventually removed from the digital sphere, the images and stories of these mothers and children can be downloaded, saved, and shared without limit.

In Recife, Brazil, Minister Teresa Campello talks to residents of Recife, Brazil, as part of a Zika prevention campaign “Dia Nacional de Mobilização Zika Zero.” Image source: Ministério do Desenvolvimento Social e Combate à Fome on Flickr.

In this case, the insufficiency of consent is further complicated by the social location of the people being portrayed. Although in theory privacy protections apply equally to all, privacy is a markedly gendered and raced concept. For example, since fertility and reproduction can be said to play a role in a nation's sustainability, many governments see it as their business to legislate over women's bodies, dictating if and how women have a right to make autonomous reproductive choices (Holloway, 2011). Notably, the majority of women portrayed in the media coverage of the Zika epidemic are poor women of color; part of the reason is simply the geographical location of the epidemic, as well as the fact that poor populations generally have access to fewer resources to prevent and treat disease. Populations whose medical care is attached to some aspect of identity, for instance age, gender, race, or ethnicity, and whose autonomy is diminished in relation to this identity are considered vulnerable (CIOMS, 2002; Holloway, 2011). In research and medical care, these individuals are subject to protections regulated at the international and national levels. While ethical guidelines in journalism also include special considerations for vulnerable subjects, definitions of vulnerability and how to manage it are often vague, and always weighed against the need to provide a public service (Richards, 2009). Usually, the only protection vulnerable subjects have access to is informed consent, which is of little use if their autonomy is already compromised. We must also consider that vulnerability goes beyond diminished autonomy due to one's age, gender, or race; it also includes, for example, an increased exposure to social risks that can result in discrimination, loss of opportunity, and even violence.
Zika-infected pregnant women portrayed by the media can find themselves under even greater public scrutiny, and may be denied access to options such as elective abortion and participation in research that could potentially help women in similar situations (Harris et al, 2016). But this should not mean that we cannot access or share information about vulnerable subjects; it just means that, as is often the case, consent is not enough, especially when this information is likely to be widely broadcast.

Given the importance of these stories in shaping a public narrative of the Zika epidemic, and the risk of harming vulnerable subjects, bioethicists should take an active part in laying out guidelines for the sharing of this information. Some aspects to consider in these conversations are cultural assumptions about privacy, our purposes for sharing this information, and whether or not it is contextualized by a broader story, not simply a voyeuristic look into the suffering of others. Additionally, breadth of scope should generally be considered in order to ensure that images and stories do not focus on only one aspect of a broader problem. A more balanced coverage of the Zika epidemic would also focus on the stories of researchers, government officials, tourists, and others touched in some way by this crisis. Finally, the kinds of information outlets we are using should be of primary concern; information shared on the internet, a generally open and unregulated medium, can widely expose intimate aspects of a person's life and cause harm beyond what we can imagine.

Monica List, DVM, MA, is a doctoral student in the Department of Philosophy at Michigan State University. She earned a veterinary medicine degree from the National University of Costa Rica in 2002, and an MA degree in bioethics, also from the National University, in 2011.

Join the discussion! Your comments and responses to this commentary are welcomed. The author will respond to all comments made by Thursday, June 2, 2016. With your participation, we hope to create discussions rich with insights from diverse perspectives.

