This post is a part of our Bioethics in the News series
By Laura Cabrera, PhD
In a state of public health emergency, such as the one brought on by COVID-19, different countries have invoked extra powers to help mitigate the public health threat. Under normal circumstances, these special powers would be considered infringements on our liberty and privacy. A recent Wired article reported that big tech companies like Google and Facebook are in discussions with the White House about sharing collective data on people’s movements during the current pandemic, for example by using phone location data or private social media posts to track whether people are remaining at home and keeping a safe distance to stem the outbreak, and to measure the effectiveness of calls for social distancing. In the U.S., the government would generally need to obtain a user’s permission or a court order to acquire that type of user data from Google or Facebook. But as mentioned above, the government has broader powers in an emergency.
Obtaining this data could help governments prepare for the coming weeks of this public health emergency. For example, smartphone location data analysis from the New York Times has shed light on disparities in which groups can afford to stay home, limiting their exposure to the coronavirus. This is certainly useful for better understanding the spread of the disease in different areas and across different socioeconomic groups. Facebook is working with Chapman University and other collaborators to develop maps that show how people are moving between areas that are hotspots of COVID-19 cases and areas that are not; such maps could be useful in understanding the spread of the disease. As announced in a news release this month, Apple and Google have launched a joint effort to help governments and health agencies reduce the spread of the virus by using application programming interfaces and operating system-level technology to enable “contact tracing.”
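The privacy-preserving idea behind such contact tracing can be sketched in a few lines. The code below is a heavily simplified illustration, not the actual Apple/Google protocol: the function names, key sizes, and interval counts are hypothetical. The core notion is that a phone broadcasts short-lived pseudonymous identifiers derived from a secret daily key, so that contacts can later be matched on-device without any central registry of people’s movements.

```python
import hashlib
import hmac
import os

# Illustrative sketch of decentralized contact tracing. All names and
# parameters here are assumptions for demonstration, not the real
# exposure notification specification.

def daily_key() -> bytes:
    """A fresh random secret generated on-device each day."""
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Pseudonymous identifier broadcast during one short time interval,
    derived from the daily key so it cannot be linked back to a person."""
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

def exposure_check(heard_ids, published_keys, intervals=144):
    """If a user tests positive and voluntarily publishes their daily keys,
    other phones re-derive the identifiers locally and look for matches
    against identifiers they overheard via Bluetooth."""
    heard = set(heard_ids)
    return any(
        rolling_id(key, i) in heard
        for key in published_keys
        for i in range(intervals)
    )
```

Because matching happens on each user’s device against voluntarily published keys, no party ever assembles a central map of who met whom, which is what distinguishes this design from the centralized tracking discussed below.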
While this sounds promising, one of the main obstacles has to do with concerns over the privacy of users whose data might be handed over by the companies. It would be unprecedented for the government to openly mine user movement data on this scale. Compounding the issue, many more people now rely on digital tools to work or attend classes remotely, as well as to stay connected with family and friends, making the data gathered both richer and more extensive. However, as pointed out in a New York Times editorial, we should not sacrifice our privacy as a result of this pandemic.
Another relevant concern related to the use of collective data is government surveillance. For example, the use of mobile data to track the movements of individual coronavirus patients in China or South Korea represents a more controversial use of the collected data.
It is certain that during this challenging time, data sharing and collaboration between academia, governments, civil society, and the private sector are key to monitoring, understanding, and helping mitigate this pandemic. However, without rules for how companies should anonymize the data, and without clear limits on the type of data they can collect and on how the data can be used and kept secure by researchers and governments, the perils might be greater than the promises. Furthermore, we need a clear path for what happens after all of this is over. For example, people should be given the option to delete user profiles they created as part of new work and school arrangements.
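One concrete form such anonymization rules could take is a minimum-count threshold on shared mobility statistics: only aggregate flows seen often enough to hide any individual are released. The sketch below is a minimal, hypothetical illustration of that idea; the threshold value and the region granularity are exactly the kinds of policy choices the text argues must be set explicitly.

```python
from collections import Counter

def aggregate_movements(trips, k=10):
    """Count trips between regions and suppress any origin/destination
    pair observed fewer than k times, so that small groups of people
    cannot be singled out from the released statistics.

    `trips` is an iterable of (origin, destination) region pairs; both
    the threshold k and the coarseness of the regions are policy
    decisions, not technical givens."""
    counts = Counter((origin, dest) for origin, dest in trips)
    return {pair: n for pair, n in counts.items() if n >= k}
```

A rule like this is simple to state and audit, which is part of the point: without an agreed threshold, each company decides for itself what counts as “anonymized.”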
Given past scandals around privacy and transparency at these big tech companies (in addition to several scandals involving the current government administration), it is hard to trust that only aggregate trends would be gathered, that no identifying information about users would be collected, and that people would not be tracked over long periods beyond the scope of the pandemic.
Civil groups and academics have discussed the need to protect civil liberties and public trust, arguing for the need to identify best practices to maintain responsible data collection, processing, and use at a global scale.
The following are some of the key ideas that have been discussed:
- In a public health emergency like the one we are living through, some privacy intrusions might be warranted, but they need to be proportionate. For example, it would not be proportionate to gather ten years of travel history on all individuals for a disease with a roughly two-week incubation period.
- This type of partnership between government and big tech companies needs a clear expiration date, as there is a risk of improper surveillance if data gathering continues after the crisis is over. History shows that programs adopted in a state of emergency have outlived the emergency that justified them, so we as a society need to be vigilant in ensuring that such extraordinary measures do not become permanent fixtures in the landscape of government intrusions into daily life.
- The collection of data should be based on science, and without bias based on nationality, ethnicity, religion, or race (unlike bias present in other government containment efforts of the past).
- There is a need to be transparent with the public about any government use of “big tech data” and provide detailed information on items such as the information being gathered, the retention period, tools used, and the ways in which these guide public health decisions.
- Finally, if the government seeks to limit a person’s rights based on the data gathered, the person should have the opportunity to challenge those conclusions and limits.
A few weeks ago, the European Data Protection Board issued a statement on the importance of protecting personal data when used in the fight against COVID-19. The statement highlighted specific articles of the General Data Protection Regulation (GDPR). For example, Article 9 allows the processing of personal data “for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health,” provided such processing is proportionate to the aims pursued. In the U.S. we are far from having such a framework with which to begin discussing data collection, sharing, and use under the current circumstances.
There is no doubt about the potential public health benefits of analyzing such data and of surveillance; consider, for example, the utility of identifying individuals who have traveled to hotspot areas, or of tracing and isolating the contacts of those infected. However, without a clear framework for how digital data collection companies will address privacy and surveillance concerns, we should be cautious about granting them access to other areas of our lives, access that would also be shared with governments. Without due caution, not only will public trust continue to be undermined, but people will also be less likely to follow public health advice or recommendations, leading to even worse public health consequences.
Laura Cabrera, PhD, is an Assistant Professor in the Center for Ethics and Humanities in the Life Sciences and the Department of Translational Neuroscience at Michigan State University.
Join the discussion! Your comments and responses to this commentary are welcomed. The author will respond to all comments made by Thursday, May 7, 2020. With your participation, we hope to create discussions rich with insights from diverse perspectives.
Article narration by Liz McDaniel, Communications Assistant, Center for Ethics.
More Bioethics in the News from Dr. Cabrera: Should we trust giant tech companies and entrepreneurs with reading our brains?; Should we improve our memory with direct brain stimulation?; Can brain scans spot criminal intent?; Forgetting about fear: A neuroethics perspective