There are strict regulations around the handling of medical information—regulations intended to protect patient privacy, although they can also make it harder for patients to access their own health records. But data from the rest of our life, including information about where we go, what we buy, and what websites we frequent, is routinely bought and sold by advertisers and data vendors. A study published by the Yale Privacy Lab in 2017, for example, revealed that the vast majority of Android apps contain trackers, which collect and presumably share information about user preferences and activities.
With the arrival of the COVID-19 pandemic, there has been extensive interest in the use of some of these tracking measures to provide information relevant to public health. These range from aggregate measures of travel, as in this recent, public example shared by the company Cuebiq, to more troubling examples. South Korea, for instance, uses cell-phone data, closed-circuit television cameras, and bank transactions to generate “a digital diary of [an infected] individual’s life a few days before contracting the coronavirus,” information that’s “made public on a Ministry of Health and Welfare website,” the Wall Street Journal reports. As of March 31, at least twenty countries had implemented some form of digital tracking.
At one level, it makes a certain amount of intuitive sense to take information that is already being collected and use it for a public good. That’s the argument Rajiv Shah, head of the Rockefeller Foundation, makes to Marketplace Tech’s Molly Wood:
Shah: We live in a present, and we’re going to live in a future, where knowledge about where people are, knowledge about the practices that are ongoing, knowledge about what they’re searching for are all available to a whole host of actors, and turning that knowledge and information into something that can help prevent, and in this case, accelerate a response to a pandemic threat that may kill millions and millions of people around the world, has to be a first priority.
Wood: Short-term, some privacy may be trampled in the interest of saving lives.
Shah: I would argue [that] our privacy is already being trampled in the interest of selling shoes or selling whatever item that somebody wants to sell you. We have long since lost that basic privacy based on those advertiser codes and our geolocation information and lots of other elements of our digital footprint and digital presence. The reality is, a lot of that information could right now be used to help optimize the response to this pandemic.
In other words, that ship has sailed: We’ve “long since lost that basic privacy,” Shah says, so we might as well put the information to good use.
This privacy defeatism worries health care data scientists like Duke University’s Eric Perakslis. “The argument that privacy is poor is never a good argument for making it poorer,” he told me. “In health care, we can consent, or at least inform, the public of necessary surveillance in times of crisis such as pandemics—instances where privacy is temporarily lapsed or redefined. These should be treated as temporary exceptions, not allowed to default to the new normal.”
As Jennifer Goldsack, executive director of the Digital Medicine Society (DiME), puts it, “My main concern is false positive risk.” If an advertiser mistakenly thinks you like penny loafers, she says, who cares—at worst, you’re served a lot of irrelevant ads. But a false positive related to health can have much more troubling consequences. Goldsack explains:
If you triangulate a few data points and incorrectly infer that I have been exposed to COVID, had COVID, am bipolar, am an alcoholic, etc., etc., the consequences could be catastrophic.
Don’t get me wrong. Even if I was a person who was exposed to COVID, am bipolar, and am an alcoholic, that’s none of your damn business and no one should be violating my privacy, profiting, and harming me by sharing that data. But the idea that a bogus profile exists about me is also really worrying.
Of course, the companies trafficking in ad-tech data generally assert that the information is all de-identified—that is, it isn’t shared in a fashion that’s associated with a specific person.
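But de-identification is often weaker than it sounds. In a well-known demonstration, computer scientist Latanya Sweeney re-identified “anonymous” medical records by linking them to public voter rolls on just three quasi-identifiers: ZIP code, birth date, and sex. The sketch below, using entirely invented names and records, shows the shape of that linkage attack; it is an illustration of the general technique, not a depiction of any real dataset.

```python
# Illustrative sketch of a linkage attack: joining "de-identified" records
# to a public dataset on shared quasi-identifiers (ZIP, birth date, sex).
# All names and records below are invented for this example.

deidentified_health_records = [
    {"zip": "02138", "birth_date": "1965-07-28", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1980-01-02", "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (e.g., a voter roll) carrying the same quasi-identifiers
# alongside names.
public_roll = [
    {"name": "Jane Doe", "zip": "02138", "birth_date": "1965-07-28", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth_date": "1980-01-02", "sex": "M"},
]

def reidentify(health_records, roll):
    """Link 'anonymous' health records to names via quasi-identifiers."""
    matches = []
    for record in health_records:
        key = (record["zip"], record["birth_date"], record["sex"])
        candidates = [p for p in roll
                      if (p["zip"], p["birth_date"], p["sex"]) == key]
        # A unique match means the record is no longer anonymous.
        if len(candidates) == 1:
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(reidentify(deidentified_health_records, public_roll))
# → [('Jane Doe', 'hypertension'), ('John Roe', 'asthma')]
```

The point is that removing names alone does not anonymize a dataset: the more detailed and individual-level the data, the more likely a handful of attributes will single a person out once another dataset is available to join against.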
And de-identification offers weaker protection than it may seem: supposedly anonymous records can often be re-identified by linking them with other datasets, and the government could order exactly that, giving it access to reams of very detailed information about citizens. Perhaps under normal circumstances, this concern might seem far-fetched, but in an emergency—like a pandemic—norms can change in an instant. As a recent New York Times headline observed, “For Autocrats, and Others, Coronavirus Is a Chance to Grab Even More Power”; the article continues, “Leaders around the world have passed emergency decrees and legislation expanding their reach during the pandemic. Will they ever relinquish them?” Consider how rapidly standards are already shifting in other areas in the present crisis, like in intensive care units, where physicians are wrestling with the possibility of discarding once-sacrosanct advance medical directives so they can allocate scarce resources. It’s just as easy to imagine rules regarding the use of ad-tech data abruptly shifting.
The real dilemma here is that the sort of information already available to advertisers could be incredibly helpful to medical research and patient care. As public-health researcher Melody Goodman famously suggested, “your zip code is a better predictor of your health than your genetic code.” Health care providers could formulate more effective treatments if they had a richer understanding of the context of patients’ lives: information about how people actually go about their days, the choices they make, the places they go.
Even under the difficult conditions of a pandemic, it’s critical to ensure that data—especially data that might be associated with individual health—is collected with transparency and real consent (not typical “terms of service” consent), and that its use is governed in a fashion that protects individual privacy and gives people power to correct errors.
What a win for public health it would be if the pandemic served as a catalyst for demonstrating that, as author Yuval Noah Harari so eloquently framed it, we can achieve these goals through empowered, informed citizens choosing to share well-protected data, not from authoritarian surveillance that arrogates our personal information in the name of a greater good.