The technologies the world is using to track coronavirus — and people
Now that the world is in the thick of the coronavirus pandemic, governments are quickly deploying their own cocktails of tracking methods. These include device-based contact tracing, wearables, thermal scanning, drones, and facial recognition technology. It’s important to understand how those tools and technologies work and how governments are using them to track not just the spread of the coronavirus, but the movements of their citizens.
Contact tracing and smartphone data
Contact tracing is one of the fastest-growing means of viral tracking. Although the term entered the common lexicon with the novel coronavirus, it’s not a new practice. The Centers for Disease Control and Prevention (CDC) says contact tracing is “a core disease control measure employed by local and state health department personnel for decades.”
Traditionally, contact tracing involves a trained public health professional interviewing an ill patient about everyone they’ve been in contact with and then contacting those people to provide education and support, all without revealing the identity of the original patient. But in a global pandemic, that careful manual method cannot keep pace, so a more automated system is needed.
That’s where device-based contact tracing (usually via smartphone) comes into play. This involves using an app and data from people’s smartphones to figure out who has been in contact with whom — even if it’s just a casual passing in the street — and alerting everyone who has been exposed to an infected individual.
But the devil is in the details. There are obvious concerns about data privacy and abuse if that data is exposed or misused by those who hold it. And the tradeoffs between privacy and measures needed to curb the spread of COVID-19 are a matter of extensive debate.
The core of that debate is whether to take a centralized or decentralized approach to data collection and analysis. To oversimplify: In either approach, data is generated when people’s phones come into contact with one another. In a centralized approach, data from the phones gets uploaded into a database, and the database matches a user’s records with others’ and subsequently sends out alerts. In a decentralized approach, a user who tests positive uploads only anonymized identifiers, other users’ phones download the list of anonymous IDs, and the matching is done on-device.
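To make the two data flows concrete, here is a minimal sketch of the decentralized approach in Python. The class and its methods are hypothetical stand-ins; real protocols such as DP-3T and the Apple/Google API derive their rotating identifiers from cryptographic keys rather than raw random tokens, but the shape is the same: anonymous IDs are exchanged locally, and matching never leaves the phone.

```python
import secrets

class Phone:
    """Hypothetical stand-in for a handset running a decentralized tracing app."""

    def __init__(self):
        self.my_ids = []        # anonymous IDs this phone has broadcast
        self.heard_ids = set()  # IDs overheard from nearby phones

    def broadcast_id(self):
        # Generate a fresh random identifier; rotating IDs keep any one
        # device from being tracked over time.
        new_id = secrets.token_hex(16)
        self.my_ids.append(new_id)
        return new_id

    def record_contact(self, nearby_id):
        # Store the anonymous ID of a phone that came within Bluetooth range.
        self.heard_ids.add(nearby_id)

    def check_exposure(self, published_ids):
        # Matching happens on-device: compare IDs this phone has heard
        # against the list published by infected users. No identity or
        # location data ever leaves the handset.
        return bool(self.heard_ids & set(published_ids))

# Two phones pass each other on the street.
alice, bob = Phone(), Phone()
bob.record_contact(alice.broadcast_id())

# Alice tests positive and uploads only her anonymous IDs.
published = list(alice.my_ids)

# Bob's phone downloads the list and matches locally.
print(bob.check_exposure(published))  # True -> Bob gets an exposure alert
```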
The advantage of decentralization is that data stays private and essentially unexploitable, and users remain anonymous. Centralization offers richer data, which could help public health officials better understand the disease and its spread and allow government officials to more effectively plan, execute, and enforce quarantines and other measures designed to protect the public.
But the potential disadvantages of centralized data are downright dystopian. Governments can exploit the data. Private tech companies may be able to buy or sell it en masse. Hackers could steal it.
And even though centralized systems anonymize data, that data can be re-identified in some cases. In South Korea, for example, a failure to keep contact tracing data sufficiently anonymous led to incidents of public shaming. Surveillance vendors could accelerate that kind of abuse: Israel-based NSO Group provides spyware that could be put to such a task. According to Bloomberg, the company has contracts with a dozen countries and is embroiled in a lawsuit with WhatsApp, which accuses it of delivering spyware via the popular messaging platform.
That’s not to mention various technical challenges — notably that Apple doesn’t allow contact tracing apps to use Bluetooth in the background, as well as Android bugs that developers of these apps have encountered. To address some of these issues, Apple and Google forged a historic partnership to create a shared API. But the debate between centralized and decentralized approaches remains riddled with nuance.
A deep dive into the situation in France provides a microcosm of the whole issue, from the push/pull between governments and private companies to technical limitations to issues of public trust and the need for mass adoption before contact tracing can be effective. But even with these growing pains, the urgent need to ease lockdowns means various forms of contact tracing have already been employed in countries around the world, and in the U.S. from state to state.
Examples include:
- In the U.S., absent a clear federal contact tracing plan (for now), states have moved forward on their own. A multi-state group that includes New York, New Jersey, and Connecticut is creating its own tracing program.
- South Korea’s Ministry of the Interior and Safety developed a GPS-tracking app that requires citizens who have been ordered to quarantine to stay in touch with a case worker.
- In China, citizens are required to use an app that color-codes people based on their health level (green, yellow, or red) to dictate where they’re allowed to be. A New York Times report said the app shares data with law enforcement.
- India’s government mandated that all workers use its Aarogya Setu app (which uses Bluetooth and GPS for contact tracing), ostensibly to maintain social distancing measures as the nation lifts restrictions and sends people back to work.
- Singapore was early to contact tracing with its TraceTogether app, but low adoption has spurred a push to merge it with a tool called SafeEntry that would force people to check in electronically at businesses and other places.
- Both Australia and New Zealand have employed contact tracing apps based on Singapore’s TraceTogether.
- MIT Technology Review is building a database tracker of all the government-backed automated contact tracing apps.
- Iceland’s Rakning C-19 contact tracing app uses GPS and has achieved 38% adoption, but a government official said it hasn’t made a significant impact on contact tracing efforts.
- Michigan has chosen to rely on traditional manual contact tracing in lieu of an app.
- The U.K.’s NHS contact tracing app is rolling out for testing and will be used along with traditional manual contact tracing methods, but the app’s centralized approach has privacy advocates concerned.
Wearables and apps
One method cribbed from law enforcement and the medical field is the use of wristbands or GPS ankle monitors to track specific individuals. In some cases, these monitors are paired with smartphone apps that differ from traditional contact tracing apps in that they’re meant to specifically identify a person and track their movements.
In health care, patients who are discharged may be given a wristband or other wearable that’s equipped with smart technology to track their vitals. This is ideal for elderly people, especially those who live alone. If they experience a health crisis, an app connected to the wristband can alert their caregivers. In theory, this could help medical professionals keep an eye on the ongoing health of a recovered and discharged COVID-19 patient, monitoring them for any secondary health issues. Ostensibly, this sort of tracking would be kept between the patient and their health care provider.
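As a rough illustration of the monitoring logic (not based on any particular product), the core of such a system can be as simple as checking each reading against safe ranges and alerting caregivers when a vital drifts out of bounds. The vitals and thresholds below are illustrative placeholders, not clinical guidance:

```python
# Hypothetical safe ranges for a few vitals a wearable might report.
ALERT_THRESHOLDS = {
    "temperature_f": (95.0, 100.4),   # fever cutoff per the CDC definition
    "heart_rate_bpm": (50, 120),
    "spo2_percent": (92, 100),        # blood-oxygen saturation
}

def check_vitals(reading):
    """Return a list of out-of-range vitals from one wearable reading."""
    alerts = []
    for vital, (low, high) in ALERT_THRESHOLDS.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{vital}={value} outside [{low}, {high}]")
    return alerts

# Example reading from a recovering patient's wristband.
reading = {"temperature_f": 101.2, "heart_rate_bpm": 88, "spo2_percent": 95}
for alert in check_vitals(reading):
    # In a real system this would notify the care provider, not print.
    print("ALERT:", alert)
```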
Law enforcement has long used ankle monitors to ensure that people under house arrest abide by court orders. In recent years, mobile apps have seen similar use. It’s not a big jump to apply these same technologies to tracking people under quarantine.
A judge in West Virginia allowed law enforcement to put ankle monitors on people who have tested positive for COVID-19 but have refused to quarantine, and a judge in Louisville, Kentucky, did the same. According to a Reuters report, Hawaii — which needs to ensure that arriving airline passengers quarantine for 14 days after entering the state — was considering using similar GPS-enabled ankle monitors or smartphone tracking apps but shelved that idea after pushback from the state’s attorney general.
Remote monitoring via AI offers a potentially more attractive solution. A group of Stanford researchers proposed a home monitoring system designed for the elderly that would use AI to noninvasively (and with a layer of privacy) track a person’s overall health and well-being. Its potential value during quarantine, when caregivers need to avoid unnecessary contact with vulnerable populations, is obvious.
Apps can also be used to create a crowdsourced citizen surveillance network. For example, Riverside County, California, launched an app called RivCoMobile that allows people to anonymously report others they suspect of violating quarantine, hosting a large gathering, or flouting other rules, like not wearing face masks inside essential businesses.
As an opt-in choice for medical purposes, a wearable device and app could allow patients to maintain a lifeline to their care providers while also contributing data that helps medical professionals better understand the disease and its effects. But as an extension of law enforcement, wearables raise a far more ominous specter. Even so, it’s a tradeoff, as people with COVID-19 who willfully ignore stay-at-home orders are putting lives at risk.
Examples include:
- In Poland, the government’s Home Quarantine app lets police check that people are abiding by forced quarantines. Users have to check in using a phone number and SMS code, and they have to take a photo that’s verified with facial recognition. Quarantine breakers can receive fines.
- Those entering Kenya via the Jomo Kenyatta International Airport were required to self-quarantine for 14 days. The government monitored these people’s movements using their phones, and those who broke quarantine could be apprehended by police.
- The Southern Nevada Health District uses an app to track people who have been tested and are presumed to have COVID-19. They’re supposed to report symptoms daily, and if they fail to do so, the app notifies a “disease investigator.”
- Washington state’s Providence St. Joseph Health hospital deployed remote monitoring from Twistle to care for confirmed and suspected COVID-19 patients.
- In New York and New Orleans, LSU Healthcare Network is leveraging AI to remotely monitor cardiac patients vulnerable to the coronavirus.
- MIT’s Emerald monitoring device uses Wi-Fi and AI to track patients’ vitals, sleep, and movements.
- Current Health partnered with the Mayo Clinic on remote patient monitoring.
Thermal scanning
Thermal scanning has been used as a simple check at points of entry, like airports, military bases, and businesses of various kinds. The idea is that a thermal scan will catch anyone who is feverish — defined by the CDC as having a temperature of at least 100.4 degrees Fahrenheit — in an effort to flag those potentially stricken with COVID-19.
But thermal scanning is not in itself diagnostic. It’s merely a way to spot one of the common symptoms of COVID-19, although anyone flagged by a thermal scan could, of course, be referred to an actual testing facility.
Thermal scanners range from small handheld devices to larger and more expensive multi-camera systems. They can and have been installed on drones that fly around an area to hunt for feverish individuals who may need to be hospitalized or quarantined.
Unlike facial recognition, thermal scanning is inherently private. Scanner technology doesn’t identify who anyone is or collect other identifying information. But some thermal imaging systems add — or claim to add — AI to the mix, like Kogniz and Feevr.
But thermal scanners are highly problematic, mainly because there’s little evidence of their efficacy. Even thermal camera maker Flir, which could cash in on pandemic fears, has a prominent disclaimer on its site about using its technology to screen for COVID-19. That hasn’t stopped some people from using Flir’s cameras for this purpose anyway.
Thermal scanning can only spot people who have COVID-19 and are also symptomatic with a fever. Many people who end up testing positive for the disease are asymptomatic, meaning a thermal scan would show nothing out of the ordinary. And a fever is present in some but by no means all symptomatic cases. Even those who contract COVID-19 and do experience a fever may be infected for days before any symptoms actually appear, and they remain contagious for days after.
Thermal scans are also vulnerable to false positives. Because it merely looks at a person’s body temperature, a thermal scan can’t tell if someone has a fever from a different illness or is perhaps overheated from exertion or experiencing a hot flash.
That doesn’t even take into account whether a given thermal scanner is precise enough to be reliable. If its accuracy is, say, +/- 2 degrees, a 100-degree temperature could register as 98 degrees or 102 degrees.
Although false negatives are dangerous because they could let a sick person through a checkpoint, false positives could result in people being unfairly detained. That could mean they’re sent home from work, forced into quarantine, or penalized for not abiding by an ordered quarantine, even though they aren’t sick.
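A little arithmetic shows how quickly that error band swallows the fever cutoff. This sketch takes the hypothetical +/- 2 degree tolerance above and checks which true temperatures could be misread on the wrong side of the CDC’s 100.4-degree line:

```python
# Illustrative figures only: a scanner with a +/- 2 degree error band
# checked against the CDC's 100.4 F fever cutoff.
FEVER_CUTOFF_F = 100.4
SCANNER_TOLERANCE_F = 2.0

def possible_readings(true_temp):
    # The scanner may report anything within its error band.
    return (true_temp - SCANNER_TOLERANCE_F, true_temp + SCANNER_TOLERANCE_F)

for true_temp in (99.0, 100.0, 101.0):
    low, high = possible_readings(true_temp)
    feverish = true_temp >= FEVER_CUTOFF_F
    can_false_positive = not feverish and high >= FEVER_CUTOFF_F
    can_false_negative = feverish and low < FEVER_CUTOFF_F
    print(f"true {true_temp:.1f}F reads as {low:.1f}-{high:.1f}F; "
          f"false positive possible: {can_false_positive}, "
          f"false negative possible: {can_false_negative}")
```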
Tech journalists’ inboxes have been inundated with pitches for various smart thermometers and thermal cameras for weeks. But it’s reasonable to wonder how many of these companies are the equivalent of snake oil peddlers. Allegations have already been made against Athena Security, a company that touted an AI-powered thermal detection system.
Facial recognition and other AI
The most invasive type of tracking involves facial recognition and other forms of AI. There’s an obvious use case there. You can track many, many people all at once and continue tracking their movements as they are scanned again and again, yielding massive amounts of data on who is sick, where they are, where they’ve been, and who they’ve been in contact with. Enforcing a quarantine order becomes a great deal easier, more accurate, and more effective.
However, facial recognition is also the technology that’s most ripe for dystopian abuse. Much ink has been spilled over the relative inaccuracy of facial recognition systems on all but white males, the ways governments have already used it to persecute people, and the real and potential dangers of its use within policing. That’s not to mention the sometimes deeply alarming figures behind the private companies making and selling this technology and concerns about its use by government agencies like ICE or U.S. Customs and Border Protection.
None of these problems will disappear just because of a pandemic. In fact, rhetoric about the urgency of the fight against the coronavirus may provide narrative cover for accelerating the development or deployment of facial recognition systems that may never be dismantled — unless stringent legal guardrails are put in place now.
Russia, Poland, and China are all using facial recognition to enforce quarantines. Companies like CrowdVision and Shapes AI use computer vision, often along with Bluetooth, IR, Wi-Fi, and lidar, to track social distancing in public places like airports, stadiums, and shopping malls. CrowdVision says it has customers in North America, Europe, the Middle East, Asia, and Australia. In an emailed press release, U.K.-based Shapes AI said its camera-based computer vision system “can be utilized by authorities to help monitor and enforce the behaviors in streets and public spaces.”
There will also be increased use of AI within workplaces as companies try to figure out how to safely restart operations in a post-quarantine world. Amazon, for example, is currently using AI to track employees’ social distancing compliance and potentially flag ill workers for quarantine.
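Camera-based distancing monitors like these typically reduce to a simple pairwise distance check once people have been detected. The sketch below illustrates only that final step and assumes a hypothetical upstream vision model has already placed each detected person at approximate ground-plane coordinates in meters, which is the genuinely hard part:

```python
import itertools
import math

MIN_DISTANCE_M = 2.0  # roughly the six-foot guideline

def flag_violations(positions):
    """Return pairs of people standing closer than the minimum distance."""
    violations = []
    for (i, a), (j, b) in itertools.combinations(enumerate(positions), 2):
        if math.dist(a, b) < MIN_DISTANCE_M:
            violations.append((i, j))
    return violations

# Hypothetical positions from one camera frame, in meters.
people = [(0.0, 0.0), (1.2, 0.5), (8.0, 3.0)]
print(flag_violations(people))  # [(0, 1)] -> persons 0 and 1 are too close
```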
But deploying facial recognition systems during the pandemic raises another issue: they tend to struggle with masked faces (at least for now), which significantly reduces their efficacy.
The drone problem
Drones sit at the intersection of these tracking technologies and present their own regulatory problems during the coronavirus pandemic. They’re a useful delivery system for things like medical supplies or other goods, and they may be used to spray disinfectants — but they’re also deployed for thermal scanning and facial recognition.
Indeed, policing measures — whether they’re called surveillance, quarantine enforcement, or something else — are an obvious and natural use case for drones. And this is deeply problematic, particularly when it involves AI casting an eye from the sky, exacerbating existing problems like overpolicing in communities that are predominantly home to people of color.
The Electronic Frontier Foundation (EFF) is emphatic that there must be guardrails around the use of drones for any kind of coronavirus-related surveillance or tracking, and it wrote about the dangers they pose. The EFF isn’t alone in its concern, and the ACLU has recently gone so far as to take the issue of aerial surveillance to court.
Drone applications include the following examples:
- UPS subsidiary UPS Flight Forward (UPSFF) and CVS have partnered to use Matternet’s M2 drone system to fly medications from the pharmacy to a retirement community in Florida.
- Baltimore, Maryland police are planning to use drones to track the movements of people in the city.
- Zipline will deliver personal protective equipment (such as masks) around the campuses of the Novant Health medical network in Charlotte, North Carolina. The company’s drones are also flying COVID-19 test samples from rural areas of Ghana to Accra, the nation’s capital.
- Through its Disaster Recovery Program, DJI is conducting remote outreach to homeless populations in Tulsa, Oklahoma and helping enforce social distancing guidelines in Daytona Beach, Florida.
- In China, medical delivery drones supplied by Antwork and others have been used to transport quarantine supplies and medical samples.
- Paris police are facing backlash from privacy groups after using drones to surveil those who break the city’s lockdown rules.
- Flytrex launched a small drone delivery deployment in Grand Forks, North Dakota that’s designed to deliver medicine, food, and other supplies from businesses to homes.
- Police in Mumbai are using drones in some areas of the city to find and help disperse gatherings that violate social distancing rules.
In some roles, drones can help save lives, or at least reduce the spread of the coronavirus by limiting person-to-person contact. As surveillance mechanisms, they could become part of an oppressive police state.
They may even edge close to both at the same time. In an in-depth look at what happened with Draganfly, VentureBeat’s Emil Protalinski unpacked how the drone company went from trying to provide social distancing and health monitoring services from the air to licensing computer vision tech from a company called Vital Intelligence and launching a pilot project in Westport, Connecticut aimed at flattening the curve. Local officials abruptly ended the pilot after blowback from residents, who objected to the surveillance drones and their ties to policing.