Jul 21, 2020

COVID-19 and Uprisings for Black Liberation Used as a Pretext for Advancing Mass Surveillance

Kris Hermes discusses the explosion of corporate and state surveillance in recent months. Originally posted to Agency.

On the night of June 1, amidst mass uprisings against police violence and structural racism, the Federal Bureau of Investigation (FBI) deployed its most advanced spy plane, equipped with sophisticated long-range video cameras that can capture footage day or night, see through haze, and provide infrared thermal imaging, to spy on Black Lives Matter protesters in Washington, DC.

The Cessna Citation, which flew in a 7-mile circle above the city, continued its high-altitude observations for the next few days and was joined, at a slightly lower altitude, by an RC-26 surveillance aircraft deployed by the Air National Guard to provide “airborne situational awareness.”

Around the same time, Governor Tim Walz (D-MN) asked the Secretary of Defense and the Chair of the Joint Chiefs of Staff to help gather “signal intelligence” on Minneapolis protesters. In response, the Department of Homeland Security (DHS) sent one of its Predator drones to spy on protesters at 20,000 feet.

Instead of recognizing the need for systemic change, the need to dismantle the architecture of racist oppression that we have been building for centuries, the federal government has responded to protests in the ways it knows how—with threats of repression and violence.

The Justice Department (DOJ) has equated the unrest following George Floyd’s killing by a white Minneapolis cop with “domestic terrorism.” The FBI has begun “investigations,” including door-knocks and workplace intimidation, across the country. The DOJ has granted the Drug Enforcement Administration sweeping new authority to conduct covert surveillance on people seeking justice for George Floyd.

In addition to issuing federal indictments against dozens of people allegedly connected to the unrest, Attorney General William Barr recently announced the formation of a federal task force focused on what Barr vaguely refers to as “anti-government extremists,” establishing a dragnet that can be used against anti-racist activists.

The type of surveillance used by federal law enforcement in some of the recent indictments just scratches the surface of the widespread surveillance capabilities currently available to the state. In cities across the country, protesters are being targeted based on surveillance of their and others’ social media accounts and internet histories.

Much of this is open-source information, accessible to anyone who takes the time to search for it. The same open-source surveillance methods were used in part to prosecute more than 200 anti-capitalist and anti-fascist protesters arrested on the day of Trump’s inauguration, January 20, 2017.

But, these practices pale in comparison to the broader, more clandestine, and invasive types of surveillance that underpin the state’s response to current and past uprisings for Black liberation.

The Spy in Your Pocket

Imagine the state using location data to identify and target activists at mass demonstrations, or to intimidate witnesses and coerce defendants to plead guilty. This is not far-fetched and it would be easy for the state to carry out, if it doesn’t already routinely do so.

According to research conducted by the New York Times published in December 2019, “Every minute of every day, everywhere on the planet, dozens of companies—largely unregulated, little scrutinized—are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files.”

The Times Privacy Project (TPP) obtained one of those data files which it said is “by far the largest and most sensitive ever to be reviewed by journalists.” The data file contains more than 50 billion location pings during several months in 2016 and 2017, from the phones of more than 12 million people in major US cities, including Los Angeles, New York, San Francisco, and Washington, DC. And yet, the data file represents only a small portion of what is collected and sold every day by the location tracking industry.

The data obtained by TPP did not come from a giant tech company like Google nor from a government surveillance operation, though the state surely takes advantage of this vast resource. Smaller location data companies are largely unregulated and operate behind the scenes, embedding software in many of your favorite apps in order to quietly collect and sell your data. “By design, it’s often nearly impossible to know which companies receive your location information or what they do with it,” according to TPP.

These mostly self-regulated companies track and store your movements based on three claims: (1) people consent to be tracked, (2) the data is anonymous, and (3) the data is secure. TPP found that none of these claims held up to scrutiny.

Many companies gain users’ consent by vaguely explaining their policies and using technical language that can be confusing to the average smartphone user. And, while the data that companies collect may not be associated with other identifying information, like names and email addresses, TPP says it’s “child’s play” to match the data with people’s identities. The claim that your data is being safely stored on guarded servers is undermined by the data breaches we routinely read about in the news.

The insecurity of cell phone data combined with genuine concerns over state surveillance has caused many activists to leave their phones at home when joining a protest. But, many are still unaware or don’t care that their movements are being tracked, sometimes many times per minute.

With no special training, New York Times reporters were able to single out and identify individuals at political demonstrations across the country. Despite masks worn by many of the anti-capitalist and anti-fascist activists protesting Trump’s inauguration in Washington, DC, and by those demonstrating against Milo Yiannopoulos in Berkeley, California, Times reporters were able to easily identify where the masked activists lived and worked, as well as who some of their family members were.
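
To see why re-identification is so trivial, here is a minimal sketch, using entirely made-up data rather than the Times’ actual method: given a feed of “anonymous” pings, a device’s most frequent nighttime location is almost always its owner’s home.

```python
# A hypothetical sketch of location re-identification (not the Times'
# actual method): a device's most frequent nighttime location is almost
# always its owner's home. Pings are tuples of (device_id, timestamp, lat, lon).
from collections import Counter, defaultdict
from datetime import datetime, timezone

def infer_home_locations(pings, night_hours=range(0, 6)):
    """Return the most frequent nighttime location (rounded to ~100 m) per device."""
    nightly = defaultdict(Counter)
    for device_id, ts, lat, lon in pings:
        if datetime.fromtimestamp(ts, tz=timezone.utc).hour in night_hours:
            nightly[device_id][(round(lat, 3), round(lon, 3))] += 1
    return {dev: cells.most_common(1)[0][0] for dev, cells in nightly.items()}

def ping(y, m, d, h, lat, lon):
    # Helper to build a fake ping with a UTC timestamp.
    return ("device-42", datetime(y, m, d, h, tzinfo=timezone.utc).timestamp(), lat, lon)

pings = [
    ping(2017, 1, 20, 17, 38.8977, -77.0365),  # afternoon, along a protest route
    ping(2017, 1, 21, 3, 38.9207, -77.0312),   # 3 a.m., residential block
    ping(2017, 1, 22, 3, 38.9206, -77.0313),   # 3 a.m. the next night, same block
]
print(infer_home_locations(pings))
# {'device-42': (38.921, -77.031)} -- cross-reference that spot with public
# address records and the "anonymous" device ID now points to a name.
```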

The Architecture of Repression

Edward Snowden, who warned us in 2013 about the sweeping surveillance capabilities of the state, called a system that provides government with the location of every person at all times the “architecture of repression.”

It gets worse. The Times reporters said they didn’t combine their data with other sources of information, something data firms typically do. “When datasets are combined, privacy risks can be amplified,” noted the Times. But, the potential for abuse and the capacity to do harm also become far more ominous. The limited protections that exist in a location dataset are rendered meaningless with only one or two more data sources.

Journalist Barton Gellman—one of the first three people to review tens of thousands of classified documents transmitted by Snowden—discovered years later that the bulk collection of cell phone metadata by the National Security Agency (NSA) represented only one dataset in the agency’s attempts to map our social networks. Gellman also uncovered the NSA’s most important tool in that effort, a vast database called Mainway, capable of “contact chaining,” a sophisticated method of analysis that looks for hidden, indirect relationships that human analysts can’t detect.
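
Mainway’s internals remain classified, but the general idea of contact chaining can be illustrated with a toy sketch, which should not be mistaken for the NSA’s implementation: treat call records as a graph and walk outward from a seed identifier, so that people who never contacted the seed directly still surface as indirect ties.

```python
# A toy illustration of "contact chaining" (Mainway's internals are classified;
# this only sketches the general idea): treat call records as a graph and walk
# outward from a seed identifier, so people who never contacted the seed
# directly still surface as indirect ties.
from collections import defaultdict

def contact_chain(call_records, seed, hops=2):
    """Return everyone reachable from `seed` within `hops` call links."""
    graph = defaultdict(set)
    for caller, callee in call_records:
        graph[caller].add(callee)
        graph[callee].add(caller)
    frontier, seen = {seed}, {seed}
    for _ in range(hops):
        frontier = {contact for number in frontier for contact in graph[number]} - seen
        seen |= frontier
    return seen - {seed}

calls = [("alice", "bob"), ("bob", "carol"), ("carol", "dave"), ("eve", "dave")]
print(sorted(contact_chain(calls, "alice", hops=2)))
# ['bob', 'carol'] -- with hops=3, 'dave' appears as well, even though
# dave and alice have never exchanged a single call.
```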

“[T]he issue we must grapple with isn’t just personal data anymore, or the ideas of privacy we have been contesting for years,” wrote Genevieve Bell of Technology Review.  “It is also intimate and shared data, and data that implicates others.”

The Facial Recognition Dataset

Another dataset used by the NSA—which should concern us all greatly—is that of facial images. We don’t know a lot about how the NSA uses Mainway, but if China is any example, the capabilities of high-tech surveillance are staggering and the temptation to use it as a form of control is considerable.

China has been developing facial recognition technology for years and, in combination with other datasets, has used it to oppress Uighur Muslims. “We can match every face with an ID card and trace all your movements,” Yin Jun of Dahua Technology, a Chinese video surveillance company, told BBC News in 2017. “We can match your face with your car; match you with your relatives and the people you’re in touch with.” High Tech Law Institute co-director Eric Goldman said “The weaponization possibilities of [facial recognition] are endless.”

Law enforcement in the US has been using facial recognition technology for almost 20 years, but until recently police were limited to searching government-supplied images like “mug shots” and driver’s license photos. Facial recognition algorithms have also become more accurate, and tech companies have become more adept at gathering and compiling images, which are then uploaded into massive databases.

One such database, created less than four years ago by Clearview AI, houses more than three billion images the company claims were scraped from Facebook, YouTube, Instagram, Venmo, and millions of other websites, according to the New York Times. More than 600 local, state, and federal law enforcement agencies, including the FBI and DHS, as well as a handful of private companies, have started using Clearview in just the past year.
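
Clearview’s system is proprietary, but facial recognition search in general works roughly like the following hypothetical sketch: each face is reduced to a numeric “embedding,” and a probe image, say a frame of protest footage, is compared against every stored embedding, returning the identities that score above a similarity threshold.

```python
# A generic sketch of how facial recognition search typically works (not
# Clearview's proprietary system): each face is reduced to a numeric
# "embedding," and a probe image is compared against every stored embedding,
# returning the identities above a similarity threshold.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe, database, threshold=0.9):
    """Return (identity, score) for every stored face similar to the probe."""
    hits = [(name, cosine(probe, emb)) for name, emb in database.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

# Toy 4-dimensional embeddings; real systems use hundreds of dimensions
# computed by a neural network from scraped photos.
database = {
    "scraped_profile_A": np.array([0.9, 0.1, 0.3, 0.2]),
    "scraped_profile_B": np.array([0.1, 0.8, 0.2, 0.7]),
}
probe = np.array([0.88, 0.12, 0.31, 0.18])  # a face captured at a protest
print(search(probe, database))
# [('scraped_profile_A', 0.99...)] -- lowering the threshold returns more
# hits and more false positives, the accuracy problem discussed next.
```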

Despite the widespread use of facial recognition technology, it’s not clear how well it works. Accuracy claims made by Clearview have not been independently verified, and plenty of research suggests that the technology has serious flaws. The Washington Post reported that Asian and African American people were up to 100 times more likely to be misidentified than white men, and Indigenous people had the highest false positive rate of all ethnicities.

But, the answer is not to make facial recognition more accurate. Groups like Fight for the Future have joined privacy advocates and anti-repression activists to call for an outright ban on facial recognition.

Data for Good?

Despite mounting lawsuits and cease-and-desist letters, Clearview is seizing this moment to cast itself as a benevolent technocrat, offering its data for contact tracing efforts in order to mitigate the effects of the coronavirus pandemic. By doing so, Clearview is validating surveillance efforts without giving the public an opportunity to debate whether digital contact tracing is effective or in our best interests, what this use of data means for society at large, and whether it’s worth the social cost.

Clearview is also not the only data company exploiting the public health emergency to instill a false sense of security from the use of digital surveillance. Cuebiq is another company that is normalizing the massive accumulation and use of surveillance data. Long before COVID-19, Cuebiq launched the Data for Good Program in 2017, which claims to “provide access to anonymous, privacy-compliant location-based data for academic research and humanitarian initiatives related to human mobility.” Since COVID-19, Cuebiq has used its mobility data to support contact tracing efforts and analyze our adherence to shelter-in-place orders.

But, how “good” are programs like Cuebiq’s? We’ve been conditioned to believe that data collection is necessary to fight the pandemic and that digital surveillance—euphemized as contact tracing—is saving lives. The next logical question then is, “How much privacy would you be willing to trade to save lives?”

Edward Snowden, however, tells us to question the choice between mass surveillance and the uncontrolled spread of the virus. “They say that there is no alternative,” Snowden told The Intercept. “They say that if you want to save lives, you have to do this. But that’s not true.”

We’ve also been conditioned to believe that digital contact tracing actually works to curb the spread of COVID-19. Contact tracing has a long history, used against typhoid outbreaks and the influenza pandemic of the early twentieth century, but it’s less clear how much digital surveillance has actually contributed to contact tracing efforts in countries like South Korea, which used a cell phone app to track civilians.

The Electronic Frontier Foundation (EFF), which has long sought to protect location data from abuse, developed a three-part test for contact tracing apps: (1) Is it likely to be effective? (2) Is it prone to abuse? (3) What protections are in place?

EFF further suggests that any privacy intrusions must be necessary and proportionate, and that data collection should be transparent, finite, and based on science not bias. Even if it’s effective, EFF argues that some technology is just too dangerous.

“It is all too easy for governments to redeploy the infrastructure of surveillance from pandemic containment to political spying,” said Matthew Guariglia of EFF, regarding the dangers of COVID-19 surveillance proposals to the future of protest.

Technology Review’s Genevieve Bell offers an alternative approach, suggesting there could be other ways of using data. “Can we imagine community contact tracing?” asks Bell. “It could be a way of identifying hot spots without identifying individuals—a repository of anonymized traces and patterns, or decentralized, privacy-preserving proximity tracing.”
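
The decentralized design Bell alludes to is the core idea behind proposals such as DP-3T and the Apple-Google exposure notification framework. In a greatly simplified, hypothetical sketch of that idea, phones broadcast short-lived random tokens, remember the tokens they hear, and check them locally against tokens that confirmed-positive users later choose to publish, so no central authority ever learns who was near whom.

```python
# A minimal sketch of decentralized, privacy-preserving proximity tracing
# (the core idea behind proposals like DP-3T; hypothetical code, not any
# deployed protocol): phones broadcast short-lived random tokens, remember
# the tokens they hear, and check them locally against tokens that
# confirmed-positive users later choose to publish.
import secrets

class Phone:
    def __init__(self):
        self.sent = []      # tokens this phone has broadcast
        self.heard = set()  # tokens heard from nearby phones

    def broadcast(self):
        token = secrets.token_hex(8)  # rotates frequently in real designs
        self.sent.append(token)
        return token

    def exposed(self, published_tokens):
        # Matching happens on the device, against the published case list;
        # no central server ever learns who was near whom.
        return bool(self.heard & set(published_tokens))

alice, bob, carol = Phone(), Phone(), Phone()
bob.heard.add(alice.broadcast())   # alice and bob spent time near each other
carol.broadcast()                  # carol was never near alice

published = alice.sent             # alice tests positive and uploads her tokens
print(bob.exposed(published))      # True
print(carol.exposed(published))    # False
```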

Big Data Colludes with Government

Not to be outdone in providing digital surveillance data and prowess, tech giants Google and Facebook were in talks with the US government early in the pandemic, under the auspices of fighting COVID-19, on how best to share the information they collect from users. According to Facebook executives, the US government is particularly interested in understanding patterns of people’s movements and whether people are adhering to physical distancing measures.

Both Google and Facebook have their own public-facing platforms that present user data in ways that help us understand some aspects of the pandemic. Facebook also has its own pre-COVID Data for Good program. But, Wired noted that it would be new for Google and Facebook to openly mine user movements on this scale for the government.

Should we trust that these big tech companies are telling the truth or have our best interests in mind? A spokesperson for Google told the Washington Post that any “partnership” with the US government “would not involve sharing data about any individual’s location, movement, or contacts.” But, from Snowden’s PRISM program leaks, we already know the NSA has had direct access to Google, Apple, and Facebook data since at least 2007.

Today, Snowden rightly warns us not to let our fears drive our deliberations and choices, as we did after the 9/11 attacks. He told The Intercept in April that we are being asked to “accept involuntary mass surveillance in a way that has never been done before at this scale.” Our communications infrastructure was not designed for surveillance, or at least we’ve been told that it would not be used for such purposes. And yet, that’s now precisely what we’re using it for, supposedly for “good.”

Are Current Reforms against Data Tyranny Enough?

Transparency in the tech industry and in government seems to be the bare minimum demand of privacy, data protection, and civil liberties advocates. Who uses our data, who owns it, where does it live, and how long will it be retained? How easily can our data be accessed? Will data sources be combined to further invade our privacy and provide the state with a powerful tool that can be used to target people?

Granular surveillance is still fairly new, but some experts are claiming that the window allowing us to deliberately define our social values or rethink our approach to data and tracking may be closing.

Some European countries are taking steps to better regulate data collection, limit the use of mass surveillance, and prevent algorithmic bias. In the US, multiple data regulation bills have recently been introduced in Congress, but their chance of passage and ultimate impact on the population remains unclear.

The Public Health Emergency Privacy Act (PHEPA), introduced in both houses of Congress in May, aims to set strong and enforceable privacy and data security rights for health information related to COVID-19. The bill includes opt-in consent and data minimization, and limits US government access to the data. But, PHEPA has overly broad exemptions and needs stronger anti-discrimination measures, according to advocates.

EFF says “there is a lot to like” about the Consumer Online Privacy Rights Act (COPRA), a comprehensive bill introduced in the US Senate late last year that gives people greater control over their personal data. The bill’s authors say that COPRA establishes strict standards for the collection, use, sharing, and protection of consumer data. COPRA co-author Senator Edward Markey (D-MA) sees privacy as a human right. “It is very clear from the examples of the intersection of authoritarianism and surveillance that we’ve seen around the world that a privacy bill of rights is absolutely necessary,” Senator Markey told the New York Times in December.

More recently, Senator Markey along with fellow Senator Jeff Merkley (D-OR), and House members Pramila Jayapal (D-WA) and Ayanna Pressley (D-MA) introduced legislation to regulate facial recognition technology. If the bill were to pass, federal agencies would no longer be able to spend money on face recognition, voice recognition, or gait recognition. The bill is likely to receive strong opposition from the tech industry, which has fought similar but smaller efforts at the state and local level.

Surveillance Oversight is Inadequate

Even if these measures are adopted into law, would they be comprehensive or strong enough to truly protect our data and privacy? Regulating tech companies will also do little to curb ongoing abuses by police and federal agencies like the FBI and NSA.

Federal “oversight” meant to prevent abusive surveillance practices by the government is highly secretive and hearings are held only with government officials and appointed judges of the Foreign Intelligence Surveillance Act (FISA) court, considered by many to be a “rubber stamp” process.

Traditional FISA authority allowed spying on particular people or groups with warrants from the FISA court. But, under powers codified in 2008, known as FISA Section 702, the court now approves bulk surveillance through what The Intercept calls a recipe for mass surveillance.

In October, a declassified FISA court decision from 2018 revealed that the FBI had been conducting warrantless searches of the NSA’s mass surveillance program and may have violated the rights of millions of Americans in an unlawful and sweeping fishing expedition. Any safeguards the NSA claimed to use to prevent such abusive practices by law enforcement were shown by the declassified ruling to be entirely inadequate.

The FISA ruling also revealed that the FBI is the most prolific miner of NSA data, including “backdoor searches” on people inside the US. In 2017, the FBI conducted more than 3 million such searches, dwarfing the number carried out by the NSA. The ruling also found that many of the FBI searches were not predicated on criminal investigations and did not use proper justification.

Equally troubling, the FISA ruling indicates the FBI is using a process known as “parallel construction” to secretly enter evidence retrieved from the NSA into federal courts overseeing criminal prosecutions, akin to searching someone’s home before showing up at the door with a search warrant.

The DOJ unsuccessfully appealed the FISA ruling, and the FBI agreed to change its search methods, but advocates argued that the court did not go far enough to establish safeguards sufficient to prevent similar abuses in the future.

Reversing the Panopticon

It seems that we’ve reached an inflection point, where the asymmetrical control of information has become a severe liability with very dangerous consequences. The choices we’re making today will have profound and long-lasting effects on society.

Snowden argues that it’s imperative we “think very hard, very rationally, very deliberatively and very freely about the choices that we’re now making in terms of what powers do we want to invest in governments, what powers do we want to invest in corporations?”

Groups like EFF are doing incredibly important work fighting corporate and government mass digital surveillance in general, and pushing for more thoughtful, less invasive and controlling approaches in trying to mitigate the impacts of COVID-19 in particular.

But, mass digital surveillance—much of it undertaken without our knowledge or consent—is already broadly and deeply used, and will be difficult to uproot. Technology is also rapidly advancing, with too few safeguards and little chance of Congress catching up any time soon.

To be sure, it will take more than implementing technical and legal constraints on the corporatocracy. Efforts to legislate limits on these kinds of intrusions, in the end, will not succeed. A strong movement is needed, now more than ever, to reverse the panopticon, end mass surveillance, and dismantle our infrastructures of social control.

As long as we are governed coercively and by oppressive means, secrecy and mass surveillance are all but guaranteed to remain integral tools of the state, used to burrow ever-deeper into our lives. Only through resistance—by unencumbering ourselves from centralized control—can we be truly free from the violence of the state.


Kris Hermes is a Canadian-based activist, author, and media worker who for the past 20 years has engaged with movements to resist grand juries and political repression, provide mass demonstration legal support, and help to defend high-profile criminal cases.


Agency promotes contemporary anarchist perspectives and practices through commentary, media relations, and educational campaigns.
