
CLEARVIEW, A BIOMETRIC DATA COMPANY, launched its facial recognition systems in 2017, offering a free trial to police departments. By 2020, the New Jersey Attorney General's office, the Los Angeles and Chicago police departments, and many Canadian police departments had all refused to use Clearview technology within their jurisdictions, and with good reason. They saw the risk to their constituents, their reputations, and their own ability to withstand the temptations of power.
"For practical applications, Clearview AI was all the rage," said Peter Sloly, former chief of the Ottawa Police Service, in an interview for The AI Dilemma: 7 Principles of Responsible Technology, which I coauthored with Juliette Powell. "But police chiefs actually held themselves in check and voluntarily put it back on the shelf. The police decided they shouldn't have it, let alone use it."
Stories like this represent rays of hope amid the convergence of two great fears: anti-democratic governments and artificial intelligence. The Economist counts 76 elections with nationwide votes taking place in 2024, in countries ranging from Iceland ("full democracy") to the United States ("flawed democracy") to North Korea ("full authoritarian"). When candidates in any of those elections talk openly about retribution, it's only natural to wonder how they might use AI to identify, track, and persecute their opponents.
A worst-case scenario would see more governments routinely and ruthlessly weaponizing the technology until it becomes standard practice, even in democracies. A more optimistic scenario suggests that AI will, in its chaotic and competitive way, lay the groundwork for a future of more transparent and civil politics, in which all institutions, including government and big tech, are more accountable to the public at large.
There are good reasons to think the optimistic scenario will prevail. Even at best, however, it will bring a heavy burden of responsibility for tech companies, engineers, governments, and the rest of us. A dystopian political future of AI isn't predetermined, but it's all too plausible. To avoid it, we need solutions at many levels of society: standards of responsible practice for business and government, along with tools that make it easier for the rest of us to resist algorithmic manipulation.
TO BE CLEAR, AI ITSELF IS NOT THE THREAT. Artificial intelligence is the group of autonomous, automated, algorithmic systems increasingly in use today; these systems are not intelligent in a human sense, but they are growing rapidly in capability. AI has already enabled many powerful, positive developments for humanity. But it also comes with immense risks, especially for vulnerable people. Big tech companies' track record of self-regulation has so far been inadequate at best. Governments' records, even in full democracies like the Netherlands, have sometimes been dismal.
Facial recognition is a good example. It is a rapid, efficient, and inexpensive way of identifying people by matching an image against others taken from social media and closed-circuit TV. It's an invaluable tool, for example, for identifying victims in large-scale disasters or investigating human traffickers. But governments also use it routinely to track, arrest, and penalize suspected dissidents and members of targeted groups. It's consistently prone to errors and false identifications, especially for women, darker-skinned people, children, and the elderly, who are generally underrepresented in the facial images used to train the software.
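As a rough illustration of how such matching typically works (a minimal sketch, not any vendor's actual pipeline; the embeddings, gallery, and threshold here are assumptions for the example), a system reduces each face image to a numeric vector and then ranks stored identities by similarity:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(probe: np.ndarray, gallery: dict[str, np.ndarray],
                 threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return gallery identities whose embeddings clear the match threshold.

    `probe` is the embedding of the image being searched; `gallery` maps
    identity labels to embeddings computed from previously collected images
    (social media photos, CCTV stills). The threshold here is arbitrary;
    real systems tune it, and a poorly chosen value produces exactly the
    false identifications described above.
    """
    scores = [(name, cosine_similarity(probe, vec)) for name, vec in gallery.items()]
    return sorted((s for s in scores if s[1] >= threshold), key=lambda s: -s[1])
```

A "match" is nothing more than a similarity score that clears a threshold, which is why skew in the images used to build those embeddings translates so directly into wrongful identifications.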
These errors help explain the urgency of the backlash against facial recognition. For example, an American Civil Liberties Union lawsuit forced Clearview to stop selling to most private companies, and a 2021 ruling by Canada's privacy commissioners found Clearview's practices illegal there. Seminal "Gender Shades" research by Joy Buolamwini and Timnit Gebru, and the outrage it provoked, have convinced some generative AI (GenAI) companies to alter their data sets to reduce the ingrained bias favoring white men. Yet there's still a lot of inherent bias in the data, much of which was gathered from social media: A recent test of the image generator Midjourney found that requests for a picture of a Mexican person consistently yielded a man in a sombrero. Ask for an Indian person and you'll get a white-bearded man in a turban. Some uses of facial recognition and biometric data, such as the untargeted scraping of facial images, are likely to be explicitly prohibited by the European Union's AI Act when the EU Parliament ratifies it, as expected, this year.
Governments and political campaigns also misuse other types of AI. A recent Freedom House report listed dozens of examples. The Philippines, Cambodia, Iran, Vietnam, and India have restricted or blocked access to independent news or social media channels. Myanmar, China, Belarus, Nicaragua, Saudi Arabia, and 50 other countries put people in prison for their online posts. Forty-seven countries, including the United States, had active disinformation campaigns using GenAI-fabricated content, including fake images of Donald Trump embracing Anthony Fauci and a fake video of Joe Biden making transphobic comments. We saw another example before New Hampshire's presidential primary, when deepfake robocalls used ElevenLabs's AI voice-generation software to simulate Biden telling people not to vote.
Just about every one of these activities could be (and in the past, has been) accomplished without AI, of course. As Rory Mir, a spokesperson for the Electronic Frontier Foundation, put it, "There are already coordinated misinformation campaigns and bots putting out messages against vaccines or to affect the climate change discussion. AI just makes all this more unique, harder to detect, and more widely present."
GenAI media tools are particularly powerful instruments of disruption. They offer dictators what they offer the rest of us: broader reach, more productivity, easier targeting of specific people, and less immediate human oversight. Most dictators, like most of us, probably haven't tapped the technology's full power yet, because it is still so young and there hasn't been time to experiment with it.
"It's not about writing one blog post with ChatGPT and you doing that manually," says marketing agent Mike Taylor in his prompt engineering course on Udemy. "It's about generating 1,000 blog posts at once and publishing them all. It's not about creating one nice image for the website. It's about scaling it up so you have a good image for every single page on the website automatically when you publish it."
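Mechanically, that kind of scaling is little more than a loop. The sketch below is illustrative only: `call_llm` is a hypothetical stand-in for whatever text-generation API an operator might actually use, and the prompts and output file are invented for the example.

```python
import csv

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real text-generation API call.
    # An operator would replace this with their model of choice.
    return f"[generated text for: {prompt}]"

def generate_posts(topics: list[str], outfile: str = "posts.csv") -> None:
    """Draft one post per topic and write them all out for bulk publishing."""
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["topic", "post"])
        for topic in topics:
            prompt = f"Write a 500-word blog post about {topic}."
            writer.writerow([topic, call_llm(prompt)])

# The same loop that drafts a thousand product pages can just as easily
# draft a thousand political talking points aimed at different audiences.
generate_posts(["local zoning reform", "school board election", "water rates"])
```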
WE'RE JUST BEGINNING TO SEE the possibilities of AI threats at a larger scale. Many of them have to do with gaining and misusing personal and institutional data, including images and videos. Deepfake scammers stole $25.6 million from a multinational firm in Hong Kong in early February, using GenAI to fake a videoconference with the company's CFO. They could do this because they had enough data about the CFO and the firm's practices, apparently from internal video conferences, to make the simulation convincing.
For a private scammer, getting that kind of data presumably requires a lot of effort. Governments can just commandeer it, or any personal data, directly from big tech companies. Even in the United States, where the Fourth Amendment prohibits unreasonable searches and seizures, there are few legal constraints on public-sector collection, purchase, and use of personal data, including browsing and purchase histories, that has been gathered by commercial entities like Google, Facebook, and Netflix.
According to Jonathan Askin, a Brooklyn Law School professor who is also founding director of the school's Law Incubator and Policy Clinic and the Brooklyn Justice Lab, this is a failing of the "third-party doctrine" exception to the Fourth Amendment. That doctrine, established by the Supreme Court in the 1970s, enables the government to gather information willingly disclosed to "third-party intermediaries," including corporations, without a warrant. "We need a better way to participate in social media environments," he said. "There should be trusted intermediaries to protect your data, through which you will not have made a demonstration that you've given up your privacy rights."
With that kind of data in its AI systems, a government could exert control over people by manipulating even minor aspects of daily life. The most widespread chilling effect probably wouldn't come from the risk of imprisonment, but from minor cues and snags in the world around us. Travel restrictions, credit checks, and zoning regulations could be applied with fine-grained detail and subtlety, exploiting an individual's weaknesses and vulnerabilities: a debt, a secret dalliance, a troublesome browsing history, or a child needing costly medical treatment.
Steven Feldstein, an expert on democracy and technology at the Carnegie Endowment for International Peace, observed that using AI for this kind of micro-repression "requires considerably fewer human actors than conventional repression, entails less physical harassment, and comes at a lower cost." The government wouldn't even need a human decision-maker to choose the targets. An AI system can pick targets and execute the harassment while those in power deny that it is even happening.
AI can also be used to target neighborhoods and groups more efficiently. Already there are systems like ShotSpotter, which uses microphones placed in public spaces to detect gunfire and alert law enforcement when it erupts. "When they hear a car backfire," Mir said, "most people would look out the window and use their judgment about whether to call. Now [ShotSpotter] increases the presence of law enforcement when no one has requested them, which is particularly problematic when it's people at risk of police violence." Sound recognition systems, like other forms of digital surveillance, are prone to misidentification, and their false alarms draw first responders away from real emergencies. An unscrupulous politician could raise a neighborhood's crime statistics simply by placing the microphones there.
If you think you'd be protected because of your own status or income, consider a story recounted by legal scholar Brett Frischmann and philosophy professor Evan Selinger in their book Re-Engineering Humanity: A young boy came home from an upper-middle-class, suburban New Jersey elementary school, ecstatic because he had been selected to wear a free "activity watch" that tracked his heart rate, body temperature, and movement. He was not supposed to take it off, even when showering or sleeping. The school enclosed a letter explaining that the activity watch program was funded by a grant from the U.S. Department of Education designed to help students make progress in physical education and weight loss. The school would compile the data from the watches to help judge the effectiveness of its program.
The program's intent was benign, but nonetheless manipulative. "They present the idea that you could choose otherwise than to wear the watch," says Frischmann. "But in the majority of circumstances, you'll never choose otherwise."
The illusion of choice is one of the most problematic aspects of AI systems, even in democratic societies. Digital tools almost always represent the path of least resistance because they are designed to be frictionless, to make it easy to respond without paying attention. People also tend to regard digital deductions as authoritative. Studies have found, for instance, that many people trust life advice from a GenAI therapy bot more than the guidance of a human therapist. The speed and convenience of an AI tool make it seem like we are in control when using it. However, like the child wearing the activity watch, we're not really in control of what gets measured or how the data are used. In a more authoritarian society, the watch could be distributed as a signal of trust, and then used to track movement or record conversations.
FOR ALL THE STRENGTH OF THE AI WAVE, there is a democratic undertow. The uncontrollable nature of the unsupervised data on which large language models are trained leads to unexpected results. University of California, San Diego political scientists Eddie Yang and Margaret E. Roberts call this the "authoritarian data problem." They point out that the best GenAI systems are typically trained on diverse data from many sources. In tightly controlled regimes, censorship narrows that data stream, which makes the resulting systems less effective at providing answers to challenging problems.
Some tightly controlled governments solve this problem by incorporating data from other countries. However, that opens up access to censored information. This may be why ChatGPT is banned in China, Russia, Iran, North Korea, and other authoritarian states. Even their officially sanctioned GenAI systems leak. "ChatYuan, a chatbot created by Chinese AI company Yuan Yu," writes Feldstein, "generated numerous problematic responses, such as listing China's economic problems and naming Russia as an aggressor in the Russia-Ukraine war. Unsurprisingly, Chinese authorities shut down ChatYuan within days of its release."
AI can also be used to counter the influence of misinformation, including the misinformation generated by other AI. Tools are now on the market claiming to detect GenAI text with 99 percent accuracy. The rapidly growing field of visual deepfake detection uses techniques such as frame-by-frame video analysis: an eye that blinks at a different rate than a human eye, for example, can signal a fake. Meta just announced a technical standard for photorealistic AI images, audio, and video, requiring Instagram, Facebook, and Threads users to identify the provenance of what they post. Standards like this could help counter misinformation.
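To make the blink-rate idea concrete, here is a minimal sketch of one such heuristic. It assumes a per-frame "eye openness" score already extracted by a separate facial-landmark model (not shown), and the thresholds are illustrative rather than drawn from any published detector.

```python
def count_blinks(openness_per_frame: list[float], closed_below: float = 0.2) -> int:
    """Count blinks as transitions from open eyes to closed eyes across frames."""
    blinks, was_closed = 0, False
    for openness in openness_per_frame:
        is_closed = openness < closed_below
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_synthetic(openness_per_frame: list[float], fps: float = 30.0) -> bool:
    """Flag a clip whose blink rate falls outside a rough human range.

    People typically blink on the order of 10 to 20 times per minute; early
    deepfakes often blinked far less. This is one weak signal among many,
    and improving generators can defeat it, so real detectors combine cues.
    """
    minutes = len(openness_per_frame) / fps / 60.0
    if minutes == 0:
        return False
    rate = count_blinks(openness_per_frame) / minutes
    return rate < 5 or rate > 40
```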
Humans may get more sophisticated over time at detecting fakes as well. Studies at the University of Pennsylvania and Germany's University of Passau have found that people can learn to distinguish AI-generated text from human-generated text. The skill appears to increase with exposure to apps like ChatGPT. Image detection may be trickier, and of course if the technology improves, our detection abilities may not keep up. Nonetheless, some people do seem attuned to catching deepfakes. The fake Biden robocalls in New Hampshire came to light because they raised red flags with people who received them, who then verified the deception by calling the campaign.
All these AI abuses have triggered responses from governments that seem ready to adopt a more stringent standard, at least rhetorically. In November 2023, representatives of 28 governments signed the Bletchley Declaration at the AI Safety Summit, saying that AI should be "human-centric, trustworthy, and responsible." Signatories included the UK (which hosted the meeting), the United States, China, the EU, Saudi Arabia, and Israel. This was yet another sign that government leaders recognize they will be held accountable in some way for this technology. It should not be surprising that an unprecedented number of AI-regulating laws have been proposed in democratic systems like the EU, the United States, and Canada, even as tech pundits dispute the ability of politicians to understand what they're regulating.
WE NEED BETTER LAWS AND STANDARDS, not just to constrain companies, but governments as well. We need prosecutable laws against AI-generated misinformation used to stalk, defame, or harm people (an issue that gained wide attention after the Taylor Swift deepfakes). We need more transparency in AI systems, including more established ways to query companies and governments about their intent. We need mechanisms like the "trusted intermediaries" that Jonathan Askin proposes to restrict the ever-growing loophole of the third-party exception to the Fourth Amendment's warrant requirement.
Most of all, we need more independent auditing of AI systems, including a structure like financial oversight, where the auditors are required to maintain independence from the companies they oversee. This feature is included in the current version of the EU's AI Act for all high-risk systems.
Regulation alone will only go so far, especially in monitoring the regulators themselves. Most proposed AI-related regulatory measures, including the EUās AI Act, make exceptions for governments and the military. They also tend to favor large companies at the expense of small ones. This is a serious threat to entrepreneurial innovation. Large companies can more easily bear the expenses of auditing and lobbying; there need to be ways for small, scrappy companies to compete.
Artificial intelligence systems might seem anti-democratic, but so far, paradoxically, they have generated tools that shed light on corruption and malfeasance. In the end, the rise of AI and authoritarianism have one thing in common: They're prodding many of us to react, and thus to think differently about our responsibility to society. Hannah Arendt, in her writing on totalitarianism, distinguished between the authority of a dictator and true power with others: the impact of people acting together in a thoughtful and considered way. That's the type of human learning that will help us keep up with machine learning in the years to come.