The Fight Against Disinformation Is Getting Harder
Truth is already under siege; now the weapons of mass deception are becoming more formidable.
DID YOU KNOW THAT 29 PERCENT of Americans believe voting machines were programmed to change votes in the 2020 election? Or that nearly a third of Americans—31 percent—say it is “definitely or probably true” that Barack Obama was not born in the United States?
These were among the findings from a YouGov survey conducted last November—one that also found that 23 percent of Americans believe mass shootings “have been faked by groups trying to promote stricter gun control law,” that 20 percent feel the U.S. government was behind the 9/11 terror attacks, and that 18 percent think the 1969 moon landing was probably or definitely a hoax “staged somewhere in Arizona.”
And then there’s Tyler Owens, the NCAA football player (and top NFL draft prospect) who recently shared that “I don’t believe in space,” meaning “I don’t think there’s, like, other planets and stuff like that.”
Is Owens trolling? Or is he the victim of misinformation? Misinformation is information that is false, for whatever reason. Someone got it wrong. Disinformation entails intentionality. Someone is knowingly misstating facts. Both are damaging, but disinformation is more pernicious.
One goal of disinformation is to erode the ability of people to make rational judgments based on empirical evidence, so they will outsource their judgment to those who present themselves, outrageously, as being worthy of their trust. Disinformation is hardly new, of course, but it is today being cranked out on an unprecedented scale around the world.
That’s the bad news. The even-badder news is that the problem is going to get a whole lot worse, as AI delivers new weapons to those who wish to go to war with the truth. As our collective ability to identify a shared, factually grounded public reality further deteriorates, the tools of mass deception are becoming more powerful. Now it’s possible for almost anyone to create real-looking images of things that never happened, and Google will sometimes spit them out at you in response to search queries without identifying them as fabrications.
Those who are out there sowing seeds of doubt about objective reality are sure to find fertile ground in Trump-addled brains. Already, the MAGA movement has succeeded in disinforming a broad swath of the American public.
According to a Washington Post/University of Maryland poll conducted last December, 36 percent of Americans believe Joe Biden’s 2020 election as president was “not legitimate”—a rise of seven percentage points over two years earlier. Meanwhile, 25 percent think the FBI “definitely” or “probably” organized and encouraged the January 6th attack on the U.S. Capitol, and 33 percent say Donald Trump’s actions regarding January 6th “are not relevant to his fitness for the presidency.”
We didn’t get here by accident. Organizations and individuals have deliberately cultivated mass gullibility and worked to thwart efforts to stem the flow of toxic disinformation.
EFFORTS TO COMBAT DISINFORMATION have largely failed, according to reporters Jim Rutenberg and Steven Lee Myers in a 4,000-word front-page New York Times article last month. But while private-sector and government attempts to counter disinformation have mostly flopped, the efforts by Trump and his allies to push back against anything that would make it harder to lie and get away with it “have unquestionably prevailed.”
As Rutenberg and Myers put it: “Three years after Mr. Trump’s posts about rigged voting machines and stuffed ballot boxes went viral, he and his allies have achieved a stunning reversal of online fortune. Social media platforms now provide fewer checks against the intentional spread of lies about elections.”
Among the keys to this success has been the Bradley Foundation, a longtime funder of conservative groups that has in recent years turned in an increasingly MAGA direction. The Milwaukee-based foundation has deployed its considerable assets to promote voter suppression, block action to address the threat of climate change, and sow doubts about the integrity of elections. Each of these causes depends on the perpetuation of disinformation.
Rutenberg and Myers describe how the Bradley Impact Fund—a donor-advised fund that is legally separate from the foundation but aligned with its mission—has backed a nonprofit founded by Stephen Miller, Trump’s former senior policy adviser. The group, America First Legal, says it seeks to fight “an unholy alliance of corrupt special interests, big tech titans, the fake news media, and liberal Washington politicians.” In 2022, AFL received $27.1 million from the Bradley Impact Fund, accounting for more than 60 percent of what the organization raised that year. (Miller was paid a total of $296,880 in 2021 and 2022 to serve as the group’s president and executive director.)
When Biden’s Department of Homeland Security set out to create a Disinformation Governance Board to “serve as an advisory body and help coordinate anti-disinformation efforts across the department’s bureaucracy,” AFL sprang into action. Miller, in a typically measured assessment on Fox News, called it “something out of a dystopian sci-fi novel.” The proposal—which some liberals also opposed, warning of potential abuse and other problems—went down in flames.
Miller’s group has also played a substantial role in developing a court case aimed at blocking the ability of government officials to call disinformation to the attention of social media providers. The case, Murthy v. Missouri, is now before the Supreme Court, after a Trump-appointed district court judge in Louisiana and the ultraconservative Fifth Circuit Court of Appeals both ruled that doing so, even though the providers are not obligated to act, was an impermissible overstep. AFL filed a brief in the case representing the opinions of dozens of right-wing members of Congress who agreed with the rulings of the lower courts against the government.
During oral argument in the case on March 18, a majority of the Supreme Court seemed to agree that government officials are within their rights to attempt to sway the thinking and conduct of others. But the ruling may prove largely moot.
“Even before the court rules, Mr. Trump’s allies have succeeded in paralyzing the Biden administration” on this issue, Rutenberg and Myers reported the day before the oral argument. “Officials at the Department of Homeland Security and the State Department continue to monitor foreign disinformation, but the government has suspended virtually all cooperation with the social media platforms to address posts that originate in the United States.”
SO THERE YOU HAVE IT: The federal government has been largely stymied in its efforts to tamp down on disinformation just as the need to do so is becoming more critical to the health of American democracy. That leaves us with piecemeal efforts to chip away at the problem at the state level.
For instance, Wisconsin just enacted a law requiring political ads to disclose if they include content generated by artificial intelligence. “This technology is not good or bad,” mused Mark Spreitzer, a Democratic state senator who coauthored the bill. A candidate, Spreitzer said, can use AI for benign reasons but also “to make it look like their opponent said and did something they didn’t.”
According to the National Conference of State Legislatures, similar laws restricting or requiring disclosures for AI-generated material in campaign ads have been enacted in nine other states. But bills to ban the use of these images altogether have failed in several states, as have disclosure bills in New York and New Jersey.
In March, Sen. Amy Klobuchar (D-Minn.) and Sen. Lisa Murkowski (R-Alaska) introduced a federal bill to require disclosure of the use of AI in political ads. Klobuchar said it was important for voters to know “if the political ads they see are made using this technology.” Murkowski added, “Our bill only requires a disclaimer when political ads use AI in a significant way—something I think we can all agree we’d like to know.”
Of course, these days, there is nothing that we all agree on—not even whether the solar system is real.
PERSUADING PEOPLE TO STOP believing false things takes a lot of doing, but election officials across the country are putting their shoulders to the wheel. They have set out to address actual threats to election security while simultaneously seeking to reassure the public that the vote-counting process is trustworthy.
Ali Swenson, who covers election-related misinformation for the Associated Press, wrote last month that
election offices nationwide have dealt with mounting concerns, including persistent misinformation and harassment of election workers, artificial intelligence deepfakes used to disenfranchise voters, potential cyberattacks from foreign governments and criminal ransomware attacks against computer systems.
In Arizona, for instance, Democratic Secretary of State Adrian Fontes has created a small information security team to identify and counter election-related threats; the staff of four includes one analyst “solely devoted to monitoring the internet for disinformation and threats,” Swenson writes. The initiative has drawn criticism from some who worry about the possible invasion of privacy. But Fontes has pushed back on that.
“Yeah, we are surveilling a certain group,” he told Swenson. “We’re surveilling people that want to destroy our democracy. And that’s not political.” The team has no power to force social media platforms to remove posts; it merely flags those it deems especially mendacious.
Fontes has partnered with Stephen Richer, the Republican county recorder in Arizona’s Maricopa County, to beef up election-related security. Vote-tabulating machines, Swenson explains, are protected as though they were gold bars at Fort Knox. And the pair “are taking more aggressive steps than ever to rebuild trust with voters, knock down disinformation and immediately address attacks.”
Fontes hopes to introduce a statewide system, similar to that in place in Arizona’s two largest counties, to notify voters when their ballots are mailed, delivered, returned, and counted. Richer, meanwhile, is taking steps to reduce the number of ballot packets sent to the wrong address and remove excess wiring around voting machines “so observers can see there is no internet connection,” Swenson writes. And Richer “frequently engages directly with voters,” on social media and elsewhere, to assuage their concerns.
Will such efforts make a difference? Probably not to those who think the moon landing was deep-faked before deep-faking was a thing. But for some people they will matter, and that should matter to us all. The battle to find our way back to sanity will be fought one mind at a time.