Unanimous Supreme Court Keeps Hands Off Tech Platforms and Online Hate
As there’s little room for the courts to act, curbing online hate will require creativity from lawmakers and tech firms.
IN A UNANIMOUS DECISION authored by Justice Clarence Thomas, the Supreme Court last week threw out a lawsuit against Facebook, Twitter, and Google (owner of YouTube) over their roles in facilitating extremist violence. Although narrow, the ruling in Twitter v. Taamneh was a clean victory for the technology platforms, with the Biden administration publicly siding with Twitter. The president should now work with Congress to pass laws that remove social media corporations' incentive to manipulate algorithms to maximize profits from online hate.
A federal law called the Anti-Terrorism Act allows victims of international terrorism to sue and obtain damages from “any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed . . . an act of international terrorism.” In their complaint, the family of Nawras Alassaf—one of 39 murdered when an ISIS terrorist fired into a crowd at a nightclub in Istanbul, Turkey, in 2017—claimed that the technology companies aided the attack by allowing ISIS to share and spread its terrorist propaganda. The family alleged further that even after receiving complaints about ISIS’s use of their platforms, the companies “permitted ISIS-affiliated accounts to remain active, or removed only a portion of the content.” The trial court dismissed the complaint, the U.S. Court of Appeals for the Ninth Circuit reversed, and the Supreme Court reversed again, siding with the defendants.
After a lengthy detour into the nuances of what it means to “aid” or “abet” a wrongful act, Thomas concluded that supplying “generally available virtual platforms” and failing “to stop ISIS despite knowing it was using those platforms” does not establish liability under the law—even though it was undisputed that the nightclub attack was an “act of international terrorism” and that ISIS “committed, planned, or authorized” it. Everyone also agreed that the tech companies’ business models rely on placing ads “on or near the billions of videos, posts, comments, and tweets uploaded by the platforms’ users,” including violent and extremist content, and then applying “‘recommendation’ algorithms that automatically match advertisements and content with each user” based on the users’ individual search habits.
So, the Court accepted as fact that Facebook, Twitter, and Google make money off incendiary posts aimed at recruiting members to terrorist organizations, spreading violent propaganda, instilling fear and intimidation in the general population, and raising funding for terrorist activity. It merely concluded that unless a plaintiff can plausibly allege that a social media company did more to participate actively in a particular act of violence, the Anti-Terrorism Act can’t be used to motivate tech platforms to revise their business models.
In a related case brought under another statute, Section 230 of the Communications Decency Act, the Supreme Court also refused to hold Google liable for coordinated attacks that occurred across Paris, France, in 2015, killing 130 people, including a 23-year-old American citizen. Nohemi Gonzalez’s family sued, claiming that “Google approved ISIS videos for advertisements and then shared proceeds with ISIS through YouTube’s revenue-sharing system.” Section 230 protects internet service providers from being held liable for information posted on their sites on the rationale that the users—not the companies—generate the content. The Court didn’t make any ruling under Section 230, however, instead merely holding that the complaint was so obviously flawed that the case should be sent back to the lower court for consideration of how it fares under the Court’s decision in the Twitter case.
Which leaves the rest of us, for now, with no plausible answer to online extremism and the havoc it wreaks in our lives.
According to the FBI, the use of social media is a key factor that has “contributed to the evolution of the terrorism threat landscape” since 9/11. A 2019 paper by Anjana Susarla for George Washington University’s Program on Extremism explains:
The way digital platforms, and especially social media platforms, monetize access increases our vulnerability as users to disinformation. Instead of extremist videos being hidden in some darker corners of the Internet, social media platforms make it easy for anyone to stumble upon and post negative content disseminating hatred against a particular community or group, with the consequence that radicalization can occur through exposure to hateful material.
All nine Supreme Court justices nonetheless agreed that “the fact that some bad actors took advantage of these platforms is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted those wrongdoers’ acts.” The justices were worried about unlimited liability for social media companies over the extremist content posted by violent, would-be terrorists. “Defendants’ arm’s-length relationship with ISIS,” Thomas reasoned, “was essentially no different from their relationship with their millions or billions of other users.”
But that misses the point. The vast majority of the billions of other users—those who, every minute of the day, upload over 500 hours of video to YouTube, post 510,000 comments on Facebook, and send 347,000 tweets on Twitter—are not plotting and executing acts of terror, which increasingly involve children taught to hate online. Smart people at these companies could surely figure out a way to protect the majority of users’ content while weeding out the posts that stoke actual violence and death. As Susarla notes, the tech companies “are no longer corporate entities responsible for their shareholders alone, but their ability to mold private interactions and sway public opinion affects the strength of the participative process and institutions of democracy.”
The fact that the vote tally in both of these cases was unanimous makes a broader statement about the Court’s approach to the separation of powers when it comes to online hate and radicalization. Even the progressive justices have no interest in using their power to make policy regulating internet content. Congress is the branch that must act. And if it ever miraculously passes meaningful legislation, one can only hope that the conservatives on the Court would apply the same hands-off approach to the inevitable lawsuit that Big Tech would bring to protect its unfettered ability to profit off of hate, violence, and death.