Supreme Court to Hear Cases on Content Policing
Last week brought two portentous developments for the tech industry generally and for violent content on the internet specifically: Elon Musk said that he would, after all, buy Twitter for $44 billion, in a deal expected to close by October 28; and the Supreme Court agreed to hear a case called Gonzalez v. Google LLC. Together, these stories could define the trajectory of online hate speech, which has been linked globally to increases in violence toward minorities, mass shootings, lynchings, ethnic cleansing, and the decline of democracy.
If the Twitter deal goes through, Musk is expected to lift the ban on Donald Trump’s use of the platform, which since January 8, 2021, has confined the former president to his comparatively feeble platform, Truth Social. Trump had 88,936,841 Twitter followers and posted a total of 59,553 tweets and retweets. Around 60 percent of his tweets following the November 2020 presidential election—an average of 14 a day—were messages challenging and undermining the legitimacy of the results, including his now-infamous December 19, 2020 tweet: “Big protest in D.C. on January 6th. Be there, will be wild!”
In Gonzalez, the justices will consider whether internet platforms have any legal responsibility for spreading false or violent content under Section 230 of the Communications Decency Act. (In a related case, Twitter, Inc. v. Taamneh, the Court will consider whether internet service providers can be liable for aiding terrorists under a criminal statute.) Critics of unfettered Section 230 immunity run the gamut—from Trump himself, who has complained about censorship of conservative voices, to more left-leaning sources who argue for legal incentives that would push providers to screen users and content for lies and extremists. From the providers’ standpoint, if the Court confines immunity in Gonzalez, the legal risks and complexities of managing content are daunting. For his part, Joe Biden said during the 2020 campaign that Meta CEO Mark Zuckerberg “should be submitted to civil liability and his company to civil liability.”
A brief refresher on Section 230: Congress passed it in 1996 in reaction to a judicial ruling holding an internet service provider responsible for a defamatory statement posted on a website’s message board. The statute precludes internet service providers from being held liable for information provided by a third-party user. The theory was that providers do not generate content; they merely perform the equivalent of a publisher’s traditional editorial functions—such as deciding whether to publish content, when to run it, and whether to alter it before publication. Section 230(c)(1) thus specifically states that a provider shall not “be treated as the publisher or speaker of any information” simply because it hosts it.
In 1996, only 20 million Americans had internet access, and they spent on average under thirty minutes surfing the net each month. There was no Google, Twitter, Facebook, Instagram, Yelp, YouTube, Snapchat, Parler, or Wikipedia. Only a handful of national newspapers had articles posted online. Computers took about 30 seconds to load each page via a phone line, and users paid for internet service by the hour. The first commercial ISP was only six years old, and the biggest one by far was AOL. The first web page was created in 1991. The first web browser—Mosaic—came out in 1993. Amazon began selling (just) books in 1995. The first web-based email services, Hotmail and Rocketmail, were launched the same year Section 230 became law.
A lot has changed since then. Today, there are over 307 million American internet users—97 percent of American adults. Of those, 15 percent access the internet only through smartphones. Rather than posting content on a site so that every user sees the same thing, social media platforms today generally use computer algorithms to sort and prioritize content according to the likelihood that an individual will actually engage with it. Once a user shows interest, the algorithm directs the user to similar items, guessing that the content will align with the user’s pre-existing likes. An algorithm might also send the user posts from another user with a similar profile, without regard to factual accuracy or journalistic quality. Social media companies also make money from fees paid to promote certain content.
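The ranking logic described above can be sketched in code. This is a deliberately simplified, entirely hypothetical illustration—the scoring names and weights are assumptions for exposition, not any platform’s actual algorithm—but it shows the key point: the ranking optimizes for predicted engagement and paid promotion, with no term for accuracy or quality.

```python
# Hypothetical sketch of engagement-based feed ranking.
# All field names and weights are illustrative assumptions,
# not any real platform's algorithm.

def predicted_engagement(post, user_interests):
    """Score a post by its overlap with a user's inferred interests.

    Note what is absent: nothing here measures factual accuracy or
    journalistic quality. The score rewards only likely engagement,
    plus a boost for paid promotion.
    """
    overlap = len(set(post["topics"]) & set(user_interests))
    return overlap + post.get("promotion_fee", 0) * 0.1

def rank_feed(posts, user_interests):
    """Return posts sorted by predicted engagement, highest first."""
    return sorted(
        posts,
        key=lambda p: predicted_engagement(p, user_interests),
        reverse=True,
    )

posts = [
    {"id": 1, "topics": ["sports"], "promotion_fee": 0},
    {"id": 2, "topics": ["politics", "elections"], "promotion_fee": 50},
    {"id": 3, "topics": ["politics"], "promotion_fee": 0},
]

# A user who has engaged with election content sees more of it,
# and promoted posts rise further still.
feed = rank_feed(posts, user_interests=["politics", "elections"])
print([p["id"] for p in feed])  # prints [2, 3, 1]
```

Because the feedback loop keyed to a user’s prior interests is the very “targeting” behavior at issue in Gonzalez, the legal question is whether this kind of selection is still a traditional editorial function or something more.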
Because the algorithms work with personally identifiable information—including an individual’s geographic location and associations with other online contacts—the privacy implications of today’s internet are a far cry from those of a quarter-century ago. The algorithms also disproportionately enable the “viral” spreading of non-objective, polarizing, and false information across the social media space in a matter of seconds, becoming a tool of influence and propaganda that Congress likely did not envision in 1996.
The question before the Supreme Court in Gonzalez is whether social media companies’ use of algorithms to target users and push someone else’s content (rather than simply employing strictly traditional editorial functions) is fully protected from legal liability. The case arose from the November 2015 death of Nohemi Gonzalez, a 23-year-old American student, after ISIS terrorists fired into a crowd of diners at a Paris bistro, part of the coordinated attacks that killed 130 people across the city. Her relatives sued Google, which owns YouTube, alleging that it assisted ISIS by knowingly permitting it to post hundreds of radicalizing videos inciting violence, and by targeting potential followers whose characteristics fit the profile of an ISIS sympathizer. The complaint alleged that Google was aware through media coverage, complaints, and congressional investigations that its services were aiding ISIS but refused to actively police its platform for ISIS accounts. Google moved to dismiss the lawsuit based on absolute Section 230 immunity and won in the lower court, as the videos had been produced by ISIS and not by Google. The liberal-leaning Ninth Circuit Court of Appeals agreed.
The plaintiffs’ argument on appeal to the Supreme Court is that the selective promotion of content—often for a profit—is materially different from publishing and moderating third parties’ posts on a virtual bulletin board. In declining to hear a similar case in 2020, Justice Clarence Thomas expressed concerns over what he perceived as an overbroad reading of the statute, noting that courts too often “filter their decisions through the policy argument that ‘Section 230(c)(1) should be construed broadly,’” and that “extending § 230 immunity beyond the natural reading of the text can have serious consequences.” But so far, no court has denied Section 230 immunity because of algorithmic “matchmaking” results. The Ninth Circuit concluded that websites “have always decided . . . where on their sites . . . particular third-party content should reside and to whom it should be shown,” and that the algorithm function falls squarely within that editorial role.
Although the current Supreme Court majority has been under sharp attack for its ideological (or seemingly ideological) rulings in controversial cases, Section 230 does not map neatly onto a predetermined “conservative” outcome. Arguably, the strongest constitutional case for finding in Google’s favor is that it’s Congress’s job to pass laws—and update them—and Section 230 is no exception. Yet respect for the prerogative of Congress has not been this Court’s guiding mantra, as demonstrated by its rulings under the Voting Rights Act and the Clean Air Act. When it sees fit to usurp Congress on a question of broad public concern, this Court has acted without restraint. With nearly 200 election deniers on the ballot for congressional races next month, the ability to spread propaganda online might be precisely the kind of issue that would benefit from such judicial intervention.
The Court has not yet scheduled a date for oral arguments, but Gonzalez is now slated to be decided during this term.