Supreme Court to Decide Whether Social Media Companies Can Have Section 230 and Eat It, Too
Pretty sure Gorsuch would get a kick out of my title :)
Today the Supreme Court heard oral arguments in Moody v. NetChoice and NetChoice v. Paxton, cases asking whether Florida and Texas state laws, respectively, can compel social media companies to disseminate posts they would rather throttle or remove from their platforms. Also at issue are the laws’ strict disclosure requirements: Florida’s, for instance, mandates that companies provide each “deplatformed,” “censored,” or “shadow banned” user with an individually tailored “thorough rationale” for the decision within one week.
The laws’ proponents argue that they are necessary to protect the First Amendment rights of users, particularly those of conservative users. Ken Paxton, Texas’s attorney general, refers to social media as “the world’s largest telecommunications platforms,” and the Texas law, he argues, “just enables voluntary communication…between speakers who want to speak and listeners who want to listen.” NetChoice, which is challenging both laws, counters that the First Amendment protects users only against government censorship: so long as the platforms are making decisions purely as private companies and not as state actors, they violate none of their users’ free speech rights when they throttle or remove content.
It is the platforms’ own First Amendment rights, NetChoice argues, that the legislation infringes:
Just as Florida may not tell The New York Times what opinion pieces to publish or Fox News what interviews to air, it may not tell Facebook and YouTube what content to disseminate. When it comes to disseminating speech, decisions about what messages to include and exclude are for private parties—not the government—to make.
NetChoice argues that the laws run afoul of the First Amendment in a second way as well: their compelled disclosures are “unduly burdensome” and would “chill[] protected speech,” and therefore do not fall within the narrow First Amendment exception carved out for compelled commercial disclosures of “purely factual and uncontroversial information.” The 11th Circuit agreed, finding that Florida’s mandated disclosures, to be enforced by steep fines, were likely to chill “platforms’ exercise of editorial judgment.”
So far, so good. I agree that platforms have First Amendment rights, just as The New York Times and Fox News do. And I agree that these rights should not be infringed by state laws. But there’s one problem: when The New York Times and Fox News exercise their “editorial judgment” in deciding “what messages to include and exclude,” they cannot rely on the broad immunity from liability that social media platforms enjoy under Section 230 of the Communications Decency Act. If the message a newspaper editor or television producer decides to disseminate happens to defame someone, their employer can be held liable in civil litigation.
Thanks to Section 230, platforms enjoy not only the same First Amendment rights as everyone else, but also legal immunity for their exercise of “editorial judgment” in the removal or amplification of content. At the same time, Section 230 ensures that liability for any “messages” they decide to “include”—or even amplify—rests squarely on the users who “generated” them. If only our “expressive activities,” as the U.S. Solicitor General called them, were given equal treatment! As it stands, Section 230 grants blanket immunity for platforms’ removal of content, even content protected by the First Amendment. It’s true that, so long as there’s no state action involved, those harmed by a platform’s removal or amplification decisions still enjoy their First Amendment rights. But they are almost always left without the legal recourse to which they would be entitled were their injuries caused by The New York Times or Fox News (just ask Dominion Voting Systems).
Justice Neil Gorsuch gets me. Today he told Paul Clement, who represented NetChoice and another trade group challenging the laws, that he perceived “a tension between the idea that a tech company can’t be held liable for its users’ speech and the idea that moderating that content is the tech company’s speech.” Is it speech for purposes of the First Amendment, he asked, but not for purposes of Section 230?
NetChoice is correct in challenging these unconstitutional Florida and Texas laws. Two wrongs don’t make a right, and the solution to Section 230’s negative externalities isn’t the utter denial of platforms’ First Amendment rights, any more than the solution to racism is affirmative action. That’s why I invite NetChoice (and everyone else) to join me and my colleague Jeffrey Wernick in our call to eliminate “platform privilege” at its root: amend Section 230 to allow for potential legal liability when a platform’s “expressive activity” modifies the reach of user-generated content in relation to other content. Unless and until we do that, our government will continue to invent new ways to use our tax dollars to exploit platforms’ liability loophole in service of its chosen narratives.