It’s been an interesting week in social media. Elon Musk has fueled speculation that, with his newly acquired stake in Twitter, plus a seat on the board of directors, he could influence policy at Twitter so as to make the free-speech “alternative” platforms—including Parler, where I have been working since July of 2020—obsolete.
While headlines seem to suggest that Musk will face an uphill battle if he plans to advocate for anything more contentious than an edit button, some of the sub-threads you see on Twitter indicate how Musk’s presumed goals—many, if not all, of which he seems to share with former CEO Jack Dorsey—might be achieved without a head-on battle with thousands of Twitter employees: via the adoption of a decentralized protocol developed as part of Dorsey’s Bluesky side project, first announced in late 2019. First, I saw Dorsey’s echo of a Musk tweet, saying users should be able to choose their own algorithms:
Then he retweeted some of Bluesky’s tweets about the project recently becoming an actual functioning legal entity with employees. Is this a coincidence? I think not, especially given that Musk’s transaction for his stake in Twitter was completed well before he ever started tweeting about whether the platform rigorously adhered to the principle of free expression.
So my bet, if I were a betting woman, would be on Musk (and Dorsey) hoping to achieve a freer Twitter de facto, via the adoption of some or all of the Bluesky protocol. Will it work? And if it does, will it actually allow people to share information, ideas, creations, and opinions more freely? All of this remains to be seen. I am just starting to try to understand how it would work and, as one of those pesky “unacceptable” free-thinkers, I have my doubts. In particular, I am concerned that what will be presented as a “decentralized” protocol for social media—one that, it is hoped, will be adopted worldwide—will have built-in throttles or filters for “hate speech” or “misinformation” of various kinds. And so we’ll be back to square one (and maybe we won’t even know it unless we’re really tech savvy).
But Bluesky isn’t the only decentralized protocol for social media in the works. Again, if you lurk on some of this week’s Twitter threads a bit, you will see another decentralized social network protocol mentioned, DeSo. This protocol, if I understand it correctly, differs from Bluesky in that it will be tied to a blockchain. And again, if I understand correctly, this might limit scalability in ways that a more “hybrid” protocol might not. But it looks like they’re working on it, and more importantly, they appear to really want to enable social media to be done right. The months ahead should be interesting, as we see what each of the two projects offers the world.
But of course all of this raises a question: Is decentralization social media’s only (or best) hope for the future? I’ll cut to the chase: I don’t believe that it’s the only way we can hope to do social media right. And by “right,” I mean in a way that preserves and respects our freedoms of thought and expression, while also providing individuals with control and transparency as to what data about them is collected, and how it’s used. (Bonus for providing a way for creators to better monetize themselves, instead of being treated like commodities the way the existing platforms have treated them.) The analogy which occurred to me this morning is that resorting to a decentralized solution, today, is like moving to Galt’s Gulch. And on many days, that seems very attractive! Perhaps, however, it’s not quite time to give up on the world outside the Gulch—especially when we can point to things that could be done to make that unnecessary.
I agree that decentralization may seem to be the most attractive option today, given the fact that we have a market dominated by social media behemoths which are generally thought to not be doing right by any of the standards I set out in the last paragraph. But here’s something to consider, at least before putting all your eggs in that basket: how much of what is wrong about the current state of social media is attributable to government intervention—particularly to perverse incentives created by decades of god-awful legal precedent with respect to both freedom of expression and data privacy?
Governments have so much power these days—especially since they’ve spent the last couple years using the excuse of a global pandemic to amass even more. So there are probably dozens of interventions or threatened interventions I can point to which have created perverse incentives for social media companies and exacerbated the problems we are experiencing today. Consider just one pandemic-era example: government-mandated lockdowns dramatically increased the time human beings around the world spent glued to screens, hoping to replace the feeling of in-person human connection via interactions on social media networks or other communications apps. As a result, the bottom lines, and therefore the economic power, of these companies grew just as dramatically. As did the amount of data they collected about individuals everywhere. Think about it: if everyone, everywhere, is locked down, the only way they can communicate is via electronic means. We have essentially spent the last couple years in Bentham’s Panopticon, and tech platforms and service providers have been our prison guards. Other examples, which I will mention only quickly, are direct regulation of speech by government, which has been going on for decades in the United States, at least, via the FCC; and of course antitrust law, which has been mostly, at least so far, a stick waved around in a menacing way, in order to “coax” social media companies and others to do politicians’ bidding.*
Which brings me to the first of the two major examples I want to discuss in a bit more detail—Section 230. Yes, I know, you’re pretty sure you’ve heard enough about Section 230. And you probably already have an opinion about what should be done about it. But if you have not read it yet, I encourage you to read this piece, published in the Wall Street Journal over a year ago, by Jed Rubenfeld and Vivek Ramaswamy. In it they argue that the legal immunity provided by Section 230, combined with the various carrots and sticks politicians and bureaucrats use to “coax” social media companies to remove content according to their wishes, makes the removal of that content tantamount to “state action,” a concept for which there is precedent in the law. And so, they argue, it is not inappropriate to use the term “censorship” for at least some of the content-removal actions taken by these companies.
My view? I believe the principle embodied in Section 230 is correct: a platform should not be held liable for content created and shared by users, unless it knew or had reason to know it was on the platform and is properly deemed culpable for its continued presence there. However, I am not sure that a legal rule, codified in statute, is the correct way to implement this principle in the law. Perhaps some critics are correct, and Section 230 really should be repealed, as it provides so much procedural advantage to these platforms that it gives them too much power, or creates perverse incentives. Certainly the threat of repealing or amending Section 230 is being used as a stick by politicians and bureaucrats who pretend it’s some kind of generous gift, subject to being withdrawn if platforms fail to do governments’ bidding—which, if the principle behind 230 is correct, is dead wrong. And Facebook—oh, sorry, Meta—seems to be very eager to have Section 230 amended in a way which would require them to remove the content they are already, seemingly happily, removing. I cannot count the number of times I’ve been served ads telling me that Facebook supports “updated Internet regulations.” This would not only further enhance their market power, by creating more barriers to entry, but would also take care of the problem of them being deemed “state actors” by removing any discretion they have over these content removals. Convenient.
Still, I hesitate to conclude that Section 230 should be scrapped entirely, and would first hope for courts to follow the lead of Supreme Court Justice Clarence Thomas, who has repeatedly called for a “narrower interpretation of Section 230.” I am not sure of the current status of former President Trump’s lawsuits against Facebook and Twitter, but I have previously explained why I think private lawsuits, like Trump’s, not only are the proper vehicles for redress in cases of fascist censorship, but also could help establish a proper interpretation of Section 230, one which allows platforms to be held liable for their contributions to content, or to its visibility and reach.
So that is one change which could be made in existing law, and perhaps effect significant improvements to social media. The second has to do with something not discussed nearly as often as Section 230: the so-called “third-party doctrine.” This doctrine of constitutional law says that, once you share information with a “third party,” such as a bank, a phone company—and yes, a social media company—you no longer have a “reasonable expectation of privacy” in that information, with the legal consequence that the government can obtain that information about you, from the third party, without a warrant. No probable cause or particularized suspicion required.
Perhaps your mind is already starting to imagine some perverse incentives a doctrine like this might create when it comes to social media companies, which collect—supposedly with user consent—vast amounts of personal information about individuals. First, many in government are happy to have third parties collect and retain this data, because they know they can access it without the inconvenience of actually presenting a warrant to the subject of investigation. Second, many in government will naturally look for ways to ensure easy, reliable, quick access to this data, without even the necessity of seeking a subpoena, and so will dream up reasons for administrative agencies to require its routine collection and submission. Some might even use the pretext of data privacy violations by the company to reach settlements resulting in “consent decrees” which arguably give the agency—or if they’re really brazen, even the DOJ along with them—routine, warrantless access to private user data. Such “settlements” are easier to achieve if, besides the other fines and penalties an administrative agency itself might have the power to bring to bear, they can credibly threaten to coordinate with others in government who have other carrots and sticks at their disposal.
The solution to this second mess is something I came to ten years ago, after reading a concurrence by Justice Sotomayor in United States v. Jones, and subsequently wrote up and published in this law review article. In the article I explain the origin of the third-party doctrine, and use what I believe to be the implicit, common-law rationale for the doctrine to explain how it can be scaled back to its original scope. In my view, the Supreme Court’s expansion of its scope in the 1970s, to cases of information sharing in the context of ordinary contracts for legal goods and services, was without justification. Both Justice Thomas and Justice Gorsuch were, I believe, in Carpenter v. United States, groping at a solution like mine. In that case, the Court carved out a tiny exception to the doctrine for cell phone location data. I am hopeful that, in an appropriate case—one in which my theory is presented as a rationale for more principled line-drawing—the Court’s error can be rectified. And if newly appointed Justice Brown Jackson is as much of an “originalist” as she says, perhaps even she would vote to scale back the doctrine to its original scope, as I suggest. A girl can dream, can’t she?
If these two changes could be made in the law, both of which would require the help of smart, principled, lawyers and judges, there would be more room for a moral social media company—and no, I don’t think that’s crazy talk—to flourish. There’d be no need to erect the technological equivalent of an elaborate offshore tax shelter, simply to offer the product one wishes to offer.
Many no doubt believe that it’s already too late to change the law on these fronts, and that I should draw out the full implications of my Galt’s Gulch analogy. Otherwise they wouldn’t be investing time and resources in decentralized solutions. But so long as I can point to specific changes which I believe could make a difference, and which I think could be implemented in a reasonable time, I can’t admit defeat.
*Personally I think it would be ok to use existing antitrust law, in a surgical way, to compensate for the advantage these platforms were handed on a silver platter, via government-mandated lockdowns. If these are monopolies, my reasoning goes, it’s because the government helped to make them that way, and so some sort of government action is appropriate to undo the damage.
Is "Decentralization" Social Media's Best Hope?
I don't believe he will be able to change Twitter that much: Too much of the moderation is in the hands of low-level employees who just hit "yes" when the AI flags something.
1) The human reviewers have a leftist bias.
2) The human reviewers have an incentive to agree with the algorithm since they are likely judged by how quickly they carry out reviews. Elon, being a car and rocket guy, is unlikely to even question "turnaround time" metrics. Also, agreeing makes their jobs easier.
3) The AIs, while impressive in many ways, cannot pick up on subtlety---and the companies that train them build in a leftist bias. You should see how the AIs The DailyWire uses perform. While DailyWire is good about overturning bad flags, they still happen. And if Twitter builds theirs in-house, it will still reflect the preferences of the designers.
4) He is just one man. He is not the CEO, and he cannot plausibly threaten to take over the company or to vote out the CEO.
I wouldn't hold my breath. My guess: he uses the optimism about his joining the company to get the stock price to soar, and then he is out---easily in less than a year---once that optimism begins to fade.