The Reboot - The Case for Rethinking Section 230 and Bloated Tech Monopolies


Daniel Hanley of Open Markets Institute argues in The Reboot that reforming Section 230 is no panacea for breaking Big Tech’s iron grip on the digital world.

In the early years of the commercial internet, back when companies like CompuServe and Prodigy dominated service, regulators faced a reckoning. They had to determine whether digital platforms should bear traditional publisher liability for user-generated content hosted on their sites.

CompuServe had escaped liability for defamation in 1991 because it made no effort to review or filter its users’ content — the company took an entirely hands-off approach, so it couldn’t be said to have knowledge of what was posted. But in 1995, a court found Prodigy liable in another case because the company had decided to actively moderate its message boards. Because Prodigy policed some of the platform’s content, the court reasoned, it had assumed legal responsibility for all of it.

Concerned that imposing liability on internet companies for moderating user content would cripple the growth of online services, federal lawmakers decided that platforms should be incentivized to moderate “offensive material” in good faith without the fear of facing the same liability a newspaper does for the content it publishes. In 1996, Congress enacted Section 230 of the Communications Decency Act. Popularly known as “the law that created the internet,” it established that “interactive computer services” (more commonly known as platforms) would not be treated as publishers of third-party content. In other words, Section 230 shields platforms from liability for transmitting, displaying, and filtering (or not filtering, as the case may be) material that they host.

The law attracted little attention over the next two decades, but this has changed significantly in recent years. One reason for the renewed attention to Section 230 is that tech giants like Google and Facebook have grown vastly more powerful than legislators imagined when Congress enacted the law in 1996. To deal with network monopolists and other providers of essential services, antitrust enforcers have traditionally implemented policies such as line-of-business restrictions, forced interoperability, and prohibitions on price discrimination. But we still lack policies that prevent undue control and concentrations of private power in this critical sector of the economy. Instead, alarmingly, Section 230 has recently been invoked in court as granting immunity from claims over anticompetitive conduct and other ostensibly exclusionary acts.

Regulators have generally applied a laissez-faire antitrust philosophy to online communications and commerce. This has allowed companies like Facebook and Google to base their entire advertising-supported business models on actively manipulating the news and information they transmit between users, keeping users addicted to their platforms while pushing misinformation and extremist content. YouTube’s recommendation engine, for example, promotes deceptive and extreme content that the platform monetizes with ads. For years, the dominant platforms have not only failed to adequately police objectionable and harmful content but have earned billions of dollars by algorithmically boosting such material.

What’s worse, because courts have interpreted Section 230’s liability shield so broadly, the law has given dominant platforms almost no incentive to moderate even patently objectionable content — the opposite of its original purpose. Courts have fundamentally subverted the law’s main goals by radically expanding its liability protection to nearly every business with even marginal internet-based operations. Section 230’s authors never expected it to cover every possible activity on the internet, yet businesses engaged in digital activities that cannot plausibly be considered “free speech” in any traditional sense now use the law to shield themselves from liability.

In one case, the Wisconsin Supreme Court affirmed immunity under the law for an online firearms marketplace that allows unlicensed gun sellers to sell firearms to users who cannot pass a background check. The site had facilitated the sale of a handgun to an individual who was legally prohibited from possessing one, and who used it in a mass shooting. As the legal scholars Danielle Citron and Mary Anne Franks have written:

“Section 230 has been read to immunize platforms from liability that knew about users’ illegal activity, deliberately refused to remove it, and ensured that those responsible could not be identified; solicited users to engage in [wrongful] and illegal activity; and designed their sites to enhance the visibility of illegal activity and to ensure that the perpetrators could not be identified and caught.”

The implications of using Section 230 to shield potentially criminal online activities from prosecution are just as problematic for the internet as anything that the law originally aimed to fix.

Read the full op-ed here.