Author vs Carrier

Are social media platforms responsible for hateful content?
web internet law

Google, in what might sound like hyperbole, claimed in a recent court brief for Gonzalez v. Google that if the Supreme Court chooses to limit the scope of Section 230, it would "upend the internet". They are probably not wrong. Such a decision would certainly disrupt the internet as we've known it. The question is whether that would be an entirely bad thing.

On October 3, the Supreme Court agreed to hear Gonzalez v. Google, a case initiated by relatives of Nohemi Gonzalez, a U.S. citizen killed by ISIS terrorists in a November 2015 attack at a Paris restaurant. After the attack, the plaintiffs filed a claim in a California federal district court against YouTube’s owner Google under the Anti-Terrorism Act, which provides a cause of action for the “estate, survivors, or heirs” of a U.S. national killed in “an act of international terrorism.” - from Brookings

The Supreme Court agreed last year to hear the lawsuit, in which the plaintiffs have contended Section 230 shouldn't protect platforms when they recommend harmful content, such as terrorist videos, even if the shield law protects the platforms in publishing the harmful content. Google contends that Section 230 protects it from any liability for content posted by users on its site. It also argues that there is no way to draw a meaningful distinction between recommendation algorithms and the related algorithms that allow search engines and numerous other crucial ranking systems to work online, and says Section 230 should protect them all. - from WSJ

The issue at hand is a challenge to a bedrock provision of US Code Title 47 related to Common Carriers, Section 230(c)(1), which states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This sentence has been called "the 26 words that created the internet".

It's well established that if someone posts something defamatory and false about you on the internet and it adversely affects you, you have a legal right to sue that person for damages; the online service that hosted the offending content was just the "carrier" of the message and not liable. This worked well when the internet was mostly forums and static banner advertising, but in the modern age, when giant tech corporations provide video, news, and slick corporate-grade advertising that has almost completely replaced media as we knew it, things are more complicated.

Section 230 falls under Subchapter II for Common Carriers, but it's not clear to me that these platforms are merely common carriers. A common carrier transports messages for its customers (usually for a fee), but modern social media does this armed with information about consumers that 1950s ad-men could only dream about. These platforms can monitor and control which specific messages reach specific users, and they can examine that information both in the aggregate and in the particular. They make money selling information about their users and use it to deliver targeted advertising. They may even charge their users for the privilege of consuming advertising tailored to their tastes. It is the secret, proprietary algorithms of Google, Twitter, Facebook, et al., used to select the user targets and determine what content to present to those targeted users, that are at issue here.
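To make that distinction concrete, here is a minimal, hypothetical sketch of what this kind of targeting looks like in code. None of this is any platform's actual algorithm; the profile fields, the revenue weighting, and every function name are my own illustration of the general shape: score each candidate item against what the platform knows about a user, then push only the winners.

```python
# Hypothetical sketch of per-user targeting; not any platform's actual code.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    # interest -> affinity score inferred from watch history, clicks, etc.
    interests: dict[str, float] = field(default_factory=dict)

@dataclass
class ContentItem:
    item_id: str
    topics: dict[str, float]    # topic -> relevance weight
    revenue_per_view: float     # what the platform earns per impression

def score(user: UserProfile, item: ContentItem) -> float:
    """Predicted engagement, weighted by what the item earns the platform."""
    engagement = sum(user.interests.get(t, 0.0) * w for t, w in item.topics.items())
    return engagement * (1.0 + item.revenue_per_view)

def recommend(user: UserProfile, catalog: list[ContentItem], k: int = 3) -> list[ContentItem]:
    """Rank every candidate and push only the top k to this specific user."""
    return sorted(catalog, key=lambda item: score(user, item), reverse=True)[:k]

# Example: the same catalog yields different feeds for different users.
alice = UserProfile("alice", {"cooking": 0.9, "politics": 0.1})
catalog = [
    ContentItem("v1", {"cooking": 1.0}, revenue_per_view=0.02),
    ContentItem("v2", {"politics": 1.0}, revenue_per_view=0.05),
]
print([item.item_id for item in recommend(alice, catalog, k=1)])  # ['v1']
```

The `recommend` step is the part that looks editorial rather than carrier-like: the platform, not the user, decides which messages arrive.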

To my mind, the social media platforms are not merely "delivering the messages of others" if they are actively selecting and pushing certain content, whether for financial gain or ideological purposes. At a certain point we might consider them to have become the author of the content they decide (or calculate) to deliver.

We do not know the extent to which the murderers in this case may have been influenced by hateful messages pushed to them by a YouTube algorithm, or whether it can be proven that those videos led them to join ISIS and commit crimes in its name, but if so, I'm sure we can all agree that is a problem that should be addressed.

Google's prediction in its brief about "upending the internet" might be prescient, because any decision will likely also affect the very similar algorithms used for search, and search is a critical piece of how everyone uses the web. The big social media platforms would certainly need to monitor and censor posts more aggressively to avoid liability, pushing many users to seek safe harbor for their fringe speech on smaller, private sites which might be more difficult to regulate. This has been happening for other reasons recently, but it would accelerate.

However, upending the status quo isn't always a bad thing. The centralization of users onto private, for-profit social media platforms makes the experience of the web like that of a fly caught in one: stuck, and waiting to be sucked dry.

Since I don't want to be the guy who just complains but offers no solutions, here are a couple of ideas:

  • We need to know how the algorithms work. I propose that recommendation algorithms should be open source projects. If a social media platform uses an open source algorithm that meets certain regulatory criteria (as determined by law), it could be assured of liability protection for content posted on its service. (A toy sketch of what such an auditable algorithm might look like follows this list.)
  • We need a public, open source, non-profit platform that government and metropolitan agencies could use to post important information. These public agencies should not need to be at the mercy of a variety of closed, private social media systems to get information out to the public. Police, fire, bus, water, DOT, library, and school agencies could use such a platform for free, and users could subscribe, also for free, to information pertinent to their local area or areas of interest. The system could even be ad-supported as long as the ads were limited and non-intrusive (think NPR supporters). Agencies would be verified. It's unclear whether commenting should be allowed and moderated, sent back to the agency without being posted, or disallowed completely.
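Here is one possible shape of the first idea, again as a hypothetical Python sketch rather than a real specification. The "regulatory criteria" I imagine here are that the ranking logic is published, deterministic, and driven only by inputs the user explicitly chose, so anyone can reproduce exactly why a given item was shown:

```python
# Hypothetical sketch of an "open, auditable" feed: published logic,
# deterministic, and driven only by subscriptions the user chose.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    timestamp: float  # seconds since epoch

def rank_feed(subscribed_topics: set[str], posts: list[Post]) -> list[Post]:
    """Show only explicitly subscribed topics, newest first; there is no
    hidden engagement model, so every ranking decision is reproducible."""
    visible = [p for p in posts if p.topic in subscribed_topics]
    return sorted(visible, key=lambda p: p.timestamp, reverse=True)

# Example: a user who subscribed to "transit" never gets "outrage" pushed.
posts = [
    Post("p1", "transit", 1700000000.0),
    Post("p2", "outrage", 1700000500.0),
    Post("p3", "transit", 1700000900.0),
]
print([p.post_id for p in rank_feed({"transit"}, posts)])  # ['p3', 'p1']
```

Whether criteria like these are the right ones is exactly the kind of question that publishing the algorithms would let the public debate.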
