By Andrew Wong (V)
Over the summer, like many of my friends at Pingry, I watched The Social Dilemma, a Netflix documentary exposing the inner workings and dangers of social media and surveillance capitalism. Perhaps I was naïve at the time, but even after watching the film, I felt that social media was not a clear and present danger to our freedoms in this country. A negative social influence? Perhaps. Issues with user data and security? Certainly. But not a serious threat to society and civil rights.
However, in the wake of the Capitol violence organized in part by online actors, former President Donald Trump’s ban from nearly every major social media platform on allegations of “incitement,” and thousands of other bans handed out to conservative influencers, it is clear that the issue of social media needs to be addressed. Social media can be a force for good, connecting people from all over the world and enabling information sharing on a scale human history has never seen. But on the flip side, it can also be used to facilitate illegal activities, spread illicit material, curb free speech, or change the minds of millions with misinformation and propaganda. So how should social media be regulated so that it continues to be a force for good, while also protecting our right to free speech and keeping bad actors out?
Presently, social media in the United States is regulated under Section 230 of the Communications Decency Act of 1996. Section 230(c)(1) grants social media companies legal immunity for whatever content is posted on their platforms: a company cannot be held liable for what is said and done by its users, even if those actions are illegal. The second portion of the law, Section 230(c)(2), provides immunity from civil liability for providers that remove or restrict content they deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” so long as they act “in good faith.” In essence, platforms cannot be held liable for restricting users’ speech by removing their content.
However, while Section 230 has allowed social media companies to expand and flourish without fear of legal repercussions for moderating what is posted on their platforms, it has also raised serious moral questions. Should illicit material, or content that inspires violence or terrorism, be harbored on social media platforms with no repercussions whatsoever? And on the other side of the coin, have we essentially given so-called “Big Tech” a free hand to play judge, jury, and executioner over free speech in the twenty-first-century public square?
How should Section 230 be changed to remedy these issues? Democrats and Republicans have proposed several solutions. Democrats, most notably President Joe Biden, have supported weakening Section 230(c)(1) protections; in a January 2020 interview with The New York Times, Biden said that “[Facebook] is not merely an internet company. It is propagating falsehoods they know to be false,” and that the U.S. needed to “[set] standards” for what content is and is not allowed on social media. Republicans, led by Senators Ted Cruz (R-TX) and Josh Hawley (R-MO), have proposed legislation limiting Section 230(c)(2) protections: clarifying what it means for a platform to take down content in “good faith,” and stripping away immunity for content takedowns, thus allowing users to sue companies over their content moderation policies.
Ultimately, effective regulation must address both illicit content and free speech. It must establish that content promoting illicit activities (such as child abuse, human trafficking, terrorism, and cyber-stalking, among others) is illegal on the internet, just as it is in the real world, and that platforms will have their immunity stripped if they promote such activities. At the same time, such regulation must include adequate protections for free speech. This could be accomplished by clarifying the “good faith” clause with specific language, or by writing legislation that only allows platforms to take down content that is blatantly illegal, rather than letting them remove anything they simply don’t like to see.
Social media is the public square of the twenty-first century, and in keeping with our values of free speech, everyone should have access to it. It is in everyone’s interest that the internet remain a free and open space, but also a safe one where illegal activity is not allowed to persist. Effective, smart regulation, accomplished by updating Section 230, would be a straightforward way to create an open and safe environment on all social platforms.