Section 230 and the Spread of Misinformation

A few days before the 2020 US Presidential Election, the chief executive officers of Facebook, Google, and Twitter appeared before the Senate Committee on Commerce, Science, and Transportation for a hearing on the relevance of Section 230 in the digital world. The hearing stemmed from governmental concerns about the limited transparency and accountability of content moderation policies on social media platforms, and about the larger implications for consumer privacy and freedom of speech online. The hearing, which took place on the 28th of October, produced less in-depth discussion of these concerns than it did deliberation about political bias in content moderation. I didn't watch the whole hearing, choosing instead to read up on it later, but I took away two big things from it. First, outside of technology circles, there is still a lot of misunderstanding, or outright incomprehension, about how technological applications like social media function. Second, deliberations on the relevance of Section 230 are something that everyone, not just people in cybersecurity, should be watching.

A commonly held notion in cybersecurity is that people are the weakest link in the security chain. There is understandable precedent for this: machines are built to be sturdy and accurate, and a computer basically does what a human tells it to do (setting aside AI and the foray into artificial consciousness, which is perhaps a topic for another post). In the world of data breaches and phishing campaigns, there is always an unsuspecting human who clicks on a malicious link or hands their personal information to an attacker masquerading as a legitimate entity.

The general notion of humans as the weakness in the technological sphere is also largely reflected in the argument for content control in the online space. For the uninitiated, Section 230 is a provision of the Communications Decency Act of 1996 and is widely regarded as protecting freedom of expression and innovation on the Internet. In simple terms, it shields publishers of information from liability for content posted by their users. For example, if an individual were to post an inflammatory comment about mint chocolate ice cream on their Facebook feed, Facebook would not be held liable for that individual's words. There are exceptions for criminal content and the like, but overall, Section 230 is the reason the average Internet user can join social networking sites and share their thoughts and content with the world. As you can imagine, doing away with Section 230 would come across as doing away with freedom of expression on the Internet. Without it, people would have to be highly critical of what they say online, even when their words are not meant to be taken seriously, and publishers of information, including but not limited to social media platforms, would carry the added burden of fact-checking all content within the context in which it is posted.

So why should you care about this? Why should anyone in cybersecurity care about what happens to a law on content control? From the cybersecurity perspective, the arguments about Section 230 lean toward protecting the weak links, humans, on the Internet. I would argue that when the Internet and its many applications were first invented, they were created with good intentions. I still remember making my first Facebook account a decade ago; I did so to communicate with my friends, which was much easier than memorizing their phone numbers and calling them. The immediate, asynchronous nature of the communication is what made social media appealing to people like me a decade ago, and those same qualities are what now make it ripe for malicious use.

I don't have to go into much detail about how there is something on the Internet for everyone, but we often forget that "everyone" also includes the bad actors who have no qualms about misusing the Internet for their own purposes. Where social media platforms like Facebook and Twitter were once merely places where people could share their daily thoughts, they are now breeding grounds for misinformation, amplified by the very characteristics that made these platforms popular among ordinary users. Admittedly, not all of this is on the publishers of the information; they are not necessarily condoning what their users share, merely acting as messengers of what their users wish to convey to the world. However, not all content online is good content, and there is increasing pressure on companies to think about what exactly their platforms are being used to share, at least where harmful content is concerned, and what they as publishers can do about it.

These considerations aren't new. Almost every social media platform, for instance, has its own version of community guidelines and user policies outlining what counts as inappropriate content and the consequences for posting or sharing it. However, it is difficult to determine exactly where misinformation falls on this spectrum, especially when it is hard to pin down what constitutes misinformation in the first place. There is a difference between a meme shared as satire and an unsubstantiated claim that calls on people to act a certain way. Malicious actors like terrorists use certain social media platforms to spread radical content disguised as ordinary content in order to slip past community guidelines. In a similar vein, proponents of baseless conspiracy theories and disinformation use social media to propagate their beliefs without fear of repercussions, thanks to the protections afforded by Section 230 and permissive user policies. Content like this persists because of the nature of the Internet and can cause a great deal of harm, directly or indirectly, to regular Internet users.

As a reasonable consumer of online information, I have wondered in the past why people would even give false information the time of day. Sometimes it is painfully obvious that an image or post has been crafted to look genuine. This is where the issue of content control gets even more complicated: perpetrators of misinformation are getting better at their craft, and the average consumer now has to expend significant effort to determine whether what they are seeing online is true. Internet service providers and technology corporations have opportunities to take a stand against harmful content like this and severely undermine the abilities of malicious actors, but doing so comes at the potential cost of compromising some fundamental facets of the freedom of expression that ordinary Internet users enjoy. As the saying goes in popular culture, this (misinformation) is why we can't have nice things.

The hearing on Section 230 and the subsequent debates on content control essentially walk a fine line between assuring the safety and security of the quintessential Internet user and compromising their right to express thoughts and opinions reasonably, without fear of repercussions. This has larger implications for cybersecurity because of how ingrained technology is in everyone's life today. It comes down to a question that requires cooperation and understanding among all stakeholders so that no one is disadvantaged: to assure the safety of the Internet user, what can we do to address misinformation and related kinds of content without infringing on free speech? The continued hearings scheduled on this matter are indicative of the larger effects that unchecked misinformation can have on the Internet, and on society as a whole, and subsequent policy decisions will shape the privacy and security of individuals online.

Written by:

Lancia Raja

Graduate Student, CSEC

Dept of Computer & Information Technology