Matthew Fuzi

The Trump-Twitter War of 2020, Contextualized

Updated: Jun 27, 2020



President Donald Trump is again expressing outrage at Twitter after the social media site flagged a recent tweet in which the president warned protestors in Washington, D.C. that “serious force” would be used to prevent the formation of an “Autonomous Zone” in the nation’s capital, similar to the faltering yet ongoing Capitol Hill Autonomous Zone (CHAZ) protest in Seattle.

The tweet, according to Twitter, “violated the Twitter Rules about abusive behavior” and was flagged accordingly, but was otherwise left accessible for public viewing in accordance with Twitter’s Public Interest Policy.


This is not the first time over the past month that Trump has clashed with the social media giant. Since updating its “civic integrity” policies in May, Twitter has flagged a number of the President’s tweets, primarily for spreading misleading information and for promoting violence against protestors.


The first of these flagged tweets, posted by the President on May 26th, made a pair of unsubstantiated claims that mail-in ballots would lead to voter fraud. The flagging led the President to issue an executive order challenging Section 230 of the Communications Decency Act of 1996, a move that could expose Twitter and other social media companies to lawsuits.

It is precisely that legislation, and particularly its Section 230, that set the stage for the contested liability protections social media companies enjoy today.


The Communications Decency Act (CDA) was originally passed as Title V of the Telecommunications Act of 1996, the first major overhaul of U.S. telecommunications law since 1934. The intended purpose of the CDA was to regulate “indecent and obscene” material published on the internet, most prominently pornographic content.


However, Section 230 of the CDA established the following legal precedents regarding the liability of internet service providers and users:

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
“No provider or user of an interactive computer service shall be held liable on account of... any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or... any action taken to enable or make available to information content providers or others the technical means to restrict access to [the aforementioned] material..."

In other words, companies that provide an interface or platform for third-party content cannot be held civilly liable for the nature of the content posted, or for their good-faith responses to that content.


This creates a distinction between publishers, such as news outlets or interest groups that create and distribute content licensed or copyrighted to themselves, and platforms like social media websites that simply provide digital forums for third parties, such as individual users and advertisers, to publish their own content.


In essence, Section 230 of the CDA protects social media sites like Twitter, Facebook, Instagram, and YouTube from lawsuits based upon the content posted by their users.


However, the very same provision also enables social media platforms to police content posted to their sites as they see fit, and they likewise cannot face legal challenges over whether they choose to allow, restrict, or flag content they deem objectionable.


It is this caveat of Section 230 that has come under scrutiny on multiple occasions in recent months.


Previously, controversy has flared over the proliferation of hate groups and fake news on social media platforms like Facebook and Twitter, as well as on less mainstream forums like Reddit, 4chan, and 8chan, while users on Instagram and Snapchat have observed numerous posts circulating racist and antisemitic tropes. Because all of these services are considered “platforms” under Section 230, the legal immunity they enjoyed while permitting these groups to operate unimpeded became a point of contention for pundits.


The ensuing backlash against social media companies has included everything from bad press to outright boycotts of ad spending. These tactics appear to have yielded results thus far, with Facebook deactivating hundreds of accounts linked to racialist and reactionary hate groups, and Twitter visibly enforcing its user guidelines more strictly.


Twitter’s response to some of Trump’s tweets, albeit logical given the public pressure and the verifiable violations present in the posts, has now attracted the ire of many conservatives, who claim that Twitter is demonstrating partisan bias by not enforcing its content standards equally on both sides of the aisle. Moreover, Facebook’s removal of Trump campaign ads rallying supporters against Antifa, which displayed without proper context an inverted red triangle, a symbol political prisoners were forced to wear in Nazi concentration camps, has served as an additional point of criticism from the political right toward social media’s recent enforcement approach.


It is reasonable to acknowledge that social media companies do not play a primary or direct role as originators of content. They cannot be held accountable for the content, ideology, or affiliations of every individual user and advertiser who chooses to utilize the platforms they provide publicly and free of charge, and as such they should be afforded some degree of insulation from legal recourse.


However, while these platforms cannot control the mindsets of their users, they do have the freedom to set parameters on the content they host by adjusting algorithms and vetting contributors based on the nature of what they post. Although this is likely not unconstitutional, given that these firms exist in the private sector, it does give social media platforms a great deal of power to influence public opinion, whether or not that is the intention.


Such a capacity could be leveraged for or against any number of demographics, be they on the political left or right, of a particular gender, age, race, ethnicity, or sexual orientation, or even defined simply by the other profiles and pages a user likes or follows.


Indeed, most social media platforms, Facebook especially, are already remarkably transparent about how their advertising algorithms use personal information to target relevant marketing content to each user.


As such, there is a solid argument against Section 230’s grant of legal immunity to social media companies over how they police their own content, given the potential for the treatment and presentation of that content to be abused or skewed toward certain ends.


Whether President Trump’s recent executive order against Section 230 is simply personal retaliation for Twitter’s flagging of his comments, and whether he is even legally able to overturn the provision, are more subjective debates.


Matthew Fuzi is an Editor-At-Large for The National Times.

