The Terms and Conditions of Free Speech in the Modern-Day Public Square

Photo Credit: Gerd Altmann

By Kurt Valentine, Web Editor

Social media has rapidly asserted itself as the modern-day public square. In 2005, one year after Facebook’s launch, 10% of internet-using U.S. adults used at least one social media site.[1] Ten years later, that number increased to almost 80%.[2] Facebook, which is the most popular social network worldwide, boasts 210 million users in the U.S.[3] and 2.3 billion users worldwide.[4] It “allow[s] a person with an Internet connection to become a town crier with a voice that resonates farther than it could from any soapbox.”[5]

The Supreme Court of the United States has even opined on the importance of social media in today’s society. In a 2017 opinion, Justice Kennedy recognized that, “While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace—the ‘vast democratic forums of the Internet’ in general, and social media in particular.”[6]

Despite the Court’s emphasis on the special role social media plays in an American’s right to free speech, social media companies are not constrained by the First Amendment. “[T]he First Amendment governs only governmental restrictions on speech.”[7] As private entities, social media companies are free to restrict speech and users on their websites. In fact, the companies have their own First Amendment rights.[8] They are also protected by Section 230 of the Communications Decency Act of 1996, which shields social media companies from liability for filtering decisions and content posted by third-party users.[9]

In 2018, Facebook, Spotify, Apple, YouTube, and Twitter exercised their First Amendment and Section 230 rights when they took down material posted by conspiracy theorist Alex Jones and removed his Infowars channel.[10] [11] In response, First Amendment scholar Jameel Jaffer said, “We should have a discussion about that power and whether Facebook should be able to decide who gets to speak.”[12] Jaffer further stated, “I think free speech advocates start to get nervous about Facebook excluding people from the platform, especially when there’s an argument that they’re excluding people on the basis of viewpoint. You can think whatever you want to about Alex Jones, but I worry not about Alex Jones, but about the next person or the next year. Who is it that Facebook is going to be excluding next year?”[13]

Throughout Facebook’s short history, it has struggled to determine what, or who, should be excluded from the site. In 2013, Facebook removed images posted immediately after the Boston Marathon bombing because depictions of severed limbs violated its policy on graphic violence.[14] The newsworthiness of the event led Facebook to restore the photos.[15] Shortly thereafter, Facebook declined to remove violent rape jokes because users were permitted to post toxic speech that did not seem likely to provoke physical harm.[16] After several companies pulled their ads, Facebook did an about-face.[17]

Facebook has had the most difficulty setting its hate speech policy. It cites defining the boundaries of hate speech as its primary challenge in stopping it.[18] This challenge is due, in part, to the significant variation among laws against hate speech. In the United States, “even the most vile kinds of speech are legally protected under the U.S. Constitution.”[19] In a unanimous 2017 decision, Justice Samuel Alito wrote, “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’”[20]

On the other hand, most European Union countries have hate-speech laws, but they are not uniform. For example, “‘Muslims are criminals’ is clearly illegal in Belgium and France, likely legal in Denmark and Italy, and likely illegal in England and Germany (and banned by Facebook).”[21] In fact, the German Parliament recently passed the Network Enforcement Act, commonly known as NetzDG, which specifically targets hate speech on social media sites. NetzDG requires social media companies with more than 2 million registered German users “to promptly remove ‘illegal content,’ . . . ranging widely from insult of public office to actual threats of violence.”[22] If the companies fail to remove the content in a timely manner, they face fines of up to €50 million.[23]

Facebook acknowledges that “it’s clear [Facebook is] not perfect when it comes to enforcing [its] policy. Often there are close calls—and too often [it] get[s] it wrong.”[24] In 2017, a number of L.G.B.T.Q. individuals were using the term “dyke” on Facebook.[25] The posts were removed because the term was deemed a slur.[26]

Again, in 2017, shortly after the #MeToo movement began, comedian Marcia Belsky was banned from Facebook for 30 days for commenting “Men are scum” on a friend’s post.[27] A group of female comics, who were being banned for similar infractions, protested Facebook’s policy by spamming the platform with dozens of “men are scum” posts.[28] Facebook removed the posts, and Belsky was banned again.[29]

The content team holds meetings every two weeks “to debate what you can and cannot post on Facebook.”[30] At a late-2017 content team meeting, Mary deBree, who worked in Obama’s State Department, introduced the “Men are scum” issue.[31] She said, “We don’t want to silence people when they’re trying to raise awareness around—for example—sexual assault. However, um, the tension in this is that, if we allow more attacks on the basis of gender, this may also lead to more misogynistic content on the platform.”[32] Therein lies the problem with Facebook’s hate speech policies. “If you remove dehumanizing attacks against gender, you may block speech designed to draw attention to a social movement like #MeToo.”[33]

Under its current hate speech policy, Facebook defines hate speech as “anything that directly attacks people based on what are known as their ‘protected characteristics’—race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.”[34] An “attack” is further defined as “violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”[35] The current policy has exceptions when “words or terms are used self-referentially or in an empowering way,” and it also permits “humor and social commentary related to these topics.”[36]

Facebook’s policies simply reflect the realities of the “privatization of the public square.” Monika Bickert, who is in charge of setting Facebook’s content policies, shares the same concerns as many of the people criticizing the policies.[37] Just like many free speech advocates, Bickert, a former federal prosecutor, is fearful that fundamental free speech principles are eroding.

“It’s scary. When they talk to people on U.S. college campuses and ask how important freedom of speech is to you, something like 60 percent say it’s not important at all. The outrage about Facebook tends to push in the direction of taking down more speech. Fewer groups are willing to stand up for unpopular speech.”[38]

Thus, social media companies’ restraints on First Amendment-protected speech highlight the current state of speech across the globe: more people are willing to accept speech subject to terms and conditions.




[1] Andrew Perrin, Social Media Usage: 2005-2015, Pew Research Center (Oct. 8, 2015).

[2] Id.

[3] Countries With the Most Facebook Users as of January 2019, Statista (Jan. 2019).

[4] Facebook: Number of Monthly Active Users Worldwide 2008-2018, Statista (Jan. 2019).

[5] Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017).

[6] Id. at 1735 (internal citation omitted).

[7] Nyabwa v. Facebook, 2018 U.S. Dist. LEXIS 13981, Civil Action No.2:17-CV-24, *2 (S.D. Tex. Jan. 26, 2018).

[8] Eric Johnson, Should the First Amendment apply to Facebook? It’s complicated, Recode (Nov. 19, 2018).

[9] Bradley Reeves, Section 230 and Keeping the Trolls at Bay: Twitter Obtains a Significant Legal Victory on Content Control, Pillsbury’s Internet & Social Media Law Blog (Aug. 31, 2018).

[10] Alan Feuer, Free Speech Scholars to Alex Jones: You’re Not Protected, The New York Times (Aug. 7, 2018).

[11] Barbara Ortutay and Ryan Nakashima, Twitter’s Ban of Alex Jones Raises Questions on Consistency, Associated Press (Sept. 7, 2018).

[12] Johnson, supra note 8.

[13] Id.

[14] Simon Van Zuylen-Wood, “Men are Scum”: Inside Facebook’s War on Hate Speech, Vanity Fair (Mar. 2019).

[15] Id.

[16] Id.

[17] Id.

[18] Richard Allan, Hard Questions: Who Should Decide What is Hate Speech in an Online Global Community?, Facebook Newsroom (June 27, 2017).

[19] Id.

[20] Matal v. Tam, 137 S. Ct. 1744, 1764 (2017) (citing United States v. Schwimmer, 49 S. Ct. 448, 451 (1929) (Holmes, J., dissenting)).

[21] Van Zuylen-Wood, supra note 14.

[22] Germany: Flawed Social Media Law, Human Rights Watch (Feb. 14, 2018).

[23] Id.

[24] Allan, supra note 18.

[25] Van Zuylen-Wood, supra note 14.

[26] Id.

[27] Id.

[28] Id.

[29] Id.

[30] Id.

[31] Id.

[32] Id.

[33] Id.

[34] Allan, supra note 18.

[35] Community Standards: Objectionable Content, Facebook.

[36] Id.

[37] Van Zuylen-Wood, supra note 14.

[38] Id.
