DAVantage

THE RISE OF FAKE NEWS AND HATE SPEECH ON SOCIAL MEDIA IN THE US: A CALL FOR COMPREHENSIVE REGULATORY MEASURES


        Introduction: 

     In the vast digital landscape of the United States, the internet has become a double-edged sword, offering unprecedented connectivity while simultaneously giving rise to new societal challenges. The phenomenon of fake news and hate speech has surged, casting shadows over the vibrant exchange of ideas and information. The consequences are serious: eroding public trust in traditional media and governmental institutions, and deepening societal divides. As social media platforms amplify these harmful narratives at lightning speed, the urgency of effective regulatory measures to restore integrity and harmony in the digital realm has never been more apparent.


        1. The rise of fake news and hate speech on the internet in the US 

       The rise of fake news and hate speech on the internet in the US has become a pressing concern, significantly impacting society. Fake news, characterized by deliberately misleading or false information presented as legitimate news (Levy & Ross, 2021), has been particularly prevalent during critical events such as elections and public health crises. At the same time, hate speech, which includes any form of communication that belittles a person or group based on attributes such as race, religion, ethnicity, or sexual orientation (The United Nations, 2019), has also seen a disturbing rise.

      With the advent of social media platforms and the widespread use of the internet, fake news and hateful rhetoric can spread rapidly and reach vast audiences almost instantaneously. This phenomenon undermines public trust in traditional media and governmental institutions, contributing to increased political polarization and social unrest (McQuade, 2024). Especially during the 2020 presidential election and the COVID-19 pandemic, false narratives proliferated online, leading to widespread confusion and sometimes dangerous behaviors in the public (McQuade, 2024). For instance, unregulated online platforms like Twitter, Facebook, and Google helped former President Donald Trump and his supporters spread conspiracies and misinformation about the results of the 2020 presidential election, which produced the infamous riot at the U.S. Capitol on January 6, 2021 (Scott, January 2021; Frenkel, January 2021). 

        In addition, the role of advanced technologies, particularly AI, has further complicated these issues. AI can be exploited to create convincing fake news and facilitate the rapid spread of hate speech. According to NewsGuard, it is estimated that the number of websites hosting AI-generated articles containing fake news has surged by over 1,000 percent since May 2023 (Verma, 2023).

 

        2. Current Regulatory Landscape

    As misinformation, hate speech, and harmful content have become increasingly serious problems, with grave consequences for the political stability of American society, the US Government has recognized the issue and taken significant steps to regulate and prosecute tech companies, primarily concerning their content moderation practices on the internet.

      A central piece of legislation in this domain dates from the early days of the internet: the Communications Decency Act (CDA), first introduced in 1995 and enacted as part of the Telecommunications Act of 1996, which was the first significant attempt by the United States Congress to regulate online content. The primary aim of the CDA was to address concerns about the availability of indecent and obscene material on the internet, particularly to protect minors from harmful content (104th Congress, 1995). In 1996, concerned that the law would stifle the development of a nascent internet, US lawmakers added Section 230 to the CDA, which provides platforms with immunity from liability for user-generated content while allowing them to moderate content in good faith (Brannon & Holmes, January 2024).

        However, some aspects of Section 230 have since been narrowed by laws such as the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA). These laws were intended to help law enforcement target websites involved in sex trafficking, but they do so by making platforms potentially liable for prostitution ads even when those ads are posted by users rather than by the platform itself (Morrison, 2023).

 

        3. Proposals for New Regulations

      These actions alone are not enough to solve the problem, because the US has no laws directly regulating how large technology corporations monitor content on social networks. US lawmakers have therefore proposed numerous regulations in the hope of establishing formal legislation on this issue in the future.

        The reform of Section 230 is one of the most prominent and important proposals. According to Benson & Brannon (2024), the reform efforts follow two main directions. The first is to restrict Section 230 immunity for hosting third-party content, aiming to encourage platforms to remove harmful content. Some bills target specific content types, while others propose exemptions for particular legal actions, such as lawsuits related to drug trafficking or nondiscrimination laws. Still others concentrate on general hosting practices, potentially holding platforms liable if they promote disputed content via personalized algorithms (Benson & Brannon, 2024). The second direction is to curtail Section 230 immunity for content moderation decisions, intending to encourage platforms to host lawful content. Some bills would eliminate the broad immunity under Section 230(c)(2) for moderating "otherwise objectionable" material; others would restrict immunity to moderation decisions made in a viewpoint-neutral manner (Benson & Brannon, 2024).

     Additionally, certain bills have focused on procedural elements of content moderation decisions, such as conditioning immunity on publishing terms of service or providing explanations for the moderation of specific content (Benson & Brannon, 2024). 

       In general, the proposal to reform Section 230 continues to face many challenges. Opponents argue that reform would harm small and start-up technology companies as well as the large ones, slowing the development of the US tech sector. Moreover, without a clear mechanism for content moderation, reform would likely put excessive pressure on tech companies. Lastly, reforming Section 230 would not solve the root problem, since the First Amendment still protects the existence of harmful content on social media as an exercise of freedom of speech (Chen et al., 2024).

     Another notable legislative effort is the proposed Digital Consumer Protection Commission Act of 2023, aiming to establish a dedicated regulatory body to oversee digital consumer protection, enforce stricter content moderation standards, and enhance transparency and accountability in tech companies' practices (Warren, 2023). According to this bill, dominant digital platforms are required to publicly disclose their terms of service and content moderation criteria, as well as establish efficient and accessible appeal procedures for users. Furthermore, these platforms must promptly notify users and provide appeal options when restricting content access or failing to remove prohibited material in violation of their terms. Users can file complaints about these violations with the Commission (Warren, 2023).

     The Biased Algorithm Deterrence Act of 2019 is another bill that deserves close attention. The bill was introduced by Representative Louie Gohmert in January 2019. It provides that the owner or operator of a social media service will be treated as the publisher or speaker of user-generated content (and thus may be liable for that content) if the service or its algorithms do any of the following: (1) display user-generated content in an order that is not chronological; (2) delay the display of such content relative to other content; or (3) hinder the display of such content, for reasons other than carrying out the user's direction or restricting material that the provider or user considers obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable (116th Congress, 2019).

     Earlier, in 2017, the Honest Ads Act was introduced to address concerns about the transparency and integrity of online political advertising, particularly in the wake of foreign interference in the 2016 U.S. presidential election (Warner, 2019). The Act mandates that online platforms maintain a public file of all electioneering communications purchased by individuals or groups spending over $500 on ads, including details about the purchaser, cost, and targeted audience. This requirement applies to platforms with at least 50 million unique monthly visitors. Additionally, digital political ads would need to include disclosures about the ad's sponsor and whether it was authorized by a candidate or political committee. The Act aims to hold political advertisers accountable, deter foreign interference, and ensure voters are better informed (Warner, 2019).

       In October 2022, President Joe Biden introduced the Blueprint for an AI Bill of Rights, aiming to safeguard Americans against the potential risks of artificial intelligence. The blueprint outlines principles for protecting privacy, ensuring algorithmic transparency, preventing discrimination, and promoting accountability in AI systems (The White House, October 2022; The White House, November 2023).


        4. Potential of Proposed Regulations  

       Overall, content moderation regulations encounter significant challenges for three main reasons: the potential violation of the First Amendment, the misalignment of interests among stakeholders, and the incomplete resolution of the underlying issues (Johnson & Castro, October 2022). 

       Firstly, US society is deeply divided over how content moderation regulations can preserve freedom of speech and information in accordance with the First Amendment. In this debate, Americans are split into two roughly equal camps, one supporting content moderation and the other opposing it. According to the Pew Research Center (2021), 48% of Americans believe the government should take measures to curb fake news, even if that means limiting freedom of information, while roughly half oppose this, arguing that freedom must be upheld despite the spread of fake news (Pew Research Center, 2021). Moreover, Republicans tend to support the "absolute freedom" approach, while Democrats tend to oppose it (Pew Research Center, 2021). Without a consensus, it is highly challenging for the US government to enact a federal law on content moderation.

      Currently, the United States Supreme Court continues to place great weight on adherence to the First Amendment. Most content moderation bills have failed to pass or have stalled in the US Congress on the grounds that they potentially violate the First Amendment (Johnson & Castro, 2023; Sherman, 2024). Notably, in February 2024, the Supreme Court expressed concerns over Republican-backed laws in Florida and Texas that aim to restrict social media companies' ability to manage objectionable content, following challenges from tech industry trade groups such as NetChoice and the Computer & Communications Industry Association (CCIA) (Kruzel & Chung, February 2024). In July 2024, the Supreme Court instructed the lower courts to re-evaluate the laws in question, focusing on potential violations of Section 230 and the First Amendment (Montgomery & Robins-Early, July 2024).

     Moreover, content moderation can conflict with a platform's business model, because sustaining it is costly and can erode profits. Major platforms dedicate large portions of their workforce to moderation, raising concerns about financial viability (Chatain, February 2023). Twitter's financial struggles, for example, highlight the difficulties platforms face in remaining profitable while maintaining adequate moderation. Elon Musk's management has drastically cut costs, including moderation, hoping to mitigate revenue loss despite the departure of brand advertisers who prefer moderated platforms; the success of this approach remains uncertain (Chatain, February 2023).

     Additionally, some current laws fail to fully address the core issue and may even create serious new problems. Despite their good intentions, FOSTA and SESTA, for example, have had negative consequences: they have been reported to contribute to a rise in human trafficking and violence against sex workers. By stripping sex workers of a key safety tool, the internet, the laws push them back toward street-based work, where they cannot plan client meetings, exposing them to heightened dangers such as physical violence from unscreened clients and harassment by law enforcement (Decriminalize Sex Work, 2023). Critics also argue that these laws violate freedom of speech on the internet (Romano, 2018). Given these precedents, newly proposed bills are likely to face concerns about their practical effectiveness, reducing public and government support.

       As the digital landscape continues to evolve, the need for effective regulatory measures becomes increasingly apparent, particularly when balancing user and provider rights against the necessity of curbing harmful content. Given these challenges, the US government will need to cooperate actively with the big technology corporations and the American people. Most importantly, it must reconcile the views of the Republican and Democratic Parties if it hopes to establish federal laws on content moderation.

 

        Conclusion: 

        The rise of fake news and hate speech on the internet in the United States poses a significant threat to societal harmony and trust in institutions, and addressing it is a complex challenge requiring multifaceted solutions. Despite government efforts to implement regulations and laws, obstacles such as First Amendment rights, platform profitability, and the unintended consequences of existing laws complicate the path forward. Achieving consensus between the political parties and among stakeholders will therefore be key to establishing comprehensive laws on this matter in the future.

 

        Author:

1. Nguyen Thu Tra, International Politics and Diplomatic Studies intake 48, DAV

2. Nguyen Hien Thao, International Politics and Diplomatic Studies intake 50, DAV


        References:

1. 104th Congress. (1995). Communications Decency Act of 1995. https://www.congress.gov/bill/104th-congress/senate-bill/314 

2. 116th Congress. (2019). H.R.492 - Biased Algorithm Deterrence Act of 2019. https://www.congress.gov/bill/116th-congress/house-bill/492

3. Benson, P., & Brannon, V. (2024). Section 230: A Brief Overview. https://crsreports.congress.gov/product/pdf/IF/IF12584 

4. Brannon C. V., & Holmes N. E. (2024, January 4). Section 230: An Overview. https://crsreports.congress.gov/product/pdf/R/R46751 

5. Chatain, O. (2023, February 17). Social Media Moderation: Is it Profitable to Fight Fake News?. HEC Paris. https://www.hec.edu/en/social-media-moderation-it-profitable-fight-fake-news 

6. Chen, Y., Clement, C., & Wood, M. (2024). What is Section 230? Why ending It would create problems. Free Press. https://www.freepress.net/blog/what-is-section-230 

7. Decriminalize Sex Work. (2023, June 3). What is SESTA/FOSTA? - Decriminalize sex work. https://decriminalizesex.work/advocacy/sesta-fosta/what-is-sesta-fosta/ 

8. Frenkel, S. (2021, January 6). The storming of Capitol Hill was organized on social media. The New York Times. https://www.nytimes.com/2021/01/06/us/politics/protesters-storm-capitol-hill-building.html 

9. Johnson, A., & Castro, D. (2023, May 30). How to address political speech on social media in the United States. ITIF. https://itif.org/publications/2022/10/11/how-to-address-political-speech-on-social-media-in-the-united-states/ 

10. Kruzel, J., & Chung, A. (2024, February 27). US Supreme Court torn over Florida, Texas laws regulating social media companies. Reuters. https://www.reuters.com/legal/us-supreme-court-weigh-florida-texas-laws-constraining-social-media-companies-2024-02-26/ 

11. Levy, N., & Ross, R. M. (2021). The cognitive science of fake news. In Routledge eBooks (pp. 181–191). https://doi.org/10.4324/9780429326769-23 

12. McQuade, B. (2024, March 4). Disinformation is tearing America apart. TIME. https://time.com/6837548/disinformation-america-election/ 

13. Montgomery, B., & Robins-Early, N. (2024, July 1). Supreme court remands decision on Republican-backed social media laws to lower courts. The Guardian. https://www.theguardian.com/us-news/article/2024/jul/01/supreme-court-texas-florida-social-media-laws 

14. Morrison, S. (2023, February 23). Section 230, the internet law that’s under threat, explained. Vox. https://www.vox.com/recode/2020/5/28/21273241/section-230-explained-supreme-court-social-media 

15. Pew Research Center. (2021, April 14). More Americans now say government should take steps to restrict false information online than in 2018. Pew Research Center. https://www.pewresearch.org/short-reads/2021/08/18/more-americans-now-say-government-should-take-steps-to-restrict-false-information-online-than-in-2018/?utm_source=Pew+Research+Center&utm_campaign=12ffd10197-Internet-Science_2021_09_15&utm_medium=email&utm_term=0_3e953b9b70-12ffd10197-401050526 

16. Romano, A. (2018, July 2). A new law intended to curb sex trafficking threatens the future of the internet as we know it. Vox. https://www.vox.com/culture/2018/4/13/17172762/fosta-sesta-backpage-230-internet-freedom 

17. Scott, M. (2021, January 8). Capitol Hill riot lays bare what’s wrong with social media. POLITICO. https://www.politico.eu/article/us-capitol-hill-riots-lay-bare-whats-wrong-social-media-donald-trump-facebook-twitter/ 

18. Sherman, M. (2024, July 1). Supreme Court keeps on hold laws seeking to limit how Facebook, TikTok, X, YouTube regulate user content | AP News. AP News. https://apnews.com/article/supreme-court-social-media-florida-texas-dc523bc9a6ef7b0f7b0aa933d0a43cca 

19. The United Nations. (2019). The UN Strategy and Plan of Action on Hate Speech.

20. The White House. (2023, November 22). Blueprint for an AI Bill of Rights | OSTP | The White House. The White House. https://www.whitehouse.gov/ostp/ai-bill-of-rights/ 

21. The White House. (2022, October 4). FACT SHEET: Biden-Harris administration announces key actions to advance tech accountability and protect the rights of the American public. The White House. https://www.whitehouse.gov/ostp/news-updates/2022/10/04/fact-sheet-biden-harris-administration-announces-key-actions-to-advance-tech-accountability-and-protect-the-rights-of-the-american-public/ 

22. Verma. (2023). The rise of AI fake news is creating a “misinformation superspreader.” https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/ 

23. Warner, M. (2019). The Honest Ads Acts. https://www.warner.senate.gov/public/index.cfm/the-honest-ads-act 

24. Warren, E. (2023). Digital Consumer Protection Commission Act of 2023. https://www.warren.senate.gov/imo/media/doc/DCPC%20Section-By-Section.pdf 
