
From 'The Patriot' to Twitter

The Challenges of Managing Disinformation in Contemporary Online Spheres

Jahongir Tohirov

The Cold War was as notorious for its handling and exchange of information as it was for the actual conflicts of the era. Picture this: in 1983, an Indian citizen picks up a copy of the pro-Soviet newspaper The Patriot. The headline? "AIDS May Invade India," under which a purportedly renowned American scientist claimed that the AIDS epidemic originated in the Pentagon (Kramer, 2024). Allegedly, AIDS was the result of experiments to develop new and dangerous bioweapons; crucially, the Pentagon was said to be planning further experiments in neighboring Pakistan (Kramer, 2024). None of this was true. The Patriot and its associated articles were part of a deliberate disinformation campaign orchestrated by the Soviet Union's KGB.


Disinformation is the spreading of "false information designed to mislead others" (Palfrey, 2025). The Patriot's AIDS disinformation campaign was spread with the intent of fostering division between India and the U.S. In the years that followed, these accusations were amplified across Soviet media and beyond.


Today, disinformation like that found in The Patriot is being spread through new media that threaten our democratic systems, as when false information about vaccines circulated during the COVID-19 pandemic (Gottlieb & Dyer, 2020). This paper argues that while action against the effects of disinformation is broadly supported, indirect measures are the only plausible way forward in regulating false information, as opposed to any direct altering of the information itself.


To illustrate the urgency of addressing disinformation, consider a Kremlin propaganda campaign uncovered by the Department of Justice in 2024. On X, bot accounts managed by the Kremlin posed as Americans to spread anti-Ukraine, pro-Russia rhetoric (Bond, 2024). This case likely represents only a fraction of the influence operations online. As early as 2017, Twitter, Facebook, and Instagram hosted a combined 190 million bots (Salge & Berente, 2017, p. 1). The sheer number of bots online demonstrates the scale at which these operations could be acting.


Bot-driven disinformation has been employed at scale since at least the 2016 presidential election, when Russian bots spread pro-Trump rhetoric online (Parlapiano & Lee, 2018). The events of 2016 make evident the potential for disinformation to tamper with our democratic processes, even those assumed to be resistant to outside influence.


Given this threat, Congress must act to maintain the quality of information available to us (a goal also generally beneficial in itself). Fortunately, it has multiple responses at its disposal for dealing with foreign actors. The easiest is to launch operations such as the one that uncovered the Kremlin bot farms (Bond, 2024). Increasing funding for the NSA to monitor disinformation abroad would likely result in the largest operations being taken down.

Additionally, Congress could pass legislation requiring strengthened identity verification for accounts. For instance, Meta "currently requires a phone number or email to create an account" (Lesser et al., 2024, p. 7). These restrictions have limits (malicious actors can bypass them by buying existing accounts en masse), but disinformation originating abroad is a relatively tractable problem (Fredheim et al., 2023, p. 5).


Cutting off sources of explicit disinformation may be a simple matter, and it will address a significant portion of the problem. However, it resolves only half of the false-information dilemma, because malicious actors abroad do not spread false information for its own sake. Their end goal is to seed domestic misinformation: "the inadvertent spread of false information without intent to harm" (Palfrey, 2025). Misinformation continues to spread independently after its source is eliminated, because citizens pass it along inadvertently. There is thus no single root that can be removed to cut off streams of misinformation, which complicates any attempt to resolve the issue.


Despite these complexities, several proposals could mitigate the impacts of misinformation. Broadly, these methods of regulating information can be classified as either direct or indirect. Direct actions alter or remove information itself; indirect actions blunt the effects of misinformation after the initial lies are spread. Direct action, however, must overcome psychological, legal, and cultural barriers to be effective. In the U.S., the legal and cultural barriers revolve around the First Amendment, which protects much online speech from being censored or removed. The psychological barrier concerns echo chambers: spaces in which all information from outside the group is completely discredited (Nguyen, 2020, p. 145). Indirect actions face none of these barriers and are thus the more advisable course of action.


First, the psychological barrier to direct action can be illustrated with a hypothetical in which Congress requires social media platforms to partner with external organizations. Platforms would debunk false claims by linking to reputable organizations that provide "authoritative information" on the subject matter (Warnke et al., 2024, p. 1099). For example, a false claim about the COVID-19 virus could be debunked by the World Health Organization, which, being experienced in the study of disease through objective research, would count as a provider of "authoritative information." That information might come in the form of a label attached to posts, instantly disproving a false claim with correct information.
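To make the mechanism concrete, here is a minimal sketch of how such a labeling rule might work. Everything in it is hypothetical: the claim registry, the function name, and the linked URLs are illustrative assumptions, not any platform's actual system.

```python
# Hypothetical sketch of the label-attachment mechanism described above.
# The claim registry and URLs are illustrative assumptions, not a real API.

DEBUNKED_CLAIMS = {
    # known false claim (keyword form) -> authoritative source to link
    "aids was created by the pentagon":
        "https://www.who.int/health-topics/hiv-aids",
    "covid vaccines alter your dna":
        "https://www.who.int/emergencies/diseases/novel-coronavirus-2019",
}

def label_post(text: str) -> str | None:
    """Return an 'authoritative information' label if the post repeats a
    registered false claim; otherwise return None (no label attached)."""
    lowered = text.lower()
    for claim, source in DEBUNKED_CLAIMS.items():
        if claim in lowered:
            return f"Authoritative information on this topic: {source}"
    return None

# Example: a post repeating a registered claim gets a label attached.
print(label_post("I read that COVID vaccines alter your DNA!"))
```

Even a sketch this simple makes the limitation visible: the label supplies a correction, but nothing compels a reader inside an echo chamber to accept it.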


However, disproving false claims does not necessarily correct false beliefs. This can be attributed to echo chambers, online spaces in which information from outside the group is rejected, that form within the very spheres of misinformation that "authoritative information" aims to correct (Nguyen, 2020). No amount of objectively correct information used to debunk a false claim matters: in the end, those who fall into echo chambers through a misinformation pipeline will stick to their beliefs. As a real-world example, take the story of Zach Mack and his father, a conspiracy theorist.


In Zach Mack's NPR series Alternate Realities, Mack recounts some of the unfounded claims his father believed, including absurd ones such as the Bidens and Clintons having been convicted of murder (Mack, 2025). As the podcast continues, it becomes clear that Mack's father is unwilling ever to change his beliefs. By the end, he has fractured his relationships with his family, and all of his claims have been disproven, yet he still clings to them. Mack's father exemplifies the failure of fact-checking to correct false beliefs. Direct action such as that proposed in Warnke et al. fails because, if implemented, it cannot penetrate the maintained ignorance of misinformation echo chambers. Requiring labels on objectively false information, like any other method of directly altering information, fails to clear this psychological barrier and is thus ineffective.


Direct action must also clear a legal barrier, since legislation must actually pass to be of any worth. Unfortunately, the First Amendment's protection of speech makes this very difficult. Numerous laws relate to this safeguard, but the most relevant to false information is Section 230 of the Communications Decency Act. Section 230 states, and has been interpreted to mean, that corporations owning social media platforms cannot be held liable for speech produced on them (Zeran v. America Online, Inc., 1997). No matter how false the speech, a social media company cannot be penalized for declining to censor it. When arguing for regulation of speech online, Congress must therefore find a viable workaround for the broad immunity companies enjoy for the content on their platforms.

If legislation becomes infeasible, what is left for Congress to do? The only remaining course of action would be to take down at least the legal barrier standing in front of it and amend Section 230. Some critics of the statute suggest narrowing its focus and limiting the types of content it protects (Musquera & Brennen, 2025). Within reason, this is a good idea. Consider a scenario in which Congress amends Section 230 to hold companies at least partially liable for misinformation appearing on their platforms: companies would then have a strong incentive to start regulating it. Yet even in this hypothetical, where Congress finally gets past the legal challenge, it still faces a cultural resistance to its regulation.


In the U.S., there is near-universal agreement that the First Amendment, and freedom of speech, is vital to the average American. A survey of 813 randomly selected Americans found that 9 in 10 think the First Amendment is "vital," and 64% say it "should never be changed" (Where America Stands, 2025). With the First Amendment's importance so widely agreed upon comes resistance to restrictions on the speech it protects. I conducted a survey to measure the extent of this resistance and its potential effects on direct action.


In a survey conducted in November, I collected data from 55 college students to assess their social media usage and their opinions on direct government action against misinformation online. All respondents were Baruch College students; no other colleges were represented in the sample. Survey data was collected through a series of Likert-scale questions, with "1" representing "strongly disagree" and "5" representing "strongly agree." As the figures below show, nearly all respondents use social media apps often.
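As a concrete illustration of how responses coded this way can be summarized, here is a minimal sketch in Python. The response values are hypothetical placeholders, not the survey's actual data.

```python
# Minimal sketch of summarizing Likert-scale responses coded 1 (strongly
# disagree) through 5 (strongly agree). The data below is hypothetical.

from collections import Counter

# Hypothetical responses from one question of a survey like the one above.
responses = [5, 4, 4, 2, 5, 3, 4, 1, 5, 4]

def summarize(scores: list[int]) -> dict[str, float]:
    """Return the share of respondents agreeing, neutral, and disagreeing."""
    n = len(scores)
    counts = Counter(scores)
    return {
        "agree": (counts[4] + counts[5]) / n,     # 4 = agree, 5 = strongly agree
        "neutral": counts[3] / n,
        "disagree": (counts[1] + counts[2]) / n,  # 1-2 = disagreement
        "mean": sum(scores) / n,
    }

print(summarize(responses))
# {'agree': 0.7, 'neutral': 0.1, 'disagree': 0.2, 'mean': 3.7}
```

Grouping 4s and 5s as agreement and 1s and 2s as disagreement is one common convention for reading Likert results like those discussed in the figures below.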


[Figure 1]

[Figure 2]

[Figure 3]


Fig. 1 shows that users often see political content on social media but tend not to trust it. The key observations come in Figs. 2 and 3: while the majority of respondents agree there is too much disinformation online, the majority would not support more government regulation to reduce it. An open-ended response (provided at the end of the survey) illustrates why this might be:


“While I agree that disinformation is a problem on social media, I don’t think government regulation of it is a good idea— especially in this political climate, as it can easily be turned into a means of censorship.”


While the results of my survey generalize only to lower-year NYC undergraduates, the figures and this quote capture a sentiment popular across the U.S.: "a plurality of U.S. adults trusts neither government nor news organizations to provide fair and truthful information," as a George Washington University survey put it (Schoen Cooperman Research, 2024).



Both surveys illustrate that the culture surrounding the First Amendment's protection of speech is intertwined with distrust of government. Any direct action will be watched closely for violations of First Amendment rights; in worse cases, such legislation could be outright rejected by the general public.


Overall, direct action is infeasible. The barriers standing in the way of direct action's effectiveness mean that it cannot be counted on as a solution to misinformation. The only option left for Congress is indirect action: since Congress does not censor speech when acting indirectly, it avoids all of the aforementioned challenges.


In practice, indirect action often takes the form of media literacy campaigns. These campaigns are the simplest way to address misinformation because, as shown above, the state is limited in its capacity to censor it directly; instead, it can educate an online population to resist false narratives. This approach was proposed in the 117th Congress, where the Digital Citizenship and Media Literacy Act was introduced to "direct[] the National Telecommunications and Information Administration to award grants to state and local educational agencies, public libraries, and qualified nonprofit organizations to develop and promote media literacy and digital citizenship education for elementary and secondary school students" (Library of Congress, 2022). A library receiving funds for a literacy campaign might provide free services that teach the public to assess the accuracy of information online, while a nonprofit organization might offer similar free instruction in basic technological skills. "Educational agencies" like the DOE could make such programs mandatory for elementary school students.


These methods work in two ways: first, they counteract false information before it can entice people into echo chambers; second, they future-proof people against additional misinformation. Literacy training keeps people from falling for the false narratives circulated in echo chambers, and even if those narratives continue to circulate online, educated people of all ages will be able to detect and ignore them. Meta-analyses by Jeong et al. and Cho et al. reveal that media literacy interventions (information provided to subjects) produced positive effects on multiple variables: knowledge, criticism, realism, influence, beliefs, attitudes, and behaviors were all positively impacted (Jeong et al., 2012; Cho et al., 2025). These real-world examples of indirect action have demonstrated their effectiveness, making indirect action the only currently plausible way Congress can combat misinformation.


Indirect actions are not a catch-all solution. Congress is a slow-moving body, and legislation is a slow-moving solution to a rapidly developing problem (Tutella, 2024). There are also potential issues with the distribution of literacy campaigns: will the privileged, or the young, who are more comfortable with technology, benefit from them more than those who are not? Even so, inaction is not an option for our legislators. The dangers of dis- and misinformation have only scaled up since the era of The Patriot, and as technology develops rapidly each year, Congress must act to preserve our democratic systems. Even if the only option currently available is slow legislation, anything less risks erosion of the information we can access online today. Until the barriers standing in the way of direct action are mitigated, it is Congress's role to legislate and to create a digital environment where information is valued and protected.


References

Bond, S. (2024, July 9). U.S. says Russian bot farm used AI to impersonate Americans. NPR. https://www.npr.org/2024/07/09/g-s1-9010/russia-bot-farm-ai-disinformation


Cho, H., Carpenter, C., & Li, W. (2025, March 28). Media literacy interventions: Meta-analytic review of 40 years of research. Human Communication Research, 57–79. https://doi.org/10.1093/hcr/hqaf004


Fredheim, R., et al. (2023, March 3). Social media manipulation 2022/2023: Assessing the ability of social media companies to combat platform manipulation. NATO Strategic Communications Centre of Excellence.


Gottlieb, M., & Dyer, S. (2020, May 31). Information and disinformation: Social media in the COVID-19 crisis. Academic Emergency Medicine, 27(7), 640–641. https://doi.org/10.1111/acem.14036


Jeong, S. H., Cho, H., & Hwang, Y. (2012, April 24). Media literacy interventions: A meta-analytic review. Journal of Communication, 62(3), 454–472. https://doi.org/10.1111/j.1460-2466.2012.01643.x


Kramer, M. (2024, April 18). Lessons from operation "Denver," the KGB's massive AIDS disinformation campaign. The MIT Press Reader. https://thereader.mitpress.mit.edu/operation-denver-kgb-aids-disinformation-campaign/


Library of Congress. (2022, July 28). S. 4490 – Digital Citizenship and Media Literacy Act. https://www.congress.gov/bill/117th-congress/senate-bill/4490/text


Warnke, L., et al. (2024, December 1). Social media platforms' responses to COVID-19-related mis- and disinformation: The insufficiency of self-governance. Journal of Management and Governance, 28, 1079–1115.


Mack, Z. (2025, February 26). How a son spent a year trying to save his father from conspiracy theories. NPR. https://www.npr.org/2025/02/26/g-s1-50605/conspiracy-theories-politics-family-alternate-realities


Musquera, V. A., & Brennen, B. S. J. (2025, May 27). What has Congress been doing on Section 230? Lawfare. https://www.lawfaremedia.org/article/what-has-congress-been-doing-on-section-230


Salge, C. A. L., & Berente, N. (2017, August 23). Is that social bot behaving unethically? Communications of the ACM, 60(9), 29–31.


Palfrey, J. (2025, October 10). Misinformation and disinformation. Encyclopedia Britannica. https://www.britannica.com/topic/misinformation-and-disinformation


Parlapiano, A., & Lee, J. C. (2018, February 16). The propaganda tools used by Russians to influence the 2016 election. The New York Times. https://www.nytimes.com/interactive/2018/02/16/us/politics/russia-propaganda-election-2016.html


Schoen Cooperman Research. (2024, December 3). U.S. Post-Election Trust in Government Study. The George Washington University.


Tutella, F. (2024, January 26). Political polarization may slow legislation, make higher-stakes laws likelier. Penn State Research. https://www.psu.edu/news/research/story/political-polarization-may-slow-legislation-make-higher-stakes-laws-likelier


Brannon, V. C., & Holmes, E. N. (2024, January 4). Section 230: An overview. Congressional Research Service, Library of Congress.


Where America Stands. (2025). Freedom Forum. https://www.freedomforum.org/where-america-stands/


Zeran v. America Online, Inc., 958 F. Supp. 1124 (E.D. Va. 1997).
