The US election: Freedom of information and misinformation
Guest blog post by Nym's resident social science researcher Dr. Navid Yousefian. Read their full report on freedom of information and the fragmentation of mis/disinformation regulation across the world.
With misinformation, disinformation, and malinformation (MDM) on the rise across the US, the line between safeguarding free speech and protecting digital safety is growing thinner. Americans face unprecedented risks this election season from deceptive information campaigns, uneven regulation across states, and polarized debates over social media platform accountability. Here, we dig into the regulatory landscape, spotlight new privacy tools, and explore how collective privacy can be a defense shield against MDM.
Download our full dataset here for a comprehensive database detailing laws in all 50 states in each of the seven MDM categories.
Guardrails or gatekeepers? Navigating the line between protection and control
The battle between Big Tech and governments over content regulation is intensifying and reshaping the digital landscape. Regardless of regime type, governments worldwide are increasingly stepping in to moderate, remove, or block online content that impacts public safety, health, and election integrity. From Brazil's nationwide ban on X and VPNs and Discord's restrictions in Turkey and Russia to the controversial arrest of Telegram's CEO in France and Germany's NetzDG law enforcing content removal, the growing push for state intervention reflects rising concerns over unchecked digital influence.
Free speech protections in the US have historically limited government involvement. Under Section 230 of the Communications Decency Act, content regulation falls largely on platforms. The law essentially shields platforms from liability for user-generated content, allowing them to moderate content without facing legal repercussions for most posts. This principle has shaped the internet as we know it today.
Intended to protect platforms from excessive litigation while enabling content moderation, the law now faces significant criticism as MDM increases. Critics argue that the immunity it grants leaves harmful content unchecked, with real-world consequences. In practice, states increasingly craft their own regulations to bridge perceived gaps in federal law, resulting in a fragmented and often politicized approach.
This issue strikes harder at marginalized groups, including Black and Latino voters, who are often the first to face targeted disinformation. Regulations are supposed to serve as critical buffers for communities and individuals targeted by disinformation, where unchecked content can have outsized, real-world impacts. Ideally, by setting guardrails – or gatekeepers? – regulation seeks to create an internet that respects diversity without exposing communities to digital exploitation. Yet, where do we draw the line between protection and control?
This situation raises the question: with growing calls for intervention and the unchecked spread of MDM, is net neutrality – once celebrated as a protector of open information – still a progressive agenda? Does regulation restrict our freedoms, or can alternative solutions, like privacy tools, offer the protection we need without sacrificing open access? The discourse, however, has not evolved alongside new developments in digital environments and has suffered from a lack of scholarly attention and debate. This gap leaves critical issues unaddressed, trapping the conversation in an outdated regulation-versus-free-speech binary that fails to address the layered complexities of modern digital interactions.
Without collective or communal privacy shields, the challenge is to craft regulatory measures that restrain harmful content without slipping into censorship. The map below shows how many MDM-related categories each state has legislated on, with color intensity indicating coverage: darker shades represent states with legislation across more categories, while lighter shades represent states with fewer or no MDM laws. While Democratic-leaning states often lead in enacting MDM regulations, Nym's research data suggests that political affiliation alone does not fully account for the variation in these laws across states. Although Democratic states may trend toward more extensive regulation, the correlation is weak and statistically insignificant, showing that MDM regulation is not simply a "blue state" issue.
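For readers who want to sanity-check that claim against the dataset linked above, here is a minimal sketch of how such a test could be run. It is illustrative only: the file name and column names are hypothetical placeholders, not the actual schema of Nym's dataset.

```python
# Illustrative sketch only (not Nym's actual analysis code): the file
# name and column names below are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

# One row per state: partisan_lean is binary (1 = Democratic-leaning,
# 0 = Republican-leaning); mdm_category_count is 0-7, the number of
# MDM categories covered by the state's legislation.
df = pd.read_csv("mdm_state_laws.csv")

# With a binary regressor, Pearson's r reduces to the point-biserial
# correlation; its p-value tests whether the association is significant.
r, p_value = pearsonr(df["partisan_lean"], df["mdm_category_count"])
print(f"r = {r:.2f}, p = {p_value:.3f}")

# A small |r| with p > 0.05 would be consistent with the finding that
# partisan lean alone does not explain the variation in MDM laws.
```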
The regulatory divide across states reflects deeper tensions in US society. Section 230 reform has turned into a full-blown partisan debate. Democrats often argue that unchecked platforms contribute to societal harm, from disinformation to hate speech. At the same time, Republicans see unrestricted content as essential to preserving individual expression and a free digital market. This polarization has hindered the search for a balanced approach that protects individuals without suppressing legitimate free speech. However, despite ideological differences, MDM regulation often gains bipartisan support on public health and election integrity, where both sides agree on the need for intervention but debate how to best implement it.
State-level MDM regulation: A fragmented approach
Federal MDM regulation, while not cohesive, covers four main areas: public health, election integrity, social media accountability, and national security. During the COVID-19 pandemic, federal efforts focused on combating vaccine mis- and disinformation in collaboration with social media platforms, while election-related mis- and disinformation has been a central concern for agencies like the Cybersecurity and Infrastructure Security Agency (CISA) in mitigating foreign interference.
Without a comprehensive federal approach, MDM regulation in the US has shifted to the states, leading to an inconsistent regulatory environment. State approaches further divide along the lines of Privacy vs. Content Moderation and Election vs. Public Health. Privacy-focused states like California and Virginia impose strict platform accountability and transparency that indirectly curtail misinformation. In contrast, Texas and Florida prioritize anti-censorship laws, limiting platform intervention in user content. Similarly, states like Minnesota and Michigan emphasize election misinformation laws, while New York and Massachusetts concentrate on public health, particularly COVID-19-related misinformation.
Across the US, election MDM laws differ from state to state, exposing how uneven the playing field can be for voters, with some states enacting specific regulations to address traditional misinformation while others focus solely on AI-generated content. States like Maryland, Montana, and Virginia have established laws targeting false claims about voting procedures and election dates, aiming to prevent voter suppression through misinformation. Meanwhile, states such as Minnesota, New Mexico, Michigan, and Hawaii have enacted measures covering both traditional and AI-driven election misinformation, including deepfakes. At the same time, states like Arizona, Colorado, Florida, and Texas have focused their legislation solely on AI-generated election content, emphasizing the need to address deepfake threats without broader election misinformation laws.
This state-level variation creates a patchwork where Americans’ access to information differs based on where they live. For instance, battleground states like Pennsylvania, Georgia, and North Carolina lack specific laws or privacy measures on election misinformation, potentially exposing voters to disinformation. Conversely, California’s comprehensive regulations span all seven identified MDM categories, making it one of the most protected states against MDM. Wyoming and Maine, however, have no significant MDM-related laws, leaving residents vulnerable to harmful content, especially around critical moments like elections and public health crises.
The seven categories of MDM legislation
States have addressed MDM across seven main categories, reflecting the complexity of combating false information in today’s digital world. Here’s a closer look at each category:
1. Privacy and data protection
Privacy laws limit how platforms collect and use personal data, indirectly curbing the spread of MDM by restricting access to sensitive and personal information. The California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) exemplify such laws, granting consumers control over personal information and mandating data transparency. New York's SHIELD Act requires businesses to implement specific security measures to protect consumer data, addressing both privacy and cybersecurity. States like Virginia, Colorado, and Utah follow similar approaches, each with provisions that prevent unauthorized use of data, thus safeguarding against data-driven misinformation and disinformation. The Colorado Privacy Act (CPA) emphasizes individual control over personal data by allowing consumers to opt out of data processing for targeted advertising and profiling. These privacy measures mean less exploitable data for disinformation campaigns, bolstering individual privacy and making it harder for MDM campaigns to target users.
2. Transparency, platform accountability, and anti-censorship
States take wildly different stances on platform transparency and accountability. California's AB 587, for example, requires major social media companies to disclose their moderation policies and report on harmful content like hate speech; such transparency laws enhance accountability by making platform practices more visible. In contrast, states like Texas and Florida emphasize anti-censorship by allowing lawsuits against platforms accused of biased moderation. Florida's SB 7072, for instance, makes it a violation for a social media platform to de-platform a political candidate or journalistic enterprise and requires platforms to meet specific requirements when they restrict user speech. These states, often critical of the perceived liberal bias of major social media platforms, prioritize laws that protect user-generated content from removal.
3. Election misinformation (excluding AI)
Laws targeting traditional election misinformation focus on safeguarding voting procedures and combating false claims and deceptive messaging about election times and locations. States like Connecticut (Gen. Stat. § 9-135) and Hawaii (H.R.S. § 11-391) penalize misleading claims about voter eligibility and voting times, while Maryland (Election Law § 16-101) and Michigan (MCL § 168.932f) criminalize false statements regarding voter registration and the distribution of materially deceptive media. Such regulations protect the democratic process by preventing voter suppression through misinformation.
4. AI regulations (election-specific)
As AI-driven disinformation ramps up, more states are enforcing rules to keep election content honest. A primary focus is the regulation of deepfakes that could mislead voters. For instance, California mandates labeling AI-generated political content, and states like Minnesota, Florida, and Texas have enacted similar laws requiring the disclosure of political deepfakes. Florida (HB 919) and Minnesota (HF 1370) mandate clear disclaimers on AI-generated content in political ads, with violations treated as misdemeanors. Arizona's SB 1359 and HB 2394 mandate disclaimers on deepfake political media published within 90 days of an election. These measures aim to mitigate the influence of synthetic and deceptive media in elections, giving voters transparency around AI-manipulated content.
5. AI regulations (excluding elections)
Beyond elections, states are introducing AI regulations for transparency and fairness in the employment, insurance, and public services sectors. California leads with bills such as the Artificial Intelligence Accountability Act, pushing for assessments of generative AI use, while New York enforces transparency in AI-assisted employment decisions. Colorado restricts AI-driven data use in insurance to prevent bias. These laws reflect a growing consensus that AI must be regulated for fairness and accountability, with states like California and New York setting precedents for responsible AI use.
6. Cyberbullying, defamation, and online harassment
Nearly all states have laws addressing cyberbullying and online harassment, though definitions and penalties vary. Federal law criminalizes interstate online harassment, but six states still lack specific cyberbullying regulations. High-profile cases like those of Megan Meier and Tyler Clementi have driven stricter cyberbullying laws in some states, such as Texas's "David's Law," which expands the state's harassment statutes to make digital abuse a prosecutable offense, setting a significant precedent for tackling online abuse and protecting individuals from harmful digital behavior. These laws are critical for protecting vulnerable individuals, though their inconsistencies highlight the need for a cohesive federal standard.
7. Digital literacy and public education
States have increasingly turned to digital literacy to equip citizens against misinformation. New Jersey and Illinois mandate media literacy in school curricula, preparing students to analyze information critically and to recognize disinformation. Delaware's Digital Citizenship Education Act enforces similar standards, while Texas includes social media's role in shaping opinion in its curriculum. By fostering critical thinking, these initiatives aim to build resilience to misinformation, especially among younger audiences, helping them navigate an information-rich digital world responsibly.
Privacy as a collective shield against MDM
While regulations attempt to address MDM’s various facets, privacy tools offer a vital line of defense. Privacy-focused applications such as NymVPN prevent MDM agents from accessing personal data that can be used for targeted manipulation. However, privacy is most effective as a collective measure; even if one person’s data is protected, MDM campaigns can still target them by analyzing data from others in their network who share similar characteristics, such as location or interests.
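To make that point concrete, here is a toy sketch of how a user who shares no data can still be profiled from the public traces of similar peers. All names and data are made up, and this is a deliberately simplified model of the dynamic described above, not how real campaigns operate.

```python
# Toy illustration with made-up data: why one person's privacy is not
# enough. The target shares nothing, but peers in the same cohort
# (same location, similar interests) leak enough to profile the cohort.
from collections import Counter

# Public traces leaked by others in the target's network cohort.
peer_interests = [
    ["politics", "crypto"],
    ["politics", "sports"],
    ["politics", "crypto"],
]

# A campaign never needs the target's own data: it targets the
# cohort's modal interests and reaches the target anyway.
inferred = Counter(tag for peer in peer_interests for tag in peer)
print(inferred.most_common(2))  # [('politics', 3), ('crypto', 2)]
```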
Widespread adoption of privacy tools creates a decentralized barrier, protecting communities from MDM's reach. When more people use privacy applications, it becomes increasingly difficult for disinformation agents to leverage aggregated data to influence individuals. This approach not only protects user privacy but also strengthens the digital ecosystem as a whole. Ultimately, privacy serves as a balanced solution that protects individual freedoms without resorting to government overreach. As MDM becomes more sophisticated, privacy tools are essential to the fight for a secure, transparent, and trustworthy information environment. In a world where misinformation tactics keep evolving, privacy is a safeguard not just for individual freedom but also for the integrity of information itself.
The US regulatory landscape for MDM is a complex web of state-led initiatives evolving alongside ongoing debates over federal legislation. Each state's approach reflects its unique priorities, whether it emphasizes privacy, election integrity, or platform accountability. As federal reform discussions continue, the role of privacy tools remains essential, offering individuals and communities a protective layer against the increasingly sophisticated tactics of disinformation agents.
Download our full report here for a comprehensive look at the data and a state-by-state breakdown of regulations. Together, informed policy, robust privacy protections, and digital literacy empower Americans to reclaim control over their digital spaces, making them more resilient to MDM’s pervasive influence.