The AI Election
We are about a month away from the U.S. presidential election, an unusual election befitting an unusual year. While our last two elections were the social media elections, 2024 is likely to be remembered as the AI election, perhaps in ways we don’t even yet realize (and well beyond AI images of Taylor Swift posted by former President Donald Trump). The boost in AI adoption unfortunately comes with risks: algorithmic bias perpetuating unfair systems, and mis- and disinformation being produced more cheaply and spread more widely, to name just a few.
Since the introduction of ChatGPT in particular, AI regulation that targets a suite of potential AI harms has quickly become the topic du jour among federal and state policymakers, and the Biden administration has issued an AI executive order establishing new standards for AI safety and security.
AI in political advertising quickly raised concerns about misinformation on a massive scale, and with good reason. We’ve already seen deepfakes ― manipulated images and videos designed to deceive viewers ― emerge in this election, which could not only shape voters’ perceptions of reality but also affect election outcomes. The potential damage of deepfake-enabled disinformation is enormous, and as AI tools improve, our ability to discern lies from truths will only be further challenged. Doubt can be damaging in its own right, if voters become so skeptical that they don’t know what to believe. But with the election just weeks away, it is very unlikely that Congress will pass legislation to defend against deepfakes before Americans cast their ballots.
While some have rushed to take action, others have adopted a ‘wait and see’ approach. Notably, the Federal Election Commission (FEC) announced that it would not propose any new rules or take action on AI used in political advertising this year. This means that this year’s election depends on voluntary actions by tech companies to combat political deepfakes.
In the meantime, while policymakers consider legislation, nonprofits, think tanks and trade associations are working to combat the threats of misinformation and protect the larger public. For example, the Integrity Institute ― a nonprofit dedicated to building a better internet ― is tracking how social media amplifies misinformation as part of its Elections Integrity Program. Its membership includes 300+ integrity professionals, many of whom worked on Trust & Safety teams within digital platforms and have spent their careers improving the internet’s infrastructure to better protect users. Their important analysis reveals how popular sites are designed in ways that incentivize lies and misinformation online, which threatens the integrity of elections globally.
Tech plays an increasingly essential role in our elections. More and more, it’s where voters get their information, consume news and learn about candidates’ positions on key policy issues. Because the internet is now an essential tool for sharing that information, it’s critical that the information be credible ― and that users are able to determine what’s real and what’s not.