Bottom Line Up Front
- The unregulated use of artificial intelligence (AI) threatens further chaos and erosion of public trust in U.S. institutions and democracy in the 2024 presidential election and beyond.
- According to DeepMedia, a company working to detect synthetic media, three times as many deepfake videos and eight times as many voice deepfakes are being posted online compared to 2022.
- Deepfakes could pose a massive threat to the democratic process in the United States through the spread of false narratives, potentially resulting in the further erosion of public trust in media.
- Former CEO of Alphabet Eric Schmidt has warned that social media companies do not have the capacity to deal with AI mis- and disinformation on their platforms, which could lead to chaos in the upcoming U.S. presidential elections.
Florida Governor and U.S. presidential candidate Ron DeSantis, as well as the Republican National Committee (RNC), have both recently used artificial intelligence (AI) to create deepfake videos and images: synthetic content that looks realistic enough to make distinguishing authentic sources from fake ones almost impossible. In April, the RNC released an AI-generated video of U.S. President Joe Biden depicting a dystopian future and suggesting that another Biden-Harris term would bring war over Taiwan and economic chaos. Although the RNC stated in the video's description that it was AI-generated, many viewers never read descriptions, so the disclaimer likely did little to mitigate the deception. In early June, DeSantis' presidential campaign released a deepfake image of Trump embracing Dr. Anthony Fauci alongside authentic images of the two men together from 2020, demonstrating not only how mixing authentic and AI-generated images can make distinguishing one from the other harder, but also how unregulated AI could become a staple of the 2024 presidential election.

Deepfakes can spread mis- and disinformation and demonstrate the broader utility of AI as a political tool. At a recent meeting hosted by Arena, an AI business, participants discussed how generative AI could produce mis- and disinformation at a pace and scale that electoral campaigns have never experienced. AI remains unregulated, and candidates are not required to disclose when the technology has been used, reflecting its appeal for mis- and disinformation campaigns. The 2016 and 2020 U.S. presidential elections demonstrated how powerful mis- and disinformation can be; the unregulated use of AI makes further chaos and public distrust in U.S. institutions and democracy likely in the 2024 presidential election and beyond.
Replicating a person's voice or image used to cost around $10,000, but due to significant advancements in AI, individuals can now access the software for a few dollars. The potential for abuse thus comes not only from the presidential candidates but also from the public, who can easily access the software and upload and share images or videos of their own. According to DeepMedia, a company working to detect synthetic media, three times as many deepfake videos and eight times as many voice deepfakes are being posted online compared to 2022. This scale is concerning given deepfakes' ability to further disseminate false information and the consequences that can follow. The power of mis- and disinformation was displayed in January 2021, when roughly 1,200 people stormed the Capitol building on the strength of false narratives about electoral fraud. The U.S. political climate remains unstable due to the proliferation and longevity of false narratives about the 2020 election: a CNN poll conducted in March 2023 found that among 1,045 Republicans and Republican-leaning independents, 63% of respondents still believe Biden did not legitimately win. Political tensions remain high, creating a perfect storm for the further spread of mis- and disinformation, which could cause public disarray, both political and economic. AI-generated imagery is vastly powerful, and it is becoming harder to detect. In May 2023, a viral AI-generated image depicted black smoke billowing from the Pentagon, giving the impression that an attack had occurred. The image circulated widely on social media, including via an account impersonating Bloomberg News with a "blue-checked" account, and the stock market briefly dipped, with the S&P 500 dropping 0.3%. The image was later confirmed to be AI-generated, but the episode demonstrated the potential power of AI and its influence on markets.
Deepfakes could pose a massive threat to the democratic process and societal cohesion in the United States and elsewhere through the spread of false narratives, potentially further eroding public trust in media. This was seen in 2020, when Extinction Rebellion, an environmental advocacy group, created a deepfake video of then-Belgian Prime Minister Sophie Wilmès in which she appeared to claim that the Covid-19 pandemic was linked to the "exploitation and destruction by humans of our natural environment." Although the video's description stated it was fake, many viewers watched it unaware of its origins. The spread of mis- or disinformation is not the only risk of deepfake or AI-generated images and videos: as people become more aware of deepfakes, they may grow more inclined to dismiss authentic media. In January 2019, suspicion that a video was a deepfake helped spark an attempted military coup in Gabon, a relatively stable outlier in the region. Gabon's leader, Ali Bongo Ondimba, had not been seen in public for several months due to health issues, and during that time many assumed he had died or been replaced with a body double, so his first public appearance was viewed with suspicion. Bongo's facial expressions in the video seemed different, likely due to a stroke confirmed by his administration, and many in the public assumed the video was a deepfake and that he was in fact dead. Although the failed coup did not result in regime change, it displayed the power of deepfakes, and of the mere suspicion of them, to have real-world impact.
The power of AI as a political tool is being neglected by both policymakers and social media companies, in part due to the numerous challenges of moderating online content. Former Alphabet CEO Eric Schmidt has warned that social media companies lack the capacity to deal with AI misinformation on their platforms, which could lead to chaos in the upcoming elections. Several social media platforms have attempted to regulate the spread of false information but have fallen short of the effort needed to counteract the threat. YouTube previously removed videos touting false claims of fraud in the 2020 presidential election but has since reversed that policy over fears it could curtail political speech. Since Elon Musk's takeover of Twitter, efforts to address the spread of false information on the platform have been reversed or gutted. Federal regulation also remains largely non-existent, and previous calls for political parties to sign an agreement promising not to use deepfakes were rejected. Because campaign speech is protected, candidates can use deepfakes with near impunity, without having to disclose that content is AI-generated. The challenges and potential threats of AI-generated technology go beyond the U.S. context: United Nations Secretary-General António Guterres has recognized that the unregulated nature of AI has international consequences and that efforts must be taken to address the technology's use to spread mis- and disinformation. While AI has facilitated creative solutions and technological advances, its ability to deceive and manipulate on such a large scale underscores the inherent need to regulate the industry.