
Event Recap: "Election Era 2024: Navigating the Digital Divide - AI's Dance Between Threats and Defenses"

22.02.2024

On February 20, GLOBSEC hosted a truly global online discussion titled "Election Era 2024: Navigating the Digital Divide - AI's Dance Between Threats and Defenses." This virtual event brought together experts from across the world to discuss the dual role of AI in election integrity. The panel included Sophie Murphy Byrne of Logically, Miraj Chowdhury of Digitally Right, Gülin Çavuş of Yapay Gündem, and Summer Chen, the Editor-in-Chief at the Taiwan FactCheck Center.

Jana Kazaz of GLOBSEC moderated the discussion, opening with remarks on AI's rapid development and its impact on democratic processes and the information landscape. She noted in particular the challenge that AI-manipulated content poses to election integrity, citing the recent Slovak elections as an example, and invited the speakers to share their experiences and strategies related to AI's misuse, as well as its potential benefits, in the context of elections.

Sophie Murphy Byrne shared insights from the Argentine presidential elections, emphasizing that “traditional disinformation campaigns, aside AI, should remain our concern.” She warned, however, of “the potential for generative AI to exacerbate these challenges by making disinformation campaigns cheaper to design, easier to deploy, more scalable and supported by hyper-realistic and hyper-targeted content.” At the same time, she noted that AI can also help counter electoral interference, pointing to an AI tool developed by Logically that helps detect and counter disinformation.

Miraj Chowdhury compared this to the elections in Bangladesh, where AI played a significant role, particularly through fraudulent videos of female candidates and AI-generated news anchors spreading conspiracy theories about US interference in the elections. He also stressed the disparity in access to AI tools, which amplifies disinformation in lower-income countries, noting that “people with more resources and power have a better ability to produce AI-generated political disinformation because they have the tools, they have the skilled people doing this for them, resulting in a kind of disinformation industry amplified by AI.”

Gülin Çavuş followed up on the case of Bangladesh and discussed how the topic of AI has been exploited by powerful individuals in Turkey, especially politicians, “to further fuel the mistrust of people to what is real and what is not.” She also confirmed that, similar to the situation in Bangladesh, AI-manipulated content was often used in smear campaigns.

Building on Gülin's remarks, Summer Chen shared that the recent Taiwanese elections likewise faced a flood of manipulated content, including audio, video, and deepfake pornographic videos. She then outlined strategies the Taiwan FactCheck Center uses to counter such content, such as "building a close connection with their audience through tipline and chatbots, workshops empowering journalists and chief editors, making and promoting playbooks and pre-bunks of election hoaxes and building AI expert community that helps with raising awareness before elections, and experimenting with popular forms of content like TikTok videos trying to involve also influencers and celebrities".

Key Takeaways 

  • AI presents a new threat, but our focus should not solely rest on AI-generated disinformation. Coordinated inauthentic behaviour continues to disseminate significant amounts of disinformation across platforms before elections. 

  • Uncovering the actors behind AI content, such as deepfakes, and understanding their intentions and motivations is crucial. This requires investigative efforts aimed at both the content and the creators. 

  • AI media literacy is essential. In contexts like Turkey, confusion about AI among the public and politicians underscores the need for education. Furthermore, debunking AI content demands specialized knowledge, emphasizing the importance of AI media literacy for professionals like chief editors and journalists, as demonstrated by workshops in Taiwan. 

  • Civil society organizations and fact-checking entities should prepare playbooks outlining possible narratives that may arise during election campaigns and strategies for debunking them. This proactive approach enhances preparedness to combat misinformation effectively. 

  • While fact-checkers play a vital role in identifying AI-driven disinformation, major platforms must shoulder their responsibility and take action on information flagged by fact-checkers. Collaboration between fact-checkers and platforms is essential to mitigate the spread of misinformation and safeguard the integrity of the electoral process. 

If you weren't able to attend the event, you can catch up by watching the video recording below!
