Artificial intelligence (AI) plays an increasingly important role in our daily lives, whether in digital voice assistants, online banking security measures, or the personalization of social media news feeds and search engine results. At the same time, AI is becoming cheaper and more widely available, and it is increasingly used by non-democratic regimes to disseminate state propaganda, push back against dissent, and control citizens, as well as by various non-state actors for economic gain. Questions of technological misuse, ethics, and regulation thus come to the fore.

To address these issues, the Transatlantic Leadership Network organized a working group composed of experts from the private, public, and think tank sectors. Dominika Hajdu and Miroslava Sawiris took part in one of its discussions, on the Misuse of Technology Threatening Security and Human Rights, and prepared a policy brief, the “Transatlantic Approach to AI,” which discusses the use of AI from the perspective of human rights compliance and the protection of democratic principles across the transatlantic space.

The paper argues that a unified transatlantic approach to internet governance is urgently needed: AI-based large-scale surveillance technologies are becoming cheap and readily available; China is employing increasingly severe censorship and social scoring systems in the online environment; and tech giants such as Meta are developing ambitious futuristic visions that drive technology to further blur the dividing line between online and offline spaces. However, both sides of the Atlantic must take bold steps if their approach is to be truly effective in safeguarding human rights and democracy. Drawing on best practices put forward by the European Union’s AI Act, the policy brief proposes that AI systems be governed by clear rules and categorized according to the risks they pose to their users and to society as a whole.

You can read the full version of this policy brief here.