Pivotal Moment for Europe: Recommendations from CEE before Commissioner’s Hearings

04.11.2024

1. It is hard to argue with Commissioner-designate Michael McGrath’s assessment that we must “bolster our collective ability to detect, analyse and proactively counter threats” to EU democracies. One crucial weakness within the EU in this respect is the wide disparity in individual member states’ capabilities to counter hybrid threats. The remedy we recommend is initiating a new reporting cycle on measures to counter foreign malign influence (FMI), modelled after the European Semester. EU institutions, member states, and civil society could work together to define the measures necessary for a baseline level of protection across EU countries, assess implementation progress annually, and develop new policies as required. By establishing fundamental standards, especially among the weakest links, we can enhance the EU’s overall security.

2. Another major task for the new Commissioners is the operationalisation and enforcement of the Digital Services Act (DSA). Although parts of the regulation came into effect in August 2023, delays in establishing essential bodies, such as national Digital Services Coordinators, have hindered progress, and adequate resources for implementation at the member state level remain uncertain. A key issue for the research community is data access: a draft delegated act detailing the rules on how researchers will be enabled to study the spread of malign content on platforms is currently under public consultation but needs swift enactment. The Commissioners should demonstrate both the commitment and the capacity to fully enforce the regulation and hold non-compliant actors accountable.

3. One of the core opportunities, and challenges, for the Commissioners is to leverage existing regulations, namely the AI Act and the DSA, to address AI systems not currently classified as high-risk but with strong potential for misuse to manipulate and harm citizens and exploit societal vulnerabilities. While transparency through labelling synthetic content is important, it is insufficient on its own, as labelling can be circumvented unless watermarking applies universally. Essential measures include bringing producers of large generative AI models within the scope of the DSA and evaluating such models, along with algorithmic recommendation systems, more frequently and earlier than 2028. These evaluations should follow the principles set out in Article 7 of the AI Act, examining, among other factors, potential harm to health and safety and adverse impacts on fundamental rights, particularly for vulnerable groups.