Publication

AI in the Visegrad 4: Emerging Strategies, Uncoordinated Approaches

03.12.2021

Contemporary research has hailed the ‘transformative potential’ of Artificial Intelligence (AI) across all aspects of society. Yet many states are underprepared and lack the proper legal frameworks to address the challenges of AI in the public sphere. This policy paper provides a comparative overview of the Visegrad countries’ (V4) current AI landscape, charting key political and organisational developments and highlighting key public sector actors. It illuminates existing AI use cases across government and other areas of society. It notes the social, economic, political, and regulatory issues that the increased use of AI presents to the V4, while highlighting some of the security challenges associated with the growth of AI across society. It poses the following questions:

  • What are the current V4 approaches to AI in the public sector?
  • What are the current uses and risks?
  • Can the V4 nations benefit from a unified approach to AI in the public sphere?

Current V4 AI Situation

The current AI situation in the V4 states might best be characterised as being in its infancy but rapidly developing. This reflects the use of AI across many small and medium-sized states. The research notes that the V4 states currently lag behind Western Europe in terms of AI research output and AI use in public services. They are even further behind the United States and China, the two ‘big players’ in AI. This is to be expected, given the limited funding available to small and medium-sized states. There are a small number of active AI use cases in the V4, perhaps most notably the controversial (and now discontinued) unemployment profiling system utilised in Poland. There is also some noted divergence in the policy priorities of the V4 nations at the EU level.

Risk Analysis

Key risks posed by public-facing AI in the V4 include concerns around supply chain security and the role of China as a rising power in the field of AI. There are ethical as well as security concerns around prospective AI collaboration with China. There are also notable issues surrounding the regulation of AI development and calls for deregulation in the region; striking a balance is likely to be key in addressing these concerns. The issues of monopolisation and dependency upon big tech are equally problematic, as is the ‘brain drain’ of top talent, most notably to Silicon Valley in the United States.

This research concludes that there is currently very little coordination or collaboration between the V4 states in the field of AI.

Policy Recommendations — What can be done to facilitate AI growth in the region?

  • Financial support for PPPs (Public-Private Partnerships)
  • Improved governance through closer links with the tech community and a more technocratic approach based upon expertise
  • Support for the startup industry generally
  • Creation of subsidised innovation spaces providing workspace and networking opportunities (akin to Garage 48 in Tallinn; these can also support urban regeneration)
  • Hosting events (such as hackathons) with a specific focus on AI service development

Policy Recommendations — How can AI growth in the V4 be ‘Ethical, Trustworthy, and Secure’?

  • Ensuring proper oversight and regulation of development
  • Public outreach, education, and confidence building
  • Public consultation in the implementation of services

Find the PDF version below to read the full report.

Author: Alex Hardy, PhD Candidate, Royal Holloway, University of London

*Alex Hardy is a visiting Think Visegrad scholar, and this policy paper was produced within the Think Visegrad Non-V4 Fellowship programme.