Commentary

Help People Navigate the Infodemic: Central and Eastern Europe Deserves a Secure Online Space

5 May 2020

Many content moderation measures are only partly applied in the smaller markets of Central and Eastern Europe, where COVID-19-related disinformation and manipulative content travel freely through the online information space.

If crises can claim any positive contribution to society, the ability to speed up processes that would otherwise have taken much longer is one of them.

As the COVID-19 pandemic shifted the daily lives and routines of many to online spaces, the governance of those spaces has been shaped by swift decisions and enhanced cooperation at various levels. What had previously been treated with suspicion and reluctance was suddenly implemented at a surprisingly fast pace.

One example of surprisingly quick action to help tackle the crisis has come from social media platforms. In its efforts to fight unverified claims, Twitter is adjusting search results to make sure that users receive credible and official information when searching for COVID-19.

Smaller markets left behind 

However, closer scrutiny of these platforms, which serve as a key information source for many, shows that these quick actions to promote more reliable information have not been spread evenly across the world.

The most complex measures are still largely implemented in the English-speaking world and countries with the biggest user bases. 

In contrast, smaller markets, such as the countries of Central and Eastern Europe (CEE) with unique languages and a few million users, are still left behind. This uneven attention is troubling; the region has not been free from the rise of divisive and hateful narratives which thrive and often originate on social media platforms. 

Besides Twitter’s adjusted search results, Facebook is alerting users if they or their relatives have engaged with harmful COVID-19-related content, and it provides a link to the COVID-19 Information Centre with reliable sources when a user engages with any relevant groups or events.

Facebook also claims to remove content containing false statements or conspiracies, identified in cooperation with the World Health Organisation, local health authorities and third-party fact-checkers.

Meanwhile, YouTube has been announcing tougher content moderation measures as it keeps removing videos containing health-related misinformation.

While these measures should be acknowledged as a necessary move towards greater user protection against harmful content, they are still far from sufficient to secure a safe and just online space for users.

Firstly, these measures primarily concern pandemic-related content, while the challenge of protecting the information space against harmful content is much broader.

Secondly, only some of these measures are applied in the smaller markets of the CEE countries, despite the sad fact that COVID-19-related disinformation and manipulative content travel freely within these countries’ online information space.

Third-party expertise 

One of the key examples of this shortfall is Facebook’s measure to enhance the work of third-party fact-checkers to help prevent the spread of content which, in these times, has an ever-greater potential to sow distrust, cause harm and kill.

Third-party fact-checkers are a powerful resource, as they generally come from a given country and know the local language and vulnerabilities. Facebook now claims to rely more on their expertise, with faster and more regular removals of content flagged as harmful or false.

However, Slovakia and the Czech Republic each have only one person working as a dedicated third-party fact-checker for their markets, while Hungary, which is particularly vulnerable due to state media capture, has none.

As we recommend in a study of the online information space ahead of the 2020 Slovak parliamentary elections, this case shows how urgently far-reaching reform and coordination are needed, inspired by the COVID-19-related efforts but going well beyond them.

Much more leadership and cooperation at the EU level is also necessary, together with a stronger focus on country-specific and language-specific capacities on the part of the social media giants.

We also suggest that each EU member state should have a designated contact or office within Facebook, responsible for communication with regulatory bodies and for delivering comprehensive databases on advertising and on reported or removed harmful content.

Additional local experts should be involved in assessing and overseeing content moderation, and they should receive the data and the authority to resolve cases in which harmful content fails to be removed from the platform.

For that, stronger cooperation with regulatory bodies and more investment in properly diversified staffing are necessary on the part of the tech giants.

So far, the measures taken prove that adjusting algorithms, prioritising reliable content and enforcing stricter rules against disinformation are all possible; regulators should use this window of opportunity to maintain and build on these efforts.

In the times of uncertainty and chaos ahead of us, it would be a much-needed contribution to help people navigate the infodemic.

This article was originally published on the Visegrad Insight website on 5 May 2020.

Authors

Director, Centre for Democracy & Resilience
