
Algorithms Cannot Do It All


The image of a burned car accompanied by the caption ‘Voting hasn’t worked, take to the streets’ is just one example of an ad targeted at residents of Northern Ireland who live on both sides of the Peace Wall in Belfast. 

Global Witness’s latest study shows that ads clearly breaching Facebook’s community standards, including hate speech and incitement to violence, can thrive on the platform without much trouble.

Hateful political ads remain easy to purchase on social media. The NGO tested whether ads breaching those standards could be placed on Facebook; every single one was approved for publication.

Fortunately, this hateful campaign never reached the public, as it was merely an experiment by the NGO to test whether the platform’s internal processes comply with its own policies.

Given the lack of operational transparency, it cannot be clearly established whether the approval was generated by automated tools or by human review, but since the posts clearly violated the company’s community standards, the decision was most likely made by algorithms.

Facebook’s algorithms still promote disinformation in Myanmar. A recent investigation confirmed that, even in the wake of the Burmese junta’s genocide against the country’s Rohingya Muslim minority, during which Facebook was used to incite violence against the Rohingya, the platform’s algorithms continue to amplify disinformation produced by the Myanmar military dictatorship that incites violence against anti-government, pro-democracy protesters.

Myanmar and the ads experiment in Northern Ireland are not isolated cases. The January 6 attack on Capitol Hill in Washington and the Israeli-Palestinian conflict are further examples of systemic failures in digital platform operations.

COVID-19 conspiracy theories promoted online resonate in people’s minds. The impact of an unhealthy digital space, flooded with vast amounts of disinformation and hate speech, is reflected in public attitudes: according to GLOBSEC Trends 2021, 28% of respondents in Eastern Europe believe that COVID-19 is a ploy to control the population.

These beliefs are not harmless. The high prevalence of COVID conspiracy theories is linked to people’s willingness to get vaccinated. While conspiracy narratives are nothing new, the speed with which they can spread and target the most vulnerable users online is an entirely new phenomenon.

That said, improvements have been made, but more needs to be done. Certain initiatives, including the EU’s voluntary Code of Practice on Disinformation, have brought significant gains by prompting platforms to improve their services, yet a fair and durable solution still needs to be found. Discussions on regulating digital platforms are beginning to gain momentum in the US, but the issues described above continue to wreak havoc on the information space.

We cannot leave it to algorithms

Years of case studies prove that digital platforms’ efforts to improve the algorithms that handle hateful and manipulative content are insufficient. To address the problem, two major structural changes are needed:

1) There needs to be general awareness of the full extent of the problem, including both the successes and failures of algorithmic content moderation, which must be transparently reported to the public. The platforms’ existing regular reports should be complemented by independent audits, which should not only verify the platforms’ claims about how much removed content was detected by algorithms, but also examine how much slipped through and potentially affected thousands of users.

2) We cannot prefer one market over another. The independent audits must be disaggregated at the country level to show how the measures are being implemented within each society, each with its own unique problems and languages. Does the recognition of hate speech work the same way in the UK as in Montenegro? Is the attention dedicated to emerging issues and violent or hateful campaigns proportionate to the size of the community?

The EU as a testing ground

Residents of the European Union might soon experience such changes: independent audits of the platforms’ actions against illegal content are currently proposed within the Digital Services Act. At the same time, the EU Commission’s guidance on the Strengthened Code of Practice on Disinformation proposes a robust monitoring scheme with KPIs measuring the scale and impact of the platforms’ actions on a country-by-country basis.

While the EU legislation provides some light at the end of the tunnel, the rest of the world cannot be left out. The principles of transparent and independent auditing and monitoring can be applied everywhere, irrespective of country size and regime type.

The transatlantic community should work towards common action, as the strength of the alliance stands to benefit from the Biden administration and the newly established Trade and Technology Council when it comes to boosting cooperation. Now is the right time to start talking about Transatlantic Principles for a Healthy Online Information Space; however, we also need to look beyond the EU and NATO and consider a UN Code of Practice on Disinformation.

Discussions about setting rules appropriate for the internet in the 21st century are ongoing around the world. The approaches taken clearly reflect the historical, political, economic and social characteristics of individual countries, with autocratic nations introducing legislation that limits free speech and disregards human rights.

It is of the utmost importance that democracies build large alliances that will work together to define the future of the digital space, which will in turn define the quality of democracy in the decades ahead.

*This text was originally published at New Europe.

Authors

Director, Centre for Democracy & Resilience

Senior Research Fellow, Centre for Democracy & Resilience
