Press release

GLOBSEC 2018 Bratislava Forum Live Coverage: Day 3

19.05.2018
Globsec Forum 2018

Stay tuned for highlights, watch our livestream and Facebook page, and follow #GLOBSEC2018 on Twitter!

11:00 AM Explosive Data: Cyber Threats to Democracy

The amount of sensitive data collected by digital companies is becoming disturbing. Imagine insurance companies soon differentiating their offers based on data about individual lifestyles - that would make Big Brother look childish, asserted Michael Chertoff, former U.S. Secretary of Homeland Security, in a conversation with Steve Clemons from The Atlantic.

“You are far more ubiquitously watched with government and private practices combined”
Michael Chertoff, former U.S. Secretary of Homeland Security

The debate that followed engaged Marietje Schaake, Member of the European Parliament, Olaf Kolman, Chief Internet Technology Officer of the Internet Society, and Samir Saran, Vice President of the Observer Research Foundation, in a lively exchange.

A majority of people surveyed express growing distrust of social media and online platforms. The effects of disinformation on society are now well known from notable examples such as the interference in the 2016 American elections. In the face of this new threat, the global community must come together to create a policy framework that protects democratic institutions from cyberattacks.

At the same time, it was raised that democracies have overlooked the extraterritoriality of internet rules that do not match national laws. This disturbs the right-wrong compass and allows for a negative backlash against the otherwise fantastic development of technology. But technology only puts some already present problems on steroids rather than creating an entirely new set of threats.

Responses from the audience recognised the seriousness of the discussion; one member even called for moving beyond simply admiring the problem towards a more serious discussion of defence against cognitive attacks of mass manipulation and influence. Another pointed to the new challenges related to the internet of things and asked whether new regulations need to be prepared because of its vulnerability to external attacks.

How can countries secure not only the digital infrastructure for online processes but also the very values and integrity of our democratic societies? What are the best practices for protecting campaign data and communications? A very engaged debate among panellists and the audience signalled how serious and urgent the problem at hand is.

 

9:00 AM Too Much Intel, Too Little Action
Essentially, our skills for comprehending the vast swathes of information obtained by intelligence sources are outdated, rendering the information useless. Big data analysis has therefore become an essential tool to fill the gaps left by our conventional methods. To understand the issue better, one needs to be aware of the difference between information and intelligence: the former is assessed against what else you know and authenticated to become the latter. In other words, each piece undergoes quality and relevance processing before it becomes intelligence.

“The biggest fault we have is that we still cannot clearly distinguish which information is relevant.”
Michael Chertoff, former U.S. Secretary of Homeland Security

In this session, distinguished panellists including Baroness Neville-Jones, Member of the House of Lords, Michael Chertoff, former U.S. Secretary of Homeland Security, and Hans-Jakob Schindler, Former Coordinator of the ISIL (Da’esh), Al-Qaida and Taliban Monitoring Team for the UN Security Council, agreed that intelligence has undergone global changes - from terrorism methods and intelligence methodologies to the threats themselves.

The session, moderated in lively fashion by Frank Gardner, Security Correspondent of the BBC, made the case that counterterrorism should never be politicised, as it is about human lives. It also showed that global cooperation improves intelligence, so sharing information and assessments, as well as exchanging methodologies, is crucial here.

“You cannot expect closer cooperation of intelligence services if it lacks consistency at the top level. At the operational level, I think the cooperation is in good condition.”
Baroness Neville-Jones, Member of the House of Lords of the UK

Panellists also argued that lessons have been learned from 9/11, where the significance of the information was wrongly assessed and the data was poorly navigated. According to Baroness Neville-Jones, it was similar in the case of Iraq, where data was badly assessed and could not bring success in an atmosphere of enormous political pressure. Such a situation is therefore unlikely to happen again because the lesson was well learned, although some failures cannot be excluded.
Panellists were all in agreement that the worst security threat nowadays is posed by nuclear powers and the misuse of nuclear weapons. Surprisingly, in an opinion poll conducted at the end of the panel, the audience and internet users identified migration as the greatest challenge for Europe.

“Whatever Brexit means, we have to minimize the impact on security.”
Hans-Jakob Schindler, Senior Advisor, Counter Terrorism Project

This shows that social and political problems may push people towards radicalisation and, at a later stage, terrorism. Baroness Neville-Jones therefore argued that good living conditions must be created so that ill-motivated people are not tempted.
Michael Chertoff briefly explained that the unpredictability of the US administration makes the world less secure, and that President Trump’s confusing tweeting and lack of strategy may become a danger. He also mentioned that the controversy over Gina Haspel’s nomination should not be so harsh, as she is a career professional, which is reassuring because the appointment is not subject to political pressure.

You can watch the session here.

9:00 AM Competition vs Cooperation: Stakes in the AI Race

Written by: Galan Dall, Visegrad/Insight

There are numerous key players in the AI field today: Singapore, the U.S., Japan and certainly China. Comparing Japan with China offers insight into how diversely AI is being applied. China has a very clear strategic plan to use AI as a driver for industry and to create future applications, while Japan conceives of AI transforming society to the point where humans will live side-by-side with robots - a Society 5.0.

Traditionally, the world has looked to the US as the force pushing technology and science, but with AI this has changed. China was not part of the discussion on how the Internet was developed; it was only able to implement the technology, which it has done quite well. With AI, it wants to be one of the players that decide how the technology develops. That being said, this competition could be healthy - it could force the US or other countries to step up their efforts in the field - but there is a worry that it could lead to a race with unintended consequences.

A key difference between China and the US is the collaboration between the government and the private sector. China even has a much-maligned security law which effectively means the government controls all the data in the country. On the positive side, the government can share this information with start-ups or universities, which can encourage collaboration. On the negative side (and this is also a consequence of the strategic plan), the direction of research and innovation is chosen by the government, which stifles alternative development.

By comparison, 3,100 Google engineers recently signed a letter criticizing the company’s collaboration with the government, specifically with the military. This has led to a discussion of standards and ethics related to the use of AI, which, after a brief exchange, arrived at the conclusion that AI should do no harm.

“It does not mean that AI is going to replace people. AI is helping us to do things that are boring for us.”

Tze Yun Leong, Director, AI Technology, AI Singapore

Singapore has created a widespread initiative to use AI to help solve issues related to health. Likewise, AI could greatly improve efficiencies in smart cities, helping citizens with transport, access to emergency services, traffic and resources. That being said, we have to give up some moral authority when we start using AI, which could create a backlash; this is especially so in “grey” areas that cannot be handled by a binary decision-making process.

“We gave AI a lot of the skills that we have, but AI still does not have all of them and is still not doing better than us.”

Danit Gal, Project Assistant Professor, Global Research Institute, Keio University

One crucial issue is the problem of trust. It was mentioned that trust should be developed through a healthy distrust of the machines and an understanding that AI is still in its infancy and will make mistakes. We need to help the machines learn from these errors, and, in the process, a relationship of trust can be developed.

Whether or not AI will make the world more secure is uncertain. When it comes to sustainability, most likely so, but if AI is used to create more and more efficient weapons, probably not.
