In keeping with previous years, the content discussed and delivered at GLOBSEC Bratislava Forum 2018 was based around key thematic clusters. What follows are summaries of talking points, important comments, photographs and social media commentaries related to panels covering: AI in conflict; Blockchain; AI in education; remote controlled terrorism; the Dark Web; the global AI race; and the age of bots.
AI in Conflict: Hyper War No Longer Sci-Fi
- Militaries are at the start of adopting AI solutions and plan to deploy less controversial applications first (training, logistics, etc.)
- NATO maintains a doctrine that requires a human in the decision loop. Other players, however, seem less concerned with this requirement and thus create pressure on decision time. Minimizing this disadvantage requires innovation in how officers are prepared
- It is unclear whether AI will play a stabilizing or destabilizing role in the world’s security system: it may provide early warning and better monitoring, but it also empowers various non-state groups
“I have way bigger worries about political interoperability than about military compatibility.”
Denis Mercier, Supreme Allied Commander Transformation, NATO
Summary: The adoption of AI in the military has many facets. The public is mostly concerned with autonomous systems that may make the decision to kill. It seems, however, that AI will at first be more useful in non-combat tasks, including training, logistics, monitoring and analytics. There are also high hopes that AI can provide interoperability across the defense systems of NATO members, which are not compatible today. This approach is also aimed at building the public’s trust in AI before moving to more controversial areas like offensive action.
The US currently leads in AI development and will continue to do so for some time, but preserving its dominance requires a unified strategy and investment; otherwise it will be surpassed by China, which is investing heavily in the area.
AI may provide substantial benefits on the battlefield by delivering quick intelligence and flagging anomalous behavior by other actors. This, however, may upset the balance of nuclear deterrence that has preserved peace so far. On the other hand, the watchful eye of partners may deter aggressive behavior.
The audience was mostly concerned with the opaqueness of the technology: we do not know exactly how AI works or how it arrives at its conclusions. This leads to multiple problems, including the ability of an enemy to feed the system doctored data in order to cause a malfunction, potentially turning the weapon against its wielder.
This in turn raised concerns about whether such technology should be militarized at all; perhaps, instead of an arms race, we should reach for de-escalation.
RT @GLOBSEC: @NATO #General Denis Mercier: “#China is better than us in utilizing already existing #technology in #defense.” #panel on #AI in #conflict: #Hyper #war no longer #scifi #GLOBSEC2018 #GoodIdeaSlovakia #cybersecurity pic.twitter.com/TnDgR8Rdr7
— Vladimir Vano (@vladimirvano) May 17, 2018
GLOBSEC Talk: Parallel & Undercover World of Blockchain
- Blockchain allows for unlimited, free-market, decentralised competition by being not just disruptive but also foundational. In order to create the world of tomorrow, which will be shaped by hundreds of thousands of transactions a second, the current hierarchical structures will have to be abandoned.
- There is no way to foresee what this future world will look like, as the blockchain community will have to grow naturally and in unpredictable ways
Summary: Blockchain will be a platform, much like the internet today. However, individual governments, organisations or companies (banks) will not be the institutions that give something value or validate that something is real; instead, it will be a set of algorithms which are unbiased and unable to make a mistake.
Hence, restrictive regimes will lose control over their populace, for example. Apparently, blockchain can be the basis for solving almost any issue through the computation of big data and the creation of “digital cities”: communities which focus on developing blockchain technology but whose members are generally unknown to each other. They are categorised into five groups: developers, miners, wallets, exchangers and merchants. The trust built into the system rests on the validation of smart transactions.
Two weak points were mentioned in response to a question from the audience. First, blockchain has an issue with scalability, as there are not enough people working on the platform. Second, government regulation is slowing down a process which should be moving and developing much faster.
Considering the fact that this session came at the end of the day, the audience’s enthusiasm for the topic and presenter was notable. It was a very engaging discussion and presentation, but there was some hesitance from the audience to accept everything being said about the new technology. However, this was addressed by the presenter with the analogy of trying to explain the current capability of the internet to someone from the early 1990s, who couldn’t possibly fathom the extent to which the technology affects our lives today.
Education Disrupted: Building Skills in the Age of AI
- Education is about learning how to think and developing the ability to work with others (both individually and in a team); training is focused on vocational activities. Unfortunately, these two are often conflated by governments.
- Currently, AI is not developed to a level where it could replace teachers; however, it can aid in learning especially in specific areas (language, etc.).
- While STEM is important, STEAM (adding in the arts) is far more so.
Summary: The idea of learning/education as a process we go through only when we are young is outdated and wholly unrealistic. Even those with advanced degrees need to continue learning throughout their careers or they will be replaced by more capable (and likely younger) colleagues.
“Education is about learning how to think and develop the ability to work with others. Training is focused on vocational activities.”
Peter Vesterbacka, former Mighty Eagle, Angry Birds
Numerous examples came from the comparison of the Finnish and US school systems. The US has many wonderful schools, but the quality of any given school district is largely determined by the general income level. Of course, there are other factors, such as family and neighbourhood environment; in Finland, by contrast, the best school is always the closest one, regardless of demographics.
To prepare the next generation, we need systemic changes to the way we view and understand education. As an example, some newly-arrived parents might be upset if their child is in a Finnish kindergarten where the children seemingly only play all day long: children learn through play, and exemplary teachers know this.
A member of the audience asked whether the very formulation of schools (the rigidness of the institutions) is of value; essentially, will they be necessary in the future? Many thought it possible that schools slow down the learning process, and they often (still) focus on memorisation through hours and hours of homework, a practice rendered obsolete by the advent of Google.
What is the proper education? "It should inspire kids to learn&remember it even in next years. It allows integration&correlation of knowledge. If kids can create next logical steps-it's done properly?" @Larry Schuette, inspiring debate #Globsec2018 pic.twitter.com/OYkuLIaiTD
— Ludmila Majlathova (@LudmilaMajlath) May 18, 2018
It was agreed that a “proper” education will be in some way personalised to the needs and learning styles of the students. If a student learns best through watching YouTube videos, so be it; we no longer need to educate with the teleological aim of achieving the highest test scores.
If we focus on making people well-rounded, able to think critically and work with others, then vocational skills can be acquired through nano-degrees, which take little time and can be pursued when a job or occupation presents itself.
The audience was very engaged in the topic and posed many thought-provoking questions which added considerable value to the discussion.
Remote-Controlled Terrorism: What’s the Price of Freedom?
- The amount of coaching that occurs could be staggering. The “coach” could direct the terrorist several times a day, whenever unexpected developments occur.
- AI has considerable difficulty distinguishing terrorist propaganda from important journalism covering the same events. A human element is therefore still required, which is both slow and prone to error.
The communication structure between a terrorist and their coach is multifaceted. While they communicate mainly via the Internet, they operate across many platforms (and perhaps even meet in person on occasion).
Recruiters offer would-be terrorists support similar to that of a close-knit friendship group or family, often communicating several times a day over a period of months and even giving step-by-step instructions when spanners are thrown in the works of the original plot. In most cases, the coaches give the terrorists a sense of belonging and group identity that is missing from their lives.
Social media platforms are working to delete terrorist messaging, but the difficulty surrounding this is immense. First, often the language used is highly localised, meaning that a comment directed towards someone in London might be harmless, but when the same expression is used somewhere else, it could lead to someone being targeted. Also, AI is less helpful here as the machines have difficulty distinguishing between journalism and terrorist propaganda.
The audience was appreciative of the panel, and two comments led to considerable discussion. The first concerned the limits of companies’ self-regulation and the extent to which they need to comply with governments around the world in order to thwart potential threats. Governments have different definitions of offensive material, so companies like Facebook need global standards which may go against the wishes of individual governments. The second concerned the ability of some terrorist organisations to use the Internet when the infrastructure in the regions in which they operate (Syria, Nigeria) is underdeveloped or completely lacking. Usually this is done with great delay, by getting the videos to another area with better infrastructure.
Globsec Talk: Depth and Darkness of the Web
- To determine how you feel about Darknets, you simply need to answer one question: do we have the right to have unrecorded conversations? If yes, then the very existence of Darknets is more complex and nuanced than much of the media discussion would have you believe.
- Darknets do not fully hide your identity; there are numerous attacks which can reveal your IP address
- When the internet stops serving the interests of consumers and serves those of businesses instead, new technologies will develop to meet the unmet needs. This is what is happening with Darknets.
The creation of Darknets is the expression of creators who want control over their data. It seems to originate from a lack of trust. Many of today’s desires for the internet, such as encrypted communication, the right to be forgotten, and the wish not to have one’s data sold to and utilised by advertisers, have led to the creation of these Darknets.
Facebook created a version of its website for the Tor network so that dissidents in repressive countries could communicate with others around the world.
People have varying opinions on Darknets, and so do governments. The US State Department (through the Navy) actually funds the Tor network, while the FBI is focused on bringing it down. Meanwhile, many of the sites on the Tor network are actually run by law enforcement to gather data on users, so the issue is far more nuanced than it is often portrayed.
The audience was at times combative about the definition and purpose of Darknets. Many had conflicting ideas about their value and use as well as the accompanying ethical issues.
Competition vs Cooperation: Stakes in the AI Race
There are numerous key players in the AI field today: Singapore, the U.S., Japan and certainly China. Comparing Japan with China offers insight into how diversely AI is being applied. China has a very clear strategic plan to use AI as a driver for industry and future applications, while Japan conceives of AI transforming society to the point where humans will live side by side with robots: a “Society 5.0”.
Traditionally, the world has looked to the US as the force pushing technology and science, but with AI this has changed. China was not part of the discussion on how the Internet was developed; it was only able to implement the technology, which it has done quite well. With AI, China wants to be one of the players that decides how the technology develops. That said, this competition could be healthy, forcing the US and other countries to step up their efforts in the field, but there is a worry that it could lead to a race with unintended consequences.
A key difference between China and the US is the collaboration between the government and the private sector. China even has a much-maligned security law which effectively means the government controls all the data in the country. On the positive side, it can share this information with start-ups or universities, which encourages collaboration. On the negative side (and this is also a consequence of the strategic plan), the direction of research and innovation is chosen by the government, which stifles alternative development.
In comparison, 3,100 Google engineers recently signed a letter criticizing the company’s collaboration with the government, specifically the military. This has led to a discussion of standards and ethics related to the use of AI, which, after brief debate, arrived at the conclusion that AI should do no harm.
Singapore has created a wide-ranging initiative to use AI to help solve issues related to health. Likewise, AI could greatly improve efficiency in smart cities, helping citizens with transport, access to emergency services, traffic and resources. That said, we have to give up some moral authority when we start using AI, which could create a backlash; this is especially so in “grey” areas that cannot be captured by a binary decision-making process.
“We gave AI a lot of skills that we have but still AI is not having all of them and AI is still not doing better than us.”
Danit Gal, Project Assistant Professor, Global Research Institute, Keio University
One crucial issue is the problem of trust. It was suggested that trust should be built by starting from distrust of the machines and understanding that AI is still in its infancy and will make mistakes. We need to help the machines learn from these errors, and in the process a relationship of trust can be developed.
Whether or not AI will make the world more secure is uncertain. When it comes to sustainability, most likely so; but if AI is used to create ever more efficient weapons, probably not.
Age of Bots and Robotisation of Truth
The proliferation of the Internet was once thought to produce better-informed citizens and voters. Nowadays, however, we witness wide-ranging abuse of these platforms and freedoms to undermine democratic values and processes. The possibilities of public mobilisation have changed and are misused to sway public opinion and erode trust in institutions. Automated bots generating disinformation, false narratives and propaganda have proven far too tempting for anti-democratic political actors. Such activity already has far-reaching real-life consequences for political parties, media, public institutions and other democratic anchors.
Under the moderation of Stephanie Liechtenstein (Reporter, Wiener Zeitung, Vienna), four experts were set on stage: Jānis Sārts, Director, Strategic Communications Centre of Excellence, NATO, Riga; Scott Carpenter, Managing Director, Jigsaw, Alphabet Inc, Washington, D.C.; Laura M Rosenberger, Director, Alliance for Securing Democracy, Washington, D. C. and Daniel Milo, Head of Strategic Communication Programme, GLOBSEC Policy Institute, Bratislava.
In the first half of the panel, the discussants tried to define what one should understand as (ro)bot activity. Laura claimed, and others reinforced, that bots are just one piece of the information ecosystem, created to promote certain topics and manipulate users. All agreed, however, that not all bots are necessarily bad: from amber alerts for missing children to page indexing, there are many positive bots as well, doing useful jobs or performing internet maintenance functions.
Daniel Milo weighed in with the proportion of bot-related traffic compared to human-generated traffic: according to his estimates, a high percentage of online traffic is created by bots and crawlers, meaning all sorts of automated systems, though not necessarily malware. He cited another piece of research claiming bots account for more activity on Twitter than humans do. Finally, referring to the 2016 US election meddling, Milo stated that there were five pro-Trump bots for every pro-Hillary bot: political bots demonizing political opponents and their supporters, or simply demobilizing parts of the voter base.
Sarts said that 85% of Russian-language Twitter content is created by bots, while the proportion is 40% in English. As a result, decision-makers may act upon a deeply biased perception of the “will” of the people, as they get a distorted picture through social media. Scott added that this misrepresentation of facts in turn distorts reality, as it becomes very difficult to get a complete picture of what is actually out there. He remarked that even cross-platform discussions between corporations are rare, and that cross-border discussions among national authorities and investigative journalists should also be encouraged.
This matters especially because, in the first instances of successful disinformation campaigns, even the mainstream media picked up quotes from fake accounts or fake personas. The panel reminded us how easy it is to organize rallies through fake calls on social media.
Laura’s later remark highlighted the complexity of this ecosystem: ads purchased by the Saint Petersburg-based IRA actually promoted an application which not only installed the promised feature but also set up a browser plugin that later generated automated traffic to disinformation websites, driving up their hit counts, credibility and so on.
As for what one might do against this wave, two points were shared among the panelists. On the one hand, it is necessary to push for more transparency, supporting online platforms in flagging bot-created content. On the other hand, Scott emphasized that while fact-checking has gained significant momentum, the fundamental question is whether individual users are actually curious about facts at all; if they are not, this is not an online problem but a much deeper question of social organization and responsibility.