What’s the one topic that would have Hollywood, Vladimir Putin, top military brass, ambitious Chinese businessmen and others talking to each other for days on end? The answer is Artificial Intelligence (AI). Debates concerning AI are wide and varied, but particularly frenzied when it comes to warfare. To this end, the prospect of robots and autonomous machines outstripping and eventually dominating humanity remains the stuff of nightmares, not to mention the occasional sci-fi blockbuster.

However, overhyped debates about the challenges and opportunities posed by autonomous machines divert attention from the fact that AI is already changing our approach to national security in a number of less mind-boggling but far-reaching ways. Three developments deserve the limelight for their profound implications on the global and military stage: AI’s role in hybrid warfare, the reversal of relations between the military and the tech sector, and a recalibrated balance of power between allies and adversaries. It is not the remote promise of autonomous superintelligence that is propelling these processes, though, but forms of narrow AI that have already been developed or are currently works in progress.

Power Plays

When it comes to all things political, Vladimir Putin is a particularly vocal advocate of AI as a tool for power projection. As he sees it, whoever masters the colossal opportunities and threats posed by AI first will eventually dominate the global political landscape. The Russian President is not the only world leader thinking (and acting) along these lines. Last July, China released an ostentatious plan to become a front-runner in AI by 2030, an ambition that merely underpins Beijing’s determination to become tomorrow’s leading economic and military power. For its part, the United States has put forward a “Third Offset” strategy that is less a detailed roadmap and more a focus on how intelligent systems might replace nuclear weapons and precision-guided weapons as the ultimate guarantor of its security.

To be sure, world powers increasingly turn to AI to provide solutions for boosting power projection and delivering combat advantage, including when it comes to the evolving concept of hybrid warfare. The beauty – and danger – of AI is that it can be used for a wide range of purposes from weapon development and intelligence to logistics, training, and influence operations.

Not Just Killer Robots

The passion for AI stems from the achievements of the commercial sector. Think about the time the bank sent you a fraud detection message because a transaction looked different from your regular behaviour. Or how doctors monitor, allocate resources and adjust responses to this year’s outbreak of Aussie flu. How does Amazon know about your interests and just the right time to tell you about an interesting new product? Indeed, AI’s use doesn’t end there, with energy, logistics, education and other sectors also relying on this technology.
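The fraud-alert example above boils down to a simple idea: compare each new transaction against a customer’s historical behaviour and flag outliers. The sketch below illustrates that idea with a basic statistical threshold; the data, function name and threshold are invented for illustration and bear no relation to any bank’s actual model.

```python
# Illustrative sketch: flag transactions whose amount deviates sharply
# from a customer's spending history, using a z-score threshold.
from statistics import mean, stdev

def flag_anomalies(history, new_transactions, z_threshold=3.0):
    """Return transactions whose amount is a statistical outlier vs history."""
    mu = mean(history)
    sigma = stdev(history)
    flagged = []
    for amount in new_transactions:
        z = abs(amount - mu) / sigma if sigma else 0.0
        if z > z_threshold:
            flagged.append(amount)
    return flagged

# Typical grocery-sized purchases, then one very large outlier.
history = [42.0, 38.5, 55.0, 47.2, 40.1, 51.3, 44.8]
print(flag_anomalies(history, [49.9, 1200.0]))  # only 1200.0 is flagged
```

Production systems use far richer features (merchant, location, time of day) and learned models rather than a single threshold, but the pattern-versus-baseline logic is the same.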

While business leaders proclaim AI as the ‘new electricity’, defence and security actors are well behind the curve when it comes to leveraging associated technologies. Paradoxically, this can be partially attributed to the fact that public and private actors in these fields remain predominantly focussed on the development of futuristic autonomous platforms at the expense of narrow AI and algorithms that identify patterns and trends. Autonomous combat machines feature heavily in all of the aforementioned strategies due to their perceived potential to deliver the ultimate combat advantage. But this promise is far-fetched. Although less captivating and mind-blowing, commercially developed narrow AI solutions are well placed to enhance security architectures in ways that mirror their use in the commercial sector.

For instance, computer vision powered by AI and machine learning is rapidly improving and could help armed forces and intelligence agencies to accelerate the screening and scouring of vast amounts of data. From there, AI could support the delivery of troops and resources to key locations in much the same way it is used in retail and the management of supply chains. Algorithms could also help military medical personnel to monitor the health of soldiers, calculate critical nutrition needs, and predict illnesses. In addition, AI can help to forecast enemy behaviour, monitor battlefield conditions and increase situational awareness. Ultimately, these applications will help cash-strapped governments’ efforts to cut costs and increase efficiency.
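The supply-chain parallel is concrete: the same demand-forecasting logic a retailer uses for restocking shelves can estimate how much of a resource to deliver to a location. The toy sketch below uses a moving average plus a safety margin; the figures, names and margin are illustrative assumptions, not drawn from any military system.

```python
# Illustrative sketch: forecast next-period demand for a resource
# (e.g. daily rations at a forward base) from recent consumption,
# the moving-average logic common in retail restocking.

def forecast_demand(consumption, window=3, safety_margin=0.2):
    """Average the last `window` periods and add a safety margin."""
    recent = consumption[-window:]
    baseline = sum(recent) / len(recent)
    return baseline * (1 + safety_margin)

daily_rations_used = [480, 510, 495, 530, 520]
print(round(forecast_demand(daily_rations_used)))  # prints 618
```

Real logistics models would fold in lead times, transport constraints and uncertainty, but the core idea of projecting demand from observed patterns carries over directly.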

Changed Relations Between Military and Tech

The “borrowing” of AI technologies and their applications from the commercial sector means that relations between the military and tech are nothing like we have seen before. There was a time when the private sector relied on technological ‘hand me downs’ from the likes of the Defense Advanced Research Projects Agency (DARPA) to inform and guide its efforts to improve business best practices. However, when it comes to AI these roles have effectively been reversed, with breakthroughs informed by readily available dual-use technologies.

This change is not necessarily to the detriment of the defence sector. However, the latter has to adjust to the reality that the leading technology resides in the private sector, and do so quickly. Again, the speed and comprehensiveness of this adjustment varies across countries. Thanks to its centralized state system, China has rapidly mastered the concept of private-public synergy. For instance, Baidu, “the Chinese Google”, has opened a research lab that cooperates with academics focused on military applications. China is also offering unprecedented investment into private enterprises working on AI.

Although Russia’s own AI sector is so far losing the global competition, the country’s security apparatus benefits from the understanding that effective integration and customization of commercially available algorithms may matter more than developing one’s own technology. Russia also grasps the potential of using AI (in Russia’s case, dangerously unrestrained) not only in conventional warfare but in all dimensions of hybrid war.

For its part, the United States is looking to Silicon Valley to help the Pentagon harness AI for military purposes. In 2015, the Department of Defense (DoD) launched the Defense Innovation Unit in order to build partnerships with private AI companies. There is also a growing perception that AI has significant potential for national security and that the military has to speed up the development and acquisition of AI-based weapons and corresponding command and control structures.

However, building public-private partnerships is far more difficult in Western countries than in China or Russia, thanks in no small part to the contrasting roles of the state. The hurdles are many, and not every state is comfortable with the peculiarities of the new military-industrial/tech complex and its national idiosyncrasies. In addition, the corporate sector’s goal of maximizing returns on investment is not always in line with the specific needs of public sector counterparts. Finally, the government’s involvement in research and innovation initiatives raises questions concerning data privacy, since the development of customized technology for public security needs might involve sharing sensitive classified data with a private entity.

But the underlying dilemma is that commercial technology is readily available to anyone willing to pay. China, for example, has poured money into U.S.-based start-ups, an issue that has prompted calls for Washington to better scrutinize investments and shield cutting-edge technology vital for national security. Regardless of the hurdles, if the technology is not harnessed by your own security structures, the adversary might already be working on a similar application.

The Battle for Hearts and Minds

Winning hearts and minds has long been recognized as an imperative for sustainable military success and societal stability. Yet, given that hybrid warfare has largely eclipsed conventional warfare, the importance of mastering AI is no longer a choice but a sine qua non for a state’s defence and security architecture.

The fake news phenomenon that has swayed public opinion and played such an important part in elections over the past few years was largely made possible by AI systems. The content and outcomes generated by bots and other applications are both frightening and appalling. However, in an environment where adversaries skilfully integrate AI into influence operations, outrage and indignation need to be buttressed by effective response and adaptation.

Accordingly, AI technologies should not only be viewed as an offensive weapon, but also as a supplementary defence tool that can be wielded in a variety of ways. Education and critical thinking of the population are indispensable. So is the ability to detect bots and neutralise them. Fake audio and video produced by AI are becoming indistinguishable from the real thing. However, what humans struggle to detect – due in no small part to the need to process mountains of data – can be screened and identified by AI. Facebook and other entities are already using AI to help identify and remove terrorist propaganda. Although by no means the final solution, AI algorithms can be developed to serve as supplementary means to counter bots and fakes.
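At its simplest, automated bot screening scores accounts on signals that humans cannot check at scale, such as posting rate and content repetition. The toy sketch below illustrates the principle; the signals, thresholds and data are illustrative assumptions and do not reflect any platform’s actual detection logic, which relies on far richer behavioural features and learned models.

```python
# Toy sketch: flag accounts whose behaviour looks automated, using
# two simple signals - posting rate and how repetitive the posts are.

def repetition_ratio(posts):
    """Share of posts that duplicate an earlier post (0 = all unique)."""
    return 1 - len(set(posts)) / len(posts)

def looks_like_bot(posts_per_hour, posts, rate_limit=20, repetition_limit=0.5):
    """Flag an account if either signal exceeds its threshold."""
    return posts_per_hour > rate_limit or repetition_ratio(posts) > repetition_limit

human = ["lovely weather", "match tonight?", "new job!"]
bot = ["BUY NOW!!!"] * 9 + ["limited offer"]
print(looks_like_bot(2, human))   # prints False
print(looks_like_bot(45, bot))    # prints True
```

The point is not the specific thresholds but the division of labour: machines screen millions of accounts for suspicious patterns, and humans review the flagged cases.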

With social media and other online platforms offering mountains of data on virtually every citizen, AI can also be used to detect influence operations and identify terrorist plots. AI systems are also used for educational purposes. The reality of bots that can engage in intelligent debate and counter logical arguments is scary, to say the least. However, it would be illogical not to use them for defensive purposes and in the national interest. For instance, intelligent machines can be used to train students in logical argumentation against even the trickiest opponent an adversary could generate.

Defence and security actors increasingly acknowledge that security, peace and stability are unachievable without resilient mindsets. Indeed, the only way to achieve resilience is by cooperating with civilian authorities. To assist, the European Union, NATO and many countries have established specialised units to tackle the problem of influence operations and fake or “post-factual” information. The wider public sector has also started to work with social media companies and online platforms to increase their willingness and ability to prevent AI systems from grazing on their data and to contain the flow of fake or tainted information. But there is more work to be done. AI offers both challenges and solutions to one of the most pressing security challenges of modern times.

Off-balance Relations: Alliances and Balance of Power

AI is also altering the balance of power between global actors and among alliances in several ways. First, the effective integration of sophisticated and commercially available AI systems offers outsized power and competitive edge to actors that are otherwise small in terms of capabilities, population, and economic muscle. Second, AI opens the playing field (or battlefield) to non-state actors. Terrorists and extremist organisations can acquire these technologies for combat purposes or influence operations as successfully as traditional state actors.

Third, AI has the potential to shake up the established relations and practices of the world’s leading security organisation. Over the past few years, the United States has urged its fellow NATO members to commit to spending 2% of GDP on defence to bridge the gap in capabilities. However, as things currently stand, US investment and adjustment to AI’s potential far outstrips its European allies. Given the exponential growth of AI, the capabilities gap might increase if the current trend continues.

On the other hand, smaller NATO partners are uniquely poised to exploit the fact that AI can deliver outsized power to small actors. Tech-savvy European countries that are already making efforts to compete with Silicon Valley can bridge the capabilities gap by combining smarter defence spending with the agile integration of available innovative technologies, if not the development of their own. For this to happen, states need to develop a better understanding of AI’s potential and current possibilities, focus on experimental integration of technology (primarily narrow AI) in a controlled environment, and ensure support for and identification of domestic and foreign-based start-ups that can produce customised technology.

How Should Thinking and Practice be Adjusted?

  1. Embrace that AI is here to stay
    States that fail to factor AI into national security planning risk overlooking a technology that is an increasingly essential feature of modern society. And as its growing use by adversaries demonstrates, AI is by no means an unequivocal force for good. Concerns also exist regarding AI’s long-term impact on humanity. However, ethical issues should not get in the way of harnessing the security benefits offered by AI. States need to strike a balance between safeguarding citizens from the most pernicious effects of AI, and using it to enhance defence, security and intelligence architectures.
  2. Invest in AI now
    Breakthroughs in AI have been largely driven by the commercial sector. For the public sector to bridge the gap, financial investment and close cooperation with private industry, including small businesses and start-ups, will be vital. The same can also be said of investment into infrastructure, customization, integration and the legal framework surrounding AI. Integration of available technologies will not only enhance competitive edge but also accelerate its delivery.
  3. Supplement technological progress with underlying doctrinal and organisational change
    Put simply, technology is unlikely to boost national security without doctrinal and organisational change. The threats posed by hybrid warfare and the outsized empowerment of small actors should prompt states to revamp security strategies to include a corresponding spectrum of integrated responses. Organisational change should also take into account speed of procurement, training, and defence spending. Long multi-year procurement and approval cycles are hardly compatible with the speed of today’s technological innovation and upgrades. They are also unlikely to enhance cooperation between militaries and businesses striving to maximise returns on investment. New technology, and particularly current and future forms of AI, also requires a new type of human actor – one that is not only trained in how the technology works but can also adjust their thinking and behaviour to new types of challenges.
  4. Work with allies
    Alliances should not overlook the security potential of AI for the sake of balanced capabilities and relations among partners. The GLOBSEC NATO Adaptation Initiative has aptly called for the creation of an AI Centre of Excellence. Such a centre should be not a bureaucratic fancy but part of a viable, innovative and sustainable security strategy.

States should also remember that the AI sector is truly global, with much of the innovation and creative development happening outside any one geographical region. This suggests that effective national security policies might rest on a state’s ability to reach out to the global marketplace as well as to strengthen cooperation with allies so as to expand the talent and innovation pool.

There is much more to AI than autonomous weapons and a futuristic war of the machines. The ‘unfuturistic’ impact of readily available AI can either be ignored at one’s peril or harnessed to reap national security benefits.