Commentary

Business Talks - SparkCognition

23.01.2018

Mr. Amir HUSAIN, Founder and CEO of SparkCognition Inc.

SparkCognition and GLOBSEC partnered in an initiative that addresses the nature of NATO adaptation and the challenges it must overcome to remain a viable and credible alliance for peace and stability in the transatlantic area. The final report of this initiative makes clear that the global strategic landscape will change significantly over the next decade. How do you see technology developing in the medium and long term?

The strategic importance of AI can hardly be overstated, and it’s only going to grow in the coming years. Essentially, AI enables warfare to occur at never-before-seen speeds, and those who do not invest in AI will be unable to even keep up in military encounters, let alone emerge victorious. Many countries are already investing heavily in AI, and early adopters will see the first and most significant strategic gains.

China, for example, has been aggressive in its pursuit of AI research and talent, and the results are showing. Chinese scholars are now publishing more AI research than their American counterparts, and more Chinese than American papers are being accepted into major AI research conferences. Furthermore, China is increasingly beating U.S. researchers to major technological milestones. In 2016, when Microsoft triumphantly announced that it had developed language comprehension software that could match the human ability to recognize and understand speech, the excitement was dampened by a tweet from Andrew Ng, Baidu’s then-chief scientist: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

This worldwide “space race” for AI is also shaping how the technology will develop over the medium and long term. The field is widely understood to be developing exponentially, driven by a resurgence in scientific interest and research. This resurgence has been enabled by the prevalence of low-cost cloud computing, by academic excitement over new achievements inspiring further research, and by relatively rapid technology transfer to market, resulting in commercial success. This pattern of development is decidedly non-linear, as the whole industry is now motivated to produce transformational societal capabilities, such as the self-driving cars being developed by Tesla and Google.

Unlike the two previous AI booms, this one shows no sign of stopping or plateauing anytime soon. Experts predict that we may achieve artificial general intelligence, that is, broad human-like intelligence in a machine, in the next 20 to 30 years. Deep neural networks have not yet reached their full potential, yet they already serve as an underlying basis of AI on which other applications can be built. In essence, there is substantial room for growth, even as huge leaps of progress are being made.

According to the report, not only will political events influence this new geopolitical context, but future warfare will also evolve substantially due to technological advances. These advances not only pose new threats to NATO but also offer the possibility of strengthening its military capabilities. What are your thoughts on the role of new technology in future warfare and, moreover, what should the Alliance’s priorities be?

What makes this new form of warfare unique is the unparalleled speed enabled by automating decision-making and leveraging artificial intelligence. The implications of these developments are numerous and game-changing. It’s quite clear that artificial intelligence will be heavily applied in the military sphere, which in many ways represents a high-potential opportunity to rapidly develop the capabilities that will ensure artificial intelligence can be employed safely in a highly autonomous environment. The military advantage of AI is one that no nation can ignore. In some cases, in order to gain that advantage, countries will invest very aggressively in high degrees of autonomy that may appear in the civilian sphere only many years later.

The laws of armed conflict are a great template for discussing what, specifically, we mean by ethics. The application of AI capabilities on the battlefield is inevitable; we can’t pretend that a ban will prevent it. As scientists, we must take responsibility for developing the framework that will enable the safe deployment of AI capabilities. We must also ensure that the decisions made by AI systems in battle are explainable and can be understood, improved, or, if necessary, guarded against during the training cycle. Investment in science, technology, and research for deploying, testing, and proving out these capabilities is more important now than ever before in order to maintain a military advantage.

To illustrate these principles, join me for a thought experiment originally conceived by my friend and collaborator General John Allen of the United States Marine Corps, a four-star general and former deputy commander of U.S. Central Command:

It is 2018, and a captain is contemplating damage to his ship after a surprise attack. This, however, was no ordinary attack. The battle damage was devastating, and constituted the beginning of what the U.S. would soon discover was a widespread, strategic attack. The guided-missile destroyer had not recognized that its systems were under cyber attack before the situation turned kinetic.

The speed of the attack quickly overwhelmed the ship’s combat systems. New developments were occurring in seconds or less. Before anyone could even react, the battle was over.

The captain had survived, but he was severely wounded, as were many crew members. Fires were burning out of control, and the ship was listing badly from flooding. Evidently the autonomous platforms knew exactly where to strike the ship to maximize damage and minimize the chances of survival. With his capacity to command the ship now seriously compromised and the flooding out of control, the captain did what no U.S. skipper had done for generations: he issued the order to abandon ship.

Now consider the revised version of this thought experiment, in which artificial intelligence is employed:

It is 2027, and another attack has occurred. An artificially intelligent cyber defense system was the first to detect what appeared to be an attempt at a major cyber intrusion. The initial attack and successful defense occurred within microseconds. The ship was then able to detect a massive incoming swarm attack and forward threat information to the rest of the fleet, enabling other units to prepare for an impending attack.

The captain moved quickly, donning the augmented reality headgear and gauntlets to assimilate and react to the totality and complexity of the battle he was about to lead. With a sweep of his hand in augmented reality, he initiated the anti-swarm batteries. In that instant, naval warfare changed forever.

Hours later, after checking diagnostics that showed the health of his ship and crew, the captain reflected on the engagement. The attack had come seemingly from nowhere. The cyber defense system had detected the initial intrusion, and not only had it protected the ship, but it also had reasoned the attack was a precursor to something larger. This hypothesis had been formed, researched, and validated in less than a second. Within 10 seconds, the ship initiated battle stations on its own and the captain had donned his augmented reality ensemble. The entire battle had unfolded and was over in minutes.

AI systems had foiled a coordinated, complex cyber and autonomous swarm attack. The captain was struck by the realization that at nearly every point where human action and decision were required, they had risked the ship. Though he was a master of the combat systems of the USS Infinity (DDG-500), he had just experienced the mind-numbing speeds of AI-driven warfare. He had become the first U.S. commander to fight in the environment of hyperwar.

One of the classic ethical questions surrounding AI concerns ‘the passenger and the pedestrian’: should the car continue, striking the pedestrian, or should it swerve, potentially killing its passenger? For the machine itself, this is a simple question that can be programmed years before the accident might happen; the choice we prefer just needs to be coded. This is clearly not a technical question but an ethical one. What is the role of the industry in these policy debates?

First of all, morality is a human construct. When we talk about morality in the sense of implementing these ideas in artificial intelligence, we’re really talking about implementing compliance with the law and giving these systems the ability to refrain from doing harm when they’re not sure. We want to make sure that AI systems do not carry out actions that are against their programming, and that other participants in a system can step in to block the actions of one agent that might have been hacked. It’s not about these systems learning to think about morals and ethics as we humans do; it’s a question of compliance. The truth is that as long as a new technology exists, it will be used, and those who opt out risk greater loss by placing themselves at a substantial disadvantage. Designing and implementing regulations to ensure that AI systems remain compliant with our laws and beliefs will be up to industries working together with policy makers.
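
As a rough illustration of this compliance idea, consider the following minimal sketch. The action type, rule table, and peer-veto scheme are all hypothetical assumptions for illustration, not a description of any deployed system: proposed actions must pass hard-coded rule checks and can additionally be vetoed by peers, so a single compromised agent cannot act alone.

```python
# Hypothetical compliance-guard pattern. The Action type, the rule table,
# and the peer-veto scheme are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    agent_id: str
    kind: str      # e.g. "navigate", "engage"
    target: str

# Hard constraints the system may never violate, checked before execution.
RULES: Dict[str, Callable[[Action], bool]] = {
    "engage": lambda a: a.target.startswith("hostile:"),
}

def is_compliant(action: Action) -> bool:
    check = RULES.get(action.kind)
    return True if check is None else check(action)

def execute(action: Action, peer_vetoes: List[Callable[[Action], bool]]) -> str:
    if not is_compliant(action):
        return "blocked: action violates encoded rules"
    # Any peer agent may veto, so a single hacked agent cannot act alone.
    if any(veto(action) for veto in peer_vetoes):
        return "blocked: vetoed by a peer agent"
    return f"executed: {action.kind} -> {action.target}"

# A compromised agent proposing an unlawful strike is stopped by the rules.
print(execute(Action("agent-7", "engage", "civilian:vessel"), peer_vetoes=[]))
print(execute(Action("agent-7", "engage", "hostile:drone"), peer_vetoes=[]))
```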

If AI is going to become ever more present in our daily lives and adapt itself to our behaviour, how can we ensure the security of these systems? How does SparkCognition take cyber security into account to counter the risks of hacking and espionage?

SparkCognition’s patented artificial intelligence platform, DeepArmor, is able to detect and prevent malware, viruses, worms, trojans, and ransomware in milliseconds. By taking a mathematical approach, we are able to provide industry-leading protection against zero-day and polymorphic threats, which can otherwise slip through the cracks of traditional antivirus solutions. Our unique approach is able to provide unified protection across clients, servers, mobile, and IoT devices.
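
By way of illustration only, the sketch below shows the general idea of signature-free, machine-learning malware detection; it is an assumed, generic technique, not DeepArmor’s actual features, model, or pipeline. A classifier trained on statistical properties of file bytes, rather than on known signatures, can flag never-before-seen variants:

```python
# Generic ML-based (signature-free) malware-detection sketch.
# NOTE: an illustrative assumption, not DeepArmor's actual pipeline.
import math
import os
from collections import Counter

from sklearn.ensemble import RandomForestClassifier

def byte_features(data: bytes) -> list:
    """Static features: normalized byte histogram plus Shannon entropy."""
    counts = Counter(data)
    n = max(len(data), 1)
    hist = [counts.get(b, 0) / n for b in range(256)]
    entropy = -sum(p * math.log2(p) for p in hist if p > 0)
    return hist + [entropy]

# Toy stand-ins for labeled training samples: "benign" files here are
# low-entropy text, "malicious" ones high-entropy (packed/encrypted) blobs.
benign = [("hello world " * 50).encode() for _ in range(20)]
malicious = [os.urandom(600) for _ in range(20)]

X = [byte_features(b) for b in benign + malicious]
y = [0] * len(benign) + [1] * len(malicious)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# An unseen high-entropy blob is classified from its byte statistics alone,
# with no signature involved -- typically predicted as 1 (malicious).
print(clf.predict([byte_features(os.urandom(600))]))
```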

Additionally, SparkCognition’s SparkPredict® can use its anomaly detection to recognize and flag abnormal behavior, allowing operators to prevent malicious actors from seizing control of systems.
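
In the same illustrative spirit, anomaly detection can be sketched as learning the envelope of normal sensor telemetry and flagging readings that fall outside it. The technique shown (Isolation Forest) and the simulated data are assumptions for the sketch, not SparkPredict’s actual algorithm:

```python
# Illustrative anomaly detection on sensor telemetry; the method and data
# are assumptions, not SparkPredict's actual algorithm.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated normal operation: two sensor channels, e.g. temperature (F)
# and vibration (mm/s), clustered around healthy operating values.
normal = rng.normal(loc=[70.0, 0.5], scale=[2.0, 0.05], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for readings inside the learned envelope and -1 for
# outliers; the second reading below is far outside normal operation.
print(detector.predict([[70.5, 0.52], [95.0, 3.0]]))  # expected: [ 1 -1]
```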

With the development of new technology come many questions. Computers have already proven to be better than humans at performing certain activities. Today, with artificial intelligence, we create machines that, like humans, can quickly adapt to a changing context. Our ability to adapt is, however, not always infallible. How can we filter out human errors and prevent bad intentions from being translated into the algorithms?

In general, AI is not susceptible to many of the same errors and biases as humans. For example, experts in traditional signature-based security are expected to comb through millions of files to determine if they are malicious or benign. Even with the best security experts, this methodology results in false positives and other misclassifications due to human bias and a lack of time and staff.

Machines suffer from none of these flaws. They can process information and react to it nearly instantaneously, and they are far less susceptible to outside factors such as fatigue or distraction. A human can typically hold, at most, a few variables in conscious thought at one time; AI has no problem holding thousands of variables in “conscious” thought at once. AI can take in and process wider ranges of information than any human being ever could, then use that information to make logically sound decisions and execute them almost instantly.

That being said, there are still ways bias could make its way into an AI system. A machine learning algorithm is only as good as the data it’s been given, and if the only data available is of poor quality, the model won’t be any better. Similarly, if the data contains implicit biases, those biases will be learned by the AI. For instance, if you were to train an AI to recommend salaries for employees in the U.S. based on current salary data, it would learn to recommend slightly lower salaries for women than for men.
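
A minimal sketch makes the mechanism concrete. The numbers below are fabricated for illustration, not real salary data: when the training data encodes a pay gap, a regression fitted to it reproduces that gap in its predictions rather than correcting it.

```python
# Minimal sketch of bias inheritance; all numbers are fabricated for
# illustration, not real salary data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
years = rng.uniform(0, 20, 1000)
is_female = rng.integers(0, 2, 1000).astype(float)
# The historical data encodes a pay gap: same work, ~$4,000 less for women.
salary = 50_000 + 3_000 * years - 4_000 * is_female + rng.normal(0, 2_000, 1000)

model = LinearRegression().fit(np.column_stack([years, is_female]), salary)
# The learned coefficient on `is_female` comes out near -4,000: the model
# has faithfully reproduced the bias in its training data, not corrected it.
print(dict(zip(["years", "is_female"], model.coef_.round())))
```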

In effect, machines can remove or eliminate certain kinds of human error, but they cannot fix problems and prejudices that exist on a societal level. We as humans must continue working to address these problems if we do not want to see them reflected in our machines.

Amir, allow us to close this interview with a more philosophical question. Do you ever see artificial intelligence becoming self-aware?

Research in artificial intelligence currently emphasizes addressing specific, reasonably sized problems. This more practical approach has laid the groundwork for further advancement in AI. The idea is not to build programs capable of emulating human capabilities, but programs that can excel at tasks where humans struggle, such as the analysis of massive data sets. From this, we have programs like AlphaGo, which has defeated the world’s top human players at the game of Go. But this intelligence applies only to the narrow field for which it is designed; AlphaGo cannot play other board games, let alone perform any other functions. Eventually, it is possible that we will be able to create a more general artificial intelligence that emulates human sentience. For now, however, that remains a distant possibility.

 

CEO Amir Husain founded SparkCognition with the desire to build a company that would be at the forefront of the “AI 3.0” revolution. An undisputed tech leader in Austin and in the industry at large, Amir built multiple venture-funded startups between 1999 and 2009, at which point he took over as President and CEO of VDIworks. For his inventions, Amir has been awarded 22 U.S. patents and has over 40 pending patent applications. He serves as an advisor and board member to several major institutions, including IBM Watson, the University of Texas Department of Computer Science, MakerArm, ClearCube Technology, uStudio, and others, and his work has been published in leading tech journals, including Network World, IT Today, and Computer World. In 2015, Amir was named Austin’s Top Technology Entrepreneur of the Year. As the driving force at SparkCognition, he is known for his honest, open, approachable leadership style. Amir’s book, The Sentient Machine, was published by Simon & Schuster in November 2017.

SparkCognition is an AI leader that offers business-critical solutions for customers in energy, oil and gas, manufacturing, finance, aerospace, defense, and security. A highly awarded company recognized for cutting-edge technology, SparkCognition develops AI-powered cyber-physical software for the safety, security, reliability, and optimization of IT, OT, and the Industrial IoT.
