Thought Leadership: AI in Security and Defence


By Lee Eccleshare, Head of Research, AMPLYFI

Artificial intelligence (AI) has rapidly spread through virtually every sector, revolutionising processes and augmenting capabilities. From healthcare to finance, from transportation to entertainment, there is hardly an industry untouched by the transformative powers of AI. With its remarkable ability to perform complex tasks with unparalleled efficiency, it comes as no surprise that the defence industry has also embraced it.

A particularly interesting way in which AI can be leveraged for defence is its ability to rapidly analyse open source web content to arrive at data-backed intelligence about potentially malicious actors.

The work completed by the Center for Security and Emerging Technology (CSET), a defence-oriented research organisation based at Georgetown University, is a great example of this. Partnering with AMPLYFI, CSET has been using open source content from the Chinese internet to assess the advancing AI capability of the Chinese state.

One way they have done this is by harvesting and analysing millions of Chinese job postings for emerging signals and areas of AI interest and expertise. With this data, they have been able to provide important insight into Chinese AI capability directly to policymakers in Washington DC.
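The article does not describe CSET's actual pipeline, but the underlying idea of mining job postings for emerging skill demand can be sketched in a few lines. Everything below is illustrative: the postings, the term watchlist, and the `term_counts_by_year` helper are all hypothetical, and a real system would use multilingual NLP rather than English keyword matching.

```python
from collections import Counter

# Hypothetical sample of job postings (in practice, millions harvested from
# the open web); each record has a posting year and a free-text description.
postings = [
    {"year": 2021, "text": "Seeking engineer with computer vision and PyTorch experience"},
    {"year": 2021, "text": "Data analyst role, SQL and reporting"},
    {"year": 2022, "text": "LLM research scientist, transformer models, PyTorch"},
    {"year": 2022, "text": "Computer vision engineer for autonomous systems"},
    {"year": 2022, "text": "Prompt engineering and LLM fine-tuning specialist"},
]

# Watchlist of AI-related terms to track over time.
terms = ["computer vision", "pytorch", "llm", "autonomous systems"]

def term_counts_by_year(postings, terms):
    """Count how many postings per year mention each watched term."""
    counts = {}
    for p in postings:
        year_counts = counts.setdefault(p["year"], Counter())
        for t in terms:
            if t in p["text"].lower():
                year_counts[t] += 1
    return counts

counts = term_counts_by_year(postings, terms)

# A term demanded in the later year but absent earlier is an "emerging signal".
emerging = [t for t in terms if counts[2022][t] > 0 and counts[2021][t] == 0]
print(emerging)  # → ['llm', 'autonomous systems']
```

Even this toy version shows the shape of the analysis: aggregate over a corpus far too large for manual reading, then surface year-on-year deltas for a human analyst to interpret.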

As well as informing foreign policy, AI can also be used as a tool to monitor the development of technologies that could be used in an offensive manner. The UK Ministry of Defence (MOD) published its Defence Technology Framework in 2019, reporting on seven areas of technological interest, including Advanced Materials, Artificial Intelligence and Autonomous Systems. In order to develop defensive capabilities and countermeasures to guard against their use, it is important that defence organisations have a full and complete picture of the current state of the art, as well as the likely advances coming in the next wave of development.

Given the constant stream of patents, academic articles and commercial announcements in these areas, it is almost impossible for human analysts to keep up to date. This is where AI can be leveraged to augment human analysis, finding, analysing and helping to understand millions of relevant documents.
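A first step in that augmentation is triage: ranking a large corpus by relevance to an analyst's topic so that human attention goes to the most promising documents. The sketch below uses a simple bag-of-words cosine similarity; the three-document corpus and the query are invented for illustration, and production systems would use far richer representations.

```python
import math
from collections import Counter

def bow(text):
    """Lowercased bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical corpus standing in for millions of patents and articles.
docs = [
    "advanced materials for hypersonic airframes",
    "artificial intelligence for autonomous systems navigation",
    "quarterly financial results and dividend announcement",
]

query = "artificial intelligence autonomous systems"
q = bow(query)

# Rank documents by similarity to the analyst's topic of interest.
ranked = sorted(docs, key=lambda d: cosine(bow(d), q), reverse=True)
print(ranked[0])  # the AI/autonomy document ranks first
```

The point is not the scoring function itself but the division of labour: the machine filters millions of documents down to a shortlist, and the expert supplies the judgement.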

AI is able to serve as an additional tool to augment analysts, or as an additional lens through which to view an entity or technology. A key strength is its unbiased nature, making calculations based on data alone. Experts, by definition, are biased: they have become experts through their formative experiences, leading them to think or act in a certain way. This is not to say that AI is more useful than an expert, just that it is able to see things differently and offer a potentially interesting viewpoint.

Humans, for example, are bad at assessing large sets of data for patterns or causal relationships, no matter how expert they are in a particular field. It turns out that AI, and computers in general, excel at this type of large-scale data analysis. It is often said that the average human can hold around seven items in short-term memory, which is quite a limitation when an expert analyst is trying to piece together a complex problem with multiple variables. Machines, on the other hand, do not have to wrestle with this restriction.

As well as using AI to respond to threats, it is also worth considering the ways in which it could be used to de-escalate or deter offensive action. The provision of clear, unbiased and unemotional analysis could, in some circumstances, be used to lift the “fog of war”.

Consider a scenario where satellite images show troop movements towards a contested border after a deal is reached to de-escalate. It would be easy for a human to hastily interpret this as an offensive action and seek to match it. A trained imaging algorithm, however, might take the same satellite images and highlight damaged bridge infrastructure in the area, suggesting a reason for an indirect route that happens to pass near the border. By enabling near-instant analysis of a much wider array of data, AI might be able to remove human bias and emotion from high-stakes decision making.

In this hypothetical example, an AI model was used as a mechanism to build trust between two factions. The analyst didn’t have to trust the intentions of the potential adversary, but instead was able to trust the unbiased analysis of an AI model.

Perhaps in the future there will be a role for transparent and trusted AI models to lay the foundations of trust between opposed forces close to conflict.
