Intelligence and espionage services need to embrace artificial intelligence (AI) in order to protect national security as cyber criminals and hostile nation-states increasingly look to use the technology to launch attacks.
The UK’s intelligence and security agency GCHQ commissioned a study into the use of AI for national security purposes. It warns that while the emergence of AI creates new opportunities for boosting national security and keeping members of the public safe, it also presents potential new challenges, including the risk of the same technology being deployed by attackers.
“Malicious actors will undoubtedly seek to use AI to attack the UK, and it is likely that the most capable hostile state actors, which are not bound by an equivalent legal framework, are developing or have developed offensive AI-enabled capabilities,” says the report from the Royal United Services Institute for Defence and Security Studies (RUSI).
“In time, other threat actors, including cybercriminal groups, will also be able to take advantage of these same AI innovations”.
The paper also warns that the use of AI in the intelligence services could “give rise to additional privacy and human rights considerations” when it comes to collecting, processing and using personal data to help prevent security incidents ranging from cyber attacks to terrorism.
The research outlines three key areas in which intelligence agencies could benefit from deploying AI to collect and use data more efficiently.
The first two are the automation of organisational processes, including data management, and the use of AI for cybersecurity: identifying abnormal network behaviour and malware, and responding to suspected incidents in real time.
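As a rough illustration of what identifying abnormal network behaviour with AI can look like in practice, the minimal sketch below flags unusual traffic with an off-the-shelf anomaly detector. It is not drawn from the RUSI report; the flow features, figures and escalation step are invented for the example.

```python
# Minimal illustrative sketch: flagging abnormal network flows with an
# unsupervised anomaly detector. All feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical flow features: [bytes_sent, bytes_received, duration_seconds, distinct_ports]
baseline_flows = np.array([
    [500, 1200, 0.8, 1],
    [450, 1100, 0.7, 1],
    [520, 1300, 0.9, 2],
    [480, 1250, 0.8, 1],
])

# Fit on traffic assumed to be normal, then score new flows as they arrive.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline_flows)

new_flows = np.array([
    [510, 1220, 0.8, 1],        # resembles the baseline
    [900000, 200, 45.0, 300],   # large outbound transfer touching many ports
])

for flow, label in zip(new_flows, detector.predict(new_flows)):
    if label == -1:  # scikit-learn marks anomalies with -1
        print("suspected incident, escalate for human review:", flow)
```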
The third is intelligence analysis: through augmented intelligence, the paper suggests, algorithms could support a range of human analysis processes.
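In that spirit, augmented intelligence means algorithms that prioritise material for a person rather than decide anything themselves. One hypothetical sketch of the idea, not taken from the report, is ranking reports against an analyst’s query so the most relevant items surface first; the corpus and query below are placeholders.

```python
# Minimal sketch of algorithm-assisted triage: rank documents for an analyst.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "phishing campaign targeting government email accounts",
    "routine software patch notes for internal systems",
    "credential theft linked to spear-phishing of officials",
]
query = "spear-phishing against government officials"

# Vectorise the corpus and query, then score each report against the query.
vectoriser = TfidfVectorizer().fit(reports + [query])
scores = cosine_similarity(vectoriser.transform([query]), vectoriser.transform(reports))[0]

# Present the most relevant reports first; the judgement stays with the analyst.
for score, report in sorted(zip(scores, reports), reverse=True):
    print(f"{score:.2f}  {report}")
```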
However, RUSI also points out that artificial intelligence will never be a replacement for agents and other personnel.
“None of the AI use cases identified in the research could replace human judgement. Systems that attempt to ‘predict’ human behaviour at the individual level are likely to be of limited value for threat assessment purposes,” says the paper.
The report does note that deploying AI to boost the capabilities of spy agencies could create new privacy concerns, such as how much information is collected about individuals, when suspect behaviour becomes an active investigation, and where the line between the two lies.
Ongoing legal cases against bulk surveillance hint at the difficulties the use of AI could face, and existing guidance on procedure may need to change to meet the challenges of using AI in intelligence work.
Nonetheless, the report argues that despite these challenges, artificial intelligence has the potential to “enhance many aspects of intelligence work”.