AI should be harnessed to counter malicious innovations of cyber criminals: joint report

Three security-related organizations identify threats leveraging AI and make recommendations on how to combat them 


A recently released report on current and future threats leveraging artificial intelligence (AI) calls upon law enforcement agencies, the cybersecurity industry, and other related parties to take action to counter the malicious use of this key technology, which is expected to assume a more central role in everyday life in the future.

The report, "Malicious Uses and Abuses of Artificial Intelligence" was issued by Europol's European Cybercrime Centre, the United Nations Interregional Crime and Justice Research Institute, and cybersecurity company Trend Micro. The three entities said in a joint statement that the report provides law enforcers, policymakers and other organizations with information on existing and potential attacks leveraging AI, and recommendations on how to mitigate these risks.

The report said criminals are developing systems that use AI to enhance the effectiveness of malware and to disrupt anti-malware and facial recognition systems. It also called for the development of new screening technology to mitigate the risk of disinformation campaigns and extortion, among other threats.

"AI promises the world greater efficiency, automation and autonomy. At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology." said Edvardas Šileris, Head of Europol's Cybercrime Centre. "This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems."

The report warned that AI could be used to mount convincing social engineering attacks at scale; power document-scraping malware that makes attacks more efficient; evade image recognition and voice biometrics; sharpen ransomware attacks through intelligent targeting and evasion; and pollute data by identifying blind spots in detection rules. Deepfakes, videos doctored using AI, are currently the best-known use of the technology as an attack vector.

"Cybercriminals have always been early adopters of the latest technology and AI is no different. As this report reveals, it is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works," said Martin Roesler, head of forward-looking threat research at Trend Micro. He said raising awareness about these threats helps create a safer digital future for us all.

The three organizations made several recommendations: harness the potential of AI technology as a crime-fighting tool to future-proof the cybersecurity industry and policing; continue research to stimulate the development of defensive technology; promote and develop secure AI design frameworks; de-escalate politically loaded rhetoric on the use of AI for cybersecurity purposes; and leverage public-private partnerships and establish multidisciplinary expert groups. 

"As AI applications start to make a major real-world impact, it's becoming clear that this will be a fundamental technology for our future," said Irakli Beridze, Head of the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute. "However, just as the benefits to society of AI are very real, so is the threat of malicious use."
