Outsmarting Generative AI Attacks

Morphisec’s Nir Givol explains the challenges in defending systems against sophisticated AI-driven techniques and gives a few tips on how to defeat the next generation of adversaries


As the sophistication of Artificial Intelligence (AI) tools such as ChatGPT, Copilot, Bard and others continues to grow, they present a greater risk to security defenders—and greater reward to attackers adopting AI-driven attack techniques.  

As a security professional, you must defend a diverse ecosystem of multiple operating systems (OS), built up over time to sustain legacy software while adopting new and modern B2B and B2C interfaces that are hyper-scale, hyper-speed, and data-rich. You look for, and rely on, the latest and greatest security products to help you fend off attackers.

However, when pitted against sophisticated AI-driven techniques, existing security products and practices are missing one critical defense element: a technology capable of defeating the next generation of machine-powered, AI-enabled adversaries who use machine learning to create new adaptive exploits at breakneck speed and scale.

A clear pattern is emerging in the top-of-mind concerns specific to generative AI systems and their ability to bypass detection and prevention technology.

InfoSec professionals are concerned that generative AI can be exploited to:

  • Increased attack surface: Create new attack vectors that are not easily detected by traditional security tools. 
  • Evasion of detection: Generate malicious code that is specifically designed to evade detection by security tools. 
  • Increased sophistication: Create increasingly sophisticated attacks or technique variants that are more difficult to defend against. 
  • Greater speed and scale: Launch attacks at scale and at a speed that is difficult for security teams to keep up with. 

The Defender’s Perspective 

Artificial intelligence (AI), with its subsets of machine learning (ML) and deep learning (DL), is integral to modern endpoint protection platform (EPP) and endpoint detection and response (EDR) products.

These technologies work by learning from vast amounts of data about known malicious and benign behaviors or code patterns. This learning enables them to create models that can predict and identify previously unseen threats. 

Specifically, AI can be used to: 

  • Detect anomalies: Identify anomalies in endpoint behavior, such as unusual file access patterns or changes to system settings. These anomalies can be indicative of malicious activity, even if the specific behavior is not known to the security product. 
  • Classify behaviors: Classify endpoint behaviors as malicious or benign. This allows security products to focus their attention on the most likely threats. 
  • Make decisions: Make decisions about whether a particular behavior or code pattern is malicious or benign. This allows security products to take action to mitigate threats, such as blocking access to files or terminating processes. 
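The anomaly-detection capability above can be illustrated in miniature. The following toy sketch (all host names, baselines, and the 3-sigma threshold are hypothetical, not any vendor's actual model) flags an endpoint whose file-access rate deviates sharply from a learned baseline:

```python
import statistics

def anomaly_scores(baseline, observed):
    """Score each observed value by its distance, in standard
    deviations, from the baseline mean (a simple z-score)."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0
    return {host: abs(count - mean) / stdev for host, count in observed.items()}

# Baseline: files accessed per hour by a typical endpoint process.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]

# Observed activity; "host-042" suddenly touches far more files.
observed = {"host-017": 13, "host-023": 15, "host-042": 240}

scores = anomaly_scores(baseline, observed)
flagged = [h for h, s in scores.items() if s > 3.0]  # 3-sigma rule
print(flagged)  # → ['host-042']
```

Real products learn far richer baselines over many signals, but the principle is the same: behavior is scored against what the model has learned to expect, so even a never-before-seen threat can surface as a statistical outlier.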

The use of AI is now becoming a de facto standard for reducing false positives: by identifying an incident's context and understanding the endpoint's behavior, these products can clear benign alerts and retroactively correct misclassified information using a wealth of previously collected telemetry data.

The Attacker’s Perspective  

As AI evolves and becomes more sophisticated, attackers will find new ways to use these technologies to their benefit, and to accelerate the development of threats capable of bypassing AI-based endpoint protection solutions.   

The methods by which attackers can leverage AI to compromise targets include:

  • Automated vulnerability scanning: Attackers can use AI to automatically scan large bodies of code and systems for vulnerabilities. Using machine learning algorithms, attackers can identify patterns in the code itself, its associated configurations, or even standalone services that point to potential vulnerabilities, and prioritize their attacks accordingly. They sometimes train "at home," in isolated staging environments, to find ways to bypass detection.
  • Adversarial machine learning: Adversarial machine learning is a technique that uses AI to find weaknesses in other AI systems. By exploiting the vulnerabilities in the algorithms used in AI-based security systems, attackers can bypass these defenses and gain access to sensitive data. 
  • Exploit generation: Attackers can leverage AI to generate new exploits that bypass traditional security measures. By analyzing target systems and identifying vulnerabilities, AI algorithms can generate new code that can exploit these weaknesses and gain access to the system. 
  • Social engineering: Attackers can use AI to generate persuasive phishing emails or social media messages that trick victims into revealing sensitive information or downloading malware. Using natural language processing and other AI techniques, attackers can make these messages highly personalized and more convincing. 
  • Password cracking: Attackers can use AI to crack passwords using brute force techniques. Using machine learning algorithms to learn from previous attempts, attackers can increase their chances of cracking passwords in a shorter time. 
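The adversarial machine learning technique in the list above can be sketched with a toy example. In this hypothetical illustration (the weights, threshold, and greedy strategy are all invented for the sketch, not taken from any real detector), an attacker who can query a simple linear "malware score" perturbs the most influential features until the sample slips under the decision threshold:

```python
# Toy adversarial-ML sketch: a linear "detector" scores feature vectors
# (e.g. counts of suspicious API calls); an attacker nudges the features
# that carry the most weight until the score drops below the threshold.

WEIGHTS = [0.8, 0.5, 0.1]   # hypothetical model weights per feature
THRESHOLD = 2.0             # score >= threshold => classified malicious

def score(features):
    return sum(w * f for w, f in zip(WEIGHTS, features))

def evade(features, step=0.5, max_iters=50):
    """Greedily reduce the highest-weighted feature until the
    sample is misclassified as benign (or iterations run out)."""
    feats = list(features)
    for _ in range(max_iters):
        if score(feats) < THRESHOLD:
            return feats
        # Perturb the feature the model relies on most.
        idx = max(range(len(feats)), key=lambda i: WEIGHTS[i] * feats[i])
        feats[idx] = max(0.0, feats[idx] - step)
    return feats

sample = [3.0, 2.0, 1.0]           # score = 3.5, above the threshold
evaded = evade(sample)
print(score(sample) >= THRESHOLD)  # True: original sample is flagged
print(score(evaded) < THRESHOLD)   # True: perturbed copy slips through
```

Real models are far less transparent, but research on adversarial examples shows that even black-box query access can be enough to find such evasive perturbations.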

We expect attackers will actively use AI to automate vulnerability scanning, generate compelling phishing messages, find weaknesses in AI-based security systems, generate new exploits, and crack passwords. As AI and machine learning evolve, organizations must stay vigilant and keep up with the latest developments in AI-based attacks to protect themselves from these threats.  

Organizations using AI-based systems must question the robustness and security of their underlying datasets, training sets, and the machines that implement this learning process, and protect the systems from unauthorized and potentially weaponized malicious code. Weaknesses discovered, or injected into the models of AI-based security solutions can lead to a global bypass of their protection.   
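The training-set weakness described above is often called data poisoning. As a deliberately simplified, hypothetical illustration (the nearest-centroid "model" and all values are invented for the sketch), a few mislabeled samples injected into the training data can drag a model's notion of "benign" toward malicious territory:

```python
# Toy data-poisoning sketch: a nearest-centroid "model" trained on
# labeled one-dimensional samples. Injecting mislabeled points into the
# training set shifts the benign centroid so that a malicious-looking
# sample is later classified as benign.

def centroid(points):
    return sum(points) / len(points)

def train(samples):
    """samples: list of (value, label). Returns per-label centroids."""
    by_label = {}
    for value, label in samples:
        by_label.setdefault(label, []).append(value)
    return {label: centroid(vals) for label, vals in by_label.items()}

def classify(model, value):
    return min(model, key=lambda label: abs(model[label] - value))

clean = [(1.0, "benign"), (2.0, "benign"), (9.0, "malicious"), (10.0, "malicious")]
print(classify(train(clean), 7.0))     # malicious: closer to 9.5 than 1.5

# Attacker slips high-valued points mislabeled as benign into training.
poisoned = clean + [(8.0, "benign"), (8.5, "benign")]
print(classify(train(poisoned), 7.0))  # benign: the centroid has shifted
```

This is why the provenance and integrity of training data deserve the same scrutiny as the production systems the resulting model protects.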

Morphisec has previously observed sophisticated attacks by highly skilled and well-resourced threat actors, such as nation-state actors, organized crime groups, and advanced hacking groups. Advances in AI-based technologies can lower the entry barrier for the creation of sophisticated threats by automating the creation of polymorphic and evasive malware.

This isn’t just a concern for the future.  

Leveraging AI isn’t even required to bypass today’s endpoint security solutions. Tactics and techniques to evade detection by EDRs and EPPs are well documented, particularly memory manipulation and fileless malware. According to Picus Security, evasive and in-memory techniques comprise over 30% of the top techniques used in malware seen in the wild.

The other significant concern is the reactive nature of EPPs and EDRs: their detection is often post-breach, and remediation is not fully automated. According to the 2023 IBM Cost of a Data Breach report, organizations without security AI and automation took an average of 322 days to detect and contain a breach; extensive use of security AI and automation reduced this to 214 days, still long after attackers had established persistence and could potentially exfiltrate valuable information.

It is time to consider a different paradigm  

In a never-ending arms race, attackers will leverage AI to generate threats capable of bypassing AI-based protection solutions. However, for any attack to succeed, it must compromise a resource on a target system.

If the target resource doesn’t exist, or is continually being morphed (moved), the chance of successfully attacking the system is reduced by an order of magnitude.

Consider, for example, a highly trained and extremely intelligent sniper attempting to hit a target. If the target is hidden, or continually moving, the sniper’s chances of success are reduced, and repeated shots at incorrect locations can even expose the sniper.

Leveraging AMTD to Stop Generative AI Attacks 

Enter Automated Moving Target Defense (AMTD) systems, designed to prevent sophisticated attacks by morphing (randomizing) system resources, thereby moving the target.

Morphisec’s prevention-first security (powered by AMTD) uses a patented zero-trust at-execution technology to proactively block evasive attacks. As an application loads into memory, the technology morphs and conceals process structures and other system resources, deploying lightweight skeleton traps to deceive attackers. Unable to access the original resources, malicious code fails, stopping the attack and logging it with full forensic details.
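The moving-target idea can be sketched conceptually. The class, names, and decoy mechanism below are purely hypothetical illustrations of the general AMTD principle and are unrelated to Morphisec's actual implementation: the resource is re-mapped to a randomized name at load time, a decoy ("skeleton trap") is left at the well-known location, and any access to the decoy fails and is logged:

```python
# Toy AMTD sketch: instead of exposing a resource under a well-known,
# predictable name, the runtime re-maps it to a random name at load
# time and leaves a decoy at the original location. An attack hardcoded
# against the well-known name hits the trap and is logged; legitimate
# code resolves the current name through the runtime.

import secrets

class MorphingRuntime:
    def __init__(self, resource_name, payload):
        self.real_name = f"{resource_name}-{secrets.token_hex(8)}"  # morphed
        self.store = {self.real_name: payload,
                      resource_name: "DECOY"}   # trap at the old address
        self.alerts = []

    def resolve(self):
        """Legitimate callers ask the runtime for the current name."""
        return self.real_name

    def access(self, name):
        value = self.store.get(name)
        if value == "DECOY":
            self.alerts.append(f"trap hit: {name}")  # forensic log entry
            return None                              # attack fails
        return value

rt = MorphingRuntime("process-table", payload="sensitive-data")
print(rt.access("process-table"))  # attacker: None (hit the trap)
print(rt.access(rt.resolve()))     # legit caller: sensitive-data
print(rt.alerts)                   # forensic trail of the failed attack
```

The key property is that the attacker's knowledge of the system, however sophisticated the AI that produced it, is stale by the time the attack runs.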

Written by Nir Givol, Director of Product Management at Morphisec.
