
The Shift to Continuous AI Model Security and Pen Testing


Aaron Shilts of NetSPI on Security Challenges, Threats of AI Models
Michael Novinson • May 9, 2024

The widespread adoption of AI models has brought new challenges in securing the proprietary data they contain and a paradigm shift in enterprise security. Adversaries are exploiting vulnerabilities in AI models, employing techniques such as "jailbreaking" to extract or manipulate proprietary information, said Aaron Shilts, president and CEO of NetSPI.


Jailbreaking can pose serious threats, particularly in sensitive industries such as healthcare, where patient records and health data must remain confidential, he said.

"There are different techniques that bad actors can use to get the wrong information out and that leads to a data breach. Another example is using an AI model to generate something nefarious that you don't want it to create. For instance, information on weapons or making drugs and things like that," Shilts said ...


Copyright of this story belongs solely to bankinfosecurity.