
Infosec experts divided on AI's potential to assist red teams


CANALYS FORUMS APAC Generative AI is being enthusiastically adopted in almost every field, but infosec experts are divided on whether it is truly helpful for red team raiders who test enterprise systems.

"Red teaming" sees infosec pros simulate attacks to identify vulnerabilities. It's a commonly used tactic that has been adapted to test the workings of generative AI applications by bombarding them with huge numbers of prompts in the hope some produce problematic results that developers can repair.

Red teams wield AI as well as testing it. In May, IBM's red team told The Register it used AI to analyze information across a major tech manufacturer's IT estate, and found a flaw in an HR portal that allowed wide access. Big Blue's red team reckoned AI shortened the time required to find and target that flaw.

Panel prognostications

The recent Canalys APAC Forum in Indonesia convened ...

