AI Hallucinations Create a New Software Supply Chain Threat
SecurityWeek
Package hallucinations represent a common issue within code-generating Large Language Models (LLMs) that opens the door for a new type of supply chain attack, researchers from three US universities warn.
Referred to as ‘slopsquatting’, package hallucination occurs when the code generated by an LLM recommends or references a fictitious package.
Researchers from the University of Texas at San Antonio, the University of Oklahoma, and Virginia Tech warn that threat actors can exploit this by publishing malicious packages under the hallucinated names.
“As other unsuspecting and trusting LLM users are subsequently recommended the same fictitious package in their generated code, they end up downloading the adversary-created malicious package, resulting in a successful compromise,” the academics explain in a recently published research paper (PDF).
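The attack chain suggests a simple, if partial, defensive check: before installing a dependency that an LLM suggests, verify that the name actually resolves to a registered project. The sketch below is illustrative rather than anything proposed in the paper (the `package_exists` helper and the sample package names are hypothetical); it queries PyPI's public JSON API, which returns HTTP 404 for projects that do not exist. Note that mere existence is not proof of safety, since an attacker may already have registered the hallucinated name.

```python
import urllib.request
import urllib.error

# PyPI's public JSON API: responds 404 for unknown project names.
PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"


def package_exists(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI.

    Existence alone is no guarantee of safety: an attacker may have
    already slopsquatted a hallucinated name.
    """
    try:
        with urllib.request.urlopen(PYPI_JSON_URL.format(name=name), timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (rate limiting, outages) need a human look


# Example: vet package names an LLM suggested before running `pip install`.
suggested = ["requests", "definitely-not-a-real-pkg-xyz123"]
for pkg in suggested:
    status = "exists on PyPI" if package_exists(pkg) else "NOT on PyPI (possible hallucination)"
    print(f"{pkg}: {status}")
```

A check like this only catches names that were never registered; it does not distinguish a legitimate project from a malicious package published under a previously hallucinated name, which is exactly the gap the attack exploits.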
Considered a variation of the classical package confusion attack, slopsquatting could lead to the compromise of an entire codebase or software dependency chain, as any code relying on the ...