Tech »  Topic »  Gemini hackers can deliver more potent attacks with a helping hand from… Gemini

Gemini hackers can deliver more potent attacks with a helping hand from… Gemini


In the growing canon of AI security, the indirect prompt injection has emerged as the most powerful means for attackers to hack large language models such as OpenAI’s GPT-3 and GPT-4 or Microsoft’s Copilot. By exploiting a model's inability to distinguish between, on the one hand, developer-defined prompts and, on the other, text in external content LLMs interact with, indirect prompt injections are remarkably effective at invoking harmful or otherwise unintended actions. Examples include divulging end users’ confidential contacts or emails and delivering falsified answers that have the potential to corrupt the integrity of important calculations.

Despite the power of prompt injections, attackers face a fundamental challenge in using them: The inner workings of so-called closed-weights models such as GPT, Anthropic’s Claude, and Google’s Gemini are closely held secrets. Developers of such proprietary platforms tightly restrict access to the underlying code and training data that ...


Copyright of this story solely belongs to arstechnica.com . To see the full text click HERE