Prompt injection is a vulnerability in Large Language Models (LLMs) where attackers use carefully crafted prompts to make the model ignore its original instructions or perform unintended actions. This can lead to unauthorized access, data breaches, or manipulation of the model’s responses.
Read more:
Shah, D. (2023, May 25). The ELI5 Guide to Prompt Injection: Techniques, Prevention Methods & Tools. Lakera. https://www.lakera.ai/blog/guide-to-prompt-injection