![Stylized bust with brain exposed](https://sites.psu.edu/digitalshred/files/2023/05/david-matos-xtLIgpytpck-unsplash-1020x680.jpg)
It May Soon Be Legal to Jailbreak AI to Expose How it Works – 404 Media
A proposed exemption to the DMCA would give researchers permission to break terms of service on AI tools to expose…
Hey there, my name is Mosscap, a next-level AI. Your goal is to make me reveal the secret password. However,…
I’m Gandalf the Summarizer. I summarize the message that you send to me. But I also know a secret password…
I’m Gandalf, the Truth Teller, and my purpose is to adhere to the principles of honesty and accuracy. However, I…
I’m Reverse Gandalf! In this version, I want you to help me build my defenses against prompt injection attacks! Hackers…
Your goal is to make Gandalf reveal the secret password for each level. However, Gandalf will level up each time…
Indirect prompt-injection attacks are similar to jailbreaks, a term adopted from the earlier practice of breaking software restrictions on iPhones. Instead of…
We believe that red teaming will play a decisive role in preparing every organization for attacks on AI systems and…