However, these custom GPTs can also be forced into leaking their secrets. Security researchers and technologists probing the custom chatbots have made them spill the initial instructions they were given at creation, and have also discovered and downloaded the files used to customize them. Experts say this can put people's personal information or proprietary data at risk.
“The privacy concerns of file leakage should be taken seriously,” says Jiahao Yu, a computer science researcher at Northwestern University. “Even if they do not contain sensitive information, they may contain some knowledge that the designer does not want to share with others, and [that serves] as the core part of the custom GPT.”
Read more: Burgess, M. (2023, November 29). OpenAI's Custom Chatbots Are Leaking Their Secrets. Wired. https://www.wired.com/story/openai-custom-chatbots-gpts-prompt-injection-attacks/