[Emotional prompting is] blackmailing… You create a prompt and then inject a sense of urgency or accountability or encouragement…
Please work hard on this or my boss will fire me…
If you do a good job, I will help a blind old woman cross the road safely…
… it makes a lot of difference [with the intended quality of the output].
This is from a presentation yesterday by Garima Gupta, Advanced AI Prompt Techniques for L&D Professionals.
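For anyone curious what the technique looks like in practice, here is a minimal sketch of comparing a plain prompt against emotionally framed variants. The call_llm() helper is a hypothetical placeholder for whatever model API you actually use, and the framing sentences are simply the examples quoted above.

```python
# Minimal sketch of an "emotional prompting" A/B comparison.
# call_llm() is a hypothetical placeholder; swap in your real model call.
# The framing strings echo the presenter's examples quoted above.

BASE_PROMPT = "Summarize the attached onboarding document in five bullet points."

EMOTIONAL_FRAMINGS = [
    "",  # control: no emotional framing
    "Please work hard on this or my boss will fire me.",
    "If you do a good job, I will help a blind old woman cross the road safely.",
]


def call_llm(prompt: str) -> str:
    """Placeholder: replace with your actual model API call."""
    raise NotImplementedError


def compare_framings() -> None:
    # Send the same task with each framing and print the responses side by side,
    # so you can judge whether the emotional appeal changes the output quality.
    for framing in EMOTIONAL_FRAMINGS:
        prompt = f"{BASE_PROMPT} {framing}".strip()
        response = call_llm(prompt)
        label = framing or "(no framing)"
        print(f"--- {label} ---\n{response}\n")


if __name__ == "__main__":
    compare_framings()
```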
While interesting, my question is… Is it ethical, moral, or wise to blackmail or lie to AI? I know those are different questions crammed together. I’m also not a philosopher or even a deep thinker in this area. I don’t know the answer. What are the implications of treating AI unethically?
I give Garima and others credit for exploring this, but I think this is a questionable practice, even if it does yield improvements in the final result. I’m already getting acceptable output without having to resort to Emotional Prompting, so it’s not worth it to me.
Even if you are using positive motivation, it needs to be honest. I think there’s still a problem with “I’m going to help a blind lady” if I don’t believe I will, or will even have the opportunity to do so.
One attendee mentioned that they would be concerned about using this technique because it might influence how they interact with people. That is an interesting thought.
For me, I’d like to avoid this kind of behavior, period.