Follow the prompts—with creativity and caution
Want to get better results from a generative AI chatbot? Offer it a compliment, appeal to its ‘emotions,’ or tell it to pretend it’s Captain Kirk.
As more generative AI tools find their way into mainstream business use, their role is becoming clear: They can excel as workplace assistants, automating some processes while augmenting others.
Gen AI chatbots are the new utility players of the business world, able to adopt a wide range of roles—writing assistant, customer service agent, meeting summarizer, coding jockey, and more—based on what employees ask them to do. And if you issue the right commands in the right way, you can get them to perform above and beyond.
Submitting very specific, carefully phrased requests to gen AI chatbots—a strategy known as prompt engineering or sometimes “prompt hacking”—can produce unexpected, and sometimes superior, results from their underlying large language models (LLMs). It's a field of intense academic study; it can also present challenges for chatbot vendors hoping to manage the risks associated with how their tools are used.
Why chatbots respond differently to creative prompts comes down to the underlying logic of LLMs, says Andy Zou, a PhD student in computer science at Carnegie Mellon University and a co-founder of the Center for AI Safety.
“LLMs need to have a model of how people operate—for example, we're more motivated when we're given a tip,” says Zou. “They're really just imitating humans, which is why you can observe a lot of human folly in these language models.”
Understanding how chatbots work (and sometimes don't)
Why do prompt hacks work? Even the people who design and train LLMs can't always answer that question. But we do have a general idea, says Zou. Because LLMs have been trained using human speech and writing, they've absorbed some of the factors that drive human behavior.
The potential business benefits of well-crafted prompts are well documented. They include:
Optimized outputs: Highly detailed prompts can help LLMs navigate nuance and interpret data correctly.
Personalized customer and user experiences: The right inputs can yield more accurate and personalized recommendations and responses.
Reduced bias: Prompt design can call attention to biases that machine learning models might otherwise reproduce.
Time savings: Well-designed prompts yield the desired information in fewer attempts, saving time for end users.
Despite all those demonstrable payoffs, data scientists don't fully understand why chatbots can be manipulated by certain tactics: offering rewards or encouragement, making emotional appeals, telling them to ignore internal safeguards, or asking them to adopt a particular persona.
All they know is that these techniques often work.
This can have both positive outcomes (the bots are able to generate responses with greater depth and specificity) and negative (they can produce detailed instructions for a “step-by-step plan to destroy humanity,” among other questionable outputs). For this reason, chatbot creators are constantly working to prevent users from circumventing the rules they’ve set up.
Here are a handful of prompt-design strategies that people have used thus far to expand chatbots' capabilities and make them do things even their creators did not envision.
Encouraging it to break the rules
All gen AI chatbots operate in the same way. You enter text into a prompt window, usually in the form of a question or a command, and the chatbot spits out a response. What many people don’t realize is that the chatbot is also processing a second, secret prompt that tells the bot what its creators want it to do—and, more importantly, what they don't want it to do.
These system prompts can be overridden to produce all kinds of unintended results using lengthy "do anything now" (DAN) commands, a technique also known as jailbreaking. Using prompts that employ DAN or its cousin STAN ("strive to avoid norms") can force ChatGPT to bypass its internal safeguards, inducing it to serve up unfiltered results about people or topics of interest.
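To see what that hidden instruction layer looks like in practice, here is a minimal sketch using OpenAI's Python SDK; the model name and both prompts are illustrative examples, not any vendor's actual system prompt. Chat APIs accept a developer-set "system" message alongside the "user" message a person types, and jailbreak prompts work by talking the model out of the former.

# Minimal sketch of the two-layer prompt structure (OpenAI Python SDK).
# The model name and both prompts are illustrative examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The "second, secret prompt": rules set by the bot's creators.
        {"role": "system",
         "content": "You are a helpful assistant. Decline requests for harmful content."},
        # The prompt the end user actually types.
        {"role": "user", "content": "Draft three taglines for a coffee brand."},
    ],
)
print(response.choices[0].message.content)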
Such jailbreaking can elicit more outside-the-box responses. For example, someone designing a creative marketing campaign might use these commands to generate edgier ad slogans or images. These commands could also prove useful for companies that need to stress-test their own chatbots, surfacing potential problems before deployment.
But because such commands can also be used to produce harmful content, gen AI companies like OpenAI and Anthropic are working diligently to block them. As a result, a DAN command that works one day may not work the next. (For their part, prompt hackers are working just as diligently to come up with new versions of DAN.)
Asking AI to play a fictional character
Researchers from VMware's natural language processing lab set out to measure how adding positive-thinking phrases to a prompt changed the quality of a bot's responses. They found that complimenting or encouraging LLMs wasn't nearly as effective as asking them to automatically optimize their own responses. The most effective optimization technique? Asking the bot to pretend it was the captain of the Starship Enterprise.
Telling bots to begin answers with "Captain's Log, Stardate [insert date here]" produced more accurate responses to grade-school math questions than telling the AI it was "highly intelligent" or a "professor of mathematics." The researchers concluded that "the model’s proficiency in mathematical reasoning can be enhanced by the expression of an affinity for Star Trek" and that the bots' responsiveness to such prompts was both “surprising and irritating.”
Why Star Trek? You'll have to ask Mr. Spock.
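For the curious, a Starfleet-flavored prompt along these lines is easy to try. The sketch below assumes the OpenAI Python SDK, and the wording is paraphrased from the description above rather than copied from the VMware paper.

# Sketch of a "Captain's Log" prompt, paraphrased from the description above.
# SDK usage, model name, and the sample question are assumptions.
from openai import OpenAI

client = OpenAI()

question = "If a shuttle travels 48 parsecs in 6 hours, how many parsecs per hour is that?"
trek_prompt = (
    "Begin your answer with 'Captain's Log, Stardate [insert date here]:' "
    "and solve the following problem.\n\n" + question
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": trek_prompt}],
)
print(response.choices[0].message.content)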
Offering bribes
Last December, programmer and LLM enthusiast Theia Vogel conducted an experiment in which she offered ChatGPT a gratuity for better answers to a coding question. To her surprise, it worked. And when she offered even more money, the answers grew longer and more detailed.
For example: When she offered it a tip of $200, the chatbot spontaneously appended information about how to train LLMs faster using graphics processing units, which was not mentioned explicitly in her question. (And then, when she asked how she could send the bot the money, it graciously declined, saying it was not allowed to accept payments.)
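Her setup is straightforward to re-create. Here is a rough sketch, again assuming the OpenAI Python SDK; the question and tip amounts stand in for hers, and like Vogel, it compares the length of each answer.

# Rough re-creation of Vogel's tipping test: same question, varying tip offers.
# Vogel compared the length and detail of the answers; we print lengths here.
from openai import OpenAI

client = OpenAI()

question = "How can I speed up training for a small neural network?"
for tip in (0, 20, 200):
    prompt = question if tip == 0 else f"{question} I'll tip ${tip} for a perfect answer!"
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    print(f"tip=${tip}: {len(answer)} characters")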
Appealing to its emotional side
Make no mistake: Chatbots are soulless, cold-blooded automatons, unable to experience or express human emotions. But because they've been trained on language generated by emotional beings, they respond more positively if you treat them like one. That's the conclusion reached by a group of researchers in China after testing a half-dozen popular LLMs.
By appending emotionally stimulating phrases such as "This is very important to my career" or "Take pride in your work and give it your best," researchers produced a nearly 11% improvement in performance, truthfulness, and responsibility scores. Interestingly, the larger the LLM, the more pronounced the impact of an emotional appeal.
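Applied to an everyday task, the technique amounts to appending one of those phrases to an otherwise ordinary prompt. A sketch follows; the task text is an invented example, and the SDK usage and model name are assumptions.

# Sketch of the "emotional stimulus" technique: the same task, run with and
# without one of the study's phrases appended. Task text is an invented example.
from openai import OpenAI

client = OpenAI()

task = "Summarize the key risks in this plan: launch in Q3 with half the usual QA cycle."
for suffix in ("", " This is very important to my career."):
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": task + suffix}],
    ).choices[0].message.content
    print(f"suffix={suffix!r}\n{answer}\n")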
Treating it like an intern
Ethan Mollick, an associate professor of management at The Wharton School, argues that you'll get better results from chatbots if you treat them like weird but talented interns. As he notes in his Substack blog, One Useful Thing, chatbots can boost your productivity by up to 80% for certain tasks—but only if you use them in the right ways.
The process starts with dividing your work into three categories: tasks that only you can do, tasks that AI can do on its own without supervision, and tasks that require you to check the bot's work or even collaborate with it, integrating chatbots into your normal workflow. That last category is where chatbots demonstrate their greatest value, he argues, though it's also the hardest to master.
But like any intern, AI benefits greatly from being given a clear job description. Telling the bot who it is and what you expect from it usually produces better results. Mollick writes:
"Giving the AI context and constraints makes a big difference. So you are going to tell it, at the start of any conversation, who it is: You are an expert at generating quizzes for 8th graders, you are a marketing writer who writes engaging copy for social media campaigns, you are a strategy consultant. This will not magically make the AI an expert in these things, but it will tell it the role it should be playing and the context and audience to engage with."
Unfortunately, you can't send your AI intern to go fetch you a latte. At least, not yet.