ChatGPT and Microsoft’s new AI chatbot, among others, have been getting a lot of attention lately. The use of these generative AI writing assistants is sparking plenty of hot takes and think pieces, but many everyday users are delighted by the results these tools create.
However, a key concern when using AI writing assistants is that, unlike humans, generative AI doesn’t actually understand the information it’s communicating.
Generative AI uses a kind of pattern recognition: when it creates content, it is simply stringing together the words that are, on a purely mathematical level, the most statistically likely combination given the topic it’s writing about.
This doesn’t come close to actual human comprehension, but it creates a very convincing illusion of it.
But that illusion is shattered when generative AI responds to human-crafted prompts in very strange ways.
In February, The New York Times published an article about an odd conversation its technology columnist had with the Bing chatbot. The transcript of the conversation is an example of AI hallucinations.
What’s an AI Hallucination?
Simply put, an AI hallucination occurs when a generative AI begins making up facts.
For example, Satyen K. Bordoloi reports that he asked ChatGPT who holds the world record for crossing the English Channel on foot, just to see how the AI chatbot would respond.
ChatGPT replied, “The world record for crossing the English Channel entirely on foot is held by Christof Wandratsch of Germany, who completed the crossing in 14 hours and 51 minutes on August 14, 2020.”
The response is absurd, but weirdly compelling in its attention to detail.
Normally, if you ask the generative AI a worrying or strange question, it will shut down that line of questioning by responding with something like “I can’t answer that.”
But sometimes the AI’s flaws show through. Technologists and others who work with generative AI are experimenting with ways to eliminate, or at least reduce, AI hallucinations.
If you’re using generative AI to support your organization’s B2B revenue generation efforts, here are some ways to limit the likelihood of hallucinations occurring in your own work.
Set Time Limits
Limit the length of each session with generative AI. Based on the experiences of early users, Microsoft’s chief technology officer said the company might try to limit how long conversations can be.
The longer you use generative AI in a single session, the more likely it is to hallucinate — which is not what you want when you’ve asked it to help you write sales or marketing copy.
A human should always act as co-pilot for the sake of fact-checking, but the more errors the AI introduces, the more likely it is that one will slip through.
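If your team works with a generative AI model through an API rather than a chat window, the same session-length discipline can be enforced in code. Here is a minimal sketch of a session guard; the class name, turn cap, and time cap are all illustrative assumptions, not limits documented by any vendor.

```python
import time
from dataclasses import dataclass, field


@dataclass
class SessionGuard:
    """Illustrative helper: flag when a generative-AI session has run too long.

    The thresholds below are assumptions for demonstration, not official limits.
    """
    max_turns: int = 15
    max_seconds: float = 20 * 60  # e.g. 20 minutes per session
    turns: int = 0
    started_at: float = field(default_factory=time.monotonic)

    def record_turn(self) -> bool:
        """Count one prompt/response exchange; return True while within limits."""
        self.turns += 1
        elapsed = time.monotonic() - self.started_at
        return self.turns <= self.max_turns and elapsed <= self.max_seconds


# Usage: start a fresh session once the guard says to stop.
guard = SessionGuard(max_turns=3)
print([guard.record_turn() for _ in range(4)])  # the fourth turn exceeds the cap
```

When `record_turn` returns `False`, the safest move is to discard the conversation state and start a new session rather than keep prompting.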
Set Word Limits
When prompting the generative AI, limit the length of the copy you ask for. As with time, the longer the copy is, the more likely it is that the AI will hallucinate.
There’s no official limit, but 500 words seems to be about the maximum for best performance. If you want generative AI to help write a longer piece, try asking it to write sections that you can combine.
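For longer pieces, the section-by-section approach above can be scripted. This sketch turns an outline into one capped prompt per section; the 500-word figure echoes the rule of thumb mentioned here, not a documented model limit, and the wording of the template is an assumption.

```python
def section_prompts(topic: str, outline: list[str], word_limit: int = 500) -> list[str]:
    """Turn an outline into one prompt per section, each capped at word_limit words.

    Illustrative sketch: the default cap reflects the rule of thumb above,
    not an official model limit.
    """
    return [
        f"Write the '{section}' section of a piece about {topic} "
        f"in no more than {word_limit} words."
        for section in outline
    ]


prompts = section_prompts("AI hallucinations", ["Definition", "Examples", "Prevention"])
for p in prompts:
    print(p)
```

You then send each prompt in its own request and stitch the responses together by hand, fact-checking each section as you go.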
Check It
Verify the facts or data the generative AI cites by checking a trusted source. If you can’t find the information anywhere else, be wary.
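Fact-checking itself still needs a human, but the triage step can be automated. As a crude illustrative heuristic (not real verification), this sketch flags any sentence containing digits, since names, dates, and statistics are exactly where hallucinations like the English Channel "record" hide.

```python
import re


def flag_for_fact_check(text: str) -> list[str]:
    """Flag sentences containing digits for manual verification.

    A deliberately crude heuristic sketch, not real fact-checking: anything
    with a number in it gets routed to a human reviewer.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if re.search(r"\d", s)]


copy = ("Our platform serves many industries. "
        "It processed 2.4 million records in 2022. "
        "Customers love the interface.")
print(flag_for_fact_check(copy))
```

Every flagged sentence should be checked against a trusted source before the copy ships; if the claim can’t be found anywhere else, treat it as a likely hallucination.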
Be Specific
Keep prompts specific. The generative AI needs context to give you the best results.
For example, do you need help writing an ad, an email, or a blog post? Include that information in your prompt. Also specify the tone you’d like: you likely need the copy to match your company’s brand voice, so ask for casual, funny, formal, or wildly creative writing as appropriate.
The more specific the prompt is, the better the generative AI responds.
If the prompt is “Write a poem about data intent in the style of Walt Whitman to an audience of digital advertisers,” that’s the result you’ll get.
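The ingredients of a specific prompt, such as format, topic, tone, and audience, can be assembled with a simple template. This is a hypothetical helper for teams prompting programmatically; the function name and phrasing are assumptions, not a required template.

```python
def build_prompt(fmt: str, topic: str, tone: str, audience: str, extras: str = "") -> str:
    """Assemble a specific prompt from format, topic, tone, and audience.

    Illustrative sketch: the field names and sentence structure are
    assumptions, not an official prompt format.
    """
    prompt = f"Write a {fmt} about {topic} in a {tone} tone for {audience}."
    return f"{prompt} {extras}".strip()


print(build_prompt("blog post", "data intent", "formal", "digital advertisers",
                   "Match our brand voice."))
```

Forcing every prompt through a template like this makes it hard to forget the context, such as format, tone, and audience, that generative AI needs to give its best results.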
Try, Try Again
Use multiple prompts if you’re not getting the responses you want. Changing only a few words can make a difference.
You may say, “Write about XYZ for a client.” But if you say, “Generate XYZ for a client,” or “Create XYZ for a client,” you’ll get a different response.
You can experiment with your prompts until you learn what elicits the most helpful responses.
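That trial-and-error loop can be made systematic by generating small variants of one prompt. A minimal sketch, assuming the verb-swapping tactic described above; the verb list is just the examples from this section.

```python
def prompt_variants(task: str,
                    verbs: tuple[str, ...] = ("Write", "Generate", "Create")) -> list[str]:
    """Produce small variations of one prompt by swapping the leading verb.

    Sketch of the trial-and-error loop described above: send each variant
    and keep whichever wording elicits the most helpful response.
    """
    return [f"{verb} {task}" for verb in verbs]


print(prompt_variants("a product description for a client"))
```

Running all variants side by side and comparing the responses is a quick way to learn which phrasings work best for your use case.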
Conclusion
Generative AI is young and continues to evolve. For now, while it’s a great assistant, generative AI absolutely requires human oversight to generate compelling, accurate copy. If you follow these guidelines, you can avoid pitfalls while benefiting from the huge time savings generative AI can provide.