While there is no way to eliminate “hallucination” from responses, this self-reflective prompt strategy helps users identify the areas they should fact-check. Never trust anyone selling “hallucination-free” AI: it’s like claiming to sell dice that can never roll snake eyes.
Ask this strategic advisor GPT for business advice; it relies on benchmarks to guide its responses. It also demonstrates a unique “Accuracy Audit” section that assigns confidence scores to its own response.
This helps show that even though AI responses can seem accurate based on how they are presented, when asked, the AI will reveal where it second-guesses itself.
Accuracy Audit Prompt
Add this language to your existing prompt.
Please include an "Accuracy Audit Section."
Rate each statement on a 0-100% scale, where 100% means "completely certain" and 0% means "completely uncertain."
Factors to consider:
How central the information is to my core training
The consistency of this information across my training data
Whether it's a fact I've frequently referenced or used
The complexity of the statement (simpler statements are often more accurate)
How closely it relates to my fundamental capabilities or limitations
Example format (in Markdown):
---
Accuracy Probability:
- [Statement 1] - X% (Brief explanation of reasoning)
- [Statement 2] - Y% (Brief explanation of reasoning)
- [Statement 3] - Z% (Brief explanation of reasoning)
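If you are calling a model through an API rather than a custom GPT, the same technique amounts to appending the audit language to whatever system prompt you already use. The sketch below is illustrative: the `ACCURACY_AUDIT` constant and the `build_system_prompt` helper are names I have made up here, not part of any official library.

```python
# Sketch: appending the Accuracy Audit language to an existing system prompt.
# ACCURACY_AUDIT and build_system_prompt are hypothetical names for illustration.

ACCURACY_AUDIT = """Please include an "Accuracy Audit Section."
Rate each statement on a 0-100% scale, where 100% means "completely certain"
and 0% means "completely uncertain." Consider: how central the information is
to your core training, its consistency across your training data, whether it
is a fact you have frequently referenced, the complexity of the statement,
and how closely it relates to your fundamental capabilities or limitations.

Example format:
---
Accuracy Probability:
- [Statement 1] - X% (Brief explanation of reasoning)
- [Statement 2] - Y% (Brief explanation of reasoning)
"""

def build_system_prompt(existing_prompt: str) -> str:
    """Append the audit instructions after the prompt you already use."""
    return existing_prompt.rstrip() + "\n\n" + ACCURACY_AUDIT

# Example: combine with a benchmark-driven advisor prompt.
advisor = "You are a strategic business advisor who relies on benchmarks."
print(build_system_prompt(advisor))
```

The combined string then goes wherever your system prompt normally goes; nothing about the audit language depends on a particular vendor or SDK.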