How To Avoid AI Hallucinations With ChatGPT

Tage wrote about how to prevent ChatGPT from hallucinating a couple of months ago. However, I wanted to dive deeply into one specific thing you can do to completely avoid AI hallucinations. Before getting to that, I need to explain a little bit about what we actually do when we create a custom ChatGPT chatbot.

What we do is prompt engineering based on an SQL database with VSS (vector similarity search) capabilities. It could be argued that we jailbreak ChatGPT, but instead of allowing ChatGPT to go completely berserk, we restrict it significantly, so it can only answer questions related to the data found in our SQL database. To understand the process, it helps to create your own custom chatbot, something you can do below.
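To make the idea concrete, here is a minimal sketch of that kind of pipeline, not our actual implementation: embed the user's question, pull the closest snippets from a local SQL database, and instruct ChatGPT to answer only from that retrieved context. The database path, table schema, model names, and system prompt are all illustrative assumptions.

```python
import json
import sqlite3

import numpy as np
from openai import OpenAI  # official OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed(text: str) -> np.ndarray:
    """Turn a piece of text into an embedding vector."""
    response = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(response.data[0].embedding)


def top_snippets(question: str, db_path: str = "knowledge.db", k: int = 3) -> list[str]:
    """Return the k stored snippets whose embeddings are closest to the question."""
    query_vec = embed(question)
    with sqlite3.connect(db_path) as conn:
        # Assumed schema: snippets(content TEXT, embedding TEXT), where the
        # embedding column holds a JSON-encoded list of floats.
        rows = conn.execute("SELECT content, embedding FROM snippets").fetchall()
    scored = []
    for content, embedding_json in rows:
        vec = np.array(json.loads(embedding_json))
        similarity = float(
            np.dot(query_vec, vec) / (np.linalg.norm(query_vec) * np.linalg.norm(vec))
        )
        scored.append((similarity, content))
    scored.sort(reverse=True)
    return [content for _, content in scored[:k]]


def answer(question: str) -> str:
    """Ask ChatGPT, but restrict it to the retrieved context."""
    context = "\n\n".join(top_snippets(question))
    system = (
        "Answer ONLY using the context below. If the answer is not in the "
        "context, reply exactly: 'I don't know.'\n\nContext:\n" + context
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
        temperature=0,  # a low temperature further discourages creative guessing
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(answer("What does your product cost?"))
```

The key point is the system prompt: the model is told to answer only from the supplied context and to admit when the answer is not there, which is what keeps it from inventing facts outside your data.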