Guardrails for an AI Assistant

June 20, 2024

Our AI confidently gave a wrong compliance answer during a live demo. My stomach dropped. There's something uniquely terrifying about watching your product lie with total confidence in front of an audience.

We apologised, went back to the drawing board, and added stricter guardrails and fallback responses. When the model isn't sure, it now says so instead of making something up.
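The fallback behaviour boils down to a confidence gate: only surface the model's answer when its confidence clears a threshold, otherwise return a safe response. A minimal sketch, assuming the model exposes a confidence score (names and the threshold here are illustrative, not our actual implementation):

```python
# Illustrative confidence-gate fallback (hypothetical names and threshold).
FALLBACK = "I'm not certain about that. Let me check and follow up."

def answer_with_fallback(answer: str, confidence: float, threshold: float = 0.8) -> str:
    """Return the model's answer only if confidence clears the threshold."""
    if confidence >= threshold:
        return answer
    # Below the threshold, say so instead of guessing.
    return FALLBACK
```

In practice the confidence signal might come from log-probabilities, a verifier model, or a retrieval check; the gate itself stays this simple.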

AI is powerful but fallible. The lesson was simple: always keep a safety net for when it goes off-script, because it will.