Passive Testing
Passive Testing is a new way to evaluate your AI system — by analyzing interactions that already happened, instead of sending new test inputs.
What is Passive Testing?
With the current approach (active testing), Mibo sends real inputs to your system every time you run a test. This works great, but each run costs API calls and tokens.
Passive Testing flips this around. Instead of generating new interactions, Mibo evaluates ones that already took place — from your production logs, past executions, or stored traces. No new requests are sent, and no extra API calls are made.
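The difference can be sketched in a few lines. This is an illustrative stand-in, not Mibo's actual API (`stored_traces` and `judge` are invented names): a passive run only iterates over interactions that already exist and scores them locally, so nothing is sent to the system under test.

```python
# Hypothetical sketch -- every name here is illustrative, not Mibo's real API.
# A stored trace is simply the input/output pair already produced in production.
stored_traces = [
    {"input": "Reset my password", "output": "Sure, here is the reset link."},
    {"input": "Cancel my order", "output": ""},
]

def judge(trace):
    # Stand-in for an evaluation criterion; real criteria would come
    # from your existing test suite.
    return 1.0 if trace["output"].strip() else 0.0

# Passive testing: no new requests are made; we only score what already happened.
scores = [judge(t) for t in stored_traces]
print(scores)  # [1.0, 0.0]
```

The key property is that the loop never calls your AI system; the cost is evaluation only, not generation.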
Use cases
Post-mortem analysis
Something went wrong in production? Feed the interaction data to Mibo and get a full quality breakdown — including the Failure Matrix, AI Judge scores, and stage-level analysis. Understand exactly where things broke down without having to reproduce the issue.
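A stage-level post-mortem might look like the following sketch. All names here are invented for illustration (the trace schema is an assumption, not Mibo's format): walk the stages of one failed production interaction and report the first stage that broke down.

```python
# Invented trace schema for illustration only -- not Mibo's real format.
failed_trace = {
    "stages": [
        {"name": "retrieval", "ok": True},
        {"name": "generation", "ok": False},
        {"name": "formatting", "ok": True},
    ]
}

# Find the first stage that failed, without re-running the system.
first_failure = next(
    (s["name"] for s in failed_trace["stages"] if not s["ok"]), None
)
print(first_failure)  # generation
```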
Shadow testing
Continuously analyze real user interactions to catch quality issues as they happen. Instead of waiting for complaints, Mibo evaluates every interaction against your test criteria in the background.
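A minimal shadow-testing loop could be sketched like this. The interaction fields and the scoring rule are assumptions made up for the example: each live interaction is scored in the background, and anything below a threshold is flagged for review.

```python
# Illustrative shadow-testing loop; field names and criteria are assumptions,
# not Mibo's API.
from collections import deque

alerts = deque()

def score_interaction(interaction):
    # Stand-in criterion: an empty reply counts as a failure.
    return 0.0 if not interaction["reply"] else 1.0

live_interactions = [
    {"user": "Where is my invoice?", "reply": "You can download it from Billing."},
    {"user": "Delete my account", "reply": ""},
]

# Score every interaction as it arrives; flag failures instead of waiting
# for a user complaint.
for interaction in live_interactions:
    if score_interaction(interaction) < 0.5:
        alerts.append(interaction["user"])

print(list(alerts))  # ['Delete my account']
```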
Regression detection
Compare quality before and after a system update. Run your existing test suite against old interaction data to find exactly when and where a regression was introduced.
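A regression check of this kind reduces to comparing pass rates on either side of the deploy. The log schema and timestamp below are invented for the sketch:

```python
# Hypothetical regression check; field names and timestamps are illustrative.
from statistics import mean

DEPLOY_AT = 100  # stand-in timestamp of the system update

logged = [
    {"ts": 90,  "passed": True},
    {"ts": 95,  "passed": True},
    {"ts": 110, "passed": True},
    {"ts": 120, "passed": False},
]

# Pass rate before vs. after the deploy, computed from stored data only.
before = mean(1.0 if e["passed"] else 0.0 for e in logged if e["ts"] < DEPLOY_AT)
after = mean(1.0 if e["passed"] else 0.0 for e in logged if e["ts"] >= DEPLOY_AT)

regression = after < before
print(before, after, regression)  # 1.0 0.5 True
```

Because the data is historical, you can slide `DEPLOY_AT` across your log to pinpoint exactly when quality dropped.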
How it will work
- Connect your interaction logs or stored traces to Mibo.
- Mibo replays them through its evaluation engine.
- You get the same quality scores, AI Judge evaluations, and Failure Matrix — without touching your system.
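The three steps above can be sketched end to end. Every function and field name here is made up, since Mibo's passive interface is not yet published: load stored traces, replay them through an evaluation function, and collect per-trace verdicts.

```python
# Made-up names throughout -- a sketch of the flow, not Mibo's real interface.

def load_traces():
    # Step 1: in practice this would read production logs or stored traces.
    return [{"id": "t1", "output": "Done."}, {"id": "t2", "output": ""}]

def evaluate(trace):
    # Step 2: stand-in for the evaluation engine
    # (AI Judge, Failure Matrix, stage-level analysis).
    return {"id": trace["id"], "passed": bool(trace["output"])}

# Step 3: the report is produced without touching the live system.
report = [evaluate(t) for t in load_traces()]
print(report)
```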