# Custom API
Choose Custom API if your AI system has its own URL that accepts inputs and returns responses, and it’s not built on Flowise or n8n. This works with any endpoint (whether it’s built with LangChain, your own code, or any other framework).
## Setting up your connection

1. **Add your endpoint**

   Go to your project, click **Add Platform**, and select **Custom API**.

2. **Enter the endpooint URL**

   Paste the URL where your system receives inputs. This is the address Mibo will send test messages to.

3. **Choose the request method**

   Select how inputs should be sent, usually `POST`. If you're not sure, `POST` is the right choice for most systems.

4. **Set up authentication (if needed)**

   If your system requires credentials, choose the type:

   - **Bearer Token**: a single secret key that Mibo sends with each request.
   - **Custom Headers**: name-value pairs for more complex authentication setups.
   - **None**: your system is open and doesn't require authentication.
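For reference, Bearer Token auth conventionally arrives as a standard `Authorization` header, while Custom Headers are whatever pairs you define. The header names and values below are illustrative, not values Mibo prescribes:

```text
# Bearer Token
Authorization: Bearer sk-your-secret-key

# Custom Headers (example pairs)
X-API-Key: your-key
X-Client-Id: your-client-id
```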
5. **Configure the message template**

   This tells Mibo how to format the request body sent to your system. The template uses placeholders that get replaced with real values each time a test runs:

   | Placeholder | Replaced with |
   |---|---|
   | `{input}` | The test case's input text |
   | `{VARIABLE_NAME}` | The value of `VARIABLE_NAME` from the test case's context |
   | `{external_id}` | (Optional) The trace ID for this execution |

   The default template is:

   ```json
   { "input": "{input}" }
   ```

   If your system expects a different structure, customize it here. Example: if your template is:

   ```json
   { "question": "{input}", "userId": "{USER_ID}" }
   ```

   and your test case has this context:

   ```json
   { "USER_ID": "usr-123" }
   ```

   then Mibo sends this request body when the test runs:

   ```json
   { "question": "What is the capital of France?", "userId": "usr-123" }
   ```

   See context variables to learn how to define variables in your test cases.
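To make the substitution behavior concrete, here is a minimal sketch of how placeholder rendering could work. This is an illustration of the rules described above, not Mibo's actual implementation (note, for instance, that it does not escape quotes inside the input):

```python
import re

def render_template(template: str, input_text: str, context: dict) -> str:
    """Replace {input} and {VARIABLE_NAME} placeholders in a message template.

    Simplified sketch of the substitution described above; Mibo's real
    rendering logic may differ.
    """
    values = {"input": input_text, **context}

    def substitute(match: re.Match) -> str:
        key = match.group(1)
        # Leave unknown placeholders (e.g. {external_id}) untouched.
        return str(values.get(key, match.group(0)))

    return re.sub(r"\{(\w+)\}", substitute, template)

template = '{"question": "{input}", "userId": "{USER_ID}"}'
body = render_template(template, "What is the capital of France?", {"USER_ID": "usr-123"})
print(body)  # {"question": "What is the capital of France?", "userId": "usr-123"}
```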
6. **Configure response parsing**

   Tell Mibo where to find the text reply in the response. Use dot notation to navigate into the JSON structure:

   | API response | Path | Result |
   |---|---|---|
   | `{"text": "Hello"}` | `text` | `Hello` |
   | `{"data": {"message": "Hello"}}` | `data.message` | `Hello` |
   | `{"choices": [{"message": {"content": "Hello"}}]}` | `choices.0.message.content` | `Hello` |

   For arrays, use the index number (`0` for the first element). Both `choices.0.text` and `choices[0].text` work.

   If you leave it empty, Mibo auto-detects by looking for common keys: `text`, `message`, `content`, `output`, `answer`, `response`.
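A minimal sketch of dot-notation extraction, including the auto-detect fallback on common keys. This illustrates the parsing rules described above; Mibo's real parser may behave differently in edge cases:

```python
from typing import Any

COMMON_KEYS = ["text", "message", "content", "output", "answer", "response"]

def extract_reply(response: dict, path: str = "") -> Any:
    """Walk a parsed JSON response with a dot-notation path.

    Simplified illustration of the parsing rules above, not Mibo's
    actual implementation.
    """
    if not path:
        # Auto-detect: try common keys at the top level.
        for key in COMMON_KEYS:
            if key in response:
                return response[key]
        return None

    # Normalise bracket indexing so choices[0].text == choices.0.text.
    normalised = path.replace("[", ".").replace("]", "")
    current: Any = response
    for part in normalised.split("."):
        if isinstance(current, list):
            current = current[int(part)]  # numeric parts index into arrays
        else:
            current = current[part]
    return current

print(extract_reply({"choices": [{"message": {"content": "Hello"}}]},
                    "choices.0.message.content"))  # Hello
```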
7. **Test your connection**

   Click **Test Connection** to verify everything works. Mibo will send a quick input and show you the result.
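To make the contract concrete, here is a minimal sketch of the logic a compatible endpoint could implement, assuming the default message template (`{"input": "..."}`) and a `text` reply field that Mibo can auto-detect. The framework around it is up to you; plug a handler like this into any POST route:

```python
import json

def handle_request(raw_body: bytes) -> bytes:
    """Handle one Mibo test request against the default template.

    Expects a JSON body like {"input": "..."} (the default message
    template) and returns a JSON body whose "text" field Mibo can
    auto-detect. Illustrative sketch, not a prescribed implementation.
    """
    payload = json.loads(raw_body)
    question = payload["input"]

    # Your real system would call its model / pipeline here.
    answer = f"You asked: {question}"

    return json.dumps({"text": answer}).encode("utf-8")

reply = handle_request(b'{"input": "ping"}')
print(reply)  # b'{"text": "You asked: ping"}'
```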
## Trace collection

Traces capture what happened inside your system during a test: which tools it called, what data it used, and the steps it took to build a response. Traces are optional, but they enable deeper analysis like the Failure Matrix.
You have two options for collecting traces:
### Included in the response (Inline)

Your system already includes trace data alongside its answer. You just tell Mibo where to find it in the response by specifying the path to the trace field.
This is the simplest option if your system supports it.
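For example, an inline response might bundle the answer and the trace in one body. The `trace` field name and step shape here are illustrative; point Mibo's trace data path at whatever field your system actually uses:

```json
{
  "text": "Paris is the capital of France.",
  "trace": {
    "steps": [
      { "tool": "search", "input": "capital of France" },
      { "tool": "answer", "output": "Paris" }
    ]
  }
}
```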
### Sent separately (Push)

Your system sends trace data to Mibo separately, after responding to the input. Mibo waits for it to arrive before completing the evaluation.
This works well for systems where trace data is generated asynchronously or by a separate service. You can configure how long Mibo should wait for the trace to arrive (Trace Timeout).
Push mode also enables passive testing: your system can send traces to Mibo at any time (not just during active tests), and Mibo will automatically evaluate them against your active test cases.
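A push from your system could look roughly like the sketch below. The payload schema and endpoint URL are assumptions, not Mibo's documented contract; check your project settings for the real trace endpoint and expected shape. The `external_id` value would come from the `{external_id}` placeholder in the message template, which is presumably what lets Mibo match a pushed trace to the right test execution:

```python
import json
import urllib.request

def build_trace_payload(external_id: str, steps: list) -> dict:
    # The "trace"/"steps" shape is an assumption for illustration,
    # not Mibo's documented schema.
    return {"external_id": external_id, "trace": {"steps": steps}}

def push_trace(endpoint_url: str, external_id: str, steps: list) -> None:
    """POST a trace to Mibo after the main response has been sent."""
    body = json.dumps(build_trace_payload(external_id, steps)).encode("utf-8")
    req = urllib.request.Request(
        endpoint_url,  # placeholder: your project's trace endpoint URL
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # in production, add retries/error handling
```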
### Trace modes at a glance

| | Inline | Push |
|---|---|---|
| How trace arrives | Included in the API response | Your system POSTs to Mibo’s trace endpoint |
| Extra config | Trace data path (where in the response to find it) | Trace timeout (how long to wait) |
| Passive testing | No | Yes: incoming traces trigger automatic evaluation |
## Using the Test Architect with Custom API

When creating tests with the Test Architect, you can upload a JSON document describing your endpoint's behavior to help the AI generate more accurate tests. This could be an OpenAPI spec, a sample request/response pair, or any JSON that describes your endpoint's structure.
Attach the JSON file in the Test Architect chat, or paste its contents directly.
## Troubleshooting

- "Connection failed": check that the URL is correct and your system is running. Make sure there are no typos in the address.
- “Authentication error”: verify your token or headers. The most common issue is a missing or expired key.
- “Empty response”: the response parsing path might be wrong. Check what field name your endpoint uses for its text reply.
- “Timeout”: your system is taking too long to respond. Check if it’s under heavy load or if the endpoint is correct.