Trace Format Reference
When you send a trace to Mibo, the `data` field is passed to a platform-specific parser that extracts the response text, node executions, and tool calls. The parser is selected based on the platform type the trace is routed to, not by inspecting the data.
The recommended pattern
Regardless of the platform, the same principle applies: structure your trace data so Mibo can find what it needs to evaluate.
At a minimum, include the response text in a common key:
```json
{
  "data": {
    "text": "The AI responded with this message."
  }
}
```

This is enough for semantic assertions and basic checks like `response_regex` or `json_match`.
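Assembled as an HTTP request, the minimal payload might look like the following Python sketch; the endpoint URL and API key are placeholders, not documented Mibo values:

```python
import json
import urllib.request

# Minimal trace: response text under a common key inside "data".
payload = {"data": {"text": "The AI responded with this message."}}

# NOTE: placeholder URL and key; substitute your real Mibo ingestion
# endpoint and API key before sending.
req = urllib.request.Request(
    "https://example.invalid/api/traces",
    data=json.dumps(payload).encode("utf-8"),
    headers={"x-api-key": "YOUR_API_KEY", "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send
print(req.get_method())
```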
Adding tool calls
Section titled “Adding tool calls”If your AI uses tools or functions, add a tool_calls array:
```json
{
  "data": {
    "text": "I booked your appointment for tomorrow at 9 AM.",
    "tool_calls": [
      {
        "name": "create_booking",
        "arguments": { "date": "2026-03-09", "time": "09:00" }
      }
    ]
  }
}
```
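If your tool calls come from an OpenAI-style chat completion, the mapping into this shape is mechanical. A Python sketch (the `assistant_turn` structure below is hypothetical sample data, and this is an illustration, not an official Mibo adapter):

```python
import json

# Hypothetical assistant turn in OpenAI chat-completions style,
# where tool arguments arrive as a JSON-encoded string.
assistant_turn = {
    "content": "I booked your appointment for tomorrow at 9 AM.",
    "tool_calls": [
        {
            "function": {
                "name": "create_booking",
                "arguments": json.dumps({"date": "2026-03-09", "time": "09:00"}),
            }
        }
    ],
}

# Map it into the trace shape: text plus a flat tool_calls array
# with decoded argument objects.
trace = {
    "data": {
        "text": assistant_turn["content"],
        "tool_calls": [
            {
                "name": tc["function"]["name"],
                "arguments": json.loads(tc["function"]["arguments"]),
            }
            for tc in assistant_turn["tool_calls"]
        ],
    }
}
print(json.dumps(trace, indent=2))
```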
Adding node-level detail (for `node_call` assertions)
If your system has multiple steps or nodes (like a workflow), the best approach is to use each node name as a key mapping to its output data. This structure lets Mibo match nodes by name, which is what makes `node_call` assertions work.
```json
{
  "data": {
    "Fetch Data": {
      "output": { "items": 42 },
      "type": "http-request",
      "status": "success"
    },
    "AI Agent": {
      "output": { "text": "Based on the 42 items found..." },
      "type": "ai-agent",
      "status": "success",
      "tools_called": [
        {
          "name": "summarize",
          "input": { "count": 42 },
          "output": { "summary": "..." }
        }
      ]
    }
  }
}
```

With this structure, you can write assertions like:
```json
{
  "target": "node_call",
  "condition": "MUST_CALL",
  "expected_name": "AI Agent",
  "expected_arguments": {
    "text": { "matcher": "contains", "value": "42 items" }
  }
}
```

The node names in your trace (`"Fetch Data"`, `"AI Agent"`) are the same names you reference in `expected_name`.
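Mibo evaluates these assertions server-side, but the matcher idea can be sanity-checked locally before you send a trace. A rough Python sketch of a `contains` matcher (an illustration of the concept, not Mibo's actual matcher implementation):

```python
def matches(actual, spec):
    # Illustration only: Mibo's real matcher set and semantics may differ.
    if spec.get("matcher") == "contains":
        return spec["value"] in str(actual)
    return actual == spec.get("value")  # fall back to exact equality

# Node output from the trace, and the expected_arguments from the assertion.
node_output = {"text": "Based on the 42 items found..."}
expected_arguments = {"text": {"matcher": "contains", "value": "42 items"}}

ok = all(matches(node_output.get(key), spec)
         for key, spec in expected_arguments.items())
print(ok)  # True
```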
Platform-specific formats
Each platform parser has its own expectations for the exact data structure. See the detailed format for your platform:
Common fields (all platforms)
These fields go in the request body alongside `data`, not inside it:
```json
{
  "data": { ... },
  "platformId": "550e8400-e29b-41d4-a716-446655440000",
  "externalId": "your-tracking-id",
  "metadata": {
    "workflowId": "wf-123",
    "chatflowId": "cf-456",
    "environment": "production",
    "version": "1.2.0"
  }
}
```

| Field | Required | Description |
|---|---|---|
| `data` | Yes | The trace payload (format depends on platform type) |
| `platformId` | No | Explicitly route to a platform |
| `externalId` | No | Your own tracking identifier (max 255 chars) |
| `metadata` | No | Extra context stored with the trace. Can include platform identifiers for auto-routing (e.g., `chatflowId`, `workflowId`), environment info, or any other key-value pairs. |
Headers
| Header | Required | Description |
|---|---|---|
| `x-api-key` | Yes | Your Mibo API key |
| `Content-Type` | Yes | `application/json` |
| `Content-Encoding` | No | `gzip` for compressed payloads |
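For large payloads you can gzip the body and set `Content-Encoding: gzip`. A sketch using Python's standard library (the API key is a placeholder):

```python
import gzip
import json

payload = {"data": {"text": "I booked your appointment for tomorrow at 9 AM."}}
body = json.dumps(payload).encode("utf-8")
compressed = gzip.compress(body)

# Headers for a compressed upload; substitute your real API key.
headers = {
    "x-api-key": "YOUR_API_KEY",
    "Content-Type": "application/json",
    "Content-Encoding": "gzip",
}
# POST `compressed` as the raw request body with these headers.
print(f"{len(body)} bytes plain, {len(compressed)} bytes gzipped")
```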
Quick decision guide
Not sure which format to use?
- **Is your system n8n?** Use the Mibo Testing Node. It sends traces automatically in the optimized format. No manual formatting needed.
- **Is your system Flowise?** Send the Flowise prediction response as-is in `data`. Include `agentFlowExecutedData` if you want node-level assertions.
- **Is it anything else?** Connect as Custom API. Put your response text in a common key (`text`, `message`, `output`, etc.) and add `tool_calls` if your AI uses tools.