
Trace Format Reference

When you send a trace to Mibo, the data field is passed to a platform-specific parser that extracts the response text, node executions, and tool calls. The parser is selected based on the platform type the trace is routed to, not by inspecting the data.

Trace data flow: POST /public/traces → stored as trace.data → platform parser (Custom API, Flowise, or n8n) → extracted text, node_calls, tool_calls → assertions evaluated

Regardless of the platform, the same principle applies: structure your trace data so Mibo can find what it needs to evaluate.

At a minimum, include the response text in a common key:

{
  "data": {
    "text": "The AI responded with this message."
  }
}

This is enough for semantic assertions and basic checks like response_regex or json_match.
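As a sketch, the minimal payload above could be built and serialized like this before sending it to POST /public/traces. The helper name is illustrative, not part of any Mibo SDK:

```python
import json

def build_minimal_trace(text: str) -> dict:
    """Build the smallest trace body Mibo can evaluate: just the response text."""
    return {"data": {"text": text}}

payload = build_minimal_trace("The AI responded with this message.")
body = json.dumps(payload)  # this JSON string becomes the POST /public/traces body
```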

If your AI uses tools or functions, add a tool_calls array:

{
  "data": {
    "text": "I booked your appointment for tomorrow at 9 AM.",
    "tool_calls": [
      {
        "name": "create_booking",
        "arguments": {
          "date": "2026-03-09",
          "time": "09:00"
        }
      }
    ]
  }
}
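If your code assembles traces incrementally, a small helper can append tool call records in the shape shown above. This is a sketch; the function name is illustrative:

```python
def add_tool_call(trace: dict, name: str, arguments: dict) -> dict:
    """Append a tool call record to trace['data']['tool_calls'], creating the list if needed."""
    trace["data"].setdefault("tool_calls", []).append(
        {"name": name, "arguments": arguments}
    )
    return trace

trace = {"data": {"text": "I booked your appointment for tomorrow at 9 AM."}}
add_tool_call(trace, "create_booking", {"date": "2026-03-09", "time": "09:00"})
```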

Adding node-level detail (for node_call assertions)


If your system has multiple steps or nodes (like a workflow), the best approach is to use each node name as a key mapping to its output data. This structure lets Mibo match nodes by name, which is what makes node_call assertions work.

{
  "data": {
    "Fetch Data": {
      "output": { "items": 42 },
      "type": "http-request",
      "status": "success"
    },
    "AI Agent": {
      "output": { "text": "Based on the 42 items found..." },
      "type": "ai-agent",
      "status": "success",
      "tools_called": [
        {
          "name": "summarize",
          "input": { "count": 42 },
          "output": { "summary": "..." }
        }
      ]
    }
  }
}

With this structure, you can write assertions like:

{
  "target": "node_call",
  "condition": "MUST_CALL",
  "expected_name": "AI Agent",
  "expected_arguments": { "text": { "matcher": "contains", "value": "42 items" } }
}

The node names in your trace ("Fetch Data", "AI Agent") are the same names you reference in expected_name.
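Mibo's actual assertion engine is internal, but as a rough sketch of how a MUST_CALL assertion with a contains matcher could be evaluated against the name-keyed structure above (function and logic are illustrative only):

```python
def evaluate_node_call(trace_data: dict, assertion: dict) -> bool:
    """Sketch: the expected node name must exist as a key in the trace data,
    and each expected argument must satisfy its matcher against the node's output."""
    node = trace_data.get(assertion["expected_name"])
    if node is None:
        return False  # node never ran, so MUST_CALL fails
    output = node.get("output", {})
    for field, rule in assertion.get("expected_arguments", {}).items():
        value = str(output.get(field, ""))
        if rule["matcher"] == "contains" and rule["value"] not in value:
            return False
    return True

data = {
    "Fetch Data": {"output": {"items": 42}},
    "AI Agent": {"output": {"text": "Based on the 42 items found..."}},
}
assertion = {
    "target": "node_call",
    "condition": "MUST_CALL",
    "expected_name": "AI Agent",
    "expected_arguments": {"text": {"matcher": "contains", "value": "42 items"}},
}
print(evaluate_node_call(data, assertion))  # True
```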

Each platform parser has its own expectations for the exact data structure. See the detailed format page for your platform (Custom API, Flowise, or n8n).

Regardless of platform, the following fields go in the request body alongside data, not inside it:

{
  "data": { ... },
  "platformId": "550e8400-e29b-41d4-a716-446655440000",
  "externalId": "your-tracking-id",
  "metadata": {
    "workflowId": "wf-123",
    "chatflowId": "cf-456",
    "environment": "production",
    "version": "1.2.0"
  }
}

| Field | Required | Description |
| --- | --- | --- |
| data | Yes | The trace payload (format depends on platform type) |
| platformId | No | Explicitly route to a platform |
| externalId | No | Your own tracking identifier (max 255 chars) |
| metadata | No | Extra context stored with the trace. Can include platform identifiers for auto-routing (e.g., chatflowId, workflowId), environment info, or any other key-value pairs. |

| Header | Required | Description |
| --- | --- | --- |
| x-api-key | Yes | Your Mibo API key |
| Content-Type | Yes | application/json |
| Content-Encoding | No | gzip for compressed payloads |
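For large traces, the body can be gzip-compressed with Content-Encoding set accordingly. A minimal sketch of preparing such a request (the API key value is a placeholder, and the helper is illustrative):

```python
import gzip
import json

def compress_trace(payload: dict) -> tuple[bytes, dict]:
    """Gzip-compress a trace body and build the headers for POST /public/traces."""
    headers = {
        "x-api-key": "YOUR_MIBO_API_KEY",  # placeholder: substitute your real key
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",
    }
    body = gzip.compress(json.dumps(payload).encode("utf-8"))
    return body, headers

body, headers = compress_trace({"data": {"text": "The AI responded with this message."}})
```

Any HTTP client can then send `body` with `headers`; the server transparently decompresses it.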

Not sure which format to use?

  1. Is your system n8n?

    Use the Mibo Testing Node. It sends traces automatically in the optimized format. No manual formatting needed.

  2. Is your system Flowise?

    Send the Flowise prediction response as-is in data. Include agentFlowExecutedData if you want node-level assertions.

  3. Is it anything else?

    Connect as Custom API. Put your response text in a common key (text, message, output, etc.) and add tool_calls if your AI uses tools.