Traces: Flowise
The Flowise parser expects the response structure that Flowise’s prediction API returns.
How to send traces
Flowise doesn’t have a dedicated Mibo node, so you send traces via HTTP. Here are the most common approaches:
From your backend
If your application calls the Flowise prediction API, forward the response to Mibo. This is the simplest approach, since your backend already has the response data.
```js
// After calling Flowise prediction API
const flowiseResponse = await fetch(`${flowiseUrl}/api/v1/prediction/${chatflowId}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ question: userMessage })
});
const data = await flowiseResponse.json();

// Send the response as a trace to Mibo
await fetch('https://api.mibo-ai.com/public/traces', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': 'YOUR_MIBO_API_KEY'
  },
  body: JSON.stringify({
    data: data,
    metadata: { chatflowId: chatflowId }
  })
});
```

From Flowise directly (Custom Function)
Add a Custom Function node at the end of your agentflow that sends the conversation output to Mibo after each interaction. This lets you monitor production traffic without modifying your backend.
Step 1: Configure your agent nodes to save output to state
For the Custom Function to access your agent’s response, each agent node in your agentflow needs to write its output to a state variable. In each agent node’s settings, under Update State, add an entry:
| Key | Value |
|---|---|
| `miboTrace` | `{{ output }}` |
This stores the agent’s final output in `$flow.state.miboTrace`, which the Custom Function reads in the next step.
Step 2: Add your Mibo API key as a Flowise Variable
In your Flowise instance, go to Settings > Variables and add:
| Variable | Type | Value |
|---|---|---|
| `MIBO_API_KEY` | Static | Your Mibo API key (from Project > API Keys in the Mibo dashboard) |
Step 3: Add the Custom Function node
Add a Custom Function node and connect it after your last node (e.g., after your final Agent or LLM node). Paste this code:
```js
const API_KEY = $vars.MIBO_API_KEY;

if (API_KEY) {
  const payload = {
    data: {
      text: $flow.state.miboTrace || $input || '',
      question: $flow.input
    },
    externalMetadata: { chatflowId: $flow.chatflowId },
    metadata: {
      chatId: $flow.chatId,
      sessionId: $flow.sessionId,
      timestamp: new Date().toISOString()
    }
  };

  // Fire-and-forget: don't block the flow on the trace request
  axios.post('https://api.mibo-ai.com/public/traces', payload, {
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': API_KEY
    }
  }).catch(() => {});
}

return $flow.state.miboTrace || $input || '';
```

How this works:
- `$flow.state.miboTrace` is the state variable you set up in Step 1; it contains the agent’s response text. This is what Mibo evaluates with semantic, json_match, and regex assertions.
- `$input` is the output from the previous node in the flow. If `miboTrace` is empty (e.g., the agent branch didn’t run), the function falls back to whatever the previous node returned.
- `$flow.input` is the original user question.
- The function sends the trace in the background (fire-and-forget) and returns the original output unchanged, so it won’t affect your users.
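The fallback chain described above can be sketched in isolation. `pickTraceText` is a hypothetical helper name used only for illustration, not part of Flowise or Mibo:

```js
// Sketch of the fallback chain the Custom Function uses to pick the trace text:
// prefer the state variable, then the previous node's output, then an empty string.
function pickTraceText(state, input) {
  return (state && state.miboTrace) || input || '';
}

console.log(pickTraceText({ miboTrace: 'Agent answer' }, 'node output')); // 'Agent answer'
console.log(pickTraceText({}, 'node output')); // 'node output'
console.log(pickTraceText({}, undefined)); // ''
```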
What this approach supports:
| Assertion | Custom Function | Backend forwarding |
|---|---|---|
| `semantic` | Yes | Yes |
| `json_match` | Yes | Yes |
| `response_regex` | Yes | Yes |
| `json_schema` | Yes | Yes |
| `node_call` | No | Yes |
| `token_limit` | No | Yes |
For node_call and token_limit assertions, use the backend forwarding approach instead, which captures the full Flowise response including agentFlowExecutedData.
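Since backend forwarding sends the full Flowise response body, a quick presence check can confirm that a given response will actually give node_call assertions something to match. A minimal sketch; `hasNodeExecutionData` is a hypothetical helper, not part of any Flowise or Mibo API:

```js
// Hypothetical helper: check that a Flowise response carries node execution data
// before forwarding it, so node_call assertions have entries to match against.
function hasNodeExecutionData(flowiseData) {
  return Array.isArray(flowiseData.agentFlowExecutedData) &&
    flowiseData.agentFlowExecutedData.length > 0;
}

const sample = {
  text: 'Done.',
  agentFlowExecutedData: [{ nodeLabel: 'Agent', data: { output: { text: 'Done.' } } }]
};

console.log(hasNodeExecutionData(sample)); // true
console.log(hasNodeExecutionData({ text: 'Done.' })); // false
```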
From a webhook or event handler
If your application uses webhooks or event-driven architecture, trigger the trace POST from your event handler whenever Flowise completes an interaction.
Minimal example
```json
{
  "data": {
    "text": "The AI responded with this message."
  }
}
```

With node execution data
For node_call assertions and tool call validation, include `agentFlowExecutedData`:
```json
{
  "data": {
    "text": "I found flights to Madrid starting at $299.",
    "agentFlowExecutedData": [
      {
        "nodeLabel": "ChatOpenAI",
        "nodeId": "chatOpenAI_0",
        "data": {
          "output": {
            "text": "I found flights to Madrid starting at $299."
          },
          "usedTools": [
            {
              "tool": "search_flights",
              "toolInput": { "origin": "NYC", "destination": "Madrid" }
            }
          ]
        }
      },
      {
        "nodeLabel": "Memory",
        "nodeId": "bufferMemory_0",
        "data": {
          "output": { "chat_history": "..." }
        }
      }
    ]
  }
}
```

Structure:
| Field | Required | Description |
|---|---|---|
| `text` | Yes | The main response text |
| `agentFlowExecutedData` | No | Array of node executions |
| `agentFlowExecutedData[].nodeLabel` | Yes | Node display name (used for node_call matching) |
| `agentFlowExecutedData[].nodeId` | No | Node identifier |
| `agentFlowExecutedData[].data.output` | Yes | Node output (becomes NodeCall.arguments) |
| `agentFlowExecutedData[].data.usedTools` | No | Tools used by this node |
Tool call format (inside `usedTools`):
| Field | Required | Description |
|---|---|---|
| `tool` | Yes | Tool/function name |
| `toolInput` | Yes | Input passed to the tool |
JSON responses
If the `text` field contains a JSON string, Mibo auto-parses it. This means json_match assertions work against the parsed JSON:
```json
{
  "data": {
    "text": "{\"status\": \"success\", \"booking_id\": \"BK-123\"}"
  }
}
```

With this trace, a json_match assertion for `field: "status"` with `expected_value: "success"` would pass.
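One way to produce such a trace from your backend is to serialize structured output into `data.text` before sending; a minimal sketch, assuming your flow returns a plain object:

```js
// Sketch: wrap a structured result as a JSON string in `data.text`,
// so Mibo's auto-parsing makes its fields available to json_match assertions.
const result = { status: 'success', booking_id: 'BK-123' };

const trace = {
  data: { text: JSON.stringify(result) }
};

console.log(trace.data.text); // '{"status":"success","booking_id":"BK-123"}'
```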
Compatible assertions
Section titled “Compatible assertions”| Assertion type | Works? | What it checks |
|---|---|---|
| `semantic` | Yes | Evaluates extracted text |
| `node_call` | Yes | Checks agentFlowExecutedData entries |
| `node_call` + `expected_tool_calls` | Yes | Checks usedTools in node data |
| `node_call` + `expected_arguments` | Yes | Checks node data.output values |
| `json_match` | Yes | Checks fields in data or parsed JSON text |
| `response_regex` | Yes | Matches pattern against text |
| `json_schema` | Yes | Validates response structure |
| `token_limit` | Automatic | Works when the response includes a supported token usage format |
Enabling token_limit assertions
Include token usage in your Flowise trace response at the root level. Flowise with Google Gemini returns it automatically under `usageMetadata`:
```json
{
  "data": {
    "text": "Here is the analysis.",
    "usageMetadata": {
      "promptTokenCount": 200,
      "candidatesTokenCount": 150,
      "totalTokenCount": 350
    }
  }
}
```

For OpenAI-based flows, the `usage` field is used:
```json
{
  "data": {
    "text": "Here is the answer.",
    "usage": {
      "prompt_tokens": 120,
      "completion_tokens": 80,
      "total_tokens": 200
    }
  }
}
```

Full example with curl
```sh
curl -X POST "https://api.mibo-ai.com/public/traces" \
  -H "Content-Type: application/json" \
  -H "x-api-key: mibo_your_api_key" \
  -d '{
    "data": {
      "text": "Your appointment has been booked for tomorrow at 9 AM.",
      "agentFlowExecutedData": [
        {
          "nodeLabel": "Agent",
          "nodeId": "agent_0",
          "data": {
            "output": { "text": "Your appointment has been booked for tomorrow at 9 AM." },
            "usedTools": [
              {
                "tool": "create_booking",
                "toolInput": { "date": "2026-03-09", "time": "09:00" }
              }
            ]
          }
        }
      ]
    },
    "metadata": { "chatflowId": "cf-abc123" }
  }'
```