Testing
Test cases, test runs, expected output validation, batch testing.
Test cases let you define expected inputs and outputs for a process. Run tests individually or in batches to verify that your workflows produce correct results.
Test Cases
A test case captures:
- Input data to feed into a specific input node
- Expected output schema for validation
- Expected file outputs
- Fields to ignore during comparison
Test Case Properties
| Field | Type | Description |
|---|---|---|
| id | UUID | Test case identifier |
| processId | UUID | Process this test belongs to |
| name | string | Descriptive test name |
| description | string | What this test verifies |
| inputNodeId | string | Which input node receives the test data |
| inputData | JSONB | Test input payload |
| expectedOutputSchema | JSONB | JSON Schema that the output must match |
| expectedFiles | JSONB | Expected file metadata |
| ignoreFields | string[] | JSON paths to exclude from validation |
| tags | string[] | Tags for categorization and filtering |
| disabled | bool | Skip this test in batch runs |
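The fields above can be mirrored as a simple record type. A minimal Python sketch (the field names follow the table; the real persistence model is not shown in this document):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestCase:
    """Sketch of the test case properties table, not the actual model."""
    id: str
    processId: str
    name: str
    inputNodeId: str
    inputData: dict
    expectedOutputSchema: dict
    description: str = ""
    expectedFiles: Optional[dict] = None
    ignoreFields: list = field(default_factory=list)
    tags: list = field(default_factory=list)
    disabled: bool = False  # skipped in batch runs when True
```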
Creating a Test Case
API: POST /test-cases/by-process/:process_id
```json
{
  "name": "Standard carbon steel order",
  "description": "Verifies correct quote for a basic carbon steel order",
  "inputNodeId": "input-node-uuid",
  "inputData": {
    "sender": "buyer@steelcustomer.com",
    "subject": "Steel Order Request - Carbon Steel",
    "body": "Hi, we need a quote for 5000 lbs of carbon steel. Standard delivery. Thanks, John Smith",
    "thread_id": "test_thread_123"
  },
  "expectedOutputSchema": {
    "type": "object",
    "properties": {
      "total_price": {
        "type": "number",
        "minimum": 2000,
        "maximum": 2500
      },
      "steel_type": {
        "type": "string",
        "enum": ["carbon"]
      },
      "weight_lbs": {
        "type": "number",
        "enum": [5000]
      }
    },
    "required": ["total_price", "steel_type", "weight_lbs"]
  },
  "ignoreFields": ["_metadata", "estimated_delivery"],
  "tags": ["carbon", "standard", "smoke"]
}
```
Input Data
The inputData field simulates what the input node would produce. For a Gmail input node, provide the email fields (sender, subject, body, thread_id). For a webhook input, provide the JSON payload.
Expected Output Schema
The expectedOutputSchema is a JSON Schema that the process output is validated against. Use it to assert:
- Required fields exist
- Values are within expected ranges
- Types are correct
- Enum values match
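The assertions above can be sketched with a small hand-rolled validator. This is not the platform's implementation; it covers only the subset of JSON Schema used in the examples (required, type, minimum/maximum, enum):

```python
def validate_output(output: dict, schema: dict) -> list:
    """Check a process output against a subset of JSON Schema.
    Returns a list of error strings; empty means the output passed."""
    errors = []
    type_map = {"object": dict, "string": str, "number": (int, float), "boolean": bool}
    # Required fields must exist
    for name in schema.get("required", []):
        if name not in output:
            errors.append(f"missing required field: {name}")
    for name, rules in schema.get("properties", {}).items():
        if name not in output:
            continue
        value = output[name]
        expected = type_map.get(rules.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{name}: expected {rules['type']}, got {type(value).__name__}")
            continue
        # Range and enum checks
        if "minimum" in rules and value < rules["minimum"]:
            errors.append(f"{name}: expected >= {rules['minimum']}, got {value}")
        if "maximum" in rules and value > rules["maximum"]:
            errors.append(f"{name}: expected <= {rules['maximum']}, got {value}")
        if "enum" in rules and value not in rules["enum"]:
            errors.append(f"{name}: expected one of {rules['enum']}, got {value}")
    return errors
```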
Ignore Fields
ignoreFields is a list of JSON paths excluded from validation. Useful for fields that vary between runs:
["_metadata", "timestamp", "execution_id", "message_id"]
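Stripping ignored paths before comparison can be sketched as follows. This assumes simple top-level keys and dotted paths; the platform's actual path syntax may be richer:

```python
import copy

def strip_ignored(output: dict, ignore_fields: list) -> dict:
    """Return a copy of the output with ignored paths removed,
    so volatile fields don't fail schema validation."""
    result = copy.deepcopy(output)
    for path in ignore_fields:
        node = result
        parts = path.split(".")
        # Walk down to the parent of the field to remove
        for key in parts[:-1]:
            node = node.get(key, {}) if isinstance(node, dict) else {}
        if isinstance(node, dict):
            node.pop(parts[-1], None)
    return result
```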
Tags
Tags categorize tests for filtering and selective execution:
["smoke", "carbon-steel", "regression"]
Running Tests
Individual Test
API: POST /test-execution/:test_case_id/run
Triggers a single test case execution. The process runs with the test's input data, and the output is validated against the expected schema.
Batch Testing
API: POST /test-runs/batch
Runs multiple test cases in sequence. Requires a system API key.
```json
{
  "processId": "process-uuid",
  "testCaseIds": ["test-1-uuid", "test-2-uuid", "test-3-uuid"],
  "tags": ["smoke"]
}
```
If testCaseIds is provided, those specific tests run. If tags is provided, all test cases with matching tags run. Disabled test cases are skipped.
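The selection rules can be sketched in a few lines of Python. Each case is assumed to be a dict with "id", "tags", and "disabled" keys; explicit IDs take precedence over tags here, which is an assumption rather than documented behavior:

```python
def select_test_cases(all_cases: list, test_case_ids=None, tags=None) -> list:
    """Apply batch-run selection: filter by explicit IDs or by tag
    overlap, always skipping disabled cases."""
    selected = []
    for case in all_cases:
        if case.get("disabled"):
            continue  # disabled cases never run in a batch
        if test_case_ids is not None:
            if case["id"] in test_case_ids:
                selected.append(case)
        elif tags is not None:
            if set(tags) & set(case.get("tags", [])):
                selected.append(case)
        else:
            selected.append(case)
    return selected
```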
Test Runs
A test run records the result of executing a test case.
Test Run Properties
| Field | Type | Description |
|---|---|---|
| id | UUID | Test run identifier |
| testCaseId | UUID | Which test case was run |
| processId | UUID | Which process was tested |
| executionId | string | Engine execution ID |
| processExecutionId | UUID | Process execution record ID |
| status | string | passed, failed, error, timeout |
| startedAt | timestamp | When the test started |
| completedAt | timestamp | When the test finished |
| durationSeconds | int | Test duration |
| validationResults | JSONB | Schema validation details |
| actualOutput | JSONB | Actual output data |
| schemaErrors | string[] | List of schema validation errors |
| fileComparisonResults | JSONB | File comparison details |
| runByUserId | UUID | Who ran the test |
Test Statuses
| Status | Meaning |
|---|---|
| passed | Output matches expected schema, no errors |
| failed | Output does not match expected schema |
| error | Execution failed before producing output |
| timeout | Execution timed out |
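The precedence among these statuses can be sketched as a small mapping function (an assumption about ordering, since the document does not state it explicitly):

```python
def determine_status(timed_out: bool, execution_failed: bool, schema_errors: list) -> str:
    """Map an execution outcome to one of the four test statuses.
    Timeout and execution errors take precedence, since no output
    was produced to validate."""
    if timed_out:
        return "timeout"
    if execution_failed:
        return "error"
    return "failed" if schema_errors else "passed"
```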
Validation Results
The validationResults field contains detailed schema validation output:
```json
{
  "valid": false,
  "errors": [
    {
      "field": "total_price",
      "message": "expected number >= 2000, got 1500",
      "path": "$.total_price"
    },
    {
      "field": "steel_type",
      "message": "expected one of [carbon], got stainless",
      "path": "$.steel_type"
    }
  ]
}
```
Querying Test Results
List Test Runs
API: GET /test-runs/
Query parameters:
| Param | Type | Description |
|---|---|---|
| processId | UUID | Filter by process |
| testCaseId | UUID | Filter by test case |
| status | string | Filter by status |
| page | int | Page number |
| pageSize | int | Items per page |
Test Statistics
API: GET /test-runs/stats
Returns aggregated statistics:
```json
{
  "total": 50,
  "passed": 42,
  "failed": 5,
  "error": 2,
  "timeout": 1,
  "passRate": 0.84,
  "averageDuration": 12.5
}
```
Single Test Run Detail
API: GET /test-runs/:test_run_id
Returns the full test run with validation results, actual output, and comparison details.
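The aggregate statistics can be recomputed client-side from a list of runs. A sketch, assuming each run record carries "status" and "durationSeconds" as in the test run properties table:

```python
def aggregate_stats(runs: list) -> dict:
    """Rebuild the /test-runs/stats payload from run records."""
    total = len(runs)
    counts = {s: 0 for s in ("passed", "failed", "error", "timeout")}
    for run in runs:
        counts[run["status"]] += 1
    return {
        "total": total,
        **counts,
        # Pass rate and mean duration guard against an empty run list
        "passRate": counts["passed"] / total if total else 0.0,
        "averageDuration": (
            sum(r["durationSeconds"] for r in runs) / total if total else 0.0
        ),
    }
```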
Test Patterns
Smoke Tests
Quick tests that verify the basic flow works:
```json
{
  "name": "Basic flow smoke test",
  "tags": ["smoke"],
  "inputData": { "body": "Simple test input" },
  "expectedOutputSchema": {
    "type": "object",
    "required": ["output"]
  }
}
```
Boundary Tests
Test edge cases and boundary values:
```json
{
  "name": "Zero weight order",
  "tags": ["boundary"],
  "inputData": { "body": "Quote for 0 lbs of steel" },
  "expectedOutputSchema": {
    "type": "object",
    "properties": {
      "total_price": { "type": "number", "enum": [0] }
    }
  }
}
```
Regression Tests
Capture specific bugs that were fixed:
```json
{
  "name": "Regression: stainless steel type extraction",
  "description": "Previously extracted 'stainless steel' as two words instead of steel_type=stainless",
  "tags": ["regression"],
  "inputData": { "body": "Need 1000 lbs of stainless steel" },
  "expectedOutputSchema": {
    "type": "object",
    "properties": {
      "steel_type": { "type": "string", "enum": ["stainless"] }
    }
  }
}
```
File Output Tests
Verify file outputs:
```json
{
  "name": "PDF report generation",
  "tags": ["files"],
  "expectedFiles": {
    "count": 1,
    "files": [
      {
        "namePattern": "report-*.pdf",
        "minSize": 1024
      }
    ]
  }
}
```
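File comparison against an expectedFiles spec can be sketched with glob-style name matching. This assumes each actual file is reported as a dict with "name" and "size" keys; it is not the platform's exact comparison algorithm:

```python
import fnmatch

def check_files(actual_files: list, expected: dict) -> list:
    """Compare actual file outputs against an expectedFiles spec.
    Returns a list of error strings; empty means the files passed."""
    errors = []
    # Overall count check
    if "count" in expected and len(actual_files) != expected["count"]:
        errors.append(f"expected {expected['count']} files, got {len(actual_files)}")
    for spec in expected.get("files", []):
        pattern = spec.get("namePattern", "*")
        matches = [f for f in actual_files if fnmatch.fnmatch(f["name"], pattern)]
        if not matches:
            errors.append(f"no file matching {pattern}")
            continue
        # Size check: at least one matching file must meet the minimum
        if "minSize" in spec and all(f["size"] < spec["minSize"] for f in matches):
            errors.append(f"files matching {pattern} are smaller than {spec['minSize']} bytes")
    return errors
```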