
Execution Engine

How the engine works: graph traversal, node state machine, concurrency, timeouts, error handling, retry.


The execution engine is the runtime that processes workflows. It manages graph traversal, node execution, data propagation, concurrency, timeouts, and error handling.


Execution Types

Process Execution

Event-driven execution triggered by input nodes (Gmail, webhook, schedule). The engine traverses the full process graph and records results.

Function Execution

Callable execution used internally by schematization coupling and script nodes. Functions are smaller graphs with input/output nodes that can be invoked programmatically. Function executions can be nested.


Execution Lifecycle

Trigger fires
    │
    ▼
Create ExecutionContext
    │  ├── Snapshot process graph (nodes + edges)
    │  ├── Initialize all node states to "pending"
    │  ├── Precompute input ancestor cache
    │  └── Validate template references
    │
    ▼
Start timeout monitor
    │
    ▼
startReadyNodes()
    │  └── For each node with all parents complete: start concurrent worker
    │
    ▼
executeNode() [per node, concurrent]
    │  ├── Mark node "running"
    │  ├── Copy input data from parents
    │  ├── Resolve templates in config
    │  ├── Run node function (type-specific logic)
    │  ├── handleNodeSuccess() → store result
    │  └── handleNodeCompletion()
    │       ├── Propagate data to children
    │       ├── Persist node state to DB
    │       └── Start newly ready children
    │
    ▼
All nodes complete or error
    │
    ▼
Record execution result (status, output, duration)
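
The lifecycle above can be sketched as a small loop: every node starts "pending", ready nodes execute, and each completion makes children eligible. This is a sequential sketch with illustrative types and names (`node`, `runAll`); the real engine runs ready nodes in concurrent workers and persists state as it goes.

```go
package main

import "fmt"

// node is an illustrative stand-in for the engine's graph node.
type node struct {
	label   string
	parents []*node
	state   string
}

// ready reports whether a pending node's parents have all completed,
// mirroring the startReadyNodes() check above.
func ready(n *node) bool {
	if n.state != "pending" {
		return false
	}
	for _, p := range n.parents {
		if p.state != "completed" {
			return false
		}
	}
	return true
}

// runAll repeatedly starts ready nodes until no progress is possible,
// returning the order in which nodes completed.
func runAll(nodes []*node) []string {
	var order []string
	for progress := true; progress; {
		progress = false
		for _, n := range nodes {
			if ready(n) {
				n.state = "completed" // executeNode + handleNodeSuccess
				order = append(order, n.label)
				progress = true
			}
		}
	}
	return order
}

func main() {
	in := &node{label: "Email Input", state: "pending"}
	out := &node{label: "Output", state: "pending", parents: []*node{in}}
	fmt.Println(runAll([]*node{in, out})) // [Email Input Output]
}
```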

Node State Machine

Every node progresses through these states:

pending ──→ running ──→ completed
                │
                ├──→ failed
                └──→ skipped
State       Meaning
pending     Waiting for all parent nodes to complete
running     Currently executing
completed   Finished successfully, output data available
failed      Encountered an error
skipped     Skipped because no input data was available, or it's on an inactive conditional branch

State Transitions

A node transitions to running when all of its parent nodes are in a terminal state (completed, skipped, or failed with skip propagation). This ensures data dependencies are satisfied before execution begins.
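
The transition rule can be sketched as a terminal-state check over a node's parents. `NodeState`, `isTerminal`, and `canRun` are illustrative names, not the engine's actual API.

```go
package main

import "fmt"

// NodeState mirrors the node state machine above.
type NodeState string

const (
	Pending   NodeState = "pending"
	Running   NodeState = "running"
	Completed NodeState = "completed"
	Failed    NodeState = "failed"
	Skipped   NodeState = "skipped"
)

// isTerminal reports whether a node has finished executing.
func isTerminal(s NodeState) bool {
	return s == Completed || s == Failed || s == Skipped
}

// canRun reports whether a node may transition pending -> running:
// every parent must be in a terminal state.
func canRun(parents []NodeState) bool {
	for _, p := range parents {
		if !isTerminal(p) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(canRun([]NodeState{Completed, Skipped})) // true
	fmt.Println(canRun([]NodeState{Completed, Running})) // false
}
```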


Concurrency Model

The engine executes independent nodes concurrently:

[Email Input] ──→ [Schematization] ──┬──→ [Script A] ──→ [Output]
                                     └──→ [Script B] ──↗

In this graph, Script A and Script B run in parallel because they share the same parent and have no dependency on each other. The Output node waits for both to complete.
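
A minimal sketch of this fan-out/fan-in pattern in Go: the two scripts run in separate goroutines, and the fan-in stands in for the Output node's wait on both parents. `runParallel` is an illustrative helper, not engine code.

```go
package main

import (
	"fmt"
	"sort"
)

// runParallel starts one goroutine per sibling node and collects their
// completions, the way Output must see both scripts finish.
func runParallel(labels []string) []string {
	results := make(chan string, len(labels))
	for _, label := range labels {
		go func(l string) {
			results <- l // stand-in for the node's type-specific logic
		}(label)
	}
	done := make([]string, 0, len(labels))
	for range labels {
		done = append(done, <-results) // fan-in: wait for every worker
	}
	sort.Strings(done) // arrival order is nondeterministic
	return done
}

func main() {
	fmt.Println(runParallel([]string{"Script A", "Script B"}))
	// [Script A Script B]
}
```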


Data Propagation

When a node completes, its output data is propagated to all children:

  1. The node's result is stored in the global data bucket (keyed by node label).
  2. For each child node:
    • The parent's output data is merged into the child's InputData map.
    • The key is the parent node's label.
    • All ancestor data (not just direct parents) is forwarded for template resolution.
  3. If a child has multiple parents, it waits for all and merges all their outputs.
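
Steps 1 and 2 can be sketched as two map writes keyed by the parent's label. The `Output` alias and `propagate` helper are illustrative stand-ins for the engine's types.

```go
package main

import "fmt"

// Output is a stand-in for a node's result payload.
type Output = map[string]any

// propagate stores a completed node's output in the global bucket and
// merges it into one child's InputData map, both keyed by node label.
func propagate(bucket, childInput map[string]Output, label string, out Output) {
	bucket[label] = out     // step 1: global data bucket
	childInput[label] = out // step 2: child's InputData, same key
}

func main() {
	bucket := map[string]Output{}
	input := map[string]Output{} // InputData of a child with two parents
	propagate(bucket, input, "Email Input", Output{"sender": "buyer@example.com"})
	propagate(bucket, input, "Extract Order", Output{"weight_lbs": 5000})
	fmt.Println(input["Email Input"]["sender"], len(bucket))
}
```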

Data Bucket

The bucket is a map of all completed node outputs during an execution:

JSON
{
  "Email Input": {
    "sender": "buyer@example.com",
    "body": "Order for 5000 lbs carbon steel"
  },
  "Extract Order": {
    "Order Request": {
      "weight_lbs": 5000,
      "steel_type": "carbon"
    }
  },
  "Calculate Quote": {
    "output": {
      "total_price": 2250.00
    }
  }
}

Any downstream node can reference any upstream node's data via templates, regardless of direct edge connections.

File Propagation

Files flow along edges separately from data. Each node's output files are tagged with the node label. Downstream nodes access files via:

input["files"]["Node Label"][0]

Files are URL references (not in-memory), so passing files between nodes is lightweight.
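
The access pattern above can be modeled as nested maps of URL strings; the map shape and the example URL are illustrative assumptions, not the engine's exact types.

```go
package main

import "fmt"

// firstFile mirrors input["files"]["Node Label"][0]: files are grouped
// under the producing node's label, and entries are URL references
// rather than file contents.
func firstFile(input map[string]map[string][]string, label string) string {
	return input["files"][label][0]
}

func main() {
	input := map[string]map[string][]string{
		"files": {
			// Hypothetical storage URL for illustration.
			"Email Input": {"https://storage.example.com/attachment-1.pdf"},
		},
	}
	fmt.Println(firstFile(input, "Email Input"))
}
```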


Template Resolution

Before a node executes, the engine scans its config JSON for template expressions ({{...}}). Each expression is resolved against the data bucket.

Resolution Order

  1. Parse the expression from {{...}}
  2. Check if it's a function call (e.g., uuid(), now("..."))
  3. If not a function, treat as a path reference
  4. Resolve the path against the bucket data
  5. If unresolvable, leave the template as-is (literal string)
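
The five steps can be sketched as one resolver function. This is a simplified model: a dotted path stands in for the engine's `input["Node"]["field"]` syntax, and `now()` returns a fixed stand-in value instead of the real clock.

```go
package main

import (
	"fmt"
	"strings"
)

// resolve applies the resolution order above to one expression that has
// already been stripped of its {{ }} delimiters (step 1).
func resolve(expr string, bucket map[string]any) any {
	expr = strings.TrimSpace(expr)

	// Step 2: function call?
	if strings.HasSuffix(expr, ")") && strings.Contains(expr, "(") {
		if strings.HasPrefix(expr, "now(") {
			return "2024-01-01T00:00:00Z" // stand-in for the clock
		}
		return "{{" + expr + "}}" // unknown function: leave literal
	}

	// Steps 3-4: treat as a path reference and walk the bucket.
	cur := any(bucket)
	for _, part := range strings.Split(expr, ".") {
		m, ok := cur.(map[string]any)
		if !ok {
			return "{{" + expr + "}}" // step 5: unresolvable, keep as-is
		}
		if cur, ok = m[part]; !ok {
			return "{{" + expr + "}}"
		}
	}
	return cur
}

func main() {
	bucket := map[string]any{
		"Email Input": map[string]any{"sender": "buyer@example.com"},
	}
	fmt.Println(resolve("Email Input.sender", bucket)) // buyer@example.com
	fmt.Println(resolve("Missing.path", bucket))       // {{Missing.path}}
}
```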

Type Preservation

When a template is the entire JSON value, the resolved type is preserved:

JSON
{"count": "{{input[\"Node\"][\"count\"]}}"}

If count is the number 42, the result is {"count": 42} (number, not string).

When a template is part of a larger string, the result is always a string:

JSON
{"message": "Count is {{input[\"Node\"][\"count\"]}}"}

Result: {"message": "Count is 42"} (string).
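
The two cases can be sketched with one check: if the whole value is a single template, return the resolved value with its type intact; otherwise substitute into the string. `resolveValue` and the `lookup` callback are illustrative names.

```go
package main

import (
	"fmt"
	"regexp"
)

var tmpl = regexp.MustCompile(`\{\{(.+?)\}\}`)

// resolveValue preserves the resolved type when the entire string is
// one template, and stringifies when the template is embedded.
func resolveValue(s string, lookup func(expr string) any) any {
	if m := tmpl.FindStringSubmatch(s); m != nil && m[0] == s {
		return lookup(m[1]) // whole-value template: keep original type
	}
	return tmpl.ReplaceAllStringFunc(s, func(match string) string {
		expr := tmpl.FindStringSubmatch(match)[1]
		return fmt.Sprint(lookup(expr)) // embedded: always a string
	})
}

func main() {
	lookup := func(string) any { return 42 } // bucket returns the number 42
	whole := resolveValue(`{{input["Node"]["count"]}}`, lookup)
	fmt.Printf("%T %v\n", whole, whole) // int 42
	fmt.Println(resolveValue(`Count is {{input["Node"]["count"]}}`, lookup))
}
```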

See Data Flow and Templates for the complete template reference.


Skip Propagation

Nodes are skipped when:

  1. No input data: The node has no parents, or all of its parents were skipped, and it has no input ancestors.
  2. Inactive conditional branch: A conditional node routes to handle "true", so nodes connected to handle "false" are skipped.
  3. Orphan detection: Nodes with no path from any input node are identified during initialization and skipped.

When a node is skipped:

  • Its status is set to skipped
  • Its children may also be skipped (if all their parents are skipped)
  • Skipped nodes appear in the execution history with no input/output data
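
Rule 1's all-parents-skipped case can be sketched as a check over a node's parents (a partial model: the no-parent and conditional-branch rules are not shown, and the `Node` type is illustrative).

```go
package main

import "fmt"

// Node is an illustrative stand-in for the engine's graph node.
type Node struct {
	Parents []*Node
	State   string
}

// shouldSkip reports whether every parent was skipped, meaning no
// input data can reach this node.
func shouldSkip(n *Node) bool {
	if len(n.Parents) == 0 {
		return false // trigger/input nodes are not covered by this rule
	}
	for _, p := range n.Parents {
		if p.State != "skipped" {
			return false // at least one parent can supply data
		}
	}
	return true
}

func main() {
	a := &Node{State: "skipped"}
	b := &Node{State: "completed"}
	fmt.Println(shouldSkip(&Node{Parents: []*Node{a}}))    // true
	fmt.Println(shouldSkip(&Node{Parents: []*Node{a, b}})) // false
}
```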

Timeout Management

Each execution has a timeout (configurable, with a system default). The timeout is managed by a background monitor that tracks elapsed time.

Wait node exception

The timeout is paused while a wait node is active. This allows processes with wait nodes to pause for extended periods without hitting the timeout.

When a timeout fires:

  1. The execution context is cancelled
  2. All running nodes detect the cancellation and exit
  3. The execution is recorded with status timeout

Error Handling

First Error Cancels

When any node fails, the execution context is cancelled. All other running nodes detect the cancellation and stop. The execution is recorded with status failed.

Exceptions:

  • Loop with continueOnError: Failed iterations don't cancel the loop. The loop continues processing remaining items and records which iterations failed.

Retry

Processes can have a maxRetries setting. When an execution fails:

  1. If retries remain, the execution is automatically rescheduled
  2. The retry uses fresh graph data (picks up any config changes)
  3. Each retry is a separate execution record

Error Webhook

If the process has an error_webhook_url, the engine POSTs error details on failure:

JSON
{
  "process_id": "uuid",
  "execution_id": "exec-id",
  "status": "failed",
  "error": "node 'Script' failed: nil index",
  "error_category": "SCRIPT_ERROR",
  "failed_nodes": [
    {
      "node_id": "uuid",
      "node_name": "Script",
      "error": "nil index",
      "error_category": "SCRIPT_ERROR"
    }
  ],
  "started_at": "...",
  "completed_at": "...",
  "duration_seconds": 3
}

Graph Snapshot

When an execution starts, the engine takes a snapshot of the process graph (nodes, edges, configs). This snapshot is stored in the graph_snapshot column of the execution record.

This ensures that:

  • Editing a process while an execution is running doesn't affect the running execution
  • Historical executions always show the graph as it was at execution time
  • Execution results can be correlated with the exact config that produced them