
Patterns

This document describes common patterns for building workflows in opscotch.

Synthesized Storage Pattern

The Synthesized Storage pattern uses steps to create a virtual storage system where data can be stored and retrieved across workflow executions. This pattern is useful when you need to maintain state between workflow runs or share data between different workflows.

How it works

  1. Create storage step: A step that uses context.setPersistedItem() to store data
  2. Retrieve storage step: A step that uses context.getPersistedItem() to retrieve stored data
  3. Key naming convention: Use consistent naming like storage:{keyName} to organize stored items

Example

{
  "steps": [
    {
      "stepId": "storeData",
      "trigger": {
        "runOnce": true
      },
      "resultsProcessor": {
        "script": "context.setPersistedItem('storage:myData', JSON.stringify(myData))"
      }
    },
    {
      "stepId": "retrieveData",
      "trigger": { "type": "http" },
      "urlGenerator": { "script": "..." },
      "resultsProcessor": {
        "script": "var data = JSON.parse(context.getPersistedItem('storage:myData')); ..."
      }
    }
  ]
}

Use cases

  • Caching API responses
  • Storing configuration between restarts
  • Maintaining counters or aggregations
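As a sketch of the counter use case, the function below is the kind of logic a resultsProcessor script would run. The `context` object here is a minimal in-memory stand-in for opscotch's runtime (an assumption for illustration); only `getPersistedItem` and `setPersistedItem` are taken from this document.

```javascript
// Minimal in-memory stand-in for opscotch's context (illustrative assumption).
var store = {};
var context = {
  getPersistedItem: function (key) {
    return store.hasOwnProperty(key) ? store[key] : null;
  },
  setPersistedItem: function (key, value) {
    store[key] = value;
  }
};

// resultsProcessor-style script: increment a counter persisted across runs.
function incrementCounter() {
  var raw = context.getPersistedItem('storage:runCount');
  var count = raw === null ? 0 : JSON.parse(raw);
  count += 1;
  context.setPersistedItem('storage:runCount', JSON.stringify(count));
  return count;
}
```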

Controller Pattern

The Controller pattern separates workflow logic into distinct roles:

  • Controller step: Makes decisions and orchestrates other steps
  • Worker steps: Perform specific tasks

How it works

  1. Controller step: Uses sendToStep to call worker steps based on conditions
  2. Worker steps: Perform specific operations (API calls, transformations, etc.)
  3. Result handling: Controller collects results and makes final decisions

Example

{
  "steps": [
    {
      "stepId": "controller",
      "trigger": { "type": "http" },
      "resultsProcessor": {
        "script": "
          var data = JSON.parse(context.getBody());
          if (data.type === 'A') {
            context.sendToStep('processTypeA', JSON.stringify(data));
          } else {
            context.sendToStep('processTypeB', JSON.stringify(data));
          }
        "
      }
    },
    {
      "stepId": "processTypeA",
      "resultsProcessor": { "script": "..." }
    },
    {
      "stepId": "processTypeB",
      "resultsProcessor": { "script": "..." }
    }
  ]
}

Multiple Triggers Pattern

A single step can respond to multiple trigger types, allowing flexible workflow activation.

How it works

Configure multiple triggers on a step; when any one of them fires, the step executes.

Example

{
  "stepId": "unifiedProcessor",
  "trigger": {
    "http": { ... },
    "timer": { ... }
  },
  "resultsProcessor": { "script": "..." }
}

Use cases

  • Same processing logic for manual and scheduled execution
  • An HTTP trigger loads data into the step queue while a timer trigger batch-processes items off the queue

Error Handling Pattern

When calling another step via sendToStep, always check for errors before processing the result.

Preferred pattern

{
  "stepId": "callApi",
  "resultsProcessor": {
    "script": "
      var response = context.sendToStep(stepId, body);
      if (response && response.isErrored()) {
        context.log('Step failed: ' + JSON.stringify(response));
        return;
      }

      // Only proceed with response if not errored
      context.sendToStep('processResult', JSON.stringify(response));
    "
  }
}

Key points

  • Always check response.isErrored() first
  • Log errors for debugging
  • Handle error case explicitly before proceeding

Data Property Pattern

Use the data property to pass configuration to processors via the context.getData() or context.getRestrictedDataFromHost(String host) functions. In practice, this can be thought of as "parameter passing" to processors, which makes them reusable and configurable.
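For instance, a processor can read its own data object like this (a sketch; the step id and keys are illustrative, and the context calls are the ones named above):

```json
{
  "stepId": "fetchWithConfig",
  "resultsProcessor": {
    "script": "var cfg = context.getData(); context.log('timeout is ' + cfg.timeout);",
    "data": {
      "timeout": 5000
    }
  }
}
```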

Configuration

The data property is an object that can be set on the following configurations:

  • bootstrap
  • host
  • workflow
  • step
  • processor

Data merging

Data is merged hierarchically with deeper levels taking precedence:

  • Primitives are overwritten - including types (last wins)
  • Objects and arrays are merged (additive)

Data merging flow:

  Data property    → Merged objects
  bootstrap.data   → bootstrap.data
  host.data        → bootstrap.data + host.data
  workflow.data    → bootstrap.data + workflow.data
  step.data        → bootstrap.data + workflow.data + step.data
  processor.data   → bootstrap.data + workflow.data + step.data + processor.data

Merge behavior

When data is merged from multiple levels:

  • Last merged wins: The most specific (deepest) level's value takes precedence
  • Primitives are overwritten: String, number, boolean values at the deeper level replace values from higher levels
  • Objects and arrays are additive: They are merged together, combining their contents rather than replacing

Example: If you have:

// bootstrap.data
{ "config": { "timeout": 5000 }, "tags": ["prod"] }

// step.data
{ "config": { "retries": 3 }, "tags": ["beta"] }

The merged result would be:

{ "config": { "timeout": 5000, "retries": 3 }, "tags": ["prod", "beta"] }
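The merge rules above can be sketched in plain JavaScript. This is an illustration of the documented semantics (primitives overwritten, objects merged recursively, arrays concatenated), not opscotch's actual implementation:

```javascript
// Sketch of the documented merge rules: primitives are overwritten (last wins),
// objects are merged recursively, and arrays are additive (concatenated).
function mergeData(base, override) {
  if (Array.isArray(base) && Array.isArray(override)) {
    return base.concat(override);
  }
  if (base !== null && override !== null &&
      typeof base === 'object' && typeof override === 'object' &&
      !Array.isArray(base) && !Array.isArray(override)) {
    var out = {};
    Object.keys(base).forEach(function (k) { out[k] = base[k]; });
    Object.keys(override).forEach(function (k) {
      out[k] = (k in base) ? mergeData(base[k], override[k]) : override[k];
    });
    return out;
  }
  // Primitives (or mismatched types): the deeper level wins.
  return override;
}

var bootstrapData = { config: { timeout: 5000 }, tags: ['prod'] };
var stepData = { config: { retries: 3 }, tags: ['beta'] };
var merged = mergeData(bootstrapData, stepData);
// merged: { config: { timeout: 5000, retries: 3 }, tags: ['prod', 'beta'] }
```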

Authentication Pattern

Always use the authentication processor for secure HTTP authentication.

An authenticationProcessor runs immediately before each outbound HTTP call that a step makes. It is intended to add secrets, such as tokens, cookies, or Authorization headers, to that outgoing request. It is not used for inbound HTTP requests handled by an http trigger.

Authentication logic should be isolated into dedicated authentication steps. Any step that is executed from an authenticationProcessor must be a scripted-auth step, not a normal scripted step. This ensures the flow runs with AuthenticationJavascriptContext, which can access restricted authentication data and cannot call non-authentication steps.

Configuration

  1. Mark host as authentication host in bootstrap:
{
  "hosts": {
    "secureApi": {
      "authenticationHost": true,
      "host": "https://api.example.com",
      "data": {
        "apiKey": "secret-key-value"
      }
    }
  }
}
  2. Use authenticationProcessor to call a dedicated scripted-auth step:
{
  "steps": [
    {
      "stepId": "callSecure",
      "authenticationProcessor": {
        "script": "context.sendToStep('applySecureApiAuth');"
      },
      "urlGenerator": { "script": "context.setUrl('secureApi', '/data')" },
      "resultsProcessor": { "script": "..." }
    },
    {
      "stepId": "applySecureApiAuth",
      "type": "scripted-auth",
      "resultsProcessor": {
        "resource": "/general/authentication/standard-restricted-data-as-header.js",
        "data": {
          "fromHost": "secureApi",
          "keyOfValue": "apiKey",
          "headerName": "Authorization"
        }
      }
    }
  ]
}

Key points

  • Never put authentication in host headers (not secure)
  • Authentication code should run only in scripted-auth steps reached from authenticationProcessor
  • Authentication processor automatically redacts credentials from logs
  • Only authentication host data is accessible in authentication context
  • Authentication flows may call only other scripted-auth steps
  • Changes made in the authentication flow are for the pending HTTP request and authentication state, and are not visible to non-authentication contexts
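The bundled resource in the example above is referenced by path only. As a loose illustration of what a "restricted data as header" auth script might do, here is a sketch against a mocked context. The mock and the setRequestHeader call are hypothetical, not part of opscotch's documented API; only getRestrictedDataFromHost, getData, and the data keys (fromHost, keyOfValue, headerName) come from this document.

```javascript
// Hypothetical mock of the authentication context (for illustration only).
var appliedHeaders = {};
var context = {
  // getRestrictedDataFromHost is named in this document; the backing map is mocked.
  getRestrictedDataFromHost: function (host) {
    return host === 'secureApi' ? { apiKey: 'secret-key-value' } : {};
  },
  // Returns the processor's data property, as configured in the example above.
  getData: function () {
    return { fromHost: 'secureApi', keyOfValue: 'apiKey', headerName: 'Authorization' };
  },
  // setRequestHeader is a hypothetical stand-in for however the runtime
  // attaches headers to the pending outbound request.
  setRequestHeader: function (name, value) { appliedHeaders[name] = value; }
};

// Sketch of a header-applying auth script driven entirely by the data property.
function applyAuthHeader() {
  var cfg = context.getData();
  var restricted = context.getRestrictedDataFromHost(cfg.fromHost);
  context.setRequestHeader(cfg.headerName, restricted[cfg.keyOfValue]);
}
```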