Working with Workflows
Workflows are the runtime unit that turns configuration, triggers, and processors into a working outcome.
Use this guide to understand how workflows execute, how state and context move between steps, and how to structure processor resources. For reusable solution shapes such as controller steps, authentication flows, or HTTP event normalization, see Patterns.
This page is not an exhaustive introduction. If you are new to opscotch, start with the Getting Started guide first.
Configuration recap
At a high level:
- The bootstrap references a workflow file and defines host and data information
- The workflow file can contain multiple workflows
- A workflow is made up of 1 or more steps
- A step has a series of processors that execute JavaScript to achieve an outcome
- A processor can call other steps to create graph-style execution
- A workflow is started by a step trigger
- Many steps call HTTP endpoints, but this is not required
Configuration immutability
Once loaded, bootstrap and workflow configuration are treated as immutable runtime inputs. This keeps execution predictable and avoids mid-run behavior changes.
Bootstrap configuration immutability
The bootstrap configuration becomes immutable after startup. After the agent loads the bootstrap:
- All host configurations, data properties, and deployment settings are locked
- Changes to the bootstrap file on disk after startup have no effect
- To apply bootstrap changes, the agent must be restarted
Workflow configuration immutability
Workflow definitions are also immutable after each load, but the way a new version is loaded depends on how the workflow is delivered:
| Workflow Type | Update Mechanism |
|---|---|
| Packaged workflows | Configuration updates when a new version of the package is deployed |
| Unpackaged (raw JSON) workflows | Configuration reloads when the workflow file is updated on disk |
Packaged workflows
When workflows are packaged as .oapp files, the configuration is baked into the package. To update a packaged workflow:
- Modify the workflow configuration in the source
- Repackage the application
- Deploy the new package version
Unpackaged (raw JSON) workflows
For workflows loaded directly from JSON files on disk:
- The agent monitors the workflow file for changes
- When the file is updated, the agent reloads the configuration automatically
- No restart is required for unpackaged workflow updates
- Workflows are not reloaded when only a resource changes. To force a workflow reload in that case, make a whitespace change to the workflow file.
This allows workflow configuration changes without interrupting unrelated workflows or restarting the agent.
Workflow/App reload caution
When an app or workflow reloads, its in-memory state is refreshed and any non-persisted state is lost.
This is especially important if one app holds state on behalf of another. A reload clears that state unless you have persisted it explicitly.
Workflow Persistence
Use workflow persistence when state must survive between runs.
By default, persistence files are created in the agent working directory. You can override that path on a workflow step, but it is usually better to set the bootstrap persistenceRoot.
The main consideration is to ensure that the directory used for persistence exists and is writable by the agent.
If you expect workflows to resume after a restart or redeployment, the persistence files must still be available to the agent.
Persistence usage
Use `context.setPersistedItem(String key, String value)` to persist a value. The stored value is a string.
Use `context.getPersistedItem(String key)` to read a persisted value. The returned value is also a string.
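As a sketch, a step can track a "last check date" across runs. The step id, key name, and date handling below are illustrative; only `getPersistedItem` and `setPersistedItem` are the documented API:

```json
{
  "stepId" : "checkForUpdates",
  "resultsProcessor" : {
    "script" : "
      var lastCheck = context.getPersistedItem('lastCheckDate');
      if (!lastCheck) {
        lastCheck = '1970-01-01T00:00:00Z';
      }
      // ... query the API for records modified since lastCheck ...
      context.setPersistedItem('lastCheckDate', new Date().toISOString());
    "
  }
}
```

Because persisted values are strings, serialize structured state with `JSON.stringify` before storing it and parse it on read.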
Fully understand what you are trying to achieve
Before you write a workflow, be clear about the outcome you want and how you would achieve it without opscotch.
A practical starting point is to prove the interactions with tools such as curl, Postman, or your browser's network tools.
Some useful habits:
- Write down the discrete steps you expect the workflow to perform
- Understand how each outbound call authenticates
- Understand how you need to split and iterate data
- Use the `comment` property to describe what the workflow is trying to achieve:

```json
{
  "comment" : [
    "Retrieve leave requests from HR system",
    "Flow:",
    "1. Pull the last check date from persistence and query the api for leaves modified since the last check date",
    "2. For each leave response: filter for leave with the same modified date and date submitted",
    "3. Pull user for the leave and generate the leave message",
    "4. Send to notifying system"
  ]
}
```

- Use the `comment` property to describe what each step is trying to achieve:

```json
{
  "comment" : ["1. Pull the last check date from persistence and query the api for leaves modified since the last check date"]
}
```
Is it feasible?
Before proceeding with implementation, verify that your desired outcome is achievable within opscotch's capabilities. Any solution implemented with opscotch must map back to opscotch's capabilities and limitations — this is not negotiable.
How will the workflow be started?
The workflow must be triggered by one of the supported trigger types defined in the Step trigger configuration:
| Trigger Type | Use Case |
|---|---|
| `timer` | Scheduled recurring execution at fixed intervals |
| `http` | HTTP request initiates the workflow |
| `tcp` | TCP message initiates the workflow |
| `fileWatcher` | File system changes trigger execution |
| `runOnce` | Execute once at startup |
| `deploymentAccess` | Called from another deployment |
Feasibility check: Can your workflow's triggering requirement be mapped to one of these trigger types? If not, opscotch cannot run your workflow.
What inputs are required?
Data must flow into the workflow through one of these mechanisms:
| Input Mechanism | Description |
|---|---|
| Trigger input | Data passed via HTTP request body/query, TCP message, or file content |
| `context.files()` | File content read from configured file access permissions |
| HTTP calls | Data fetched from external systems during workflow execution |
| Bootstrap/host data | Static data configured in the bootstrap file |
Feasibility check: For each piece of data your workflow needs:
- Identify where it comes from
- Verify the input mechanism is available (file permissions configured, host access granted, etc.)
- If the data requires user interaction at runtime, consider if it can be pre-configured or fetched programmatically
What outputs are required?
Define how results are produced:
| Output Mechanism | Description |
|---|---|
| HTTP response | Return data via the HTTP trigger response |
| `context.sendMetric()` | Emit metrics for collection/alerting |
| `context.log()` | Write to workflow logs |
| OTEL telemetry | Emit traces via OpenTelemetry |
| HTTP call | Send data to external systems |
| `context.files()` | Write data to the configured file system |
Feasibility check: Can your required outputs be mapped to one or more of these mechanisms? If you need to produce outputs not in this list (e.g., push notifications to arbitrary systems), ensure the target is reachable via HTTP or file write.
What capabilities are required?
Processors run in a restricted JavaScript environment with specific constraints:
| Capability | Available? |
|---|---|
| Plain JavaScript | Yes - no imports, no Node.js, vanilla JS only |
| `context` API | Yes - see JavascriptContext |
| HTTP requests | Yes - via configured hosts |
| File operations | Yes - via configured file access |
| Crypto operations | Yes - via `context.crypto()` |
| External libraries | No - no `require()`, `import`, or Node.js APIs |
| Node.js built-ins | No - no `fs`, `path`, or `crypto` module access |
| Async JavaScript | No - no `await`, `Promise`, `setTimeout`, `setInterval`, or callbacks |
Important: Within a single processor, JavaScript runs synchronously to completion without yielding. However, opscotch runs processors on a global event loop, so you can simulate JavaScript async patterns by using sendToStep to yield control back to the event loop and continue processing in subsequent steps.
Feasibility check: For each processing requirement:
- Can it be expressed in plain JavaScript?
- Can it be accomplished using the `context` API methods?
- If external services are needed, are they configured as hosts in the bootstrap?
Common Infeasibility Patterns
If your answer to any of these questions is "no", the outcome may not be feasible:
- No suitable trigger: Your use case requires event types not supported
- No input path: Required data cannot be obtained via available mechanisms
- No output path: Results cannot be delivered to required destinations
- Requires libraries or Node.js: Solution depends on npm packages, `require()`/`import`, or Node.js built-in modules like `fs`, `path`, or `crypto`
- Requires async JavaScript: Solution depends on `await`, `Promise`, `setTimeout`, `setInterval`, or callbacks
When in Doubt
If feasibility is unclear, prototype the critical path first:
- Configure minimal bootstrap with required hosts/permissions
- Create a test workflow that fetches one input and produces one output
- Verify the mechanism works before building the full solution
Using processor resources
Processor code can live in files called resources.
When packaging, set resourceDirs to the directory roots that contain those files.
Processors can take a single `resource` or `script` property. If both are present, `script` overrides `resource`.
- `script` lets you write JavaScript directly in the processor:

```json
{
  "urlGenerator" : {
    "script" : "console.log('hello')"
  }
}
```

- `resource` lets you keep the code in a file that is loaded into the `script` property at package time:

```json
{
  "urlGenerator" : {
    "resource" : "myProcessor.js"
  }
}
```
Because resource is a file, you can reuse it on multiple processors.
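For example, two steps can point a processor at the same file. The step ids and the resource file name below are illustrative, and the steps are abbreviated:

```json
{
  "steps" : [
    {
      "stepId" : "fetchUsers",
      "urlGenerator" : { "resource" : "shared-url-generator.js" }
    },
    {
      "stepId" : "fetchGroups",
      "urlGenerator" : { "resource" : "shared-url-generator.js" }
    }
  ]
}
```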
For guidance on parameterizing reusable processors, see Data Property Pattern. For schema-first resource authoring, see Resource documentation with doc.
Step processor order
The processors on a step do not all run every time. Which ones run depends on the step type and whether the step is making an HTTP call.
Standard step order
For a normal scripted step, the processors run in this order:
1. `urlGenerator`
2. `payloadGenerator`
3. `authenticationProcessor`
4. HTTP call
5. `resultsProcessor`
If the step defines HTTP status handling for the returned status code, that status-specific handler runs instead of the resultsProcessor.
If the HTTP call cannot be made because the connection fails, httpConnectFailedProcessor runs instead of resultsProcessor.
Split and aggregate step order
For a scripted-split-aggregate step, the order is:
1. `splitGenerator`
2. For each item produced by `splitGenerator`, run `urlGenerator`
3. For each item, run `payloadGenerator`
4. For each item, run `authenticationProcessor`
5. For each item, make the HTTP call
6. For each item, run `itemResultProcessor`
7. After all items have been processed, run `resultsProcessor`
The resultsProcessor for a split step receives the collected item results as a JSON array string.
If a status-specific HTTP handler is defined for an item response, that handler runs for that item instead of itemResultProcessor.
If an item's HTTP connection fails, httpConnectFailedProcessor runs for that item.
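Because the collected item results arrive as a JSON array string, a split step's `resultsProcessor` typically starts by parsing the body. This is a sketch; the logging is illustrative:

```json
{
  "resultsProcessor" : {
    "script" : "
      var itemResults = JSON.parse(context.getBody());
      context.log('processed ' + itemResults.length + ' items');
    "
  }
}
```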
When each processor runs
| Processor | When it runs | Requirements |
|---|---|---|
| `splitGenerator` | Runs first on scripted-split-aggregate steps | Only used on scripted-split-aggregate steps |
| `urlGenerator` | Runs before any HTTP call | Required if the step will make an HTTP call |
| `payloadGenerator` | Runs after `urlGenerator` and before authentication | Only runs if it is defined. Requires `urlGenerator` |
| `authenticationProcessor` | Runs after `payloadGenerator` and before the HTTP call | Only runs if it is defined. Requires `urlGenerator` |
| `itemResultProcessor` | Runs after each successful item call in a scripted-split-aggregate step | Only used on scripted-split-aggregate steps. Skipped if a status-specific handler matches the response |
| `resultsProcessor` | Runs after a successful call on a normal step, or after all items on a split step | Normally required. If you omit it and rely on status-specific handlers, make sure you have handlers for every status you expect to receive. Skipped when a matching status-specific handler runs |
| `httpConnectFailedProcessor` | Runs when the HTTP request cannot connect or fails before a usable response is available | Only runs if it is defined |
Important behavior to keep in mind
- If there is no `urlGenerator`, there is no HTTP call. In that case the step can still run a `resultsProcessor` directly.
- If `payloadGenerator` is not defined, the outbound HTTP body is cleared before the call is made.
- If you define status-specific HTTP handlers, they take priority over `resultsProcessor` or `itemResultProcessor` for matching status codes.
- A step cannot use `payloadGenerator` or `authenticationProcessor` unless it also has a `urlGenerator`.
Which JavaScript context is used
The JavaScript context object is not always the same. It depends on whether the processor is running as part of normal workflow execution or as part of authentication resolution.
What counts as an authentication flow
An authentication flow means either of these:
- execution inside any step's `authenticationProcessor`
- execution on a step whose `type` is `scripted-auth`
If either of those is true, the processor should be treated as running in an authentication flow.
Default rule
When a processor is running as part of normal step execution, and not in an authentication flow, the context is JavascriptContext.
This applies to:
- `splitGenerator`
- `urlGenerator`
- `payloadGenerator`
- `itemResultProcessor`
- `resultsProcessor`
- `httpConnectFailedProcessor`
Authentication rule
When a processor is running in an authentication flow, the context is AuthenticationJavascriptContext.
This always applies to:
- `authenticationProcessor`
It also applies to any processor on a step with type: "scripted-auth".
Practical lookup
| Situation | Context type |
|---|---|
| Any normal processor execution outside an authentication flow | `JavascriptContext` |
| `authenticationProcessor` | `AuthenticationJavascriptContext` |
| Any processor on a `scripted-auth` step | `AuthenticationJavascriptContext` |
| Any processor execution that is part of an authentication flow | `AuthenticationJavascriptContext` |
What this means when authoring workflows
- Do not assume a processor always gets the same `context` just because it has the same processor name.
- If execution is inside an `authenticationProcessor`, treat the processor as running with `AuthenticationJavascriptContext`.
- If the step type is `scripted-auth`, treat the processor as running with `AuthenticationJavascriptContext`.
- If execution is not part of authentication handling, treat the processor as running with `JavascriptContext`.
Understanding Step Scope and Context
At any moment, workflow execution is scoped to one step. The current body, properties, and step-local state are all interpreted from that step's point of view.
What Data Is Available in a Step
The data available to processors on a step comes from several sources with different scopes and lifetimes:
| Data Source | Description | Scope | Lifetime | Think of it as |
|---|---|---|---|---|
| data property | Configuration merged from bootstrap → workflow → step → processor | Processor on this specific step | Consistent across invocations of the same processor | Static configuration |
| Trigger input | Data from the trigger event (HTTP body, file content, timer tick) | This specific step invocation | Varies per invocation | Event input |
| Context (body/properties) | Data passed between steps via sendToStep | The running workflow execution | Changes as the workflow progresses | Per-run state |
| Step-local data | Data stored via setPersistedItem, queue operations, stepProperties | The specific step | Persists across invocations | Step-owned state |
Understanding Each Data Source
1. data property (merged configuration)
- Defined in bootstrap, workflow, step, or processor configuration
- Merged hierarchically with more specific levels overriding less specific ones
- Consistent for a given processor every time it runs
- Accessed via `context.getData()`
- For merge rules and examples, see Data Property Pattern.
2. Trigger input
- The data that triggered this step's execution
- For HTTP: the request body, headers, method
- For file watcher: the file content or path
- For timer: the tick event (may be empty)
- Only available when the step is invoked by its trigger
3. Context (body and properties)
- Passed explicitly via `context.sendToStep("nextStep", message)`
- Carries the state of the current workflow execution
- Different for each workflow run but consistent within that run
- Accessed via `context.getBody()` and `context.getProperty()`
4. Step-local data
- Persisted data, queue operations, and stepProperties
- Mutations affect future invocations of this step
- Can persist across different workflow executions
- Accessed via `context.getPersistedItem()`, `context.getStepProperties()`, `context.queue()`
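The precedence of the merged `data` property can be pictured with a small sketch. This assumes a simple shallow merge for illustration (the actual merge rules are defined by opscotch; see Data Property Pattern), and all key names here are hypothetical:

```javascript
// Sketch of hierarchical data-property merging (illustrative only).
// Assumes a shallow merge where more specific levels override less
// specific ones: bootstrap -> workflow -> step -> processor.
const bootstrapData = { apiVersion: "v1", pageSize: 50 };
const workflowData  = { pageSize: 100 };
const stepData      = { endpoint: "/leaves" };
const processorData = { pageSize: 25 };

// Later (more specific) sources win on key collisions.
const merged = Object.assign(
  {},
  bootstrapData,
  workflowData,
  stepData,
  processorData
);
// merged: { apiVersion: "v1", pageSize: 25, endpoint: "/leaves" }
```

The processor-level `pageSize` wins over the workflow- and bootstrap-level values, while keys defined only at broader levels remain visible.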
Key Concept: Step Identity
Each step has an identity based on its stepId and the workflow it belongs to. This identity determines:
- What persisted data is associated with it
- What queues belong to it
- What stepProperties are available
When you call sendToStep("anotherStep", message), you pass body and properties into that step's execution context. The receiving step processes that input from its own step scope.
For reusable designs built on top of these mechanics, see the Patterns guide.
Using pre-made processor resources
Processor code can also be reused through files called resources. Store resource files in a resource directory so they are included when the workflow is packaged.
Resources often take parameters in the form of data properties. The expected input should be clear in the comments of the resource file.
The Community Resources repository, using the branch that matches your runtime version, is the official shared repository for pre-made resources. Third parties can also contribute or publish their own resource repositories.
Resource directories and files have no required structure, but they are usually grouped by the solution or problem they support. Resources that are generic or broadly reusable are often kept under a general directory. Examples include:
- `standard-url-generator.js`: construct a basic URL
- `standard-forwarder.js`: forward to another step
- `standard-data-as-body.js`: use a `data` property as the request body
- `standard-json-resultprocessor.js`: encapsulate complex JSON result aggregation
- `standard-restricted-data-as-auth.js`: use host data for authentication
- `standard-restricted-set-cookie-as-auth.js`: use a cookie for authentication
As a convention, omit the resources directory name when referencing the file. Prefer `"resource" : "/my-resource.js"` over `"resource" : "/resources/my-resource.js"`.
Skim a resource file before you adopt it. That helps you understand what it expects, what it changes, and whether it is the right fit for your workflow.
When a resource file is not found, the app workflow load will abort with an error saying that the resource was not found.
Resource documentation with doc
Each resource file should use the doc object to declare its interface and schemas. The doc object is available in the JavaScript runtime and provides both documentation and schema validation.
The doc object
The doc object provides the following methods:
| Method | Description | Lifecycle |
|---|---|---|
| `doc.inSchema(schema)` | Define JSON Schema for input validation | Compiled on load, executed on resource invocation before `run` |
| `doc.dataSchema(schema)` | Define JSON Schema for `data` property validation | Compiled on load, executed on resource invocation before `run` |
| `doc.run(function)` | Define the execution function | On resource invocation |
| `doc.outSchema(schema)` | Define JSON Schema for output validation | Compiled on load, executed on resource invocation after `run` |
| `doc.description(text)` | Description of what the step does | Read on load |
The .run() method is the only required function. It contains the processor code. The other schema and description methods are optional, but they are useful when the processor reads the body, reads data, or sets an outbound body that should be validated or documented.
Schema validation
When schemas are defined with doc.inSchema(), doc.dataSchema(), or doc.outSchema(), the runtime automatically validates against them. Design each schema to accommodate every expected shape. Only mark a field as required if it is present on every valid code path. This provides:
- Automatic validation of required properties and types
- User-friendly error messages when validation fails
- Less need for null checks, type assertions, or ad hoc validation in processor code
Example resource using doc
The .run() method is required - it contains the processor code. All other methods are optional but preferred:
```javascript
doc
  .description("Retrieves items from DynamoDB by key")
  .inSchema({
    type: "array",
    items: { type: "string" },
    description: "List of keys to get"
  })
  .dataSchema({
    required: ["tableName", "keyField", "keyFieldType"],
    properties: {
      tableName: { type: "string" },
      keyField: { type: "string" },
      keyFieldType: { type: "string" }
    }
  })
  .run(() => {
    // Processor logic goes here
    const keys = JSON.parse(context.getBody());
    const tableName = context.getData("tableName");
    const keyField = context.getData("keyField");
    const keyFieldType = context.getData("keyFieldType");
    // ... rest of the logic
  });
```
Using doc ensures your resources are well-documented and automatically validated at runtime.
How to call another step
From any processor a step can be called just like a function.
Calling a step in the same workflow
Use the context method context.sendToStep("nextStepId", "message to send to step") to call another step. This yields control to the global event loop, allowing subsequent steps to run.
```json
{
  "steps" : [
    {
      "stepId" : "stepOne",
      "resultsProcessor" : {
        "processors" : [
          {
            "script" : "console.log('this is stepOne')"
          },
          {
            "script" : "context.sendToStep('stepTwo', 'message')"
          }
        ]
      }
    },
    {
      "stepId" : "stepTwo",
      "resultsProcessor" : {
        "script" : "console.log('this is stepTwo')"
      }
    }
  ]
}
```
When this executes it will:
- print out `this is stepOne`
- call to `stepTwo` (yields to event loop)
- print out `this is stepTwo`
Note on event loops: While individual processors run synchronously to completion, opscotch runs processors on a global event loop. Using sendToStep yields control back to this event loop, allowing other work to proceed. This can be used to simulate async patterns in your workflow design.
Error handling
When calling another step with sendToStep(...), always inspect the returned response before using it.
The full recommended pattern, including how to distinguish user errors from system failures, is documented in Error Handling Pattern.
How can I stop a workflow run with end()
Use context.end() when you want to stop the current step run immediately.
This is useful when:
- a branch has completed all required work and nothing else in the current run should execute
- a processor has already produced the desired output and should not fall through into later logic
- a helper step called with `sendToStep(...)` should finish early and return control to the caller
```json
{
  "stepId" : "router",
  "resultsProcessor" : {
    "script" : "
      var request = JSON.parse(context.getBody());
      if (request.ignore === true) {
        context.log('ignoring request');
        context.end();
      }
      context.sendToStep('processRequest', context.getBody());
    "
  }
}
```
context.end() is different from return:
- `return` exits the current JavaScript function
- `context.end()` stops the current step run immediately
This distinction matters because a processor can have more framework execution around it than the local JavaScript function body. end() is the explicit signal that the current run should stop now.
Using end() with sendToStep(...)
When a step is called with sendToStep(...), context.end() only stops the called step's run. It does not stop the caller automatically.
```json
{
  "stepId" : "caller",
  "resultsProcessor" : {
    "script" : "
      var result = context.sendToStep('worker', context.getBody());
      context.log('worker finished');
      context.sendToStep('nextStep', result.getBody());
    "
  }
}
```

```json
{
  "stepId" : "worker",
  "resultsProcessor" : {
    "script" : "
      if (!context.getBody()) {
        context.end();
      }
      context.setBody('done');
    "
  }
}
```
In this pattern:
- `worker` ends its own run early
- `caller` still receives the returned `JavascriptStateContext`
- `caller` continues unless it also decides to call `context.end()`
This makes end() a useful control-flow pattern for guard clauses, early exits, and helper steps that should terminate cleanly without treating the run as an error.
Calling a step asynchronously (fire and forget)
Within a single processor, opscotch JavaScript runs synchronously. It does NOT support `await`, `Promise`, `setTimeout`, `setInterval`, or callbacks. The code runs to completion without yielding.
However, opscotch runs processors on a global event loop. You can simulate async patterns by using sendToStepAndForget to yield to the event loop - the step will run when capacity is available.
Use context.sendToStepAndForget("stepId", "message") when you want to trigger another step without waiting for it to complete:
```json
{
  "stepId" : "initiator",
  "resultsProcessor" : {
    "script" : "
      console.log('triggering step');
      context.sendToStepAndForget('asyncProcessor', 'work to do');
      console.log('this prints immediately, the called step runs later');
    "
  }
}
```
Key differences between sendToStep and sendToStepAndForget:
| Aspect | sendToStep | sendToStepAndForget |
|---|---|---|
| Execution | Synchronous - calling JavaScript pauses until called step completes | Asynchronous - returns immediately, step runs when capacity available |
| Result | Returns the result of the called step | Returns nothing (void) |
| Use when | You need the result | You don't need the result |
Note: While JavaScript within a processor runs synchronously, using sendToStepAndForget yields to the global event loop, allowing other work to proceed. The step runs when capacity is available.
Calling a step in another deployment
When you have multiple deployments (bootstrap configurations) in the same agent, you can trigger steps in one deployment from another.
Prerequisites
The bootstrap must configure cross-deployment access:
```json
// In the bootstrap file that will make the call:
{
  "allowDeploymentAccess": [
    {
      "id": "appBridge",
      "deploymentId": "remote-deployment-id",
      "access": "call"
    }
  ]
}

// In the bootstrap file that will receive the call:
{
  "allowDeploymentAccess": [
    {
      "id": "appBridge",
      "access": "receive",
      "anyDeployment": false
    }
  ]
}
```
Making the cross-deployment call
Use context.sendToStep("deploymentAccessId", "stepName", "message"):
```json
{
  "stepId" : "callOtherDeployment",
  "resultsProcessor" : {
    "script" : "
      // Call step 'processor' in deployment referenced by 'appBridge'
      var result = context.sendToStep('appBridge', 'processor', 'data to process');
      console.log('got result: ' + result);
    "
  }
}
```
Fire and forget across deployments
Use context.sendToStepAndForget("deploymentAccessId", "stepName", "message") for non-blocking cross-deployment calls:
```json
{
  "stepId" : "notifyOtherDeployment",
  "resultsProcessor" : {
    "script" : "
      context.sendToStepAndForget('appBridge', 'notify', 'notification message');
    "
  }
}
```
Passing headers in cross-deployment calls
Both sendToStep and sendToStepAndForget support passing headers:
```json
{
  "resultsProcessor" : {
    "script" : "
      context.sendToStep('appBridge', 'process', 'data', { 'X-Request-Id': '123' });
    "
  }
}
```
Authentication processing
An authenticationProcessor runs immediately before an outbound HTTP call on that step. Its job is to prepare the outgoing request, usually by obtaining secrets and applying request mutations such as tokens, cookies, or Authorization headers.
For configuration guidance and recommended structure, see Authentication Pattern.
Processor execution contexts
Processor execution uses one of two runtime contexts.
In normal workflow execution, the JavaScript context object is a JavascriptContext.
When execution starts in a step's authenticationProcessor, or in any step reached from that authentication flow via sendToStep(...), the context object is an AuthenticationJavascriptContext.
The difference is primarily about security boundaries and allowed side effects:
- JavascriptContext is the general-purpose processor context used for normal workflow logic.
- AuthenticationJavascriptContext is the restricted context used while preparing authentication for an outbound HTTP call.
- `AuthenticationJavascriptContext` can access authentication-specific capabilities such as restricted host/bootstrap data via `getRestrictedDataFromHost(...)` and authentication step properties via `getAuthenticationPropertiesFromStep(...)` and `setAuthenticationPropertiesOnStep(...)`.
- `AuthenticationJavascriptContext` cannot send metrics and cannot call non-authentication steps. Here, a non-authentication step means any step whose type is not `scripted-auth`, so authentication flows may call only other `scripted-auth` steps.
- Mutations made in the authentication flow are intended for that pending HTTP request and related authentication state. They should not be treated as normal workflow state that is available to other, non-authentication execution contexts.
In practice, use JavascriptContext for normal workflow orchestration and business logic, and use AuthenticationJavascriptContext only for acquiring, caching, and applying authentication material.
How to set a header on all requests to a host
Opscotch lets you define headers in the bootstrap host section. Those headers are applied to every request sent to that host.
For example, to set the Content-Type: application/json header on all requests to myHost:
```json
{
  "myHost" : {
    "host" : "https://example.com",
    "headers" : {
      "Content-Type" : "application/json"
    }
  }
}
```
The comment property
The comment property is a lightweight way to add human-readable notes throughout the workflow configuration.
A comment can be a string:

```json
{
  "comment" : "This is a valid comment"
}
```

or an array of strings:

```json
{
  "comment" : [
    "These are also",
    "valid comments"
  ]
}
```
Comments can be used on:
- workflows
- steps
- processors