
Core application

The Foundations section of the Temporal Developer's guide covers the minimum set of concepts and implementation details needed to build and run a Temporal Application—that is, all the relevant steps to start a Workflow Execution that executes an Activity.

In this section you can find the following:

How to install the Temporal CLI and run a development server

This section describes how to install the Temporal CLI and run a development Temporal Service. The local development Temporal Service comes packaged with the Temporal Web UI.

For information on deploying and running a self-hosted production Temporal Service, see the Self-hosted guide, or sign up for Temporal Cloud and let us run your production Temporal Service for you.

Temporal CLI is a tool for interacting with a Temporal Service from the command line and it includes a distribution of the Temporal Server and Web UI. This local development Temporal Service runs as a single process with zero runtime dependencies and it supports persistence to disk and in-memory mode through SQLite.

Install the Temporal CLI

The Temporal CLI is available on MacOS, Windows, and Linux.

MacOS

How to install the Temporal CLI on Mac OS

Choose one of the following install methods to install the Temporal CLI on MacOS:

Install the Temporal CLI with Homebrew

brew install temporal

Install the Temporal CLI from CDN

  1. Select the platform and architecture needed.
  2. Extract the downloaded archive.
  3. Add the temporal binary to your PATH.

Linux

How to install the Temporal CLI on Linux

Choose one of the following install methods to install the Temporal CLI on Linux:

Install the Temporal CLI with Homebrew

brew install temporal

Install the Temporal CLI from CDN

  1. Select the platform and architecture needed.
  2. Extract the downloaded archive.
  3. Add the temporal binary to your PATH.

Windows

How to install the Temporal CLI on Windows

Follow these instructions to install the Temporal CLI on Windows:

Install the Temporal CLI from CDN

  1. Select the platform and architecture needed and download the binary.
  2. Extract the downloaded archive.
  3. Add the temporal.exe binary to your PATH.

Start the Temporal Development Server

Start the Temporal Development Server by using the server start-dev command.

temporal server start-dev

This command automatically starts the Web UI, creates the default Namespace, and uses an in-memory database.

The Temporal Server should be available on localhost:7233 and the Temporal Web UI should be accessible at http://localhost:8233.

The server's startup configuration can be customized using command line options. For a full list of options, run:

temporal server start-dev --help
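As an illustrative example (flag names taken from the Temporal CLI help output; verify against --help for your version), you can persist state to disk and move the Web UI to a different port:

```shell
# Persist the development server's state to a SQLite file on disk
# and serve the Web UI on port 8080 instead of the default 8233.
temporal server start-dev --db-filename temporal.db --ui-port 8080
```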

How to install a Temporal SDK

A Temporal SDK provides a framework for Temporal Application development.

An SDK provides you with the following:

  • A Temporal Client to communicate with a Temporal Service.
  • APIs to develop Workflow Definitions.
  • APIs to create and manage Worker Processes.
  • APIs to author Activity Definitions.

NPM

This project requires Node.js 16.15 or later.

Create a project

npx @temporalio/create@latest ./your-app

Add to an existing project

npm install @temporalio/client @temporalio/worker @temporalio/workflow @temporalio/activity @temporalio/common
note

The TypeScript SDK is designed with TypeScript-first developer experience in mind, but it works equally well with JavaScript.

How to find the TypeScript SDK API reference

The Temporal TypeScript SDK API reference is published to typescript.temporal.io.

Where are SDK-specific code examples?

You can find a complete list of executable code samples in Temporal's GitHub repository.

Additionally, several of the Tutorials are backed by a fully executable template application.

The TypeScript samples library on GitHub demonstrates various capabilities of Temporal.

Where can I find video demos?

Temporal TypeScript YouTube playlist.

How to import an ECMAScript module

The JavaScript ecosystem is quickly moving toward publishing ECMAScript modules (ESM) instead of CommonJS modules. For example, node-fetch@3 is ESM, but node-fetch@2 is CommonJS.

For more information about importing a pure ESM dependency, see our Fetch ESM sample for the necessary configuration changes:

  • package.json must include the "type": "module" attribute.
  • tsconfig.json should output in esnext format.
  • Imports must include the .js file extension.

Linting and types in TypeScript

If you started your project with @temporalio/create, you already have our recommended TypeScript and ESLint configurations.

If you incrementally added Temporal to an existing app, we do recommend setting up linting and types because they help catch bugs well before you ship them to production, and they improve your development feedback loop. Take a look at our recommended .eslintrc file and tweak to suit your needs.

How to connect a Temporal Client to a Temporal Service

A Temporal Client enables you to communicate with the Temporal Service. Communication with a Temporal Service includes, but isn't limited to, the following:

  • Starting Workflow Executions.
  • Sending Signals to Workflow Executions.
  • Sending Queries to Workflow Executions.
  • Getting the results of a Workflow Execution.
  • Providing an Activity Task Token.
caution

A Temporal Client cannot be initialized and used inside a Workflow. However, it is acceptable and common to use a Temporal Client inside an Activity to communicate with a Temporal Service.

When you are running a Temporal Service locally (such as the Temporal CLI development server), the number of connection options you must provide is minimal. Many SDKs default to the local host or IP address and port that the development server uses (127.0.0.1:7233).

Creating a Connection connects to the Temporal Service, and you can pass the Connection instance when creating the Client.

If you omit the Connection and just create a new Client(), it will connect to localhost:7233.

import { Client } from '@temporalio/client';

async function run() {
  const client = new Client();

  // . . .

  await client.connection.close();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});

How to connect to Temporal Cloud

When you connect to Temporal Cloud, you need to provide additional connection and client options that include the following:

  • An address that includes your Cloud Namespace Name and a port number: <Namespace>.<ID>.tmprl.cloud:<port>.
  • mTLS CA certificate.
  • mTLS private key.

For more information about managing and generating client certificates for Temporal Cloud, see How to manage certificates in Temporal Cloud.

For more information about configuring TLS to secure inter- and intra-network communication for a Temporal Service, see Temporal Customization Samples.

Create a Connection with a connectionOptions object that has your Cloud namespace and client certificate.

import { Client, Connection } from '@temporalio/client';
import fs from 'fs-extra';

const { NODE_ENV = 'development' } = process.env;
const isDeployed = ['production', 'staging'].includes(NODE_ENV);

async function run() {
  const cert = await fs.readFile('./path-to/your.pem');
  const key = await fs.readFile('./path-to/your.key');

  let connectionOptions = {};
  if (isDeployed) {
    connectionOptions = {
      address: 'your-namespace.tmprl.cloud:7233',
      tls: {
        clientCertPair: {
          crt: cert,
          key,
        },
      },
    };
  }

  const connection = await Connection.connect(connectionOptions);

  const client = new Client({
    connection,
    namespace: 'your-namespace',
  });

  // . . .

  await client.connection.close();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});

How to develop a basic Workflow

Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a Workflow Definition.

In the Temporal TypeScript SDK programming model, Workflow Definitions are just functions, which can store state and orchestrate Activity Functions. The following code snippet uses proxyActivities to schedule a greet Activity in the system to say hello.

A Workflow Definition can have multiple parameters; however, we recommend using a single object parameter.

import { proxyActivities } from '@temporalio/workflow';
// Only import the Activity types
import type * as activities from './activities';

const { greet } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

type ExampleArgs = {
  name: string;
};

export async function example(
  args: ExampleArgs,
): Promise<{ greeting: string }> {
  const greeting = await greet(args.name);
  return { greeting };
}

How to define Workflow parameters

Temporal Workflows may have any number of custom parameters. However, we strongly recommend that objects are used as parameters, so that the object's individual fields may be altered without breaking the signature of the Workflow. All Workflow Definition parameters must be serializable.

You can define and pass parameters in your Workflow. In this example, you define your arguments in your client.ts file and pass those parameters to workflow.ts through your Workflow function.

Start a Workflow with the parameters that are in the client.ts file. In this example we set the name parameter to Temporal and born to 2019. Then set the Task Queue and Workflow Id.

client.ts

import { example } from './workflows';

...
await client.workflow.start(example, {
  args: [{ name: 'Temporal', born: 2019 }],
  taskQueue: 'your-queue',
  workflowId: 'business-meaningful-id',
});

In workflows.ts, define the type of the parameter that the Workflow function takes. The interface ExampleParam is a name we can now use to describe the requirement in the previous example: it still represents having the two properties, name of type string and born of type number. Then define a function that takes a parameter of type ExampleParam and returns a Promise<string>. The Promise object represents the eventual completion, or failure, of the Workflow function and its resulting value.

interface ExampleParam {
  name: string;
  born: number;
}
export async function example({ name, born }: ExampleParam): Promise<string> {
  return `Hello ${name}, you were born in ${born}.`;
}

How to define Workflow return parameters

Workflow return values must also be serializable. Returning results, returning errors, or throwing exceptions is fairly idiomatic in each language that is supported. However, Temporal APIs that must be used to get the result of a Workflow Execution will only ever receive one of either the result or the error.

To return a value from the Workflow function, use Promise<something>. The Promise represents an asynchronous result that resolves when the Workflow completes.

The following example uses a Promise<string> to eventually return a name and born parameter.

interface ExampleParam {
  name: string;
  born: number;
}
export async function example({ name, born }: ExampleParam): Promise<string> {
  return `Hello ${name}, you were born in ${born}.`;
}

How to customize your Workflow Type

Workflows have a Type, which is referred to as the Workflow name.

The following examples demonstrate how to set a custom name for your Workflow Type.

In TypeScript, the Workflow Type is the Workflow function name and there isn't a mechanism to customize the Workflow Type.

In the following example, the Workflow Type is the name of the function, helloWorld.

snippets/src/workflows.ts

export async function helloWorld(): Promise<string> {
  return '👋 Hello World!';
}

How to develop Workflow logic

Workflow logic is constrained by deterministic execution requirements. Therefore, each language is limited to the use of certain idiomatic techniques. However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code.

In the Temporal TypeScript SDK, Workflows run in a deterministic sandboxed environment. The code is bundled on Worker creation using Webpack, and can import any package as long as it does not reference Node.js or DOM APIs.

note

If you must use a library that references a Node.js or DOM API and you are certain that those APIs are not used at runtime, add that module to the ignoreModules list.

The Workflow sandbox can run only deterministic code, so side effects and access to external state must be done through Activities, because Activity outputs are recorded in the Event History and can be read deterministically by the Workflow.

This limitation also means that Workflow code cannot directly import the Activity Definition. Activity Types can be imported, so they can be invoked in a type-safe manner.

To make the Workflow runtime deterministic, functions like Math.random(), Date, and setTimeout() are replaced by deterministic versions.

FinalizationRegistry and WeakRef are removed because v8's garbage collector is not deterministic.

Expand to see the implications of the deterministic Date API

import { sleep } from '@temporalio/workflow';

export async function example(): Promise<void> {
  // this prints the *exact* same timestamp repeatedly
  for (let x = 0; x < 10; ++x) {
    console.log(Date.now());
  }

  // this prints timestamps increasing roughly 1s each iteration
  for (let x = 0; x < 10; ++x) {
    await sleep('1 second');
    console.log(Date.now());
  }
}

How to develop a basic Activity

One of the primary things that Workflows do is orchestrate the execution of Activities. An Activity is a normal function or method execution that's intended to execute a single, well-defined action (either short or long-running), such as querying a database, calling a third-party API, or transcoding a media file. An Activity can interact with the world outside the Temporal Platform or use a Temporal Client to interact with a Temporal Service. For the Workflow to be able to execute the Activity, we must define the Activity Definition.

  • Activities execute in the standard Node.js environment.
  • Activities cannot be in the same file as Workflows and must be separately registered.
  • Activities may be retried repeatedly, so you may need to use idempotency keys for critical side effects.

Activities are just functions. The following is an Activity that accepts a string parameter and returns a string.

snippets/src/activities.ts

export async function greet(name: string): Promise<string> {
  return `👋 Hello, ${name}!`;
}

How to develop Activity Parameters

There is no explicit limit to the total number of parameters that an Activity Definition may support. However, there is a limit to the total size of the data that ends up encoded into a gRPC message Payload.

A single argument is limited to a maximum size of 2 MB. And the total size of a gRPC message, which includes all the arguments, is limited to a maximum of 4 MB.

Also, keep in mind that all Payload data is recorded in the Workflow Execution Event History and large Event Histories can affect Worker performance. This is because the entire Event History could be transferred to a Worker Process with a Workflow Task.

Some SDKs require that you pass context objects, others do not. When it comes to your application data—that is, data that is serialized and encoded into a Payload—we recommend that you use a single object as an argument that wraps the application data passed to Activities. This is so that you can change what data is passed to the Activity without breaking a function or method signature.

This Activity takes a single name parameter of type string.

snippets/src/activities.ts

export async function greet(name: string): Promise<string> {
  return `👋 Hello, ${name}!`;
}
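To illustrate the single-object recommendation above, here is a sketch of the same Activity with a wrapped parameter (GreetInput is an illustrative name, not part of the SDK):

```typescript
// Illustrative single-object parameter: new fields can be added to GreetInput
// later without breaking the Activity's signature.
interface GreetInput {
  name: string;
}

async function greet(input: GreetInput): Promise<string> {
  return `👋 Hello, ${input.name}!`;
}
```

Callers then pass `greet({ name: 'Temporal' })`, and adding, say, a locale field later requires no signature change.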

How to define Activity return values

All data returned from an Activity must be serializable.

There is no explicit limit to the amount of data that can be returned by an Activity, but keep in mind that all return values are recorded in a Workflow Execution Event History.

In TypeScript, the return value is always a Promise.

In the following example, Promise<string> is the return value.

export async function greet(name: string): Promise<string> {
  return `👋 Hello, ${name}!`;
}

How to customize your Activity Type

Activities have a Type, which is referred to as the Activity name. The following examples demonstrate how to set a custom name for your Activity Type.

You can customize the name of the Activity when you register it with the Worker. In the following example, the Activity Name is activityFoo.

snippets/src/worker-activity-type-custom.ts

import { Worker } from '@temporalio/worker';
import { greet } from './activities';

async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    taskQueue: 'snippets',
    activities: {
      activityFoo: greet,
    },
  });

  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});

Important design patterns for Activities

The following are some important (and frequently requested) patterns for using our Activities APIs. These patterns address common needs and use cases.

Share dependencies in Activity functions (dependency injection)

Because Activities are "just functions," you can also create functions that create Activities. This is a helpful pattern for using closures to do the following:

  • Store expensive dependencies for sharing, such as database connections.
  • Inject secret keys (such as environment variables) from the Worker to the Activity.

activities-dependency-injection/src/activities.ts

export interface DB {
  get(key: string): Promise<string>;
}

export const createActivities = (db: DB) => ({
  async greet(msg: string): Promise<string> {
    const name = await db.get('name'); // simulate read from db
    return `${msg}: ${name}`;
  },
  async greet_es(mensaje: string): Promise<string> {
    const name = await db.get('name'); // simulate read from db
    return `${mensaje}: ${name}`;
  },
});

When you register these in the Worker, pass your shared dependencies accordingly:

import { Worker } from '@temporalio/worker';
import { createActivities } from './activities';

async function run() {
  // Mock DB connection initialization in Worker
  const db = {
    async get(_key: string) {
      return 'Temporal';
    },
  };

  const worker = await Worker.create({
    taskQueue: 'dependency-injection',
    workflowsPath: require.resolve('./workflows'),
    activities: createActivities(db),
  });

  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});

Because Activities are always referenced by name, inside the Workflow they can be proxied as normal, although the types need some adjustment:

activities-dependency-injection/src/workflows.ts

import { proxyActivities } from '@temporalio/workflow';
import type { createActivities } from './activities';

// Note usage of the ReturnType<> generic, since createActivities is a factory function
const { greet, greet_es } = proxyActivities<
  ReturnType<typeof createActivities>
>({
  startToCloseTimeout: '30 seconds',
});

Import multiple Activities simultaneously

You can proxy multiple Activities from the same proxyActivities call if you want them to share the same timeouts, retries, and options:

export async function Workflow(): Promise<void> {
  // destructuring multiple Activities that share the same options
  const { act1, act2, act3 } = proxyActivities<typeof activities>({
    startToCloseTimeout: '1 minute', // example activityOptions
  });
  await act1();
  await Promise.all([act2(), act3()]);
}

Dynamically reference Activities

Because Activities are referenced only by their string names, you can reference them dynamically if needed:

export async function DynamicWorkflow(activityName: string, ...args: unknown[]) {
  const acts = proxyActivities(/* activityOptions */);

  // these are equivalent
  await acts.activity1();
  await acts['activity1']();

  // dynamic reference to an Activity using activityName
  const result = await acts[activityName](...args);
  return result;
}

Type safety is still supported here, but we encourage you to validate and handle mismatches in Activity names. An invalid Activity name leads to a NotFoundError with a message that looks like this:

ApplicationFailure: Activity function actC is not registered on this Worker, available activities: ["actA", "actB"]

How to start an Activity Execution

Calls to spawn Activity Executions are written within a Workflow Definition. The call to spawn an Activity Execution generates the ScheduleActivityTask Command. This results in the set of three Activity Task related Events (ActivityTaskScheduled, ActivityTaskStarted, and one of the closing Events, such as ActivityTaskCompleted) in your Workflow Execution Event History.

A single instance of the Activities implementation is shared across multiple simultaneous Activity invocations. Activity implementation code should be idempotent.

The values passed to Activities through invocation parameters or returned through a result value are recorded in the Execution history. The entire Execution history is transferred from the Temporal service to Workflow Workers when a Workflow state needs to recover. A large Execution history can thus adversely impact the performance of your Workflow.

Therefore, be mindful of the amount of data you transfer through Activity invocation parameters or Return Values. Otherwise, no additional limitations exist on Activity implementations.

To spawn an Activity Execution, you must retrieve the Activity handle in your Workflow.

import { proxyActivities } from '@temporalio/workflow';
// Only import the activity types
import type * as activities from './activities';

const { greet } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

// A workflow that calls an activity
export async function example(name: string): Promise<string> {
  return await greet(name);
}

This imports only the Activity types and creates proxy functions through which the Workflow invokes each Activity.

How to set the required Activity Timeouts

Activity Execution semantics rely on several parameters. The only required value that needs to be set is either a Schedule-To-Close Timeout or a Start-To-Close Timeout. These values are set in the Activity Options.
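As a sketch, those options look like the following (the interface below is illustrative, not imported from the SDK; in the TypeScript SDK the real fields are part of the options object passed to proxyActivities):

```typescript
// Illustrative shape of the two required-timeout options; at least one of
// them must be set when configuring Activity Options.
interface ActivityTimeoutOptions {
  scheduleToCloseTimeout?: string; // overall limit from scheduling to completion, spanning retries
  startToCloseTimeout?: string; // limit for a single Activity attempt
}

// Valid: at least one of the two timeouts is set.
const options: ActivityTimeoutOptions = {
  startToCloseTimeout: '1 minute',
};
```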

How to get the results of an Activity Execution

The call to spawn an Activity Execution generates the ScheduleActivityTask Command and provides the Workflow with an Awaitable. Workflow Executions can either block progress until the result is available through the Awaitable or continue progressing, making use of the result when it becomes available.

Since Activities are referenced by their string name, you can reference them dynamically to get the result of an Activity Execution.

export async function DynamicWorkflow(activityName, ...args) {
  const acts = proxyActivities(/* activityOptions */);

  // these are equivalent
  await acts.activity1();
  await acts['activity1']();

  const result = await acts[activityName](...args);
  return result;
}

proxyActivities() returns an object whose methods invoke the corresponding Activities. acts[activityName]() references an Activity by its name and returns its result.

How to run Worker Processes

The Worker Process is where Workflow Functions and Activity Functions are executed.

  • Each Worker Entity in the Worker Process must register the exact Workflow Types and Activity Types it may execute.
  • Each Worker Entity must also associate itself with exactly one Task Queue.
  • Each Worker Entity polling the same Task Queue must be registered with the same Workflow Types and Activity Types.

A Worker Entity is the component within a Worker Process that listens to a specific Task Queue.

Although multiple Worker Entities can run in a single Worker Process, a Worker Process with a single Worker Entity may be perfectly sufficient. For more information, see the Worker tuning guide.

A Worker Entity contains a Workflow Worker and/or an Activity Worker, which makes progress on Workflow Executions and Activity Executions, respectively.

How to run a Worker on Docker in TypeScript

note

To improve Worker startup time, we recommend preparing Workflow bundles ahead of time. See our production sample for details.

Workers based on the TypeScript SDK can be deployed and run as Docker containers.

We recommend an LTS Node.js release such as 18 or 20. Both amd64 and arm64 architectures are supported. A glibc-based image is required; musl-based images are not supported (see below).

The easiest way to deploy a TypeScript SDK Worker on Docker is to start with the node:20-bullseye image. For example:

FROM node:20-bullseye

# For better cache utilization, copy package.json and lock file first and install the dependencies before copying the
# rest of the application and building.
COPY . /app
WORKDIR /app

# Alternatively, run npm ci, which installs only dependencies specified in the lock file and is generally faster.
RUN npm install --only=production \
&& npm run build

CMD ["npm", "start"]

For smaller images and/or more secure deployments, it is also possible to use -slim Docker image variants (like node:20-bullseye-slim) or distroless/nodejs Docker images (like gcr.io/distroless/nodejs20-debian11) with the following caveats.

Using node:slim images

node:slim images do not contain some of the common packages found in regular images. This results in significantly smaller images.

However, TypeScript SDK requires the presence of root TLS certificates (the ca-certificates package), which are not included in slim images. The ca-certificates package is required even when connecting to a local Temporal Server or when using a server connection config that doesn't explicitly use TLS.

For this reason, the ca-certificates package must be installed during the construction of the Docker image. For example:

FROM node:20-bullseye-slim

RUN apt-get update \
&& apt-get install -y ca-certificates \
&& rm -rf /var/lib/apt/lists/*

# ... same as with regular image

Failure to install this dependency results in a [TransportError: transport error] runtime error, because the certificates cannot be verified.

Using distroless/nodejs images

distroless/nodejs images include only the files that are strictly required to execute node. This results in even smaller images (approximately half the size of node:slim images). It also significantly reduces the surface of potential security issues that could be exploited by a hacker in the resulting Docker images.

It is generally possible and safe to execute TypeScript SDK Workers using distroless/nodejs images (unless your code itself requires dependencies that are not included in distroless/nodejs).

However, some tools required for the build process (notably the npm command) are not included in the distroless/nodejs image. This might result in various error messages during the Docker build.

The recommended solution is to use a multi-step Dockerfile. For example:

# -- BUILD STEP --

FROM node:20-bullseye AS builder

COPY . /app
WORKDIR /app

RUN npm install --only=production \
&& npm run build

# -- RESULTING IMAGE --

FROM gcr.io/distroless/nodejs20-debian11

COPY --from=builder /app /app
WORKDIR /app

CMD ["node", "build/worker.js"]

Properly configure Node.js memory in Docker

By default, node configures its maximum old-gen memory to 25% of the physical memory of the machine on which it is executing, with a maximum of 4 GB. This is likely inappropriate when running Node.js in a Docker environment and can result in either underusage of available memory (node only uses a fraction of the memory allocated to the container) or overusage (node tries to use more memory than what is allocated to the container, which will eventually lead to the process being killed by the operating system).

Therefore, we recommend that you always explicitly set the --max-old-space-size node argument to approximately 80% of the maximum size (in megabytes) that you want to allocate to the node process. You might need some experimentation and adjustment to find the most appropriate value based on your specific application.

In practice, it is generally easier to provide this argument through the NODE_OPTIONS environment variable.
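For example, if a container is limited to 2 GB of memory, 80% is roughly 1600 MB (an illustrative value; tune it for your application):

```shell
# Give node roughly 80% of a 2 GB container memory limit for its old-gen heap.
export NODE_OPTIONS="--max-old-space-size=1600"
```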

Do not use Alpine

Alpine replaces glibc with musl, which is incompatible with the Rust core of the TypeScript SDK. If you receive errors like the following, it's probably because you are using Alpine.

Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /opt/app/node_modules/@temporalio/core-bridge/index.node)

Or like this:

Error: Error relocating /opt/app/node_modules/@temporalio/core-bridge/index.node: __register_atfork: symbol not found

How to run a Temporal Cloud Worker

To run a Worker that uses Temporal Cloud, you need to provide additional connection and client options that include the following:

  • An address that includes your Cloud Namespace Name and a port number: <Namespace>.<ID>.tmprl.cloud:<port>.
  • mTLS CA certificate.
  • mTLS private key.

For more information about managing and generating client certificates for Temporal Cloud, see How to manage certificates in Temporal Cloud.

For more information about configuring TLS to secure inter- and intra-network communication for a Temporal Service, see Temporal Customization Samples.
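A minimal sketch of those Worker connection options follows (the address and certificate contents are placeholders; in the TypeScript SDK, an object of this shape is passed to NativeConnection.connect from @temporalio/worker, and the resulting connection to Worker.create):

```typescript
// Placeholder values only; substitute your Cloud Namespace address and the
// contents of your mTLS certificate and private key files.
const cloudWorkerConnectionOptions = {
  address: 'your-namespace.a1b2c.tmprl.cloud:7233', // <Namespace>.<ID>.tmprl.cloud:<port>
  tls: {
    clientCertPair: {
      crt: Buffer.from('-----BEGIN CERTIFICATE-----\n...'), // mTLS CA certificate
      key: Buffer.from('-----BEGIN PRIVATE KEY-----\n...'), // mTLS private key
    },
  },
};
```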

How to register types

All Workers listening to the same Task Queue name must be registered to handle the exact same Workflow Types and Activity Types.

If a Worker polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

In development, use workflowsPath:

snippets/src/worker.ts

import { Worker } from '@temporalio/worker';
import * as activities from './activities';

async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    taskQueue: 'snippets',
    activities,
  });

  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});

In this snippet, the Worker bundles the Workflow code at runtime.

In production, you can improve your Worker's startup time by bundling in advance: as part of your production build, call bundleWorkflowCode:

production/src/scripts/build-workflow-bundle.ts

import { bundleWorkflowCode } from '@temporalio/worker';
import { writeFile } from 'fs/promises';
import path from 'path';

async function bundle() {
  const { code } = await bundleWorkflowCode({
    workflowsPath: require.resolve('../workflows'),
  });
  const codePath = path.join(__dirname, '../../workflow-bundle.js');

  await writeFile(codePath, code);
  console.log(`Bundle written to ${codePath}`);
}

bundle().catch((err) => {
  console.error(err);
  process.exit(1);
});

Then the bundle can be passed to the Worker:

production/src/worker.ts

const workflowOption = () =>
  process.env.NODE_ENV === 'production'
    ? {
        workflowBundle: {
          codePath: require.resolve('../workflow-bundle.js'),
        },
      }
    : { workflowsPath: require.resolve('./workflows') };

async function run() {
  const worker = await Worker.create({
    ...workflowOption(),
    activities,
    taskQueue: 'production-sample',
  });

  await worker.run();
}

How to shut down a Worker and track its state

Workers shut down if they receive any of the Signals enumerated in shutdownSignals: 'SIGINT', 'SIGTERM', 'SIGQUIT', and 'SIGUSR2'.

In development, we shut down Workers with Ctrl+C (SIGINT) or nodemon (SIGUSR2). In production, you usually want to give Workers time to finish any in-progress Activities by setting shutdownGraceTime.

As soon as a Worker receives a shutdown Signal or request, the Worker stops polling for new Tasks and allows in-flight Tasks to complete until shutdownGraceTime is reached. Any Activities that are still running at that time will stop running and will be rescheduled by Temporal Server when an Activity timeout occurs.

If you must guarantee that the Worker eventually shuts down, you can set shutdownForceTime.

You might want to programmatically shut down Workers (with Worker.shutdown()) in integration tests or when automating a fleet of Workers.
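As a sketch, the shutdown-related options discussed above look like this (the option names follow the TypeScript SDK's WorkerOptions; the values are illustrative and would be passed to Worker.create):

```typescript
// Illustrative shutdown configuration; these fields are part of the
// WorkerOptions object passed to Worker.create() in the TypeScript SDK.
const shutdownOptions = {
  shutdownSignals: ['SIGINT', 'SIGTERM', 'SIGQUIT', 'SIGUSR2'],
  shutdownGraceTime: '30 seconds', // in-flight Tasks may finish within this window
  shutdownForceTime: '1 minute', // guarantees the Worker process eventually exits
};
```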

Worker states

At any time, you can Query Worker state with Worker.getState(). A Worker is always in one of seven states:

  • INITIALIZED: The initial state of the Worker after calling Worker.create() and successfully connecting to the server.
  • RUNNING: Worker.run() was called and the Worker is polling Task Queues.
  • FAILED: The Worker encountered an unrecoverable error; Worker.run() should reject with the error.
  • The last four states are related to the Worker shutdown process:
    • STOPPING: The Worker received a shutdown Signal or Worker.shutdown() was called. The Worker will forcefully shut down after shutdownGraceTime expires.
    • DRAINING: All Workflow Tasks have been drained; waiting for Activities and cached Workflows eviction.
    • DRAINED: All Activities and Workflows have completed; ready to shut down.
    • STOPPED: Shutdown complete; worker.run() resolves.

If you need more visibility into internal Worker state, see the Worker class in the API reference.

How to start a Workflow Execution

Workflow Execution semantics rely on several parameters—that is, to start a Workflow Execution you must supply a Task Queue that will be used for the Tasks (one that a Worker is polling), the Workflow Type, language-specific contextual data, and Workflow Function parameters.

In the examples below, all Workflow Executions are started using a Temporal Client. To spawn Workflow Executions from within another Workflow Execution, use either the Child Workflow or External Workflow APIs.

See the Customize Workflow Type section to see how to customize the name of the Workflow Type.

A request to spawn a Workflow Execution causes the Temporal Service to create the first Event (WorkflowExecutionStarted) in the Workflow Execution Event History. The Temporal Service then creates the first Workflow Task, resulting in the first WorkflowTaskScheduled Event.

When you have a Client, you can schedule the start of a Workflow with client.workflow.start(), specifying workflowId, taskQueue, and args. It returns a Workflow handle immediately after the Server acknowledges the request.

const handle = await client.workflow.start(example, {
  workflowId: 'your-workflow-id',
  taskQueue: 'your-task-queue',
  args: ['argument01', 'argument02', 'argument03'], // this is typechecked against workflowFn's args
});
const result = await handle.result();

Calling client.workflow.start() or client.workflow.execute() sends a command to Temporal Server to schedule a new Workflow Execution on the specified Task Queue. The Workflow Execution does not actually start until a Worker that has a matching Workflow Type, polling that Task Queue, picks it up.

You can test this by executing a Client command without a matching Worker. Temporal Server records the command in Event History, but does not make progress with the Workflow Execution until a Worker starts polling with a matching Task Queue and Workflow Definition.

Workflow Executions run in a separate V8 isolate context in order to provide a deterministic runtime.

How to set a Workflow's Task Queue

In most SDKs, the only Workflow Option that must be set is the name of the Task Queue.

For any code to execute, a Worker Process must be running that contains a Worker Entity that is polling the same Task Queue name.

A Task Queue is a dynamic queue in Temporal polled by one or more Workers.

Workers bundle Workflow code and node modules using Webpack v5 and execute them inside V8 isolates. Activities are directly required and run by Workers in the Node.js environment.

Workers are flexible. You can host any or all of your Workflows and Activities on a Worker, and you can host multiple Workers on a single machine.

The Worker needs three main things:

  • taskQueue: The Task Queue to poll. This is the only required argument.
  • activities: Optional. Imported and supplied directly to the Worker.
  • Workflow bundle. Choose one of the following options:
    • Specify workflowsPath pointing to your workflows.ts file to pass to Webpack; for example, require.resolve('./workflows'). Workflows are bundled with their dependencies.
    • If you prefer to handle the bundling yourself, pass a prebuilt bundle to workflowBundle.
import { Worker } from '@temporalio/worker';
import * as activities from './activities';

async function run() {
  // Step 1: Register Workflows and Activities with the Worker and connect to
  // the Temporal server.
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    activities,
    taskQueue: 'hello-world',
  });
  // Worker connects to localhost by default and uses console.error for logging.
  // Customize the Worker by passing more options to create():
  // https://typescript.temporal.io/api/classes/worker.Worker
  // If you need to configure server connection parameters, see docs:
  // /typescript/security#encryption-in-transit-with-mtls

  // Step 2: Start accepting tasks on the `hello-world` queue
  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});

taskQueue is the only required option; however, use workflowsPath and activities to register Workflows and Activities with the Worker.

When scheduling a Workflow, you must specify taskQueue.

import { Client, Connection } from '@temporalio/client';
// This is the code that is used to start a Workflow.
const connection = await Connection.create();
const client = new Client({ connection });
const result = await client.workflow.execute(yourWorkflow, {
  // required
  taskQueue: 'your-task-queue',
  // required
  workflowId: 'your-workflow-id',
});

When creating a Worker, you must pass the taskQueue option to the Worker.create() function.

const worker = await Worker.create({
  // imported elsewhere
  activities,
  taskQueue: 'your-task-queue',
});

Optionally, in Workflow code, when calling an Activity, you can specify the Task Queue by passing the taskQueue option to proxyActivities(), startChild(), or executeChild(). If you do not specify taskQueue, the TypeScript SDK places Activity and Child Workflow Tasks in the same Task Queue as the Workflow Task Queue.
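For example, to route Activity Tasks to a dedicated queue, you would include taskQueue alongside the timeout in the options passed to proxyActivities(). The options object below is a sketch; 'cpu-intensive' is an illustrative queue name that a Worker would have to poll for these Activity Tasks to make progress:

```typescript
// Sketch of options as they would be passed to proxyActivities() in Workflow
// code. The queue name is illustrative.
const activityOptions = {
  startToCloseTimeout: '10m',
  // Overrides the default behavior of reusing the Workflow's own Task Queue.
  taskQueue: 'cpu-intensive',
};

console.log(activityOptions.taskQueue);
```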

How to set a Workflow Id

Although it is not required, we recommend providing your own Workflow Id that maps to a business process or business entity identifier, such as an order identifier or customer identifier.

Call client.workflow.start() on a Client, passing your Workflow and any arguments. Specify your taskQueue, and set your workflowId to a meaningful business identifier.

const handle = await client.workflow.start(example, {
  workflowId: 'yourWorkflowId',
  taskQueue: 'yourTaskQueue',
  args: ['your', 'arg', 'uments'],
});

This starts a new Workflow Execution with the given Workflow Id, Task Queue name, and arguments.

How to get the results of a Workflow Execution

If the call to start a Workflow Execution is successful, you will gain access to the Workflow Execution's Run Id.

The Workflow Id, Run Id, and Namespace may be used to uniquely identify a Workflow Execution in the system and get its result.

You can either block until the result is available (synchronous execution) or retrieve the result at some later point in time (asynchronous execution).

In the Temporal Platform, you can also use Queries, which are often the preferred method for accessing the state and results of Workflow Executions.

To return the results of a Workflow Execution:

return (
  'Completed '
  + wf.workflowInfo().workflowId
  + ', Total Charged: '
  + totalCharged
);

totalCharged is just a variable declared in your Workflow code. For a full example, see subscription-workflow-project-template-typescript/src/workflows.ts.

A Workflow function may return a result. If it doesn't (in which case the return type is Promise<void>), the result will be undefined.

If you started a Workflow with client.workflow.start(), you can choose to wait for the result anytime with handle.result().

const handle = client.getHandle(workflowId);
const result = await handle.result();

Using a Workflow Handle isn't necessary with client.workflow.execute().

If you call result() on a Workflow that prematurely ended for some reason, it throws a WorkflowFailedError that reflects that reason. It is therefore recommended to catch that error.

import { WorkflowFailedError } from '@temporalio/client';

const handle = client.getHandle(workflowId);
try {
  const result = await handle.result();
} catch (err) {
  if (err instanceof WorkflowFailedError) {
    throw new Error('Temporal workflow failed: ' + workflowId, {
      cause: err,
    });
  } else {
    throw new Error('error from Temporal workflow ' + workflowId, {
      cause: err,
    });
  }
}

Cancellation scopes in TypeScript

In the TypeScript SDK, Workflows are represented internally by a tree of cancellation scopes, each with cancellation behaviors you can specify. By default, everything runs in the "root" scope.

Scopes are created using the CancellationScope constructor or one of three static helpers:

  • cancellable(fn): Children are automatically cancelled when their containing scope is cancelled.
    • Equivalent to new CancellationScope().run(fn).
  • nonCancellable(fn): Cancellation does not propagate to children.
    • Equivalent to new CancellationScope({ cancellable: false }).run(fn).
  • withTimeout(timeoutMs, fn): If a timeout triggers before fn resolves, the scope is cancelled, triggering cancellation of any enclosed operations, such as Activities and Timers.
    • Equivalent to new CancellationScope({ cancellable: true, timeout: timeoutMs }).run(fn).

Cancellations are applied to cancellation scopes, which can encompass an entire Workflow or just part of one. Scopes can be nested, and cancellation propagates from outer scopes to inner ones. A Workflow's main function runs in the outermost scope. Cancellations are handled by catching CancelledFailures thrown by cancelable operations.

CancellationScope.run() and the static helpers mentioned earlier return native JavaScript promises, so you can use the familiar Promise APIs like Promise.all and Promise.race to model your asynchronous logic. You can also use the following APIs:

  • CancellationScope.current(): Get the current scope.
  • scope.cancel(): Cancel all operations inside a scope.
  • scope.run(fn): Run an async function within a scope and return the result of fn.
  • scope.cancelRequested: A promise that resolves when a scope cancellation is requested, such as when Workflow code calls cancel() or the entire Workflow is cancelled by an external client.

When a CancellationScope is cancelled, it propagates cancellation to any child scopes and to any cancellable operations created within it, such as Activities, Timers, Triggers, and Child Workflows.

CancelledFailure

Timers and triggers throw CancelledFailure when cancelled; Activities and Child Workflows throw ActivityFailure and ChildWorkflowFailure with cause set to CancelledFailure. One exception is when an Activity or Child Workflow is scheduled in an already cancelled scope (or Workflow). In this case, they propagate the CancelledFailure that was thrown to cancel the scope.

To simplify checking for cancellation, use the isCancellation(err) function.

Internal cancellation example

packages/test/src/workflows/cancel-timer-immediately.ts

import {
  CancellationScope,
  CancelledFailure,
  sleep,
} from '@temporalio/workflow';

export async function cancelTimer(): Promise<void> {
  // Timers and Activities are automatically cancelled when their containing scope is cancelled.
  try {
    await CancellationScope.cancellable(async () => {
      const promise = sleep(1); // <-- Will be cancelled because it is attached to this closure's scope
      CancellationScope.current().cancel();
      await promise; // <-- Promise must be awaited in order for `cancellable` to throw
    });
  } catch (e) {
    if (e instanceof CancelledFailure) {
      console.log('Timer cancelled 👍');
    } else {
      throw e; // <-- Fail the workflow
    }
  }
}

Alternatively, the preceding can be written as the following.

packages/test/src/workflows/cancel-timer-immediately-alternative-impl.ts

import {
  CancellationScope,
  CancelledFailure,
  sleep,
} from '@temporalio/workflow';

export async function cancelTimerAltImpl(): Promise<void> {
  try {
    const scope = new CancellationScope();
    const promise = scope.run(() => sleep(1));
    scope.cancel(); // <-- Cancel the timer created in scope
    await promise; // <-- Throws CancelledFailure
  } catch (e) {
    if (e instanceof CancelledFailure) {
      console.log('Timer cancelled 👍');
    } else {
      throw e; // <-- Fail the workflow
    }
  }
}

External cancellation example

The following code shows how to handle Workflow cancellation by an external client while an Activity is running.

packages/test/src/workflows/handle-external-workflow-cancellation-while-activity-running.ts

import {
  CancellationScope,
  isCancellation,
  proxyActivities,
} from '@temporalio/workflow';
import type * as activities from '../activities';

const { httpPostJSON, cleanup } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10m',
});

export async function handleExternalWorkflowCancellationWhileActivityRunning(
  url: string,
  data: any,
): Promise<void> {
  try {
    await httpPostJSON(url, data);
  } catch (err) {
    if (isCancellation(err)) {
      console.log('Workflow cancelled');
      // Cleanup logic must be in a nonCancellable scope.
      // If we ran cleanup outside of a nonCancellable scope, it would be cancelled
      // before being started because the Workflow's root scope is cancelled.
      await CancellationScope.nonCancellable(() => cleanup(url));
    }
    throw err; // <-- Fail the Workflow
  }
}

nonCancellable example

CancellationScope.nonCancellable prevents cancellation from propagating to children.

packages/test/src/workflows/non-cancellable-shields-children.ts

import { CancellationScope, proxyActivities } from '@temporalio/workflow';
import type * as activities from '../activities';

const { httpGetJSON } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10m',
});

export async function nonCancellable(url: string): Promise<any> {
  // Prevent Activity from being cancelled and await completion.
  // Note that the Workflow is completely oblivious and impervious to cancellation in this example.
  return CancellationScope.nonCancellable(() => httpGetJSON(url));
}

withTimeout example

A common operation is to cancel one or more Activities if a deadline elapses. withTimeout creates a CancellationScope that is automatically cancelled after a timeout.

packages/test/src/workflows/multiple-activities-single-timeout.ts

import { CancellationScope, proxyActivities } from '@temporalio/workflow';
import type * as activities from '../activities';

export function multipleActivitiesSingleTimeout(
  urls: string[],
  timeoutMs: number,
): Promise<any> {
  const { httpGetJSON } = proxyActivities<typeof activities>({
    startToCloseTimeout: timeoutMs,
  });

  // If the timeout triggers before all Activities complete,
  // the Workflow will fail with a CancelledFailure.
  return CancellationScope.withTimeout(
    timeoutMs,
    () => Promise.all(urls.map((url) => httpGetJSON(url))),
  );
}

scope.cancelRequested

You can await cancelRequested to make a Workflow aware of cancellation while waiting on nonCancellable scopes.

packages/test/src/workflows/cancel-requested-with-non-cancellable.ts

import {
  CancellationScope,
  CancelledFailure,
  proxyActivities,
} from '@temporalio/workflow';
import type * as activities from '../activities';

const { httpGetJSON } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10m',
});

export async function resumeAfterCancellation(url: string): Promise<any> {
  let result: any = undefined;
  const scope = new CancellationScope({ cancellable: false });
  const promise = scope.run(() => httpGetJSON(url));
  try {
    result = await Promise.race([scope.cancelRequested, promise]);
  } catch (err) {
    if (!(err instanceof CancelledFailure)) {
      throw err;
    }
    // Prevent Workflow from completing so Activity can complete
    result = await promise;
  }
  return result;
}

Cancellation scopes and callbacks

Callbacks are not particularly useful in Workflows because all meaningful asynchronous operations return promises. In the rare case that code uses callbacks and needs to handle cancellation, a callback can consume the CancellationScope.cancelRequested promise.

packages/test/src/workflows/cancellation-scopes-with-callbacks.ts

import { CancellationScope } from '@temporalio/workflow';

function doSomething(callback: () => any) {
  setTimeout(callback, 10);
}

export async function cancellationScopesWithCallbacks(): Promise<void> {
  await new Promise<void>((resolve, reject) => {
    doSomething(resolve);
    CancellationScope.current().cancelRequested.catch(reject);
  });
}

Nesting cancellation scopes

You can achieve complex flows by nesting cancellation scopes.

packages/test/src/workflows/nested-cancellation.ts

import {
  CancellationScope,
  isCancellation,
  proxyActivities,
} from '@temporalio/workflow';

import type * as activities from '../activities';

const { setup, httpPostJSON, cleanup } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10m',
});

export async function nestedCancellation(url: string): Promise<void> {
  await CancellationScope.cancellable(async () => {
    await CancellationScope.nonCancellable(() => setup());
    try {
      await CancellationScope.withTimeout(
        1000,
        () => httpPostJSON(url, { some: 'data' }),
      );
    } catch (err) {
      if (isCancellation(err)) {
        await CancellationScope.nonCancellable(() => cleanup(url));
      }
      throw err;
    }
  });
}

Sharing promises between scopes

Operations like Timers and Activities are cancelled by the cancellation scope they were created in. Promises returned by these operations can be awaited in different scopes.

packages/test/src/workflows/shared-promise-scopes.ts

import { CancellationScope, proxyActivities } from '@temporalio/workflow';
import type * as activities from '../activities';

const { httpGetJSON } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10m',
});

export async function sharedScopes(): Promise<any> {
  // Start activities in the root scope
  const p1 = httpGetJSON('http://url1.ninja');
  const p2 = httpGetJSON('http://url2.ninja');

  const scopePromise = CancellationScope.cancellable(async () => {
    const first = await Promise.race([p1, p2]);
    // Does not cancel activity1 or activity2 as they're linked to the root scope
    CancellationScope.current().cancel();
    return first;
  });
  return await scopePromise;
  // The Activity that did not complete will effectively be cancelled when
  // Workflow completes unless the Activity is awaited:
  // await Promise.all([p1, p2]);
}

packages/test/src/workflows/shield-awaited-in-root-scope.ts

import { CancellationScope, proxyActivities } from '@temporalio/workflow';
import type * as activities from '../activities';

const { httpGetJSON } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10m',
});

export async function shieldAwaitedInRootScope(): Promise<any> {
  let p: Promise<any> | undefined = undefined;

  await CancellationScope.nonCancellable(async () => {
    p = httpGetJSON('http://example.com'); // <-- Start activity in nonCancellable scope without awaiting completion
  });
  // Activity is shielded from cancellation even though it is awaited in the cancellable root scope
  return p;
}