Chaining Jobs With Channels (Advanced)
This end-to-end tutorial builds a four-job pipeline that fans events out to two downstream jobs and then fans them back in on a shared channel. It highlights how to design channel names, configure channel drivers, deploy jobs in the correct order, and verify flow end to end.
Scenario overview
We implement the following flow:
- Job A ingests events from an HTTP input and publishes each record to channel `alpha`.
- Channel `alpha` is configured with the `clone` driver so multiple downstream jobs receive a copy.
- Job B (enrichment) listens on channel `alpha` and writes enriched events to channel `beta`.
- Job C (alerting) also listens on channel `alpha` and writes alerts to channel `beta`.
- Job D consumes `beta` and emits final records to an output (for example Elasticsearch).
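The topology at a glance:

```text
HTTP ─> Job A ─> alpha (clone) ─┬─> Job B (enrich) ─┐
                                │                   ├─> beta (standard) ─> Job D ─> output
                                └─> Job C (alert) ──┘
```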
Channels let you decouple workloads while keeping jobs simple—each job still has a single input and output, but you can compose multi-stage pipelines without external scripts.
Prerequisites
- Server and at least one worker online (the built-in worker is sufficient).
- Familiarity with the visual editor and staging/deployment workflow (see the build overview).
Step 1 – design the channel contract
Before building jobs, document the schema each channel carries. This avoids downstream validation errors.
| Channel | Producer | Consumers | Fields |
|---|---|---|---|
| alpha | Job A | Jobs B, C | event_id, ts, raw_payload |
| beta | Jobs B, C | Job D | event_id, ts, stage, payload |
Store this table in your runbook or a shared document so future contributors know which fields are available.
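To make the contract concrete, here is what a record on each channel might look like; the field values below are invented for illustration:

```yaml
# Illustrative record on channel alpha (values invented for this example)
event_id: "evt-1024"
ts: "2024-05-01T12:00:00Z"
raw_payload: '{"source":"sensor-7","severity":3}'
---
# Illustrative record on channel beta after Job B's enrichment
event_id: "evt-1024"
ts: "2024-05-01T12:00:00Z"
stage: "enrich"
payload: '{"source":"sensor-7","severity":3,"geo":"US"}'
```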
Step 2 – configure the channels on the worker
- In the UI, open Workers and select the worker you will deploy these jobs to.
- Edit worker settings and add two channels:
  - `alpha` with driver `clone` (broadcast fan-out).
  - `beta` with driver `standard` (single shared channel for fan-in).
If you skip this step, jobs that use `worker-channel` will fail with a “worker channel not found” issue.
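Channels are configured through the UI, but as a mental model the result is equivalent to a worker-settings entry along these lines (the schema below is a hypothetical sketch, not the product's actual file format):

```yaml
# Hypothetical worker-settings sketch; configure the real values in the UI
channels:
  - name: alpha
    driver: clone     # broadcast: Jobs B and C each receive a copy of every event
  - name: beta
    driver: standard  # shared: single fan-in point consumed by Job D
```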
Step 3 – build Job A (producer)
- Create a new job named channel-producer.
- Choose an HTTP input (or another source) and configure authentication.
- Add any necessary actions (for example, parse JSON).
- Set the output to Worker Channel with channel ID `alpha`.
- Save the job, close the editor, and stage it.
You can verify the payload shape by using the Run Output tab—confirm the fields match the alpha contract.
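If your deployment supports YAML job definitions, the producer's output stanza would plausibly mirror the worker-channel input syntax shown in Step 5; treat the shape below as an assumption rather than confirmed syntax:

```yaml
# Assumed output stanza for channel-producer, mirroring the input syntax in Step 5
output:
  worker-channel:
    worker-channel-name: alpha
```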
Step 4 – build Jobs B and C (parallel consumers)
Create two jobs based on the channel contract:
- `channel-enrich`
  - Input: Worker Channel `alpha`.
  - Actions: add geographic enrichment, compute scores, or call external APIs.
  - Add a field such as `stage: "enrich"` so downstream jobs can distinguish records.
  - Output: Worker Channel `beta`.
- `channel-alert`
  - Input: Worker Channel `alpha`.
  - Actions: filter on severity, map to alert levels.
  - Add a field such as `stage: "alert"` so downstream jobs can distinguish records.
  - Output: Worker Channel `beta`.
Use the Preview tab for each action to ensure Job B and Job C emit the expected fields. When both jobs stage successfully, deploy them to workers that have capacity for the new workload.
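As a YAML sketch, channel-enrich might look like the following; the worker-channel stanzas mirror the syntax in Step 5, while `set-field` is a placeholder for whichever action your editor provides to add the `stage` field:

```yaml
# Sketch of channel-enrich; `set-field` is a placeholder action name
input:
  worker-channel:
    worker-channel-name: alpha
actions:
  - set-field:
      field: stage
      value: enrich
output:
  worker-channel:
    worker-channel-name: beta
```

channel-alert follows the same shape, with a severity filter in its actions and `stage: "alert"` instead.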
Step 5 – build Job D (fan-in)
Job D combines the outputs from Jobs B and C by consuming the shared beta channel.
Create a job named channel-aggregate with its input set to Worker Channel `beta`.
Finish configuration:
- Use `filter` or conditional logic based on `stage` to handle enriched vs. alert records differently (if needed).
- Add business logic or scoring based on enriched and alert data.
- Set the output to your destination—e.g., Elasticsearch, Splunk, or S3.
Sample YAML for the worker-channel input:
```yaml
input:
  worker-channel:
    worker-channel-name: beta
```

Stage Job D but wait to deploy until Jobs A–C are running to avoid empty channel warnings.
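Extending that input stanza into a complete job, a sketch of channel-aggregate could look like the following; the `filter` branching and the Elasticsearch output stanza are illustrative placeholders, not confirmed syntax:

```yaml
# Sketch of channel-aggregate; only the worker-channel input stanza is documented above
input:
  worker-channel:
    worker-channel-name: beta
actions:
  - filter:                       # placeholder: branch on the stage field
      condition: stage == "alert"
      then:
        - set-field:
            field: priority
            value: high
output:
  elasticsearch:                  # placeholder destination stanza
    index: pipeline-final
```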
Step 6 – deploy in order
Deploy all four jobs to the same worker:
- Deploy channel-producer and confirm the worker shows the job as running.
- Deploy channel-enrich and channel-alert; watch worker logs for channel subscription messages.
- Deploy channel-aggregate once `beta` shows new events.
- Trigger sample traffic (curl, replay file, etc.) and verify the final output destination receives the expected records.
Monitor the Issues panel and worker logs throughout deployment. Channel mismatch errors typically indicate schema drift between the jobs.
Troubleshooting
- Job D never receives events: confirm Jobs B and C publish to the exact channel IDs listed in the contract. Channel IDs are case sensitive.
- Jobs B/C split events instead of both receiving them: ensure channel `alpha` is configured with driver `clone` (broadcast). With `standard` or `round-robin`, events are distributed across subscribers.
- Validation errors in Jobs B/C: use Preview for the last action in Job A to confirm the payload contains the fields referenced downstream.
- No events flow between workers: worker channels are local to a worker; deploy the connected jobs together.
- Lost events after restart: channels are in-memory; for guaranteed delivery, persist to a queue or storage service before fan-out.
Validate and monitor
- Use Run & Trace on channel-producer to capture a live payload and confirm the channel contract before deployment.
- Stage and deploy Jobs A–D in order, then confirm they remain Running with expected event throughput under Operate > Job status.
- Generate synthetic traffic (curl, replay, or QA fixtures) and monitor worker logs for channel subscription updates and aggregate outputs.
After validation, you can:
- Add automated tests or synthetic traffic to continuously verify the multi-job pipeline.
- For cross-worker isolation, switch to a durable transport between stages (for example Kafka or object storage); worker channels remain single-worker only.
- Combine these checks with reference/troubleshooting when new failure modes appear.
Fold the channel topology into your daily operations under Operate, and configure health alerts using Operate monitoring.