Job Actions
Jobs transform events after the input stage parses them and before the output stage delivers them. Use this page to choose actions by outcome, then jump to the DSL reference for exact fields.
Choose by outcome
| Outcome | Typical sequence | Edition |
|---|---|---|
| Normalize payloads | json -> convert -> rename -> remove | Both |
| Parse unstructured logs | extract -> key-value -> assert | Both |
| Enrich with lookup data | enrich -> time | Both |
| Build LLM features from text | pdf-text or docx-to-text -> chunk -> tokenize | Both |
| Generate model output in-pipeline | chunk -> infer -> assert | infer is Enterprise |
| Score or detect anomalies | infer (anomaly-detect) -> scoring -> filter | infer is Enterprise |
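As an illustration, the "Normalize payloads" sequence from the table might be wired together like this. The field names (`raw`, `amount`, `old_name`) and the per-action parameter shapes are hypothetical placeholders, not part of the DSL reference; consult the linked action pages for the exact syntax.

```yaml
actions:
  - json:
      input-field: raw           # hypothetical: parse a JSON string field
  - convert:
      field: amount              # hypothetical parameter shape
      to: number
  - rename:
      from: old_name             # hypothetical
      to: new_name
  - remove:
      fields: [raw]              # hypothetical
```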
AI pipeline patterns
Structured LLM extraction
```yaml
actions:
  - chunk:
      input-field: body
      output-field: chunks
  - infer:
      workload:
        llm-completion:
          llm:
            provider: openai-compat
            model: your-model
      input-field: chunks
      response-field: ai_result
      response-format: json
      prompt:
        system: Extract only the requested fields.
        schema: '{"type":"object"}'
      timeout-ms: 15000
      on-error: dlq:ai_failures
  - assert:
      behaviour: drop-on-failure
      schema:
        schema-string: '{"type":"object"}'
```

Embeddings and clustering
```yaml
actions:
  - infer:
      workload:
        embedding:
          embedding:
            provider: openai-compat
            model: your-embedding-model
      input-field: text
      response-field: vector
  - cluster:
      input-field: vector
      output-field: cluster_id
```

infer guardrails (recommended defaults)
- Use `response-format: json` plus `prompt.schema` when downstream systems expect structured output.
- Set `timeout-ms`, `rate-limit`, and `concurrency` before production rollout.
- Configure `cache` (`namespace`, `ttl`, `max-entries`) for repeated prompts.
- Set `on-error` explicitly (`fail`, `skip`, or `dlq:name`) instead of relying on implicit behavior.
- Store provider credentials in variables (for example `${dyn|OPENAI_API_KEY}`), not inline literals.
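Taken together, the guardrails above might look like the following in an infer action. The parameter nesting is a sketch based on the fields named on this page, and the `api-key` field is an assumed placement for the credential; check the DSL reference for exact structure.

```yaml
- infer:
    workload:
      llm-completion:
        llm:
          provider: openai-compat
          model: your-model
          api-key: ${dyn|OPENAI_API_KEY}  # assumed field: credential from a variable, not an inline literal
    input-field: chunks
    response-field: ai_result
    response-format: json                 # pair with prompt.schema for structured output
    prompt:
      system: Extract only the requested fields.
      schema: '{"type":"object"}'
    timeout-ms: 15000                     # bound latency before production rollout
    rate-limit: 10                        # illustrative values; tune against provider limits
    concurrency: 4
    cache:
      namespace: extraction-prompts
      ttl: 3600
      max-entries: 10000
    on-error: dlq:ai_failures             # explicit failure routing, not implicit behavior
```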
Common actions
add
add creates or overwrites fields from literals, template placeholders ({{ }}), and runtime expansions (${ }).
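A minimal sketch of the three value kinds, assuming add takes a flat map of target fields (the parameter shape and field names are hypothetical; see the add action page for the exact syntax):

```yaml
- add:
    env: production                      # literal value
    summary: "{{ user }} from {{ ip }}"  # template placeholders resolved from event fields
    api_host: ${dyn|API_HOST}            # runtime expansion from a variable
```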
convert
convert normalizes data types (string, number, datetime, boolean) and lets you define failure behavior per conversion.
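A sketch of per-conversion failure behavior, assuming convert accepts a list of conversions (the `conversions` and `on-failure` names are hypothetical; see the convert action page for the exact fields):

```yaml
- convert:
    conversions:
      - field: status_code
        to: number
        on-failure: skip   # hypothetical: leave the field unchanged on failure
      - field: timestamp
        to: datetime
        on-failure: fail   # hypothetical: fail the event on failure
```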
filter
filter gates events using schema rules, pattern matches, or expressions so invalid data does not reach outputs.
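As an illustrative sketch, an expression-based gate might look like this (the `expression` parameter and its syntax are assumptions; see the filter action page for the supported rule types):

```yaml
- filter:
    expression: status_code >= 200 && status_code < 400  # hypothetical expression syntax
```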
enrich
enrich joins event fields against CSV or SQLite lookup assets and maps matched values back into the event.
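A sketch of a CSV lookup join, with hypothetical parameter names (`lookup`, `match-field`, `map`); the enrich action page documents the real fields:

```yaml
- enrich:
    lookup: geo_lookup.csv    # CSV or SQLite lookup asset
    match-field: client_ip    # event field joined against the lookup key
    map:
      country: geo_country    # lookup column -> event field
      city: geo_city
```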
Run & Trace checklist for AI jobs
- Confirm `input-field` receives the expected text payload.
- Verify the `response-field` shape is stable across multiple samples.
- Inspect token usage fields, when configured, to estimate cost and rate-limit pressure.
- Test malformed or empty inputs to validate `on-error` behavior.
- Re-run with representative data volume to validate latency and concurrency settings.
For complete parameter details, use the DSL index and open the linked action pages.