Web Dav Store (web-dav-store)
Write to a WebDAV-compatible object store.
Minimal example

```yaml
output:
  web-dav-store:
    base-url: ~
    object-name:
      name: ~
```

```json
{ "output": { "web-dav-store": { "base-url": null, "object-name": { "name": null } } } }
```
Advanced
| Field | Type | Required | Description |
|---|---|---|---|
| config | map (string) | | Additional configuration passed directly to the HTTP client (see the object_store HTTP configuration options). |
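A sketch of the `config` map, following the shape of the minimal example above. The key names shown are illustrative placeholders, not verified HTTP client settings; consult the object_store HTTP configuration reference for the exact keys.

```yaml
output:
  web-dav-store:
    base-url: https://host/webdav
    object-name:
      name: events.json
    config:
      # illustrative keys only -- check the object_store http configs
      # for the names your HTTP client actually accepts
      some.http.setting: "value"
```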
Behavior
| Field | Type | Required | Description |
|---|---|---|---|
| mode | Mode | | Specify whether objects are put or deleted. Allowed values: put, delete |
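For example, a delete-mode configuration (a sketch following the minimal example's shape; the field name `file_name` is a hypothetical event field):

```yaml
output:
  web-dav-store:
    base-url: https://host/webdav
    mode: delete              # allowed values: put, delete
    object-name:
      field: file_name        # delete the object named in this event field
```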
Location
| Field | Type | Required | Description |
|---|---|---|---|
| base-url | string | ✅ | Base URL for the WebDAV endpoint (e.g., https://host/webdav). |
| bucket-name | string | | Optional bucket or root path that will be prepended to object names. |
| object-name | Object Name | ✅ | File name (may contain slashes). Allowed values: name, field |
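A location sketch combining the three fields above (the URL and names are placeholders):

```yaml
output:
  web-dav-store:
    base-url: https://host/webdav
    bucket-name: backups            # optional root path prepended to object names
    object-name:
      name: logs/2024/app.json      # object names may contain slashes
```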
Object Properties
| Field | Type | Required | Description |
|---|---|---|---|
| disable-object-name-guid | boolean (bool) | | Disable the GUID prefix so the object name is treated literally (off for deletes). Default: false |
| guid-prefix | string | | Prepended to the GUID; defaults to "/". |
| guid-suffix | string | | Appended to the GUID, if specified. |
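A sketch of the GUID-related properties. How the GUID is composed with the object name is not spelled out here, so the comments below describe only what each field contributes:

```yaml
output:
  web-dav-store:
    base-url: https://host/webdav
    object-name:
      name: events.json
    guid-prefix: "batch-"       # prepended to the generated GUID
    guid-suffix: ".part"        # appended to the generated GUID
    # disable-object-name-guid: true   # use the object name literally, no GUID
```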
Processing
| Field | Type | Required | Description |
|---|---|---|---|
| batch | Batch | | Write events as batches. |
| input-field | field (string) | | Use the specified field as the content for the file line. Example: data_field |
| preprocessors | Output Preprocessor[] | | Preprocessors transform data before it is made available for upload; they run in the order specified. Allowed values: gzip, parquet, base64 |
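A processing sketch combining batching, an input field, and a preprocessor (field and object names are placeholders):

```yaml
output:
  web-dav-store:
    base-url: https://host/webdav
    object-name:
      name: events.json.gz
    input-field: data_field       # use this field as each file line
    batch:
      mode: fixed
      fixed-size: 1000
      timeout: 500ms
    preprocessors:
      - gzip                      # preprocessors run in the order listed
```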
Reliability
| Field | Type | Required | Description |
|---|---|---|---|
| retry | Retry | | Retry on failure. |
| track-schema | boolean (bool) | | Check the schema of the written data and update __SCHEMA_NUMBER (written data must be JSON). Default: false |
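A reliability sketch; the retry values are illustrative:

```yaml
output:
  web-dav-store:
    base-url: https://host/webdav
    object-name:
      name: events.json
    retry:
      timeout: 2s
      retries: 5
    track-schema: true      # written data must be JSON
```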
Schema
- Object Name Options
- Batch Fields
- Retry Fields
- Config Table
- Mode Options
- Batch - Mode Options
- Output Preprocessor Options
Object Name Options
| Option | Name | Type | Description |
|---|---|---|---|
| name | Name | string | Object Name. |
| field | Field | string | Field containing the Object Name. |
Batch Fields
| Field | Type | Required | Description |
|---|---|---|---|
| fixed-size | number (integer) | | Maximum number of events in an output batch. Example: 42 |
| mode | Mode | ✅ | If 'document', send on end of document generated by the input; if 'fixed', use fixed-size. Allowed values: fixed, document |
| timeout | time-interval (string) | ✅ | Interval after which the batch is sent, to keep throughput going. Default: 100ms. Examples: 500ms, 2h |
| header | multiline-text (string) | | Put a header line before the batch. |
| footer | multiline-text (string) | | Put a footer line after the last line of the batch. |
| use-document-marker | boolean (bool) | | Enrich the job metadata with a document marker (for document handling in batch mode). Default: false |
| wrap-as-json | boolean (bool) | | Format the output batch as a JSON array. Default: false |
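A document-mode batch sketch using a header and footer to frame each batch (the bracket literals are illustrative):

```yaml
batch:
  mode: document            # send on end of document generated by the input
  timeout: 100ms
  header: "["               # line written before the batch
  footer: "]"               # line written after the last line of the batch
  # wrap-as-json: true      # alternative: format the batch as a JSON array
```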
Retry Fields
| Field | Type | Required | Description |
|---|---|---|---|
| timeout | time-interval (string) | ✅ | Timeout per attempt (e.g. 500ms, 2s; default is 30). Examples: 500ms, 2h |
| retries | number (integer) | | Number of retries. Example: 42 |
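A standalone retry fragment in the shape described above (values are illustrative):

```yaml
retry:
  timeout: 500ms     # per-attempt timeout
  retries: 3         # number of retries after the first failure
```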
Config Table
| Setting | Value |
|---|---|
| setting.name | value |
Value format: templated-text.
Mode Options
| Value | Name | Description |
|---|---|---|
| put | put | Put Objects |
| delete | delete | Delete Objects |
Batch - Mode Options
| Value | Name | Description |
|---|---|---|
| fixed | fixed | Fixed |
| document | document | Document |
Output Preprocessor Options
| Value | Name | Description |
|---|---|---|
| gzip | gzip | Gzip the output data |
| parquet | parquet | Extract the received data as JSON rows from a parquet file |
| base64 | base64 | Decode base64 as binary |
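Since preprocessors run in the order specified, they can be chained; a sketch that first decodes base64 payloads to binary and then gzips the result before upload:

```yaml
preprocessors:
  - base64     # decode base64 as binary
  - gzip       # then gzip the output data
```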