System Requirements

Use this page to confirm your hosts meet the baseline requirements before starting the First pipeline guide or any production install. LyftData ships as prebuilt, self-contained 64-bit binaries for each supported platform.

Quick checklist

  • Confirm the operating system appears in the supported list below
  • Verify hardware meets or exceeds the recommended CPU, RAM, and disk targets (a quick command-line check is sketched after this list)
  • Ensure networking/firewall rules allow workers to reach the server on the configured ports
  • Decide where staging directories live and ensure the service account can write to them
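
On Linux hosts, a quick sanity check against these targets might look like the following; the commands assume standard GNU coreutils/procps.

    uname -m   # architecture: x86_64 or aarch64
    nproc      # CPU cores (target: 4+)
    free -h    # total RAM (target: 4-8 GB depending on platform)
    df -h      # free disk on the volumes you plan to use (target: 10 GB+)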

Supported operating systems

LyftData is available for 64-bit Linux, Windows, and macOS. Server and worker processes interoperate across operating systems, so a worker on one platform can connect to a server on another.

| OS           | Architecture | RAM  | CPU Cores | Disk space |
|--------------|--------------|------|-----------|------------|
| Linux (gnu)  | x86-64       | 4 GB | 4 cores   | 10 GB      |
| Linux (gnu)  | aarch64      | 4 GB | 4 cores   | 10 GB      |
| Darwin / Mac | aarch64      | 8 GB | 4 cores   | 10 GB      |
| Windows      | x86-64       | 8 GB | 4 cores   | 10 GB      |

Supported operating system versions

| OS           | Architecture | Minimum Versions                   |
|--------------|--------------|------------------------------------|
| Linux (gnu)  | x86-64       | kernel 3.2+, glibc 2.17+           |
| Linux (gnu)  | aarch64      | kernel 4.1+, glibc 2.17+           |
| Darwin / Mac | aarch64      | macOS 11.0 (Big Sur) or later      |
| Windows      | x86-64       | Windows 10+, Windows Server 2016+  |
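
On Linux, you can check the kernel and glibc versions against this table; ldd ships with glibc and reports its version on the first line of output.

    uname -r        # kernel version (needs 3.2+ on x86-64, 4.1+ on aarch64)
    ldd --version   # glibc version (needs 2.17+)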

Sizing and scaling

The following baseline guidance can help you plan capacity. Actual sizing depends on connector mix, event sizes, and action complexity.

  • Small (dev/test, single worker):
    • Server: 2 vCPU, 2–4 GB RAM, 10 GB disk
    • Worker: 2 vCPU, 2–4 GB RAM, 10 GB disk
    • Throughput: up to tens of events/sec with simple actions
  • Medium (team, multiple jobs, 1–3 workers):
    • Server: 4 vCPU, 8 GB RAM, 50–100 GB disk
    • Workers (x1–3): 4 vCPU, 8 GB RAM, 20–50 GB disk each
    • Throughput: hundreds of events/sec depending on I/O
  • Large (production, 3+ workers):
    • Server: 8 vCPU, 16–32 GB RAM, 200+ GB disk
    • Workers (horizontally scale): 4–8 vCPU, 8–32 GB RAM, 50–200 GB disk each
    • Throughput: scales roughly linearly as you add workers for parallelizable jobs (see the example after this list)
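
As an illustration of that linear scaling: if a single medium worker sustains roughly 300 events/sec on your workload, a parallelizable job split across three workers should approach 900 events/sec, less coordination overhead. Measure one worker on representative data first, then extrapolate.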

Notes and tips:

  • Disk usage: the Server stores logs/metrics in the staging directory. Cleanup runs when the disk usage threshold is reached (see Configuration). Plan headroom accordingly; a quick check is sketched after this list.
  • Network: place workers close to data sources and sinks to minimize latency and egress.
  • Scaling strategy: prefer horizontal scaling; add workers, and use worker channels to split pipelines across jobs when you need fan-out or fan-in.
  • Spiky loads: use scheduled triggers or rate limits on inputs to smooth ingestion.
  • Observability: monitor worker CPU, memory, and backpressure; increase worker count before saturation.
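
For the disk-usage note above, a periodic headroom check on the staging directory is cheap. The path below is an assumption; substitute your configured staging directory.

    du -sh /var/lib/lyftdata/staging   # space currently used by staging data (hypothetical path)
    df -h /var/lib/lyftdata/staging    # free space on the underlying volume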

Platform notes: Windows and Mac

LyftData provides native builds for Windows and macOS. The quick-start commands are the same; differences are mostly around service management and paths.

  • Windows:
    • Run in a terminal for development: lyftdata run server
    • For running as a service, use the MSI installer or lyftdata service install (native Windows Service manager support). Ensure the service account has write access to the staging directory.
    • Paths: prefer data directory paths without spaces. Environment variables can be set via System Properties or a service wrapper.
  • macOS:
    • Run in Terminal for development: ./lyftdata run server
    • For background operation, use launchd (create a plist; a sketch follows this list) or a process supervisor like brew services. Ensure the data directory is writable by the user running the service.
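
For macOS background operation, here is a minimal launchd sketch; the agent label, plist path, and binary location are assumptions to adapt to your install.

    # Write a user LaunchAgent that keeps the server running (hypothetical paths).
    cat > ~/Library/LaunchAgents/com.lyftdata.server.plist <<'EOF'
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <key>Label</key><string>com.lyftdata.server</string>
      <key>ProgramArguments</key>
      <array>
        <string>/usr/local/bin/lyftdata</string>
        <string>run</string>
        <string>server</string>
      </array>
      <key>RunAtLoad</key><true/>
      <key>KeepAlive</key><true/>
    </dict>
    </plist>
    EOF

    # Load the agent into the current user's session.
    launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.lyftdata.server.plist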

On both platforms:

  • Default Server bind address is 127.0.0.1:3000 (see Configuration). Use --bind-address 0.0.0.0:3000 to accept remote connections.
  • You can accept the EULA non-interactively for automation using the environment variable noted in the Configuration page.
  • Workers authenticate with an API key and connect to the Server URL, just as in Linux deployments (see the sketch below).
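
Putting the pieces together, a minimal two-host sketch follows. Only --bind-address is documented above; the worker subcommand and its flags (--server-url, --api-key) are illustrative assumptions, so check your CLI's help output for the actual names.

    # On the server host: listen on all interfaces instead of loopback.
    lyftdata run server --bind-address 0.0.0.0:3000

    # On each worker host (hypothetical flags): supply the Server URL and API key.
    lyftdata run worker --server-url http://server.example.internal:3000 --api-key "$LYFTDATA_API_KEY"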