
Netsy 1.0: Multi-Node KV DB using Object Storage

TL;DR

  • Netsy is a replicated key-value database backed by object storage.
  • You can use it as a drop-in etcd replacement for Kubernetes, or as durable low-latency persistence for any application.
  • Netsy 1.0 has been released, with full multi-node support including quorum-based replication inspired by PostgreSQL synchronous replication.
  • Object storage support has been expanded to include Google Cloud Storage (GCS) alongside S3 and S3-compatible storage.
  • A Kubernetes Helm chart is now available for deploying multi-node Netsy clusters.
  • Please star the Netsy GitHub repository!

When we first released Netsy last August, it was a single-node etcd-compatible KV database that stored data in S3. The vision was always to scale from single-node to large multi-node clusters without having to migrate solutions. Today, with Netsy 1.0, that vision is realised.

What’s new in 1.0

Multi-node replication

The headline feature is full multi-node support, which was always the plan for Netsy but had not yet been implemented in the developer preview.

Netsy now supports clusters scaling from zero to many nodes, with a replication model inspired by PostgreSQL’s synchronous replication.

The write path works like this: the Primary node receives a write, replicates the record to Replicas via a bidirectional gRPC stream, waits for a configurable quorum of Receipts, and then commits.

Because different use cases call for different trade-offs, Netsy 1.0 comes with a tunable consistency model:

  • Majority quorum (quorum: -1): The default. Writes are committed once a majority of nodes have acknowledged them.
  • Static quorum (quorum: N): Set a fixed number of required acknowledgements.
  • Disable quorum (quorum: 0): Writes go synchronously to object storage, like the original single-node behaviour. Useful for single-node deployments or when you want object storage as the source of truth.
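A minimal sketch of how these settings could map to a required acknowledgement count (the helper name and majority arithmetic are illustrative, not Netsy's internal API):

```go
package main

import "fmt"

// requiredAcks returns how many acknowledgements the Primary must
// collect before committing a write, for a given quorum setting:
//   quorum == -1 → majority of all nodes in the cluster
//   quorum == 0  → no replication quorum (synchronous object-storage write)
//   quorum == N  → a fixed number of acknowledgements
// Hypothetical helper; Netsy's internal naming may differ.
func requiredAcks(quorum, clusterSize int) int {
	switch {
	case quorum == 0:
		return 0 // commit goes through object storage instead
	case quorum < 0:
		return clusterSize/2 + 1 // strict majority
	default:
		return quorum
	}
}

func main() {
	fmt.Println(requiredAcks(-1, 5)) // 3
	fmt.Println(requiredAcks(2, 5))  // 2
	fmt.Println(requiredAcks(0, 1))  // 0
}
```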

Object storage abstraction and GCS support

Netsy was originally built on S3, but with 1.0 we’ve introduced a provider-agnostic object storage abstraction. This means Netsy now supports:

  • Amazon S3 and any S3-compatible storage that supports preconditions on PutObject (such as SeaweedFS)
  • Google Cloud Storage (GCS) with native support for Application Default Credentials and GCS-specific storage classes

Each provider uses its native conditional write primitives (ETags for S3, generation/metageneration preconditions for GCS) to ensure safe concurrent operations. The provider is selected via a single config value:

{
  "storage": {
    "provider": "gcs", // or "s3"
    "bucket_name": "my-netsy-bucket"
  }
}

Two-tier leader election

Netsy uses a two-tier leader election model. The first tier uses s3lect, our Open Source leader election package, to elect an Elector node using object storage as the coordination mechanism. The Elector then runs a second-tier election to select the Primary node for writes.

With quorum replication, the Primary must be a node that has the latest revision available in the cluster, otherwise committed data could be lost. The Elector handles this by maintaining direct gRPC connections to every node, tracking health via heartbeats, and querying each node’s latest revision before making an election decision.

Without the s3lect layer, determining which node is the Elector would be a manual operation, much like selecting a primary PostgreSQL node. s3lect automates the process of nominating the Elector using object storage, which means the entire Netsy stack (data storage, replication, and coordination) runs on object storage plus compute. No additional infrastructure like etcd or ZooKeeper is required.
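The Elector's second-tier decision reduces to choosing a healthy node holding the highest known revision. A sketch of that selection rule (the struct and function are illustrative, not Netsy's implementation):

```go
package main

import "fmt"

// nodeStatus is what the Elector learns about each node via gRPC
// heartbeats and revision queries. Illustrative struct, not Netsy's API.
type nodeStatus struct {
	ID       string
	Healthy  bool
	Revision int64
}

// pickPrimary selects a healthy node with the highest known revision,
// so that electing it cannot lose committed data. The second return
// value is false when no healthy candidate exists.
func pickPrimary(nodes []nodeStatus) (string, bool) {
	var bestID string
	var bestRev int64 = -1
	for _, n := range nodes {
		if n.Healthy && n.Revision > bestRev {
			bestID, bestRev = n.ID, n.Revision
		}
	}
	return bestID, bestRev >= 0
}

func main() {
	nodes := []nodeStatus{
		{"node-1", true, 1041},
		{"node-2", false, 1043}, // unhealthy: skipped despite newest data
		{"node-3", true, 1042},
	}
	id, ok := pickPrimary(nodes)
	fmt.Println(id, ok) // node-3 true
}
```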

mTLS everywhere

All communication between nodes and from clients uses mutual TLS with certificate-based identity. Certificates use URI SANs to encode node identity and cluster membership, which enables native integration with cert-manager’s CSI driver in Kubernetes.
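To illustrate how a URI SAN can carry identity, the sketch below issues a throwaway self-signed certificate and reads the SAN back with Go's standard library. The `netsy://<cluster>/<node>` scheme is a hypothetical encoding for this example; Netsy's actual SAN format may differ:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net/url"
	"time"
)

// selfSignedWithURI issues a throwaway self-signed certificate whose
// URI SAN carries a node identity.
func selfSignedWithURI(rawURI string) (*x509.Certificate, error) {
	id, err := url.Parse(rawURI)
	if err != nil {
		return nil, err
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: id.Path},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
		URIs:         []*url.URL{id}, // identity travels in the URI SAN
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	cert, err := selfSignedWithURI("netsy://my-cluster/node-1")
	if err != nil {
		panic(err)
	}
	// A peer can recover both cluster membership and node identity
	// from the SAN during the mTLS handshake.
	fmt.Println(cert.URIs[0].Host) // my-cluster
	fmt.Println(cert.URIs[0].Path) // /node-1
}
```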

Observability

Netsy 1.0 ships with comprehensive Prometheus metrics and structured logging:

  • Node state gauges for health, elector, and primary state
  • Write path metrics including quorum rollbacks, write duration, and path switching
  • Replication metrics for stream health, receipt age, and replica tracking
  • Object storage metrics for reads, writes, and duration
  • Election metrics for duration, success/failure rates, and contact results
  • Structured log events for state transitions, elections, writes, compaction, and more

Configuration

Configuration is now split into per-node settings (environment variables for identity and addresses) and per-cluster settings (a shared JSONC config file for cluster behaviour and thresholds). This makes it straightforward to deploy identical config files across all nodes while varying only the node-specific values.
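For illustration, a shared cluster file might combine the quorum and storage settings shown earlier, while node identity and addresses come from environment variables. The structure below only uses keys already mentioned in this post; consult the configuration documentation for the authoritative schema:

```jsonc
// cluster.jsonc — identical on every node in the cluster
{
  "quorum": -1, // majority quorum replication (the default)
  "storage": {
    "provider": "s3",
    "bucket_name": "my-netsy-bucket"
  }
}
```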

Kubernetes Helm chart

A Helm chart is available for deploying Netsy clusters on Kubernetes, with native cert-manager CSI driver integration for automatic TLS certificate provisioning.

How it works under the hood

For those interested in the internals, here’s how a multi-node Netsy cluster operates:

  1. Startup: Each node loads its local SQLite database, downloads the latest snapshot from object storage if needed, backfills any gaps from chunk files, and connects to the Primary’s replication stream to catch up.

  2. Writes: Clients can connect to any node. Replicas transparently proxy write requests to the Primary. The Primary writes to its local SQLite database, replicates to connected Replicas, waits for quorum, and commits. Records are asynchronously buffered and flushed to object storage.

  3. Reads: Every node serves reads from its local SQLite database up to the committed revision, so read load is distributed across the cluster.

  4. Failover: If the Primary becomes unavailable, the Elector runs a new election, selecting the healthiest node with the latest revision. The new Primary completes a preflight check (uploading any un-synced records to object storage and relaying them to Replicas) before accepting writes.

  5. Compaction: The Primary periodically coordinates compaction across all nodes, checking watch revisions to ensure no active watches would be invalidated before compacting.

Getting started

Netsy is available on GitHub: github.com/nadrama-com/netsy

Netsy requires object storage, TLS certificates, and a cluster config file for both single-node and multi-node deployments. The deployment documentation covers everything you need to get a cluster running, including deploying Netsy on auto-scaled VMs using Nstance, Nadrama’s Open Source VM auto-scaler for AWS, Google Cloud, and Proxmox.

What’s next

With multi-node support shipped, we’re now focusing on deploying Netsy to production. We’d love to hear your feedback: please join our Discord server or open an issue on GitHub.

And as always, please show your support by starring the Netsy GitHub repository. Thanks for reading!




Nadrama


Copyright © 2026 Nadrama Pty Ltd