Overview

StrataDB offers three durability modes that control the tradeoff between write performance and crash safety. The naming is designed to be self-explanatory — users should be able to pick the right mode without reading documentation.

| Mode | Data location | Crash behavior | Write latency target |
| --- | --- | --- | --- |
| Cache | Memory only | All data lost on process exit | < 5 us |
| Standard | Disk + periodic WAL flush | May lose last flush interval on crash | < 10 us |
| Always | Disk + per-write WAL sync | Zero data loss on crash | ~6 ms (NVMe fsync) |

Mode Details

Cache

In-memory only. No disk persistence, no WAL. Data exists only for the lifetime of the process. Equivalent to a sophisticated in-process cache with full primitive support (KV, State, Event, JSON, Vector, Branch).

Use cases: Session stores, request-scoped scratch data, feature flag caches, real-time analytics accumulators, test suites, prototyping.

Contract: Data is available immediately after write. Data is gone when the process exits (clean or crash). No disk I/O occurs.

API:

let db = Strata::cache()?;

Prior art: Redis without persistence, memcached, in-process HashMaps.
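The last item of that prior-art list captures the contract exactly: Cache mode behaves like an in-process map with extra primitives. A minimal stand-in, using only the standard library (no StrataDB types):

```rust
use std::collections::HashMap;

/// A bare in-process KV map: the same contract Cache mode offers
/// (immediate read-after-write, no disk I/O, nothing survives exit),
/// minus StrataDB's other primitives.
fn demo() -> Option<Vec<u8>> {
    let mut kv: HashMap<String, Vec<u8>> = HashMap::new();
    kv.insert("session:42".into(), b"alice".to_vec());
    // Read-after-write is immediate; no disk I/O occurs anywhere.
    kv.get("session:42").cloned()
}

fn main() {
    assert_eq!(demo(), Some(b"alice".to_vec()));
    println!("cache demo ok");
}
```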

Standard

Disk-backed with background WAL flushing. Writes go to memory immediately and are appended to an in-memory WAL buffer. A background thread flushes the buffer to disk periodically (configurable interval, default: every second). Reads always see the latest in-memory state regardless of flush status.

Use cases: Most production workloads — web applications, APIs, content management, analytics, any workload where losing the last second of writes on a crash is acceptable.

Contract: Data survives clean shutdowns. On crash, writes from the last flush interval (up to ~1 second by default) may be lost. All other data is recoverable from the WAL.

API:

// Standard is the default — just open with a path
let db = Strata::open("/data/mydb")?;

Config file (/data/mydb/strata.toml):

durability = "standard"

Prior art: Redis appendfsync everysec, SQLite PRAGMA synchronous = NORMAL, MySQL innodb_flush_log_at_trx_commit = 2.
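The flush interval is configurable and would live alongside the durability key. The key name below is illustrative only — this document does not specify it, so check the shipped config schema:

```toml
# strata.toml — `flush_interval_ms` is a hypothetical key name
durability = "standard"
flush_interval_ms = 1000   # default: flush the WAL buffer every second
```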

Always

Disk-backed with per-write WAL sync. Every write is fsynced to disk before the call returns. This provides the strongest durability guarantee: if the write call returned Ok, the data is on disk.

Use cases: Financial transactions, audit logs, billing systems, any workload where losing even one write is unacceptable.

Contract: Every write that returns successfully is durable on disk. Data survives both clean shutdowns and crashes with zero loss.

API:

let db = Strata::open("/data/mydb")?;

Config file (/data/mydb/strata.toml):

durability = "always"

Prior art: Redis appendfsync always, SQLite PRAGMA synchronous = FULL, PostgreSQL synchronous_commit = on.
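The Always contract reduces to one rule: the write call does not return until the bytes are synced. A self-contained sketch of that rule using `std::fs` — `sync_data` stands in for the per-write WAL sync; none of this is StrataDB's actual code:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

/// Append a record and fsync before returning — the Always rule:
/// if this returns Ok, the bytes are on disk, not just in the page cache.
fn durable_append(path: &Path, record: &[u8]) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    f.write_all(record)?;
    f.sync_data()?; // the ~6 ms NVMe cost lives here, on every write
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("always_demo.wal");
    durable_append(&path, b"credit:42\n")?;
    assert!(std::fs::read(&path)?.ends_with(b"credit:42\n"));
    println!("durable write ok");
    Ok(())
}
```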

Design Principles

Two axes, not one

Durability has two independent dimensions that were previously conflated:

  1. Storage: Where does data live? (Memory only vs disk-backed)
  2. WAL policy: When is data synced to disk? (Never, periodically, every write)

The three modes represent the useful combinations of these axes:

| | No WAL | Background WAL | Per-write WAL |
| --- | --- | --- | --- |
| Memory only | Cache | (not useful) | (not useful) |
| Disk | (dropped) | Standard | Always |

The dropped combination (disk + no WAL) was considered but intentionally excluded:

  • If Standard mode achieves near-Cache write latency (the design target), there is no performance reason to skip the WAL on disk-backed databases.
  • Users who want fast disk writes without crash recovery are better served by Standard with a short flush interval than by a mode that offers no recovery at all.
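The two axes can be modeled directly. A sketch (illustrative types, not StrataDB's internals) that makes the dropped and not-useful cells explicit:

```rust
#[derive(Debug, PartialEq)]
enum Storage { MemoryOnly, Disk }

#[derive(Debug, PartialEq)]
enum WalPolicy { None, Background, PerWrite }

#[derive(Debug, PartialEq)]
enum DurabilityMode { Cache, Standard, Always }

/// Map an (storage, WAL policy) pair to a mode; the cells the design
/// excludes return None (memory + WAL is not useful, disk + no WAL was dropped).
fn mode_for(storage: Storage, wal: WalPolicy) -> Option<DurabilityMode> {
    match (storage, wal) {
        (Storage::MemoryOnly, WalPolicy::None) => Some(DurabilityMode::Cache),
        (Storage::Disk, WalPolicy::Background) => Some(DurabilityMode::Standard),
        (Storage::Disk, WalPolicy::PerWrite) => Some(DurabilityMode::Always),
        _ => None,
    }
}

fn main() {
    assert_eq!(mode_for(Storage::Disk, WalPolicy::Background), Some(DurabilityMode::Standard));
    assert_eq!(mode_for(Storage::Disk, WalPolicy::None), None); // intentionally dropped
    println!("axes ok");
}
```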

Read path is durability-agnostic

Read operations must never touch the durability layer. All reads are served from the in-memory store regardless of which mode is active. The durability mode only affects the write path.

This means:

  • kv_get, state_get, json_get, vector_get, vector_search, event_get should all have identical latency across Cache, Standard, and Always.
  • The only performance difference between modes is on write operations.

Standard should be fast

The design target for Standard mode is write latency within 2-5x of Cache mode. Since writes go to memory + an in-memory buffer (no fsync in the hot path), the overhead should be minimal — just the cost of appending to the WAL buffer.
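The intended split — buffer append on the hot path, fsync on a background thread — can be sketched with std threads. Here a shared `Vec` stands in for the WAL buffer and another for the synced-to-disk state; this is an illustration of the design target, not StrataDB's actual implementation:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

/// Standard-mode write path sketch: the hot path only appends to an
/// in-memory buffer; a background thread periodically drains ("fsyncs") it.
/// Returns how many records reached the durable store.
fn run_standard_demo(writes: &[&str]) -> usize {
    let buffer: Arc<Mutex<Vec<String>>> = Arc::new(Mutex::new(Vec::new()));
    let durable: Arc<Mutex<Vec<String>>> = Arc::new(Mutex::new(Vec::new()));
    let stop = Arc::new(AtomicBool::new(false));

    let (buf, dur, stopped) = (buffer.clone(), durable.clone(), stop.clone());
    let flusher = thread::spawn(move || {
        while !stopped.load(Ordering::Relaxed) {
            thread::sleep(Duration::from_millis(20)); // flush interval
            let drained: Vec<String> = buf.lock().unwrap().drain(..).collect();
            dur.lock().unwrap().extend(drained); // a real WAL would write + sync_data here
        }
        // Final flush on clean shutdown — why Standard survives clean exits.
        let drained: Vec<String> = buf.lock().unwrap().drain(..).collect();
        dur.lock().unwrap().extend(drained);
    });

    for w in writes {
        // Hot path: just a buffer append — no fsync, no disk I/O.
        buffer.lock().unwrap().push((*w).to_string());
    }

    stop.store(true, Ordering::Relaxed);
    flusher.join().unwrap();
    let n = durable.lock().unwrap().len();
    n
}

fn main() {
    assert_eq!(run_standard_demo(&["kv:put:a", "kv:put:b"]), 2);
    println!("flushed ok");
}
```

A crash before a flush would lose whatever is still in `buffer` — exactly the bounded-loss window Standard's contract describes.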

Current measured latencies (targets in parentheses):

| Operation | Cache | Standard (current) | Standard (target) |
| --- | --- | --- | --- |
| kv/put (1KB) | 1.02 us | 6.12 ms | 2-5 us |
| state/set | 2.21 us | 6.13 ms | 4-10 us |
| event/append | 267 us | 6.53 ms | 300-500 us |

The current Standard mode is ~3,000-6,000x slower than target because it performs synchronous fsync on every write (same as Always). Fixing this is tracked in issue #969.

Naming Rationale

The mode names were chosen for user understanding over technical accuracy:

  • Cache: Users know what a cache is. It’s fast, it’s temporary, it might disappear. No explanation needed.
  • Standard: “This is the normal one. Pick this if you’re not sure.” It’s the default, it’s what most people should use. The name deliberately avoids technical jargon.
  • Always: Answers the user’s question directly. “When is my data synced?” Always. One word, no ambiguity.

Previous naming (NoDurability / Buffered / Strict) was rejected because:

  • NoDurability defines itself by negation — it says what’s missing, not what it’s for.
  • Buffered describes the mechanism (we buffer writes), not the contract (your data is periodically synced).
  • Strict sounds punitive, like you’re being penalized for wanting safety.

Migration

Enum changes

// Before
pub enum DurabilityMode {
    None,
    Batched,
    Strict,
}

// After
pub enum DurabilityMode {
    Cache,
    Standard,
    Always,
}
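A migration shim for the rename is mechanical. The `From` impl below is a sketch, not a confirmed part of the crate:

```rust
#[derive(Debug, PartialEq)]
enum OldDurabilityMode { None, Batched, Strict }

#[derive(Debug, PartialEq)]
enum DurabilityMode { Cache, Standard, Always }

/// One-to-one rename: old variants map directly onto the new names.
impl From<OldDurabilityMode> for DurabilityMode {
    fn from(old: OldDurabilityMode) -> Self {
        match old {
            OldDurabilityMode::None => DurabilityMode::Cache,
            OldDurabilityMode::Batched => DurabilityMode::Standard,
            OldDurabilityMode::Strict => DurabilityMode::Always,
        }
    }
}

fn main() {
    assert_eq!(DurabilityMode::from(OldDurabilityMode::Batched), DurabilityMode::Standard);
    assert_eq!(DurabilityMode::from(OldDurabilityMode::Strict), DurabilityMode::Always);
    println!("migration mapping ok");
}
```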

API changes

// Before (builder pattern)
Database::ephemeral()
Database::builder().path(p).no_durability().open()
Database::builder().path(p).buffered().open()
Database::builder().path(p).strict().open()

// After (unified API)
Strata::cache()             // in-memory, no config file
Strata::open(p)             // reads strata.toml, defaults to standard

Durability is now configured via strata.toml in the data directory:

# strata.toml
durability = "standard"   # or "always"
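Resolving the durability key can be as small as a string match. A stdlib-only sketch — a real implementation would use a proper TOML parser, and Cache is omitted because it never comes from a config file (`Strata::cache()` takes no path):

```rust
#[derive(Debug, PartialEq)]
enum DurabilityMode { Standard, Always }

/// Extract `durability = "..."` from strata.toml text.
/// Absent or unrecognized values fall back to Standard, the documented default.
fn durability_from_toml(text: &str) -> DurabilityMode {
    for line in text.lines() {
        let line = line.split('#').next().unwrap_or("").trim(); // drop comments
        if let Some(rest) = line.strip_prefix("durability") {
            let value = rest.trim_start().strip_prefix('=').unwrap_or("").trim();
            if value.trim_matches('"') == "always" {
                return DurabilityMode::Always;
            }
            return DurabilityMode::Standard;
        }
    }
    DurabilityMode::Standard // key absent: use the default
}

fn main() {
    assert_eq!(durability_from_toml("durability = \"always\"\n"), DurabilityMode::Always);
    assert_eq!(durability_from_toml(""), DurabilityMode::Standard);
    println!("config parse ok");
}
```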