Runtime Model
The runtime model defines how AssetCore processes state changes and serves queries with deterministic, auditable guarantees. It is the conceptual backbone for every API, SDK, and operational workflow you will encounter elsewhere in the docs.
Problem this concept solves
Distributed systems face a fundamental tension between consistency, availability, and partition tolerance. Traditional approaches often sacrifice one to achieve the others, leading to problems that are hard to debug and even harder to reproduce. In spatial systems, these failures show up as incorrect positions, conflicting updates, and irrecoverable state drift.
- Race conditions when multiple writers update the same state
- Lost updates when concurrent changes overwrite each other
- Non-deterministic behavior that makes debugging and auditing difficult
- Expensive consensus protocols that add latency and complexity
AssetCore solves these problems by embracing a single-writer architecture with event sourcing, trading horizontal write scalability for absolute determinism and simplicity. Namespaces recover that scalability: each namespace is an isolated single-writer world, so aggregate write throughput still scales horizontally without compromising the determinism contract.
Core ideas
Single-Writer World
Each world has exactly one writer. This is the architectural choice that makes everything else possible.
In distributed systems, multiple writers require coordination: distributed locks, consensus protocols (Raft, Paxos), or eventual consistency with conflict resolution. Each approach trades something precious—either latency (locking/consensus) or determinism (eventual consistency). When debugging a production incident, “eventually consistent” means “might be inconsistent right now, good luck.”
Single-writer eliminates this entire problem class:
- No coordination overhead: One writer means no distributed locks, no two-phase commit, no consensus protocol overhead. State transitions are instant and local.
- No race conditions: When only one process mutates state, operations execute in a total order. You can reason about cause-and-effect linearly. Debugging becomes forensic analysis, not probabilistic guesswork.
- No consensus protocol tax: Raft and Paxos add 2-3x latency and substantial complexity. Single-writer trades horizontal write scaling for simplicity and speed.
The trade-off is real: you can’t scale write throughput horizontally within a single world. The write daemon serializes all commits through one pipeline, so throughput is bounded by what a single node can handle. AssetCore makes that trade-off explicit and recovers scalability through namespace sharding: each namespace is an isolated single-writer world, so aggregate throughput scales horizontally across namespaces. This makes determinism a design choice rather than an accidental property.
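The single-writer contract can be sketched in a few lines. Everything here (`World`, `commit`, the namespace keys) is illustrative, not AssetCore’s actual API: the point is that each world orders its commits locally with no coordination, and scale comes from running many independent worlds.

```python
class World:
    """One namespace = one single-writer world: every commit
    passes through this object in a strict total order."""

    def __init__(self, name):
        self.name = name
        self.seq = 0   # monotonically increasing commit sequence
        self.log = []  # append-only commit log for this world

    def commit(self, ops):
        """The only code path that mutates this world's state:
        no locks, no consensus, just a local ordered append."""
        self.seq += 1
        self.log.append((self.seq, tuple(ops)))
        return self.seq

# Horizontal scale comes from sharding: many isolated worlds,
# each with its own writer, never coordinating with each other.
worlds = {
    "physics": World("physics"),
    "inventory": World("inventory"),
}
worlds["physics"].commit([("move", "a1", (3, 5))])
worlds["inventory"].commit([("set_balance", "acct-1", 50)])
```

Because the two worlds never share state, their sequence numbers advance independently; there is no global order across namespaces, and none is needed.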
Commit Log as Source of Truth
All state changes are recorded as events in an append-only commit log:
- Events are sealed into batches before acknowledgement
- Batches are appended before clients receive success responses (durability depends on backend)
- The log is the authoritative record of everything that happened
This design choice—treating the log as the authoritative source—has profound implications. It means projections are disposable: if you lose read state, replay from the log. It means analytics and notifications can never drift from the source of truth because they consume the same log. It means forensic debugging becomes time travel: replay the log up to any point and inspect exact state. The backend is pluggable, so you can run in-memory for simulations/tests (fast, non-durable) or use file/segmented backends for crash persistence; the interface is designed to accommodate additional backends as needed.
Without this, you’d maintain separate operational and analytical stores, fight synchronization drift, and lose the ability to answer definitively “what was the state at 3:47 PM on Tuesday when the error occurred?” The commit log makes that question trivial and enables deterministic replay: given the same sequence of events, any reader reconstructs the same state, so analytics, notifications, and projections are always derived from the same source of truth.
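A toy model of the log-as-source-of-truth idea, with hypothetical names (`CommitLog`, `state_at`): batches are sealed when appended, and “what was the state at position N?” is answered by replaying a prefix of the log.

```python
class CommitLog:
    """Append-only log of sealed batches; the authoritative record."""

    def __init__(self):
        self.batches = []

    def append(self, events):
        batch = tuple(events)      # sealed: immutable once appended
        self.batches.append(batch)
        return len(self.batches)   # position doubles as the ack

def state_at(log, position):
    """Time travel: replay the log up to `position` and return
    the exact state at that point in history."""
    state = {}
    for batch in log.batches[:position]:
        for key, value in batch:
            state[key] = value     # events carry post-state
    return state

log = CommitLog()
log.append([("balance/acct-1", 50)])
log.append([("balance/acct-1", 70), ("pos/a1", (3, 5))])
```

Here `state_at(log, 1)` shows the world as it was after the first batch, while `state_at(log, 2)` shows the current head; losing a projection costs nothing, because any view can be rebuilt this way.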
Projections
Projections are read-optimized views of state derived from the commit log:
- The read daemon tails the commit log for new batches
- Events are applied via replay to update in-memory state
- Snapshots are published atomically for query serving
Projections are eventually consistent with the commit log. The gap between committed events and published projections is the freshness lag, which is measured and exposed through read endpoints.
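A minimal sketch of a projection and its freshness lag. The names (`Projection`, `tail`, `freshness_lag`) are illustrative stand-ins for what the read daemon does far more efficiently: fold new batches into a view, then publish it with an atomic swap.

```python
class Projection:
    """Read-optimized view derived from a commit log. `snapshot`
    is replaced by a single reference swap, so queries never see
    a half-applied batch."""

    def __init__(self):
        self.applied = 0    # last log position folded into the view
        self.snapshot = {}  # published view served to queries

    def tail(self, batches, committed_head):
        view = dict(self.snapshot)
        for batch in batches[self.applied:committed_head]:
            for key, value in batch:
                view[key] = value
        self.snapshot = view          # atomic publish
        self.applied = committed_head

    def freshness_lag(self, committed_head):
        """How many committed batches the projection trails by."""
        return committed_head - self.applied

batches = [
    [("pos/a1", (0, 0))],
    [("pos/a1", (3, 5))],
    [("pos/a1", (4, 5))],
]
proj = Projection()
proj.tail(batches, 2)  # only the first two batches applied so far
```

With three batches committed and two applied, the lag is one batch: the projection is valid but slightly stale, which is exactly the eventual consistency described above.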
Three-Layer Architecture
The runtime separates concerns across three layers. This isn’t arbitrary abstraction—it’s how AssetCore achieves both performance and correctness.
Without separation, every operation would mix storage access, business logic validation, and transaction coordination in one monolithic function. That makes testing hard (can’t test validation without storage), replay brittle (business logic during replay risks non-determinism), and performance unpredictable (can’t optimize the hot path separately from the validation path).
AssetCore’s three layers give you:
| Layer | Responsibility | Behavior |
|---|---|---|
| L1 (Storage) | Raw data mutations | No validation, no events |
| L2 (Operations) | Business logic | Validates preconditions, emits events |
| L3 (Transactions) | Coordination | Records undo, handles replay |
L1 (Storage) is the performance layer. It exposes simple setters: “set this balance to N,” “place this instance at coordinates (x,y).” No validation, no events, just state mutation. This is what replay uses—pure, fast, deterministic state application.
L2 (Operations) is the correctness layer. It validates preconditions (“does this container exist?”, “is there space at this position?”), executes the business logic, and emits events describing what changed. This is where your domain rules live.
L3 (Transactions) is the coordination layer. It records undo steps for each operation so failures can trigger rollback. It manages the commit lifecycle: execute operations in sequence, seal events into a batch, append to the commit log backend, return success.
This separation ensures that:
- Fast replay: L1 setters are hot-path optimized, no validation overhead
- Testable business logic: L2 can be tested without storage infrastructure
- Atomic rollback: L3 undo log can reverse any sequence of operations
The cost: more abstraction layers. The gain: you can replay at thousands of events per second because L1 is decoupled from L2’s validation logic.
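The three layers can be caricatured in a few functions. These names (`Storage`, `deposit`, `commit`) are hypothetical stand-ins, not AssetCore’s API; the point is the separation: L1 sets values, L2 validates and emits post-state events, L3 seals those events into a batch.

```python
class Storage:
    """L1: raw setters. No validation, no events; this is the
    pure, fast path that replay uses."""

    def __init__(self):
        self.balances = {}

    def set_balance(self, acct, value):
        self.balances[acct] = value

def deposit(storage, acct, amount):
    """L2: business logic. Validates preconditions, computes the
    result, and emits an event carrying the post-state."""
    if amount <= 0:
        raise ValueError("deposit must be positive")
    new_balance = storage.balances.get(acct, 0) + amount
    storage.set_balance(acct, new_balance)
    return ("set_balance", acct, new_balance)  # event

def commit(storage, operations):
    """L3: coordination. Runs operations in sequence and seals
    their events into a batch (log append and undo elided)."""
    events = [op(storage) for op in operations]
    return tuple(events)  # sealed batch

s = Storage()
batch = commit(s, [lambda st: deposit(st, "acct-1", 30),
                   lambda st: deposit(st, "acct-1", 20)])
```

Note that the emitted events carry the resulting balance, not the deposit amount: replay can re-apply them through `set_balance` alone, never re-running the L2 arithmetic.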
How it fits into the system
The runtime model shapes every aspect of AssetCore:
Write Path:
- Client sends commit request
- Write daemon validates and executes operations (L2/L3)
- Events are sealed and appended to the commit log backend
- Client receives success with sequence number
Read Path:
- Read daemon tails commit log
- Events are replayed via L1 setters (idempotent)
- Projections are published via atomic swap
- Queries read from current projection
Recovery:
- Load checkpoint (last known good state)
- Replay events from checkpoint position
- Resume normal operation
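The recovery path reduces to “load checkpoint, replay forward.” A sketch with invented names (`recover`, a plain list standing in for the commit log backend):

```python
def recover(checkpoint_state, checkpoint_pos, log_batches):
    """Load the last known good state, then replay every batch
    after the checkpoint position up to the log head."""
    state = dict(checkpoint_state)
    for batch in log_batches[checkpoint_pos:]:
        for key, value in batch:
            state[key] = value  # idempotent post-state application
    return state, len(log_batches)

log = [[("a", 1)], [("b", 2)], [("a", 3)]]
checkpoint_state, checkpoint_pos = {"a": 1}, 1  # snapshot after batch 1

state, head = recover(checkpoint_state, checkpoint_pos, log)
```

Recovery cost is proportional to the distance between the checkpoint and the log head, which is why frequent checkpoints keep restarts fast.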
Key invariants and guarantees
Determinism
Given the same event sequence, replay produces byte-identical state. This is what makes time-travel debugging, disaster recovery, and forensic auditing possible.
Determinism is harder than it sounds. Most systems have hidden sources of non-determinism: system clocks (every replay sees a different timestamp), floating-point arithmetic (rounding varies by CPU), random number generation (non-reproducible), and external API calls (different results each time). When state reconstruction depends on any of these, replay becomes probabilistic—you might get the same state, or you might not.
AssetCore eliminates non-determinism through architectural constraints:
- Events carry hybrid payloads (delta + post-state), and replay uses the post-state fields (the exact values to set, not the operations to compute them)
- Replay uses L1 setters (no arithmetic, no validation, just state application)
- No external dependencies during replay (no API calls, no clock reads, no randomness)
What this enables: replay to any point in the log and inspect exact state, rebuild projections from scratch with confidence, prove what happened when for auditing. What you give up: you can’t call external APIs from operations or rely on wall-clock time—everything must flow through the commit log.
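A small illustration of the hybrid payload. The field names (`delta`, `post`) are invented for this sketch, but the contract matches the constraint above: replay sets the post-state and never recomputes the delta.

```python
# An event carries a hybrid payload: the delta (useful for
# analytics and audit) and the post-state (the exact value
# that replay will set).
event = {
    "op": "deposit",
    "key": "balance/acct-1",
    "delta": +20,  # what changed
    "post": 70,    # what the value is now; replay uses only this
}

def apply_event(state, ev):
    """Replay path: pure state application, no arithmetic,
    no clock reads, no external calls."""
    state[ev["key"]] = ev["post"]
    return state

state = apply_event({}, event)
```

Because `apply_event` touches nothing but the event itself, two replays of the same log can never diverge, regardless of when or where they run.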
Idempotency
Applying the same event twice has no additional effect. This is the property that makes replay safe after crashes and network failures.
Without idempotency, replaying events is dangerous. If an event says “add 10 to this balance,” replaying it twice adds 20—you’ve corrupted state. If an event says “move instance to position (3,5),” replaying it might succeed the first time and fail the second (already there), causing divergent behavior. Systems that aren’t idempotent must track exactly which events have been applied, which introduces complex bookkeeping and failure modes.
AssetCore makes replay idempotent by design:
- Events carry the final state to set (not deltas to apply)
- Replay simply overwrites with that state (applying “set balance to 50” twice results in 50, not 100)
- Safe to retry after failures (if you’re not sure an event was applied, just apply it again)
What this enables: crash recovery without complex resumption logic, retries without state corruption, confidence that “replay the log” always produces the same result. What you give up: events are slightly larger (they carry post-state, not just deltas), but the operational simplicity is worth it.
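The delta-versus-post-state distinction in miniature (both helpers are illustrative):

```python
def apply_delta(state, key, delta):
    """Non-idempotent: replaying the same event twice
    double-applies the change and corrupts state."""
    state[key] = state.get(key, 0) + delta
    return state

def apply_post_state(state, key, value):
    """Idempotent: replaying the same event twice leaves
    the same final value."""
    state[key] = value
    return state

# "Add 10" replayed twice yields 20 instead of 10: corruption.
bad = apply_delta(apply_delta({}, "bal", 10), "bal", 10)

# "Set to 50" replayed twice still yields 50: safe to retry.
good = apply_post_state(apply_post_state({}, "bal", 50), "bal", 50)
```

This is why crash recovery needs no exactly-once bookkeeping: when in doubt whether an event was applied, applying it again is always safe.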
Atomicity
All operations in a transaction succeed or fail together. This is the guarantee that prevents partial state corruption.
Without atomicity, a failure in operation 5 of 10 leaves operations 1-4 committed and 6-10 unexecuted. Your state is now internally inconsistent—a container created but never populated, a transfer half-executed, an instance moved but its parent reference stale. Debugging requires reconstructing “what should have happened,” which is often impossible. Users see corrupt state, operations fail unexpectedly, and trust erodes.
AssetCore enforces atomicity through an undo log:
- L3 records undo steps during execution (reverse operations for each state change)
- Failures trigger rollback of all changes (replay undo log in reverse, restoring exact prior state)
- No partial commits are visible (queries never see intermediate states)
The error response tells you which operation failed and why, but the state remains clean. What you gain: confidence that failed transactions leave no trace, no need for manual cleanup after failures, and internal consistency as a guarantee rather than a best effort. What you give up: you can’t “commit what succeeded and skip what failed”; if any operation fails, the entire transaction rolls back. That’s not a bug, it’s the design. If you need partial progress, break the transaction into smaller commits with explicit checkpoints.
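An undo-log sketch with invented helpers (`transact`, `set_op`): each mutation records the prior value before applying, and any failure replays the undo entries in reverse, restoring the exact prior state.

```python
def transact(state, operations):
    """All-or-nothing: record an undo step (the prior value)
    before each mutation. On any failure, replay the undo log
    in reverse and re-raise; no partial commit survives."""
    undo = []
    try:
        for op in operations:
            key, new_value = op(state)
            undo.append((key, key in state, state.get(key)))
            state[key] = new_value
    except Exception:
        for key, existed, prior in reversed(undo):
            if existed:
                state[key] = prior
            else:
                del state[key]
        raise
    return state

def set_op(key, value):
    return lambda state: (key, value)

def failing_op(state):
    raise RuntimeError("precondition failed")

state = {"bal": 50}
try:
    transact(state, [set_op("bal", 70), set_op("pos", (3, 5)), failing_op])
except RuntimeError:
    pass
# state is exactly what it was before the transaction began
```

The third operation fails, so the first two are unwound in reverse order: `pos` (which did not exist before) is deleted, and `bal` is restored to 50.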
Crash Safety
With a durable commit log backend, the system recovers correctly from crashes, with no data loss and no manual intervention. This is what makes AssetCore safe to run in production.
Most systems require careful operational procedures after a crash: check for corrupted state, reconcile in-flight transactions, verify consistency between replicas, possibly restore from backup. These procedures are error-prone (humans miss steps) and time-consuming (recovery takes minutes to hours). Worse, some failures can’t be recovered from automatically—they require manual data fixes or accepting data loss.
AssetCore’s architecture makes crash recovery automatic and lossless when the commit log is durable:
- Commit log survives restarts when backed by durable storage
- Checkpoints record progress (snapshots of projection state at known log positions)
- Replay reconstructs any missing state (load checkpoint, replay from that position to log head)
After a crash, the write daemon resumes at the last checkpoint position and continues processing the log. Read daemons reload their checkpoints and replay forward. With a durable backend, no state is lost (the log has everything), no manual intervention is required (recovery is automatic), and no corruption is possible (replay is deterministic and idempotent). In in-memory modes, history is intentionally ephemeral and recovery is a clean restart. What you gain: operational simplicity, confidence in disaster recovery, no 3 AM pages to fix corrupted state. What you give up: recovery time scales with the number of events since the last checkpoint, but checkpoints are taken frequently (configurable, typically every few seconds).
See also
- Freshness and Replay - How “fresh” read data is
- Transactions and Operations - The atomic unit of change