Getting Started Overview
This overview gives you the narrative and the minimal mental model needed to start confidently. It defines what “first success” looks like and shows how the core components deliver determinism and replay.
Who this is for
Anyone new to Asset Core who wants to understand the minimal components required to run the system and send a first transaction. If you are evaluating whether the model fits your domain, this is the fastest way to orient yourself before you dive into details.
What you will learn
This section focuses on the smallest set of concepts you need to get a correct end-to-end flow. You will see the system boundaries, the role of the commit log, and how to confirm that writes are visible to reads in a deterministic way.
- The core components (write daemon, read daemon, commit log, namespace catalog, and clients with auth tokens) and what each one is responsible for
- How data flows from a commit into read projections and why that flow stays deterministic
- What “success” looks like for a first interaction and how to recognize it in responses
When to use this
Read this before attempting to run Asset Core locally or integrate it into your application. If you want the model behind the guarantees, start with Runtime Model and Transactions and Operations. If you want examples of how domains map cleanly into the system, read Scenario Foundations. If you are ready to build, continue with First Commit and Read or the Python SDK path. Operators and agent builders can jump directly to Operations or Agents Overview.
High-level structure
Asset Core uses a write/read split architecture with event sourcing as the source of truth. This separation is fundamental: the write daemon optimizes for correctness and determinism, while the read daemon optimizes for query performance and freshness visibility. Together, they form an interdependent system where the commit log is the bridge between writes and reads.
┌──────────────┐ ┌──────────────┐
│ Write Daemon │ │ Read Daemon │
└──────┬───────┘ └──────▲───────┘
│ │
│ Events │ Tail + Apply
▼ │
┌───────────────────────────┴──┐
│ Commit Log (Events) │
│ (sealed batches, ordered) │
└──────────────────────────────┘
Write Daemon
The write daemon accepts HTTP POST requests to /v1/write/namespaces/{namespace_id}/commit. It is the only component that mutates state, so every commit is serialized and the system stays deterministic under concurrency.
This single-writer design is a deliberate trade-off: you can’t scale write throughput horizontally within one namespace, but you get absolute determinism (see Runtime Model). Write throughput scales by sharding across namespaces—each namespace is an independent single-writer world.
The write daemon:
- Validates incoming transactions (checks operation structure, preconditions)
- Executes operations against the runtime (applies business logic via L2/L3 layers)
- Seals events into batches (groups events with sequence numbers)
- Persists batches to the commit log backend before acknowledgement (durability depends on backend choice)
What you gain: every commit sees a consistent view of the world, operations execute in total order, debugging becomes forensic analysis instead of probabilistic guesswork. What you give up: write throughput is bounded by a single node’s capacity (but sharding across namespaces recovers horizontal scalability).
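To make the commit flow concrete, here is a minimal sketch of building a commit request. The endpoint path comes from the section above, but the body fields (`idempotency_key`, `operations`) and the operation shapes are illustrative assumptions, not the actual wire format.

```python
import json
import uuid

def build_commit_request(namespace_id: str, operations: list[dict]) -> tuple[str, dict]:
    """Return a (path, body) pair for a single serialized commit.

    The path is the documented write endpoint; the body shape is a
    hypothetical example of what a transaction payload might look like.
    """
    path = f"/v1/write/namespaces/{namespace_id}/commit"
    body = {
        # One idempotency key per logical commit: resending the same body
        # after a network failure must not produce a duplicate commit.
        "idempotency_key": str(uuid.uuid4()),
        "operations": operations,
    }
    return path, body

path, body = build_commit_request("demo", [
    {"op": "create_container", "container_id": "wallet-1"},
    {"op": "add_balance", "container_id": "wallet-1", "amount": 100},
])
print(path)  # /v1/write/namespaces/demo/commit
print(json.dumps(body, indent=2))
```

Because the write daemon serializes commits, the client never needs to coordinate with other writers; it only needs a stable idempotency key so retries are safe.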
Read Daemon
The read daemon serves HTTP GET queries. It materializes projections from the commit log so reads are fast, consistent, and explainable. Projections are derived data—if lost, they can be rebuilt from the commit log (see Freshness and Replay).
The read daemon operates asynchronously from writes: it tails the commit log at its own pace, applying events as they arrive. This means reads are eventually consistent, not immediately consistent. The lag between commits and queries is measured and exposed, which is critical for debugging.
The read daemon:
- Tails the commit log for new batches (polls for events since last checkpoint)
- Applies events via deterministic replay (uses L1 setters, see Runtime Model)
- Publishes zero-copy snapshots (atomic swap, no partial updates visible)
- Reports freshness metadata (lag between committed events and query state)
Queries return point-in-time projections with freshness information (world_seq, lag in commits and milliseconds), so you can reason about staleness explicitly. Clients can block reads until fresh enough by passing x-assetcore-min-world-seq.
What you gain: query performance independent of write load, disposable read state (rebuild from log), visibility into staleness (no hidden lag). What you give up: reads aren’t instantly fresh—there’s always some lag (typically milliseconds).
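A small sketch of how a client might reason about freshness. The `world_seq` field and the `x-assetcore-min-world-seq` header are documented above; the metadata dict shape around them is an assumption for illustration.

```python
def is_fresh_enough(response_meta: dict, min_world_seq: int) -> bool:
    """True if the projection has applied at least min_world_seq commits."""
    return response_meta["world_seq"] >= min_world_seq

def read_headers(min_world_seq: int) -> dict:
    # Ask the read daemon to block the query until the projection
    # has caught up to the given sequence number.
    return {"x-assetcore-min-world-seq": str(min_world_seq)}

# Example freshness metadata as it might appear alongside a query result
# (field names beyond world_seq are hypothetical).
meta = {"world_seq": 41, "lag_commits": 1, "lag_ms": 3}
print(is_fresh_enough(meta, 42))  # False: projection is one commit behind
print(read_headers(42))
```

The useful pattern is read-your-writes: after a commit is acknowledged at some sequence number, pass that number as the minimum on the next read instead of polling blindly.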
Commit Log
The commit log is the authoritative event store. It is the only data source used for replay and audit. Durability depends on the backend you choose: in-memory backends are fast and disposable, while file-backed or segmented backends persist across restarts. This design (see Runtime Model) keeps Asset Core operationally simple: you can always reconstruct query state from the log, and durability is an explicit deployment choice.
The commit log is:
- Append-only: Events are added at the end, never modified or deleted
- Immutable: Once written, batches cannot be changed or reordered
- Sequenced: Each batch has a monotonic sequence number (total ordering)
- Durable (optional): When backed by file or segmented storage, batches are persisted before acknowledgement
Both daemons interact with the same commit log, ensuring consistency across writes and reads. The write daemon appends events; the read daemon tails and applies them. This separation lets writes and reads scale independently: the write daemon doesn’t block on read performance, and the read daemon can lag without affecting write throughput.
What you gain: forensic debugging (replay to any point), auditability (prove what happened when), and analytics that never drift (all consume the same source). What you give up: log storage grows over time (though archival strategies mitigate this), and full crash recovery depends on using a durable backend.
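The three properties above (append-only, immutable, sequenced) can be sketched as a toy in-memory log. This is not the real backend, only an illustration of why deterministic replay works: fold events in sequence order and any two replays agree.

```python
class CommitLog:
    """Toy append-only log: batches get monotonic sequence numbers
    and are never modified after being appended."""

    def __init__(self):
        self._batches = []  # list of (seq, events) tuples

    def append(self, events):
        seq = len(self._batches) + 1  # monotonic, total ordering
        self._batches.append((seq, tuple(events)))  # immutable batch
        return seq

    def tail(self, after_seq=0):
        """Yield batches committed after the given checkpoint,
        the way a read daemon tails the log."""
        for seq, events in self._batches[after_seq:]:
            yield seq, events

def replay(log, after_seq=0, state=None):
    """Deterministically rebuild a projection from the log.
    The event shape here is hypothetical."""
    state = dict(state or {})
    for _seq, events in log.tail(after_seq):
        for kind, key, amount in events:
            if kind == "add_balance":
                state[key] = state.get(key, 0) + amount
    return state

log = CommitLog()
log.append([("add_balance", "wallet-1", 100)])
log.append([("add_balance", "wallet-1", 25)])
# Two independent replays always agree: projections are disposable.
print(replay(log) == replay(log))  # True
print(replay(log))                 # {'wallet-1': 125}
```

This is why losing a projection is a non-event: as long as the log survives, the read state is a pure function of it.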
Client
Any HTTP client can interact with Asset Core. The SDKs are thin layers over the same HTTP contract, so you always have a fallback path when you need to debug at the protocol level.
Clients:
- Send transactions to the write daemon (POST /v1/write/namespaces/{id}/commit)
- Query state from the read daemon (GET /v1/read/namespaces/{id}/query)
- Use idempotency keys for safe retries (prevents duplicate commits from network retries)

The Python SDK provides typed helpers and ergonomic wrappers, but raw HTTP works for any language. This means you’re never locked into a specific SDK—when you need to troubleshoot at the protocol level, you can use curl or any HTTP client.
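A sketch of the retry pattern described above. The dedup-by-key behavior is simulated with a stand-in for the write daemon; the header name and server internals here are assumptions, not the real protocol.

```python
import uuid

class FakeServer:
    """Stand-in for the write daemon: dedupes commits by idempotency key,
    returning the original result when a retry resends the same key."""

    def __init__(self):
        self.committed = {}

    def commit(self, idempotency_key, body):
        if idempotency_key not in self.committed:
            self.committed[idempotency_key] = body
        return self.committed[idempotency_key]

def commit_with_retries(server, body, attempts=3):
    # One key for ALL retries of this logical commit: this is what makes
    # resending after a timeout safe rather than a duplicate write.
    key = str(uuid.uuid4())
    result = None
    for _ in range(attempts):
        try:
            result = server.commit(key, body)
            break
        except ConnectionError:
            continue  # safe to retry: the key prevents duplicates
    return result

server = FakeServer()
commit_with_retries(server, {"op": "add_balance", "amount": 100})
commit_with_retries(server, {"op": "add_balance", "amount": 100})
print(len(server.committed))  # 2: distinct logical commits get distinct keys
```

The same pattern applies with curl: generate the key once, then reuse it on every resend of that request.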
The Goal
Getting started means:
- Running both daemons locally with auth tokens and a namespace catalog so you can observe write and read behavior
- Provisioning a namespace and sending a commit that creates a container and adds balance, proving end-to-end writes
- Reading back the state to confirm the commit was applied and the read model is current
This validates that your environment is working and gives you a foundation for more complex experiments and integrations.
Next steps
- First Commit and Read - Step-by-step guide with HTTP examples, then drill into the HTTP API and Transactions
- Using the Python SDK - Same workflow with typed Python helpers for faster iteration
- Robotic Arm Continuous scenario - Full real-world walkthrough with logs and deterministic replay details