A crypto exchange simulator in Go. Runs a live order matching engine anchored to real Coinbase prices, with competing autonomous market maker and trader agents communicating over NATS.
- Go
- NATS (pub/sub + request-reply)
- Coinbase Advanced Trade API (WebSocket)
- PostgreSQL (TimescaleDB)
- Docker

| Metric | Result |
|---|---|
| Order throughput | 32,000+ orders/s |
| Trade throughput | 40,000+ trades/s |
| Average order latency | 1.2ms |
| Rejections | 0 |
| DB write throughput | 57,000+ writes/s |

The matching engine maintains three independent order books (BTC-USD, ETH-USD, XRP-USD), each backed by a MaxHeap (bids) and MinHeap (asks) for O(log n) best-price access.
Order flow:
- A participant submits an order via NATS request-reply on `orders.submit`.
- The engine unmarshals the order, routes it to the correct order book, and runs the matching algorithm.
- Limit orders walk the opposite side: a bid matches against the best ask while `bid.price ≥ ask.price`; fills happen at the resting order's price (price-time priority).
- Market orders consume the opposite side until filled or the book is exhausted; an unfilled market order is cancelled (never rests).
- Partial fills are supported: the unfilled remainder of a limit order rests on the book.
- After matching, the engine publishes each resulting trade to `trades.executed` and a top-10 order book snapshot to `orderbook.snapshot`, then ACKs the submitter synchronously.
Cancel requests follow the same request-reply pattern on `orders.cancel`.
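The limit-order matching walk described above can be sketched as follows. The structs and function names are hypothetical; a real engine would also track order IDs and arrival times to enforce time priority within a price level:

```go
package main

import "fmt"

// Illustrative order and trade types (not the project's actual structs).
type Order struct {
	ID    string
	Price float64
	Qty   float64
}

type Trade struct {
	Price float64 // always the resting order's price (price-time priority)
	Qty   float64
}

// matchLimitBid walks the ask queue (sorted best-first) while the incoming
// bid crosses the best ask. It returns the executed trades, the unfilled
// remainder (which would rest on the book), and the updated ask queue.
func matchLimitBid(bid Order, asks []Order) (trades []Trade, rest Order, remaining []Order) {
	for len(asks) > 0 && bid.Qty > 0 && bid.Price >= asks[0].Price {
		fill := minQty(bid.Qty, asks[0].Qty)
		trades = append(trades, Trade{Price: asks[0].Price, Qty: fill}) // fill at resting price
		bid.Qty -= fill
		asks[0].Qty -= fill
		if asks[0].Qty == 0 {
			asks = asks[1:] // resting order fully consumed
		}
	}
	return trades, bid, asks
}

func minQty(a, b float64) float64 {
	if a < b {
		return a
	}
	return b
}

func main() {
	asks := []Order{{ID: "a1", Price: 100, Qty: 0.5}, {ID: "a2", Price: 101, Qty: 1}}
	trades, rest, book := matchLimitBid(Order{ID: "b1", Price: 100.5, Qty: 0.75}, asks)
	fmt.Println(len(trades), rest.Qty, len(book)) // 1 0.25 1
}
```

The incoming 0.75 bid at 100.5 fills 0.5 against the 100 ask, stops at the 101 ask (no longer crossing), and its 0.25 remainder rests.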
7 participants run as independent Go services, each subscribing to a Coinbase price feed and submitting orders to the engine.
Quotes a tight fixed spread around mid, across N price levels. Requotes only when the mid moves more than minMoveThresh.
Parameters: spreadBps=2, numLevels=3, orderSize=0.01 BTC.
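A sketch of the quoting logic under stated assumptions: spacing each deeper level by one half-spread is a guess, since the doc only gives spreadBps, numLevels, and the minMoveThresh requote rule:

```go
package main

import (
	"fmt"
	"math"
)

// quoteLadder returns numLevels bid/ask prices around mid for a fixed
// spread in basis points. Spacing deeper levels by one half-spread each is
// an assumption for illustration; the doc does not specify level spacing.
func quoteLadder(mid, spreadBps float64, numLevels int) (bids, asks []float64) {
	half := mid * spreadBps / 2 / 10000
	for i := 1; i <= numLevels; i++ {
		bids = append(bids, mid-half*float64(i))
		asks = append(asks, mid+half*float64(i))
	}
	return
}

// shouldRequote applies the minMoveThresh rule: requote only when mid has
// moved far enough from the last quoted mid.
func shouldRequote(mid, lastMid, minMoveThresh float64) bool {
	return math.Abs(mid-lastMid) > minMoveThresh
}

func main() {
	bids, asks := quoteLadder(100000, 2, 3) // spreadBps=2 → 1bp half-spread
	fmt.Println(bids)                       // [99990 99980 99970]
	fmt.Println(asks)                       // [100010 100020 100030]
	fmt.Println(shouldRequote(100000, 100012, 10)) // true: mid moved 12 > 10
}
```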
Like the scalper, but skews the entire quote ladder in the direction of recent price movement. When price is rising, quotes shift up: the MM gets filled on its bids before the move extends and sells into strength.
Parameters: spreadBps=4, skewFactor=0.3, numLevels=5, orderSize=0.1 ETH.
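A hedged sketch of the skewed ladder. The skew formula used here (center = mid + skewFactor × recentMove) is an assumption consistent with the description, not the project's confirmed implementation:

```go
package main

import "fmt"

// skewedLadder centers a symmetric ladder on mid shifted by
// skewFactor × recentMove, so a rising price lifts both bids and asks.
// The exact skew formula is an assumption; the doc only states that the
// ladder shifts in the direction of recent movement.
func skewedLadder(mid, recentMove, spreadBps, skewFactor float64, numLevels int) (bids, asks []float64) {
	center := mid + skewFactor*recentMove
	half := center * spreadBps / 2 / 10000
	for i := 1; i <= numLevels; i++ {
		bids = append(bids, center-half*float64(i))
		asks = append(asks, center+half*float64(i))
	}
	return
}

func main() {
	// Price rose 10 recently: the whole ladder shifts up by 0.3 × 10 = 3.
	bids, asks := skewedLadder(2000, 10, 4, 0.3, 2)
	fmt.Printf("best bid %.4f, best ask %.4f\n", bids[0], asks[0])
}
```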
Implements the Avellaneda-Stoikov (2008) optimal market making model. The reservation price adjusts for inventory risk; the spread widens with volatility and inventory.
where the reservation price is r = s − q·γ·σ²·(T − t) and the optimal total spread is δ = γ·σ²·(T − t) + (2/γ)·ln(1 + γ/κ), with s the current mid, q the signed inventory, and quotes placed around r at r ± δ/2.
Parameters: γ=0.1, κ=1.5 (dynamic), σ=0.02, T=1.0, numLevels=5, orderSize=50 XRP.
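The canonical Avellaneda-Stoikov closed forms can be computed directly. This sketch uses the standard formulas from the 2008 paper; how the project discretizes the quotes across its 5 levels is not specified in the doc:

```go
package main

import (
	"fmt"
	"math"
)

// Avellaneda-Stoikov (2008):
//   r = s − q·γ·σ²·(T − t)                    reservation price
//   δ = γ·σ²·(T − t) + (2/γ)·ln(1 + γ/κ)     optimal total spread
// q is signed inventory: a long position pushes the reservation price
// below mid, skewing quotes to encourage sells (and vice versa).

func reservationPrice(s, q, gamma, sigma, tau float64) float64 {
	return s - q*gamma*sigma*sigma*tau
}

func optimalSpread(gamma, sigma, tau, kappa float64) float64 {
	return gamma*sigma*sigma*tau + (2/gamma)*math.Log(1+gamma/kappa)
}

func main() {
	s, q := 0.60, 100.0 // XRP mid and a long inventory of 100 XRP (example values)
	gamma, kappa, sigma, tau := 0.1, 1.5, 0.02, 1.0
	r := reservationPrice(s, q, gamma, sigma, tau)
	d := optimalSpread(gamma, sigma, tau, kappa)
	fmt.Printf("reserve=%.4f totalSpread=%.4f\n", r, d)
}
```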
Maintains a rolling price window of size N. When the window shows a trend exceeding a threshold, it enters a directional position and places a take-profit limit.
Parameters: windowSize=10, threshold=0.2%, orderSize=0.05 ETH.
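A sketch of the rolling-window trend check. Measuring the trend as the percent change from the window's first to last price is an assumption; the doc only says the window must show a trend exceeding a threshold:

```go
package main

import "fmt"

// trendSignal returns +1 (enter long), -1 (enter short), or 0 (no trade)
// based on the fractional move across the rolling window.
func trendSignal(window []float64, threshold float64) int {
	if len(window) < 2 {
		return 0
	}
	change := (window[len(window)-1] - window[0]) / window[0]
	switch {
	case change > threshold:
		return 1 // uptrend: go long and place a take-profit limit above
	case change < -threshold:
		return -1 // downtrend: go short and place a take-profit limit below
	default:
		return 0
	}
}

func main() {
	prices := []float64{2000, 2001, 2003, 2006} // +0.3% over the window
	fmt.Println(trendSignal(prices, 0.002))     // 1: exceeds the 0.2% threshold
}
```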
Places a symmetric limit order ladder around the current mid. Does nothing until price drifts far enough from the base to warrant rebuilding.
Ladder rebuilds when |price − base| / base > rebuildThresh.
Parameters: levels=5, spacing=0.1%×mid, rebuildThresh=0.5%, orderSize=0.05 BTC.
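The rebuild condition and ladder construction, sketched under the doc's parameters (function names are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// needsRebuild reports whether price has drifted more than rebuildThresh
// (as a fraction of the base price) from the ladder's base mid.
func needsRebuild(base, current, rebuildThresh float64) bool {
	return math.Abs(current-base)/base > rebuildThresh
}

// gridLevels builds a symmetric ladder: `levels` orders on each side,
// spaced by spacingFrac × mid (0.1% of mid per the doc's parameters).
func gridLevels(mid, spacingFrac float64, levels int) (bids, asks []float64) {
	step := spacingFrac * mid
	for i := 1; i <= levels; i++ {
		bids = append(bids, mid-step*float64(i))
		asks = append(asks, mid+step*float64(i))
	}
	return
}

func main() {
	fmt.Println(needsRebuild(100000, 100400, 0.005)) // false: 0.4% < 0.5%
	fmt.Println(needsRebuild(100000, 100600, 0.005)) // true: 0.6% > 0.5%
	bids, _ := gridLevels(100000, 0.001, 2)
	fmt.Println(bids) // [99900 99800]
}
```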
Places random orders on a random interval. Provides background liquidity consumption.
Computes a rolling volume-weighted average price over the last 50 trades. Buys when price is below VWAP, sells when above.
Parameters: window=50, threshold=0.1%, orderSize=0.05 BTC.
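The rolling VWAP and threshold signal can be sketched as follows; the `fill` struct is a hypothetical stand-in for the trade messages the strategy consumes:

```go
package main

import "fmt"

type fill struct{ price, qty float64 }

// rollingVWAP computes Σ(price·qty) / Σ(qty) over the last n trades — the
// benchmark the VWAP trader compares against the current price.
func rollingVWAP(trades []fill, n int) float64 {
	if len(trades) > n {
		trades = trades[len(trades)-n:] // keep only the rolling window
	}
	var pv, vol float64
	for _, t := range trades {
		pv += t.price * t.qty
		vol += t.qty
	}
	if vol == 0 {
		return 0
	}
	return pv / vol
}

func main() {
	trades := []fill{{100, 1}, {102, 1}, {104, 2}}
	vwap := rollingVWAP(trades, 50) // (100 + 102 + 208) / 4 = 102.5
	fmt.Println(vwap)               // 102.5

	// Signal: buy when price is below VWAP by more than the threshold.
	price, threshold := 101.0, 0.001
	fmt.Println(price < vwap*(1-threshold)) // true → buy
}
```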
Three TimescaleDB hypertables are created at startup via embedded SQL migrations:
`trades` — partitioned by `executed_at`

```sql
trade_id        UUID,
symbol          TEXT,
price           NUMERIC(20, 8),
qty             NUMERIC(20, 8),
buyer_mm_id     TEXT,
seller_mm_id    TEXT,
buyer_order_id  UUID,
seller_order_id UUID,
executed_at     TIMESTAMPTZ  -- partition key
```

`orderbook_snapshots` — partitioned by `snapshot_at`

```sql
id          BIGSERIAL,
symbol      TEXT,
bids        JSONB,           -- top-N levels [price, qty]
asks        JSONB,
mid_price   NUMERIC(20, 8),
spread      NUMERIC(20, 8),
snapshot_at TIMESTAMPTZ      -- partition key
```

Index: `(symbol, snapshot_at DESC)`

`mm_status` — partitioned by `recorded_at`

```sql
id             BIGSERIAL,
mm_id          TEXT,
strategy       TEXT,
inventory      NUMERIC(20, 8),
realized_pnl   NUMERIC(20, 8),
unrealized_pnl NUMERIC(20, 8),
open_orders    INT,
recorded_at    TIMESTAMPTZ   -- partition key
```

Index: `(mm_id, recorded_at DESC)`
Write path:
The persistence service subscribes to `trades.executed`. Incoming trades are fanned out to 8 worker goroutines via a buffered channel (capacity 150k). Each worker accumulates trades and flushes with pgx.CopyFrom (PostgreSQL binary COPY protocol) either every 100ms or when the batch reaches 5,000 rows, whichever comes first. This batching is what sustains 57k+ writes/s.
Circular buffer for backpressure: If a DB write fails (e.g., transient connection loss), the batch is written to an in-memory circular buffer (capacity 100k trades) instead of being dropped. On the next successful flush, the circular buffer is drained first. The buffer overwrites oldest entries if it fills — a deliberate trade-off that keeps the hot path non-blocking at the cost of data loss only under sustained DB outage.
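The circular-buffer fallback can be sketched as a fixed-capacity ring that overwrites its oldest entries when full; string payloads stand in for trade batches here:

```go
package main

import "fmt"

// ring is a fixed-capacity circular buffer that overwrites the oldest
// entries when full — the doc's deliberate trade-off that keeps the hot
// path non-blocking during a sustained DB outage.
type ring struct {
	buf   []string
	head  int // next write position
	count int // number of live entries (≤ cap)
}

func newRing(capacity int) *ring { return &ring{buf: make([]string, capacity)} }

func (r *ring) push(v string) {
	r.buf[r.head] = v
	r.head = (r.head + 1) % len(r.buf)
	if r.count < len(r.buf) {
		r.count++
	} // else: the oldest entry was just overwritten
}

// drain returns the buffered items oldest-first and empties the ring,
// mirroring the "drain before the next flush" recovery step.
func (r *ring) drain() []string {
	out := make([]string, 0, r.count)
	start := (r.head - r.count + len(r.buf)) % len(r.buf)
	for i := 0; i < r.count; i++ {
		out = append(out, r.buf[(start+i)%len(r.buf)])
	}
	r.head, r.count = 0, 0
	return out
}

func main() {
	r := newRing(3)
	for _, t := range []string{"t1", "t2", "t3", "t4"} { // t1 gets overwritten
		r.push(t)
	}
	fmt.Println(r.drain()) // [t2 t3 t4]
}
```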