Run AI inference on the Dria network. Earn rewards by serving models from your machine.
Choose one installation method:
Homebrew (macOS / Linux):

```sh
brew install firstbatchxyz/dkn/dria-node
dria-node --version
```

Homebrew will add the tap automatically.
Shell script (macOS / Linux):

```sh
curl -fsSL https://raw.githubusercontent.com/firstbatchxyz/dkn-compute-node/master/install.sh | sh
dria-node --version
```

AMD ROCm (Linux x86_64):

```sh
curl -fsSL https://raw.githubusercontent.com/firstbatchxyz/dkn-compute-node/master/install-rocm.sh | bash
dria-node --version
```

Requires ROCm 6.x to already be installed on your machine.
PowerShell (Windows):

```powershell
irm https://raw.githubusercontent.com/firstbatchxyz/dkn-compute-node/master/install.ps1 | iex
dria-node --version
```

From GitHub Releases:

Download the latest file for your platform from Releases, then run `dria-node --version` to verify it.
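On macOS or Linux, the verification step for a manually downloaded binary might look like this (the file name `dria-node` is an assumption; use whatever the release asset is actually called):

```sh
# Mark the downloaded binary as executable, then verify it runs
chmod +x ./dria-node
./dria-node --version
```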
Run the interactive setup:

```sh
dria-node setup
```

This will:

- Detect your system RAM and list models that fit
- Let you pick a model from the available options
- Download the GGUF model file from HuggingFace
- Run a test inference to verify everything works
- Print a benchmark (tokens per second)
Use `--gpu-layers -1` to offload all layers to GPU (Metal on macOS, CUDA on NVIDIA builds, ROCm on AMD Linux builds):

```sh
dria-node setup --gpu-layers -1
```

Once setup is complete, start the node:

```sh
dria-node start --wallet <YOUR_SECRET_KEY> --model <MODEL_NAME>
```

The node will connect to the Dria network, register your models, and start serving inference requests. You can increase throughput with `--max-concurrent`:

```sh
dria-node start --wallet <KEY> --model lfm2.5:1.2b --max-concurrent 4
```

| Model | Type | Quant | ~Size |
|---|---|---|---|
| lfm2.5:1.2b | Text | Q4_K_M | 0.8 GB |
| lfm2.5-audio:1.5b | Audio | Q4_0 | 1.0 GB |
| lfm2.5-vl:1.6b | Vision | Q4_0 | 1.2 GB |
| nanbeige:3b | Text | Q4_K_M | 2.0 GB |
| locooperator:4b | Text | Q4_K_M | 2.5 GB |
| qwen3.5:9b | Vision | Q4_K_M | 6.0 GB |
| lfm2:24b-a2b | Text | Q4_K_M | 14 GB |
| qwen3.5:27b | Vision | Q4_K_M | 16 GB |
| qwen3.5:35b-a3b | Vision | Q4_K_M | 20 GB |
Serve multiple models by comma-separating them: `--model "qwen3.5:9b,lfm2.5:1.2b"`

Override quantization with `--quant Q8_0` (applies to all models).
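Combined, the two flags look like this (`<KEY>` is a placeholder, as in the earlier examples):

```sh
# Serve two models, overriding the quantization for both
dria-node start --wallet <KEY> --model "qwen3.5:9b,lfm2.5:1.2b" --quant Q8_0
```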
```
dria-node <COMMAND>
```

Commands:

```
setup    Interactive setup: pick a model, download it, and run a test
start    Start the compute node
```

setup options:

```
--data-dir <PATH>       Data directory [env: DRIA_DATA_DIR]
--gpu-layers <N>        GPU layers to offload (-1 = all, 0 = CPU only) [default: 0]
```

start options:

```
--wallet <KEY>          Wallet secret key, hex-encoded [env: DRIA_WALLET]
--model <MODELS>        Model(s) to serve, comma-separated [env: DRIA_MODELS]
--router-url <URL>      Router URL [default: quic.dria.co:4001] [env: DRIA_ROUTER_URL]
--gpu-layers <N>        GPU layers to offload (-1 = all, 0 = CPU only) [default: 0]
--max-concurrent <N>    Max concurrent inference requests [default: 1]
--data-dir <PATH>       Data directory [env: DRIA_DATA_DIR]
--quant <QUANT>         Override GGUF quantization [env: DRIA_QUANT]
--insecure              Skip TLS verification [env: DRIA_INSECURE]
```
All flags can also be set via environment variables.
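For example, the `start` flags can be supplied through their environment variables instead (the key value here is a placeholder):

```sh
# Equivalent to: dria-node start --wallet <KEY> --model lfm2.5:1.2b
export DRIA_WALLET="<KEY>"
export DRIA_MODELS="lfm2.5:1.2b"
dria-node start
```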
```sh
git clone https://github.com/firstbatchxyz/dkn-compute-node.git
cd dkn-compute-node
cargo build --release
```

Feature flags:

- `--features metal`: Apple Metal GPU acceleration (macOS)
- `--features cuda`: NVIDIA CUDA GPU acceleration
- `--features rocm`: AMD ROCm GPU acceleration (Linux x86_64)
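For instance, a GPU-accelerated build on macOS combines the release build with the corresponding feature flag:

```sh
# Build with Apple Metal GPU acceleration enabled
cargo build --release --features metal
```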
```sh
cargo test
cargo clippy
cargo fmt --check
```

This project is licensed under the Apache License 2.0.