
## Architecture

```
┌──────────────────────┐                     ┌──────────────────────┐
│ bitview-app          │     REST + WSS      │ bitview-bot          │
│ (Next.js)            │ ───────────────────▶│ (Rust / actix-web)   │
│                      │                     │                      │
│ • streamer creator   │ ◀────── Swagger     │ • Twitch IRC oracle  │
│ • viewer dashboard   │                     │ • Accrual loop       │
│ • Solana wallet      │                     │ • Distributions API  │
│   adapter            │                     │ • Claim proof proxy  │
└────────┬─────────────┘                     └────────┬──────────┬──┘
         │                                            │          │
         │ wallet.signTx                              │          │ proxy
         ▼                                            ▼          ▼
┌──────────────────────┐                         ┌────────────┐ ┌─────────────────────┐
│ Solana cluster       │ ◀─ new_claim ───────────│  MongoDB   │ │  distributor/api    │
│                      │                         │  (state)   │ │  (axum, proofs)     │
│ • merkle distrib.    │                         └────────────┘ └─────────────────────┘
│   program            │                               ▲                 ▲
│ • SPL token mints    │                               │                 │
│                      │ ◀─ new_distrib. ──────────────┴─────────────────┘
└──────────────────────┘    (offline cli at finalize time)
```

## Modules

| Module | Tech | Responsibility |
| --- | --- | --- |
| bitview-app | Next.js 14, wallet-adapter, Metaplex | Streamer creates distributions, viewer dashboard, claim UX |
| bitview-bot | Rust 1.75+, actix-web 4, MongoDB | Twitch presence, accrual, REST API, OAuth verification |
| distributor/merkle-distributor | Anchor 0.30 | Audited on-chain claim program (Jito/Jupiter fork) |
| distributor/cli | Rust | Build merkle tree from CSV, fund vault, set root, send claim |
| distributor/api | axum | Serves merkle proofs for (mint, claimant) |
| checkpoint | Docusaurus | This documentation site |

## Data flow

### Accrual

Every `ACCRUAL_TICK_SECONDS` seconds, the backend:

  1. Loads distributions where `start_at <= now <= end_at` and status is `pending` or `active`.
  2. For each, ensures the Twitch listener is up for that channel.
  3. Snapshots present logins from the in-memory `state_space_twitch` map.
  4. For each linked viewer (looked up by login), upserts an `accruals` row: +amount, +1 tick.
  5. Skips viewers already at `max_per_viewer`. Stops crediting when the pool is exhausted.
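The crediting rule in steps 4–5 boils down to one clamp. A minimal sketch (the function name and signature are hypothetical; the real backend applies the same arithmetic inside its MongoDB upsert):

```rust
/// One viewer's credit for one accrual tick: the configured per-tick amount,
/// clamped by the per-viewer cap and by what remains in the pool.
/// Hypothetical free function illustrating the rule, not the backend's API.
fn credit_for_tick(
    already_accrued: u64, // the viewer's accruals row so far
    tick_amount: u64,     // amount credited per tick by this distribution
    max_per_viewer: u64,  // per-viewer cap
    pool_remaining: u64,  // undistributed tokens left in the pool
) -> u64 {
    let cap_room = max_per_viewer.saturating_sub(already_accrued);
    tick_amount.min(cap_room).min(pool_remaining)
}

fn main() {
    assert_eq!(credit_for_tick(0, 10, 100, 1_000), 10);   // normal tick
    assert_eq!(credit_for_tick(95, 10, 100, 1_000), 5);   // cap truncates
    assert_eq!(credit_for_tick(100, 10, 100, 1_000), 0);  // at cap: skipped
    assert_eq!(credit_for_tick(0, 10, 100, 3), 3);        // pool exhausting
    println!("ok");
}
```

Because both bounds are applied with `min`, a zero result naturally encodes "skip this viewer" and "pool exhausted" without separate branches.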

### Snapshot / finalize

When a distribution ends (or the operator forces it):

  1. `POST /distributions-api/{id}/finalize` flips status to `snapshotting`.
  2. The offline `cli create-merkle-tree` reads the `accruals` rows for that distribution and produces a `tree_<version>.json`.
  3. `cli new-distributor` (or a `set_enable_slot` if the distributor was pre-created) publishes the root on-chain.
  4. The merkle proof API loads the tree and starts serving proofs.
  5. The backend flips status to `claimable`.
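The status values above form a small state machine; a sketch under the assumption that these are the only legal transitions (status names come from this page, the enum and function names are hypothetical):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Status {
    Pending,
    Active,
    Snapshotting,
    Claimable,
}

#[derive(Debug, Clone, Copy)]
enum Event {
    Start,         // start_at passes
    Finalize,      // finalize endpoint hit (or operator forces it)
    RootPublished, // cli published the root; proof API is serving
}

/// Legal forward transitions; anything else is rejected.
fn next(s: Status, e: Event) -> Option<Status> {
    match (s, e) {
        (Status::Pending, Event::Start) => Some(Status::Active),
        (Status::Pending | Status::Active, Event::Finalize) => Some(Status::Snapshotting),
        (Status::Snapshotting, Event::RootPublished) => Some(Status::Claimable),
        _ => None,
    }
}

fn main() {
    assert_eq!(next(Status::Pending, Event::Start), Some(Status::Active));
    assert_eq!(next(Status::Active, Event::Finalize), Some(Status::Snapshotting));
    assert_eq!(next(Status::Snapshotting, Event::RootPublished), Some(Status::Claimable));
    assert_eq!(next(Status::Claimable, Event::Finalize), None); // terminal
    println!("ok");
}
```

Note that `Finalize` is accepted from `pending` too, matching the "operator forces it" path before a distribution ever went active.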

### Claim

  1. The viewer opens the rewards page. The frontend calls `GET /claims-api/summary/{wallet}`.
  2. For each claimable entry, it calls `GET /claims-api/proof/{wallet}`, which proxies to the distributor proof API.
  3. The frontend builds a `new_claim` instruction and the viewer signs it with their wallet.
  4. Tokens land in the viewer's associated token account (ATA).
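What the on-chain program checks in step 3 is the merkle proof served in step 2. A simplified sketch of that verification: `DefaultHasher` stands in for the program's cryptographic hash (illustration only, no security properties), and hashing each pair in sorted order is a common convention in merkle-distributor-style programs, assumed here:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy node hash standing in for the real cryptographic hash.
fn h(data: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

/// Fold a leaf up through its sibling hashes; sorting each pair makes
/// verification independent of left/right position in the tree.
fn verify(leaf: u64, proof: &[u64], root: u64) -> bool {
    let computed = proof.iter().fold(leaf, |acc, &sib| {
        let (a, b) = if acc <= sib { (acc, sib) } else { (sib, acc) };
        h(&[a, b])
    });
    computed == root
}

fn main() {
    // Two-leaf tree: root is the hash of the sorted (leaf0, leaf1) pair.
    let (leaf0, leaf1) = (h(&[1]), h(&[2]));
    let (a, b) = if leaf0 <= leaf1 { (leaf0, leaf1) } else { (leaf1, leaf0) };
    let root = h(&[a, b]);
    assert!(verify(leaf0, &[leaf1], root)); // valid proof
    assert!(!verify(leaf0, &[leaf0], root)); // wrong sibling fails
    println!("ok");
}
```

In the real flow the leaf commits to `(claimant, amount)`, so a proof only unlocks the exact accrued amount for the exact wallet.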

## Deployment shape

A single backend process binds two ports:

  • 4477 — REST + Swagger (configurable via `HTTP_PORT`)
  • 9100 — Prometheus exporter (configurable via `METRICS_PORT`)
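A minimal sketch of that override logic, assuming standard environment-variable lookup (the helper name is hypothetical; variable names and defaults are from this page):

```rust
use std::env;

/// Resolve a port from the environment, falling back to the documented
/// default when the variable is unset or not a valid port number.
fn port_from_env(var: &str, default: u16) -> u16 {
    env::var(var)
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(default)
}

fn main() {
    let http_port = port_from_env("HTTP_PORT", 4477);       // REST + Swagger
    let metrics_port = port_from_env("METRICS_PORT", 9100); // Prometheus exporter
    println!("http={http_port} metrics={metrics_port}");
}
```

Falling back on a parse failure (rather than panicking) keeps a typo in the environment from taking the process down at boot.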

MongoDB is the only persistence layer. Solana RPC is read-only from the backend's perspective: proofs come from the distributor API, and the actual on-chain writes come from the streamer's wallet at creation time and the viewer's wallet at claim time.

There is intentionally no separate "frontend backend" — the bot is the backend. Adding a second service would just split state.