Running locally
Prerequisites
- Rust 1.75+ (rustup default stable)
- Node 20+ and pnpm
- MongoDB 6+ reachable on the network
- Solana CLI 1.18.17 + Anchor 0.30.1 if you want to deploy the distributor
- A registered Twitch application (https://dev.twitch.tv/console/apps) for OAuth
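A quick way to confirm the toolchain before continuing (mongosh is assumed to be installed locally for the MongoDB check; the host matches the example env below):

rustc --version     # 1.75 or newer
node --version      # v20 or newer
pnpm --version
mongosh "mongodb://192.168.100.97:27017" --eval 'db.runCommand({ ping: 1 })'   # MongoDB reachable
solana --version    # 1.18.17, only if deploying the distributor
anchor --version    # 0.30.1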
Backend (bitview-bot)
Copy and fill the env file:
cd bitview-bot
cp .env.example .env
Minimum settings:
MONGODB_URL=mongodb://192.168.100.97:27017
MONGODB_DB=bitview
SOLANA_RPC_URL=http://192.168.100.98:18899
DISTRIBUTOR_PROGRAM_ID=4ffj6hEnx6cqp4ToMALExqk6QwPNSbZyr8ro9yW1Qvok
DISTRIBUTOR_API_URL=http://localhost:7001
TWITCH_CLIENT_ID=<your twitch app client id>
ADMIN_API_KEY=<random string>
Run:
cargo run --release
Verify:
curl http://localhost:4477/health
open http://localhost:4477/swagger-ui/
Distributor proof API
Once you've built at least one merkle tree (distributor/cli create-merkle-tree),
serve proofs:
cd distributor/api
cargo run --release -- \
--bind-addr 0.0.0.0:7001 \
--merkle-tree-path ../merkle-trees/ \
--base <base-pubkey> \
--mint <spl-mint> \
--program-id 4ffj6hEnx6cqp4ToMALExqk6QwPNSbZyr8ro9yW1Qvok
Frontend (bitview-app)
cd bitview-app/bitview-app
cp .env.example .env.local
pnpm install
pnpm dev
Required env (all NEXT_PUBLIC_ because they are read from the browser):
NEXT_PUBLIC_BACKEND_URL=http://localhost:4477
NEXT_PUBLIC_RPC_URL=http://192.168.100.98:18899
NEXT_PUBLIC_DISTRIBUTOR_PROGRAM_ID=4ffj6hEnx6cqp4ToMALExqk6QwPNSbZyr8ro9yW1Qvok
NEXT_PUBLIC_DISTRIBUTOR_COLLECTION=<core collection mint>
NEXT_PUBLIC_TWITCH_CLIENT_ID=<your twitch app client id>
End-to-end smoke test
- bitview-bot is up at :4477, MongoDB reachable.
- curl -X GET http://localhost:4477/twitch-api/channel/connect/<some_channel>
- Visit a Twitch chat manually for the same channel — the backend logs joins/parts.
- Link a wallet via POST /bitview-api/user/link (Postman or the frontend).
- Register a distribution via POST /distributions-api/register (admin).
- Wait one accrual tick. Check GET /bitview-api/viewer/{wallet}/accruals — non-zero.
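A scripted version of the same sequence, as a sketch: the request bodies, the wallet/channel placeholders, and the x-api-key admin header are assumptions, not the backend's documented schema (check /swagger-ui/ for the exact shapes).

#!/usr/bin/env bash
set -euo pipefail
BASE=http://localhost:4477

curl -fsS "$BASE/health"
curl -fsS "$BASE/twitch-api/channel/connect/<some_channel>"

# Link a wallet (body shape is an assumption; see /swagger-ui/ for the real schema)
curl -fsS -X POST "$BASE/bitview-api/user/link" \
  -H "Content-Type: application/json" \
  -d '{"wallet": "<wallet-pubkey>", "twitch_login": "<twitch-login>"}'

# Register a distribution as admin (header name is an assumption)
curl -fsS -X POST "$BASE/distributions-api/register" \
  -H "x-api-key: $ADMIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"channel": "<some_channel>"}'

# After one accrual tick, accruals for the wallet should be non-zero
curl -fsS "$BASE/bitview-api/viewer/<wallet-pubkey>/accruals"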
Production deployment
The minimum production topology is one backend binary, one MongoDB host, and a managed Solana RPC. Below is the recommended setup matching the operational SLAs in BitView flow.
Topology
┌─────────────────────┐
│ Cloudflare / CDN │
└──────────┬──────────┘
│ TLS
▼
┌─────────────────┐
│ nginx (LB+TLS) │
└─────┬───────────┘
│
┌─────────────────┼─────────────────┐
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ bitview- │ │ bitview- │ │ bitview- │
│ bot │ │ bot │ │ bot │
│ (active) │ │ (passive)│ │ (passive)│
└────┬─────┘ └────┬─────┘ └────┬─────┘
│ │ │
└────────┬────────┴─────────────────┘
▼
┌───────────────┐
│ MongoDB │ ← replica set, 3 nodes
│ (primary + │ ← daily backups
│ 2 secondary) │
└───────┬───────┘
│
▼
┌──────────────────────────────────────────────────────────┐
│ Solana RPC: Helius / Triton / Quicknode (failover order) │
└──────────────────────────────────────────────────────────┘
Three bitview-bot instances run active/passive — only one
attaches to Twitch IRC at a time (so we don't see double-joins);
the others are warm standbys. Failover takes < 30 seconds via a
shared distributed lock in MongoDB.
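The lock itself is just a document the active instance keeps refreshing. A sketch of the pattern via mongosh; the collection and field names (leader_lock, holder, expires_at) and the 15-second lease are illustrative assumptions, not the bot's actual schema:

HOST=$(hostname)
mongosh "$MONGODB_URL/bitview" --quiet --eval "
  const now = new Date();
  const lease = new Date(now.getTime() + 15000);   // 15s lease, refreshed while active
  let doc = null;
  try {
    doc = db.leader_lock.findOneAndUpdate(
      { _id: 'twitch-irc', \$or: [ { holder: '$HOST' }, { expires_at: { \$lt: now } } ] },
      { \$set: { holder: '$HOST', expires_at: lease } },
      { upsert: true, returnNewDocument: true }
    );
  } catch (e) {
    // duplicate-key error: another instance holds an unexpired lease
  }
  print(doc && doc.holder === '$HOST' ? 'leader: attach to Twitch IRC' : 'standby');
"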
Docker / docker-compose
For environments that prefer container deployment, here's a
minimal docker-compose.yml:
version: "3.9"

services:
  mongo:
    image: mongo:6
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD}
    volumes:
      - mongo_data:/data/db
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    networks: [bitview]

  bitview-bot:
    image: bitview/bot:latest
    restart: always
    env_file: .env.production
    depends_on: [mongo]
    ports:
      - "4477:4477"
      - "9100:9100"
    networks: [bitview]
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:4477/health"]
      interval: 30s
      timeout: 5s
      retries: 3

  prometheus:
    image: prom/prometheus:latest
    restart: always
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    networks: [bitview]

  grafana:
    image: grafana/grafana:latest
    restart: always
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}
    volumes:
      - grafana_data:/var/lib/grafana
    networks: [bitview]
    ports:
      - "3001:3000"

  nginx:
    image: nginx:alpine
    restart: always
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certs:/etc/nginx/certs:ro
    ports:
      - "80:80"
      - "443:443"
    depends_on: [bitview-bot]
    networks: [bitview]

volumes:
  mongo_data:
  prometheus_data:
  grafana_data:

networks:
  bitview:
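The compose file mounts a prometheus.yml that isn't shown above. A minimal sketch, with the job name matching the up{job="bitview-bot"} selector used in the alert rules further down:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: bitview-bot
    static_configs:
      - targets: ["bitview-bot:9100"]
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]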
nginx config
# Rate limiting: tighter than the backend's own per-key limits, as defense in depth.
# limit_req_zone must be declared at http scope, which is where conf.d files are included.
limit_req_zone $binary_remote_addr zone=api:10m rate=300r/m;

upstream bitview_backend {
    least_conn;
    server bitview-bot:4477 max_fails=3 fail_timeout=30s;
    keepalive 32;
}

server {
    listen 80;
    server_name api.bitview.so;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name api.bitview.so;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "DENY" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    client_max_body_size 1m;
    limit_req zone=api burst=60 nodelay;

    location /health {
        access_log off;
        proxy_pass http://bitview_backend;
    }

    location / {
        proxy_pass http://bitview_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection "";
        proxy_read_timeout 30s;
    }
}
systemd unit (alternative to Docker)
[Unit]
Description=BitView Backend
After=network.target
[Service]
Type=simple
User=bitview
Group=bitview
WorkingDirectory=/opt/bitview/bitview-bot
EnvironmentFile=/opt/bitview/bitview-bot/.env
ExecStart=/opt/bitview/bitview-bot/target/release/bitview-rust-bot
Restart=on-failure
RestartSec=5s
LimitNOFILE=65536
# Hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/bitview/logs /opt/bitview/data
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
[Install]
WantedBy=multi-user.target
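Install and start it with the usual systemd workflow (the unit name bitview-bot.service is an assumption):

sudo cp bitview-bot.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now bitview-bot
journalctl -u bitview-bot -f                 # watch startup logs
curl -fsS http://localhost:4477/health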
MongoDB
Run a replica set, not a standalone. Even at MVP scale, we want:
- 3 nodes (primary + 2 secondaries) for write durability and failover.
- Authentication enabled (MONGO_INITDB_ROOT_PASSWORD in env).
- TLS for client connections.
- Daily backups (mongodump cron) with 30-day retention to encrypted off-site storage (S3-compatible with object-lock).
- Quarterly restore drills with documented RTO.
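A sketch of initiating the three-node replica set with mongosh; the host names (mongo-1/2/3) and the replica-set name are placeholders, and MONGODB_URL then needs a matching replicaSet= parameter:

mongosh "mongodb://admin:$MONGO_ROOT_PASSWORD@mongo-1:27017/admin" --eval '
  rs.initiate({
    _id: "bitview-rs",
    members: [
      { _id: 0, host: "mongo-1:27017", priority: 2 },
      { _id: 1, host: "mongo-2:27017" },
      { _id: 2, host: "mongo-3:27017" }
    ]
  })
'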
Solana RPC
Use more than one managed provider, in failover order: Helius primary because of its indexer + websocket support; Triton secondary because it runs different upstream nodes; Quicknode tertiary for geographic redundancy.
The bot tries the primary, falls back to the secondary on consecutive errors, and retries the primary every 5 minutes. A health-check job pings each RPC every 30 seconds; failover decisions are logged.
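A sketch of the kind of probe that 30-second job runs, using the standard Solana getHealth JSON-RPC method (the Triton and Quicknode endpoints are placeholders):

for rpc in \
  "https://mainnet.helius-rpc.com/?api-key=..." \
  "https://<triton-endpoint>" \
  "https://<quicknode-endpoint>"
do
  # A healthy node answers {"result":"ok"}
  status=$(curl -fsS -m 5 -X POST "$rpc" \
    -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}' | jq -r '.result // empty')
  if [ "$status" = "ok" ]; then echo "healthy: $rpc"; break; fi
  echo "unhealthy, trying next: $rpc" >&2
done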
Monitoring stack
The bot exports Prometheus metrics on :9100. Recommended
dashboards:
- API health — request rate, p95 latency per endpoint, error rate per status class
- Accrual loop — tick rate, viewers credited per tick, channel count, time per tick
- Solana — RPC latency per provider, RPC error rate
- Twitch — IRC connection state
- MongoDB — connection pool usage, query latency, replica health
- System — CPU, memory, network, disk
Alert rules (example, in PromQL):
groups:
  - name: bitview-critical
    rules:
      - alert: BackendDown
        expr: up{job="bitview-bot"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "BitView backend is down"
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.01
        for: 5m
        labels:
          severity: high
      - alert: AccrualLoopStalled
        expr: time() - bitview_accrual_last_tick_timestamp > 300
        for: 1m
        labels:
          severity: high
        annotations:
          summary: "Accrual loop hasn't ticked in 5 minutes"
      - alert: RPCFailover
        expr: bitview_rpc_failover_count > 0
        labels:
          severity: medium
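Before relying on these rules, confirm the exporter answers and the metric names exist. The /metrics path on :9100 is the conventional default and assumed here:

curl -fsS http://localhost:9100/metrics | grep -E 'bitview_accrual_last_tick_timestamp|bitview_rpc_failover_count'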
Backup procedure
Daily MongoDB backup via mongodump, encrypted, uploaded to
object-lock S3 with 30-day retention. Quarterly restore drill on a
staging environment to verify backups actually work.
#!/usr/bin/env bash
set -euo pipefail
DATE=$(date +%Y-%m-%d)
DUMP_DIR="/var/backups/bitview/$DATE"
mongodump \
--uri "$MONGODB_URL" \
--out "$DUMP_DIR" \
--gzip
# Encrypt
tar czf "$DUMP_DIR.tar.gz" -C /var/backups/bitview "$DATE"
age -R /etc/bitview/backup-recipients.txt -o "$DUMP_DIR.tar.gz.age" "$DUMP_DIR.tar.gz"
# Upload with object-lock
aws s3 cp "$DUMP_DIR.tar.gz.age" s3://bitview-backups/mongo/
# Cleanup local files older than 7 days
find /var/backups/bitview -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf {} \;
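For the quarterly drill, a minimal restore sketch; the backup object name, the age identity file path, and the staging MongoDB host are all placeholders:

#!/usr/bin/env bash
set -euo pipefail

# Pick a backup to exercise (placeholder object name)
BACKUP="2025-01-15.tar.gz.age"
aws s3 cp "s3://bitview-backups/mongo/$BACKUP" .

# Decrypt (identity file path is a placeholder) and unpack
age -d -i /etc/bitview/backup-key.txt -o "${BACKUP%.age}" "$BACKUP"
tar xzf "${BACKUP%.age}"

# Restore into a staging cluster, then spot-check that collections came back
mongorestore --uri "mongodb://staging-mongo:27017" --gzip --drop "${BACKUP%.tar.gz.age}"
mongosh "mongodb://staging-mongo:27017/bitview" --eval 'db.getCollectionNames()'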
Rollback procedure
If a backend deployment introduces a regression:
- Roll back to the previous container tag (repoint the bitview-bot image at the previous :<git-sha> tag and run docker compose up -d bitview-bot) or, for systemd, swap in the previous binary (/opt/bitview/bitview-bot/target/release/bitview-rust-bot.prev).
- Restart the service.
- Monitor: error rate, accrual loop tick rate, p95 latency.
- If MongoDB schema changed forward-incompatibly, restore from yesterday's backup (this is rare; we maintain forward + backward schema compatibility in writes).
- Public post-mortem within tier SLA per BitView flow §incident response.
The deployment process tags every release in git and Docker registry so rollback is one command, not a debugging session.
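With an image-tag variable in the compose file (an assumption; the example compose above pins :latest), that one command can look like:

# docker-compose.yml excerpt (assumed):  image: bitview/bot:${IMAGE_TAG:-latest}
IMAGE_TAG=<previous-git-sha> docker compose up -d bitview-bot
curl -fsS http://localhost:4477/health    # then watch error rate and accrual ticks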
Disaster recovery
| Scenario | RPO | RTO | Plan |
|---|---|---|---|
| Backend instance failure | 0 | 30s | Active/passive failover |
| Single MongoDB node failure | 0 | 1m | Replica set automatic failover |
| Full MongoDB cluster failure | 4h | 30m | Restore from latest backup |
| Solana RPC outage (one provider) | 0 | 30s | Automatic failover |
| Full Solana network outage | n/a | n/a | Documented platform risk; queue operations resume on recovery |
| Region failure | 0 | 5m | Hot standby in second region |
Secrets management
- .env files are never committed to git.
- Production env vars come from a secrets manager (HashiCorp Vault / AWS Secrets Manager / 1Password Connect).
- Multi-sig wallet keys held in HSMs (Ledger / Trezor / cloud HSM) for cold storage; hot wallet keys in an encrypted-at-rest secrets manager with audit logging.
- Twitch client secret rotated every 6 months.
- ADMIN_API_KEY rotated quarterly.
- All rotations logged and announced in the next quarterly transparency report.
CI/CD
Standard GitHub Actions:
- On PR: cargo check, cargo clippy, cargo test, cargo audit, cargo deny check. Block merge on failures.
- On merge to main: build the container, push to the registry, tag :latest and :<git-sha>.
- Production deploy: manual approval gate, blue-green via load-balancer flip after the health check passes on the new instance.
CI pipeline scripts and Dockerfiles live in the bitview-bot repo.
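The PR gate can be reproduced locally before pushing; cargo-audit and cargo-deny need to be installed first (e.g. cargo install cargo-audit cargo-deny):

cargo check
cargo clippy
cargo test
cargo audit        # requires cargo-audit
cargo deny check   # requires cargo-deny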
Frontend production
The frontend (bitview-app) is a Next.js app deployable to:
- Vercel (recommended for ease of operations)
- Cloudflare Pages (recommended for cost at scale)
- Self-hosted Node behind nginx (if you must)
Set the same .env variables as in dev, with production URLs:
NEXT_PUBLIC_BACKEND_URL=https://api.bitview.so
NEXT_PUBLIC_RPC_URL=https://mainnet.helius-rpc.com/?api-key=...
NEXT_PUBLIC_DISTRIBUTOR_PROGRAM_ID=4ffj6hEnx6cqp4ToMALExqk6QwPNSbZyr8ro9yW1Qvok
NEXT_PUBLIC_DISTRIBUTOR_COLLECTION=<your collection mint>
NEXT_PUBLIC_TWITCH_CLIENT_ID=<your twitch app client id>
The frontend is fully static / SSR-compatible. Cloudflare Pages with static export + edge functions is our recommended deployment for production.
Documentation site
This checkpoint site (Docusaurus) deploys as static HTML to:
- Cloudflare Pages (recommended)
- GitHub Pages
- Self-hosted nginx (if you must)
cd checkpoint
pnpm build
# pnpm run deploy or manually upload build/ to your CDN
Production checklist
Before opening to non-trial users:
- All .env values set to production endpoints
- TLS termination configured + valid certificates
- Backups tested via restore drill
- Monitoring + alerting deployed and verified (test alerts fire)
- Rate limits tuned per API rate limits
- OFAC screening enabled and verified
- Twitch app credentials production-grade (not dev tokens)
- Multi-sig wallets deployed and signers tested
- Bug bounty program live + scope confirmed with platform
- Audit reports published on the audits page
- Status page deployed
- Incident response on-call rotation defined
- Runbooks documented for the top 10 expected incidents
- Legal docs published (Terms of Service, Privacy Policy, Content Policy)
- Sanctions geo-blocks at frontend + backend layers
- First quarterly transparency report template ready
Related
- Architecture — what we deploy
- BitView flow — operational view
- Security overview — production security posture
- Treasury management policy — wallet structure for production