BitView flow
What BitView (the operator) actually does, day in and day out. This is the operational view: continuous duties, per-distribution lifecycle, treasury, sybil detection, incident response, and the weekly / monthly / quarterly / annual cadences. If you're evaluating us as a partner, this is what you're partnering with.
Timeline at a glance — multiple cadences
| Cadence | Duties |
|---|---|
| Continuous | Backend running, accrual loop, RPC monitoring, anomaly detection, support queue |
| Per-distribution | Auto-register → accrue → snapshot → publish → claim window |
| Daily | Ops standup, dashboard review, alerts triage |
| Weekly | Cohort review, stuck-streamer outreach, sybil flag review |
| Monthly | MRR breakdown, funnel analysis, A/B test results |
| Quarterly | Pool concentration drift review, transparency report, playbook refresh, treasury rebalance decisions |
| Annual | Audit summary, full P&L, strategy refresh, phase transition gate review |
Three rotations of duty handle this:
- Engineering on-call — backend health, incidents.
- Community ops — support, sybil triage, streamer outreach.
- Leadership cadence — weekly metrics review, monthly cohort analysis, quarterly strategy.
A 3–5 person team handles it through ~5K active streamers; team scales sub-linearly with platform usage because most of the work is automated.
Continuous — the always-on duties
Backend service
The Rust bitview-bot is the core process. It runs continuously. What it does, every second of every day:
- Maintains Twitch IRC connections to every channel that is the subject of an active distribution. Reconnects automatically on drops.
- Maintains the in-memory chat-presence map. JOIN, PART, PRIVMSG events update the map.
- Runs the accrual loop every ACCRUAL_TICK_SECONDS (default 60s).
- Serves the REST API on port 4477 (or the configured HTTP_PORT).
- Exposes Prometheus metrics on port 9100 for monitoring.
- Persists every state change to MongoDB.
SLA: 99.9% uptime, p95 latency < 250ms on API calls. We monitor both via Prometheus + a public status page.
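The heart of the accrual loop can be sketched as a pure function over the presence map. This is an illustration only — `credit_tick`, the flat per-viewer split, and the data shapes are assumptions, not bitview-bot's actual internals (the real loop applies engagement weighting, stake floors, and caps):

```rust
use std::collections::HashMap;

/// One accrual tick: credit every viewer currently present in a channel.
/// Illustrative sketch; names and shapes are assumptions, not the real API.
fn credit_tick(
    presence: &HashMap<String, Vec<String>>, // channel -> wallets seen in chat
    channel: &str,
    rate_per_tick: u64, // emission budget for this tick
    accruals: &mut HashMap<String, u64>, // wallet -> credited total
) {
    if let Some(viewers) = presence.get(channel) {
        if viewers.is_empty() {
            return;
        }
        // Even split here; the real loop weights by engagement per viewer.
        let share = rate_per_tick / viewers.len() as u64;
        for wallet in viewers {
            *accruals.entry(wallet.clone()).or_insert(0) += share;
        }
    }
}

fn main() {
    let mut presence = HashMap::new();
    presence.insert(
        "streamer_a".to_string(),
        vec!["wallet1".to_string(), "wallet2".to_string()],
    );
    let mut accruals = HashMap::new();
    credit_tick(&presence, "streamer_a", 100, &mut accruals);
    assert_eq!(accruals["wallet1"], 50);
    assert_eq!(accruals["wallet2"], 50);
}
```

Because the tick is a deterministic function of the presence map, replays after a crash produce the same credits — consistent with the persist-before-respond guarantee described later in this section.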
RPC monitoring
We talk to Solana via three providers in failover order:
- Helius (primary)
- Triton (secondary)
- Quicknode (tertiary)
A health-check job pings each every 30 seconds. If primary degrades, backend automatically routes to secondary. RPC outages are the single most common operational incident on Solana — built-in multi-provider failover means we don't go down even when individual providers do.
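The routing decision itself is simple: take the first provider in failover order whose last health check passed. A minimal sketch — the provider names are from this document, but the health-check wiring (the 30-second ping job) is assumed and not shown:

```rust
/// Return the first healthy provider in failover order.
fn select_provider<'a>(providers: &[(&'a str, bool)]) -> Option<&'a str> {
    providers
        .iter()
        .find(|(_, healthy)| *healthy)
        .map(|(name, _)| *name)
}

fn main() {
    // Helius degraded: the backend routes to Triton automatically.
    let providers = [("helius", false), ("triton", true), ("quicknode", true)];
    assert_eq!(select_provider(&providers), Some("triton"));
    // Only a simultaneous outage of every provider yields None.
    assert_eq!(select_provider(&[("helius", false)]), None);
}
```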
MongoDB
Hot-standby in second region. Backups every 4 hours with 30-day retention. RPO 4h, RTO 30 minutes. Periodic restore drills (quarterly).
Customer support queue
Three tiers, per the streamer playbook:
- Free: community Discord, no SLA, target 24h first-response.
- Pro: email, target 4h first-response.
- Plus: dedicated channel, target 1h first-response.
Viewer support is community Discord plus a fraud-flag-appeals path that goes to manual review.
Anomaly detection
Always-on signals scored in real time:
- Wallet age + funding-source clustering (sybil)
- PRIVMSG pattern similarity across "different" accounts (sybil)
- Volume spikes on individual streamer-token pools (rug or pump)
- Backend error rate spikes (incident)
- RPC failover frequency (provider issues)
- Accrual rate anomalies (e.g., a streamer is paying out unusually fast for unusual viewer count → may indicate compromise)
Soft alerts go to the Slack ops channel. Hard alerts page on-call.
Per-distribution lifecycle
The platform-side journey of any single distribution event.
Stage 1 — Registration
Streamer frontend creates the on-chain distributor PDA
↓
Frontend POSTs /distributions-api/register with PDA + params
↓
Backend:
- validates streamer wallet ownership
- validates parameters (periodicity > 0, duration ≥ periodicity, etc.)
- inserts distributions document with status=pending
- calls ensure_chat_listener for the channel (idempotent)
↓
Status: pending
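The Stage 1 parameter checks can be sketched as a small validator. The struct shape and error strings are illustrative assumptions; only the rules themselves (periodicity > 0, duration ≥ periodicity) come from the registration flow above:

```rust
/// Parameter checks run by the backend at /distributions-api/register.
/// Struct and error strings are assumptions; the rules are from Stage 1.
struct DistributionParams {
    periodicity_secs: u64,
    duration_secs: u64,
}

fn validate(p: &DistributionParams) -> Result<(), &'static str> {
    if p.periodicity_secs == 0 {
        return Err("periodicity must be > 0");
    }
    if p.duration_secs < p.periodicity_secs {
        return Err("duration must be >= periodicity");
    }
    Ok(())
}

fn main() {
    let ok = DistributionParams { periodicity_secs: 60, duration_secs: 3600 };
    assert!(validate(&ok).is_ok());
    let bad = DistributionParams { periodicity_secs: 60, duration_secs: 30 };
    assert_eq!(validate(&bad), Err("duration must be >= periodicity"));
}
```

Rejecting bad parameters before the document is inserted means a distribution can never enter `pending` in an unservable state.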
Stage 2 — Active accrual
Every ACCRUAL_TICK_SECONDS:
for each distribution where status in {pending, active}
and start_at <= now <= end_at:
- promote pending → active on first eligible tick
- read presence map for channel
- apply engagement weighting per viewer
- skip viewers below BTV stake floor
- skip viewers already at max_per_viewer cap
- upsert accruals document for each credited viewer
↓
Distribution may run for hours, days, or weeks
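The two skip rules in the Stage 2 loop reduce to a single per-tick eligibility predicate. Parameter names here are assumptions for illustration; the rules (BTV stake floor, max_per_viewer cap) are from the loop above:

```rust
/// Per-tick eligibility from Stage 2: a viewer is credited only if their
/// BTV stake meets the floor and they have not yet hit max_per_viewer.
fn is_creditable(stake: u64, stake_floor: u64, accrued: u64, max_per_viewer: u64) -> bool {
    stake >= stake_floor && accrued < max_per_viewer
}

fn main() {
    assert!(is_creditable(100, 100, 0, 1_000));      // at the floor, under cap
    assert!(!is_creditable(99, 100, 0, 1_000));      // below the stake floor
    assert!(!is_creditable(100, 100, 1_000, 1_000)); // already at the cap
}
```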
Stage 3 — End-of-period
end_at passes
↓
Distribution stops crediting (loop's eligibility window closes)
24-hour grace period passes
↓
Operator triggers POST /distributions-api/{id}/finalize
(Tier-A: BitView ops; Tier-B/C: streamer with admin permission)
↓
Status: active → snapshotting
Stage 4 — Snapshot
Backend snapshot job:
- reads all accruals for distribution_id
- for each, applies any final-pass adjustments (sybil-flagged exclusions)
- constructs CSV in distributor CLI format: wallet, amount
- invokes the merkle-tree builder (existing distributor/cli)
- stores merkle_snapshots document with root, num_nodes, total_amount,
tree_path
- the proof API picks up the new tree from disk
↓
Operator (multi-sig) calls set_enable_slot on the on-chain distributor
↓
Root is published on-chain
↓
Status: snapshotting → claimable
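The CSV-construction step of Stage 4 can be sketched as a pure function over the final accruals, applying the sybil exclusions in the same pass. This is a sketch only: the real job reads MongoDB and invokes the distributor/cli merkle builder, and whether the real CSV carries a header row is an assumption:

```rust
use std::collections::HashSet;

/// Build the distributor-CLI CSV (wallet, amount) from final accruals,
/// applying the final-pass sybil-flagged exclusions from Stage 4.
fn snapshot_csv(accruals: &[(&str, u64)], flagged: &HashSet<&str>) -> String {
    let mut csv = String::from("wallet,amount\n");
    for (wallet, amount) in accruals {
        if flagged.contains(wallet) {
            continue; // sybil-flagged wallets never enter the merkle tree
        }
        csv.push_str(&format!("{wallet},{amount}\n"));
    }
    csv
}

fn main() {
    let accruals = [("wallet1", 500), ("wallet2", 300), ("wallet3", 200)];
    let flagged: HashSet<&str> = ["wallet2"].into_iter().collect();
    let csv = snapshot_csv(&accruals, &flagged);
    assert_eq!(csv, "wallet,amount\nwallet1,500\nwallet3,200\n");
}
```

Because exclusion happens before the tree is built, a flagged wallet has no leaf and therefore no valid proof — the on-chain root itself enforces the exclusion.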
Stage 5 — Claim window
Viewers claim over the next 30+ days
↓
Each claim:
- viewer fetches proof from /claims-api/proof/{wallet}
- signs new_claim instruction
- vault transfers tokens to viewer ATA
↓
At claim_start_ts + clawback_grace (default 30 days):
status: claimable → closed (still viewable history)
↓
Streamer may call clawback to recover unclaimed tokens
Stage 6 — Archive
60 days after closed:
- distribution document remains for audit
- merkle tree archived to cold storage
- proof API stops actively serving but historical proofs are
reproducible from archive on request (rare, e.g., audit query)
At every stage the document state in MongoDB is the source of truth. The on-chain merkle root is the cryptographic commitment. Backend service crashes don't lose state because everything is persisted before responding to user actions.
Treasury management flow
What BitView holds, where it sits, and how decisions about it are made. Cross-reference: Tokenomics §Liquidity policy.
Wallets
| Wallet | Purpose | Signing |
|---|---|---|
| BitView treasury (cold) | Long-term BTV / SOL / USDC reserves | Multi-sig 3-of-5, one signer external |
| BitView treasury (hot) | Daily ops, gas reserves, emergency response | Multi-sig 2-of-3, all internal |
| Fee collection wallet | Receives 0.10% protocol fees from swaps | Multi-sig 2-of-3 |
| LP positions | BitView-owned LP tokens for BTV/SOL, BTV/USDC, STREAM/BTV pools | Multi-sig 2-of-3 |
| Airdrop dispenser | 100 BTV onboarding airdrops to new viewers | Hot, low-balance, refilled weekly |
| Stripe payout wallet | USDC settlement from Stripe subscriptions | 2-of-3 |
Daily flow
INFLOWS:
Stripe subscription payouts → USDC wallet (automated)
Swap protocol fees → fee collection wallet (atomic per swap)
Sponsorship marketplace fees → USDC wallet (atomic on payout)
LP fees → LP positions (auto-compounded)
Slashing income (BTV) → fee collection wallet (per slash event)
OUTFLOWS:
Operating expenses ← USDC wallet
Onboarding airdrops ← airdrop dispenser (refilled weekly)
Audit + bug bounty payouts ← USDC wallet (lump sum on incident)
Weekly
- Refill airdrop dispenser from cold storage (BTV).
- Reconcile fee collection wallet with expected revenue from logs. Discrepancy >0.1% triggers investigation.
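The 0.1% reconciliation tolerance is easy to get wrong with floating point; a minimal sketch in integer math, under the assumption that both figures are available in the same base units:

```rust
/// Weekly reconciliation check: a discrepancy above 0.1% between expected
/// revenue (from logs) and the observed wallet balance triggers an
/// investigation. Integer math avoids float comparison pitfalls.
fn needs_investigation(expected: u64, observed: u64) -> bool {
    let diff = expected.abs_diff(observed);
    // diff / expected > 0.1%  <=>  diff * 1000 > expected
    diff.saturating_mul(1000) > expected
}

fn main() {
    assert!(!needs_investigation(1_000_000, 1_000_500)); // 0.05% — fine
    assert!(needs_investigation(1_000_000, 1_002_000));  // 0.2% — investigate
}
```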
Monthly
- Full revenue/expense P&L drafted.
- Cold-storage rebalance: sweep hot-wallet balances above ~2 weeks of operating runway back to cold storage.
Quarterly
- Pool concentration drift review (decision published the same day, per Tokenomics §Rebalancing rules).
- Streamer-token decay review — pull liquidity from dead pools.
- Transparency report drafted with: treasury composition, fee revenue, BTV emission progress, sanctions screen counts, slashing events, bug bounty stats.
- Treasury policy review — does the position-sizing table still match the platform's current stage?
What we DO NOT do
Strict prohibitions, encoded in policy and enforced by multi-sig veto:
- No leveraged positions on treasury assets. Spot only.
- No third-party yield farming with operating reserves.
- No counterparty exposure (e.g., lending BTV to market makers).
- No protocol-owned-liquidity buybacks (Olympus-style bonding).
- No discretionary BTV minting. The supply is fixed at 1B.
- No private market deals on BTV outside published allocations.
Sybil detection flow
How BitView keeps the platform real. Cross-reference: Anti-fraud.
Real-time scoring
Every wallet that links a Twitch identity is scored at link time and re-scored on a rolling basis using:
- Wallet age + first-funding source
- Twitch account age + activity history
- IP / device fingerprint patterns
- Behavioral patterns across active distributions
Score thresholds:
- Low → no action
- Medium → soft-flag, manual review queue
- High → automatic soft-flag (accruals freeze)
- Severe → hard-block + automatic stake slash
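The threshold ladder above maps naturally to an action enum. The tier actions are from this document; the numeric cutoffs below are illustrative assumptions — the document names the tiers, not their boundaries:

```rust
/// Map a rolling sybil risk score to an action tier.
/// Cutoff values are assumed for illustration.
#[derive(Debug, PartialEq)]
enum Action {
    None,           // Low: no action
    ManualReview,   // Medium: soft-flag, manual review queue
    FreezeAccruals, // High: automatic soft-flag (accruals freeze)
    BlockAndSlash,  // Severe: hard-block + automatic stake slash
}

fn action_for(score: u32) -> Action {
    match score {
        0..=24 => Action::None,
        25..=49 => Action::ManualReview,
        50..=79 => Action::FreezeAccruals,
        _ => Action::BlockAndSlash,
    }
}

fn main() {
    assert_eq!(action_for(10), Action::None);
    assert_eq!(action_for(30), Action::ManualReview);
    assert_eq!(action_for(60), Action::FreezeAccruals);
    assert_eq!(action_for(95), Action::BlockAndSlash);
}
```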
Manual review queue
Soft-flagged wallets sit in a queue reviewed by community ops daily. For each:
- Reviewer pulls the auto-generated dossier (wallet history, Twitch history, behavioral patterns, IP cohort).
- Decides: clean, suspicious-but-undecided, confirmed-sybil.
- Clean → unflagged; accruals resume.
- Suspicious → 30-day watch window; a second flag within the window auto-confirms.
- Confirmed sybil → hard-block + slash + cohort expansion (find other wallets in the same cohort and flag them too).
Slashing
When a wallet is confirmed-sybil:
- BTV stake (≥100 BTV) is transferred to protocol treasury.
- Pending accruals across all distributions are zeroed out.
- The Twitch user_id is marked ineligible for re-link for 90 days.
- A public on-chain blocklist entry is added.
Cohort response
If a sybil network is detected (group of N wallets with shared fingerprint/IP/funding source):
- All wallets in the cohort are batch-slashed.
- Affected distributions are re-snapshotted: the original merkle root stays valid for honest claimers, and a supplemental distribution redistributes the slashed BTV pro-rata to flagged-clean wallets in the affected cohort.
- Public post-mortem published.
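The pro-rata redistribution step can be sketched as follows. A sketch under stated assumptions: weights are each clean wallet's accrual, integer floor division leaves rounding dust undistributed, and how the real supplemental distribution handles that remainder is not specified here:

```rust
/// Pro-rata redistribution of slashed BTV to flagged-clean wallets in
/// the affected cohort, weighted by each wallet's accrual.
fn redistribute(slashed_total: u64, clean: &[(&str, u64)]) -> Vec<(String, u64)> {
    let weight_sum: u128 = clean.iter().map(|(_, w)| *w as u128).sum();
    if weight_sum == 0 {
        return Vec::new();
    }
    clean
        .iter()
        .map(|(wallet, w)| {
            // u128 intermediate avoids overflow on large totals.
            let share = (slashed_total as u128 * *w as u128 / weight_sum) as u64;
            (wallet.to_string(), share)
        })
        .collect()
}

fn main() {
    // 1000 slashed BTV, clean wallets weighted 1:3 by accrual.
    let out = redistribute(1_000, &[("wallet_a", 100), ("wallet_b", 300)]);
    assert_eq!(out[0], ("wallet_a".to_string(), 250));
    assert_eq!(out[1], ("wallet_b".to_string(), 750));
}
```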
What viewers see
Honest viewers see almost nothing — they're never flagged. The slashed wallets see their accruals frozen and (if they appeal) get the dossier explanation.
Incident response flow
When something breaks. The protocol is the same regardless of severity: detect → triage → contain → fix → communicate publicly → retro.
Severity tiers
| Tier | Examples | Time to respond | Time to communicate |
|---|---|---|---|
| Critical | Smart-contract exploit, treasury compromise, mass-claim failure | < 15 min | < 4 hours |
| High | Major RPC outage, full backend down > 30min, sybil cohort > 30% of accruals | < 30 min | < 24 hours |
| Medium | Partial backend degradation, sybil cohort 5–30%, suspected wallet compromise pattern | < 2 hours | < 72 hours |
| Low | Minor UX bugs, isolated user reports, planned maintenance | within 24 hours | weekly digest |
The flow
Detection - Monitoring alerts, community reports, on-chain anomalies
↓
Triage - Tier assignment + scope assessment
↓
Contain - Stop the bleeding (e.g., disable specific endpoints,
freeze fee-collection if needed, pause swap router)
↓
Fix - Engineering deploys the resolution
↓
Verify - Post-fix monitoring window proves stability
↓
Communicate - Public post-mortem on the checkpoint site within tier SLA
↓
Retro - Internal retrospective; what to add to monitoring,
what process changed
What we will and won't do
- Will: publish post-mortems publicly within the SLA, even when embarrassing.
- Will: offer remediation (re-distribute, refund fees) where the incident caused user loss attributable to BitView.
- Won't: hide incidents.
- Won't: offer compensation for general market events (BTV price moves, Solana network outages, third-party RPC issues — these are documented platform risks).
Cadence calendar
What happens when. The compact reference for the operator.
Daily
- 09:00 ops standup (15 min): review alerts, queue, on-call notes.
- Continuous: monitor Slack #ops-alerts, manual-review queue, support inbox.
- 18:00 EOD log: snapshot of metrics dashboard, append to weekly digest.
Weekly
- Mon: cohort dashboard review (leadership).
- Wed: stuck-streamer outreach (community ops).
- Fri: sybil flag review + cohort analysis.
- Continuous: A/B email tests, content publishing.
Bi-weekly
- Tier-A streamer concierge calls (rotating, ~12 streamers per cycle).
Monthly
- 1st: Stripe + Solana revenue reconciliation.
- 5th: MRR breakdown, ARR projection, board-pack drafting.
- 15th: Funnel conversion deep-dive, A/B test results published.
- 20th: Cohort retention analysis (D7, D30, D90).
- 25th: Marketing budget review for next month.
Quarterly
- Last week of quarter: pool concentration drift review (decision published).
- First week of next quarter: transparency report published, treasury composition disclosed, BTV emission progress reported.
- First week: streamer-token decay review (pull dormant pools).
- First week: playbook refresh (this site updated where needed).
Annually
- Audit summary published (financial + smart-contract + operational).
- Full P&L published.
- Strategy refresh tied to roadmap phase transitions.
- Bug bounty program review and rate refresh.
- Insurance coverage renewal.
Decision authority
Who can do what. Layered for safety.
| Decision | Authority | Process |
|---|---|---|
| Hot-fix backend deploy | On-call engineer | PR + auto-deploy |
| Feature deploy | Engineering lead | PR review + staging + rollout |
| Soft-flag a wallet | Community ops | Direct, logged |
| Hard-flag + slash a wallet | Ops lead + appeal review | Two-person approval |
| Streamer delisting | Trust & safety + ops lead | Two-person + public log |
| Pool rebalance | Treasury committee | Quarterly, documented |
| Treasury cold-storage move | Multi-sig 3-of-5 | Out-of-band confirmation |
| Fee schedule change | Phase 1–4: leadership; Phase 5+: governance vote | Public proposal + comment window |
| Smart contract upgrade | Multi-sig + 7-day timelock | Public proposal + audit + timelock |
| BTV emission schedule change | Cannot be changed. On-chain enforced. | N/A |
| Vested allocation acceleration | Cannot. On-chain enforced. | N/A |
Long-term — Phase 5+ governance handoff
In Phase 5 (per Roadmap), substantial decision authority shifts from BitView leadership to BTV-weighted governance:
- Fee schedule rates (within bounds).
- Marketplace listing curation.
- Treasury allocation policy.
- Specific protocol parameters.
BitView still operates the platform — engineering, ops, support, incident response — but no longer unilaterally sets economic policy. This decentralization runway is critical to BTV's defensible non-security posture; see Risk and compliance.
By Phase 6+, decision authority on protocol economics is fully governance-driven; BitView the entity is purely an operator.
What BitView never does
- Custodies user funds. Streamer pool BTV is in their wallet until distribution finalize, then in the on-chain vault, then in viewers' wallets. We never have signing authority over user positions.
- Mints BTV beyond the published curve. The 1B supply is fixed.
- Modifies vested allocations. On-chain enforced.
- Pauses claims. Once a distribution is claimable, viewers can always claim until the clawback window. We can't freeze.
- Sells streamer-token protocol allocations early. Vested.
- Front-runs LPs with treasury liquidity. Treasury LP positions are pre-published; we don't game our own pools.
- Skips post-mortems. Even when embarrassing.
Cross-references
- Architecture — what services we're operating
- Operations guide — how to run it (locally + production)
- Anti-fraud — full sybil-resistance design
- Risk and compliance — what BitView's legal/regulatory posture is
- Streamer onboarding playbook — the BD/community ops side
- Metrics — what we measure and target