Live Sports Intelligence · Cricket & NBA
Your model sees the game.
The book sees the last print.
Ball-by-ball IPL and PSL state modeling, lineup-aware NBA pricing, and paper-first execution — so the only things that ever touch live stakes are models that already earned it in shadow.
- CRR model MAE: 0.41 rpo
- Paper edge vs book: +2.3%
- Decision latency: < 800ms
Selected results
Backtests and paper runs, not vibes.
Every metric below is attributed to a specific engine and a specific dataset window. Context labels state whether the number came from a model backtest or a shadow paper run. Live-PnL numbers stay private to clients until they opt in.
- 0.41 rpo · CRR model MAE
cricket-engine · 186 IPL innings · ball-by-ball backtest
- +2.3% · Paper edge vs close
nba-engine · 412 paper bets · 60-day paper run
- < 800ms · Ball-to-decision
cricket-engine + betting-core · live feed → paper action
- 0 / 412 · Live stakes on untested models
paper-first policy · every model graduates via shadow first
What we run
Two sports. One ML brain. Four capability blocks.
Cricket and NBA engines share a common state abstraction and pricing core, so every improvement on one side compounds across both. No separate spreadsheets, no glue code between sports.
- Cricket engine · IPL + PSL
Ball-by-ball match-state modeling with win-probability, CRR projection, and scenario simulation over pitch, weather, and head-to-head context.
- Live state updates every delivery
- Scenario simulator (thousands of continuations)
- Recorder tooling for model retraining
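The scenario simulator above boils down to Monte Carlo over per-ball outcomes. The sketch below is a minimal illustration with a flat, hand-picked per-ball distribution; the real engine conditions those probabilities on pitch, weather, and head-to-head context, and the names `simulate_innings` / `project_score` are hypothetical, not the engine's API.

```python
import random

# Hypothetical flat per-ball outcome distribution (runs, or "W" for a wicket).
# The production model conditions these probabilities on match state.
BALL_OUTCOMES = [0, 1, 2, 3, 4, 6, "W"]
BALL_WEIGHTS  = [0.35, 0.30, 0.10, 0.02, 0.12, 0.06, 0.05]

def simulate_innings(score, wickets, balls_left, rng):
    """Play out one random continuation of the innings from the current state."""
    for _ in range(balls_left):
        if wickets >= 10:
            break
        outcome = rng.choices(BALL_OUTCOMES, BALL_WEIGHTS)[0]
        if outcome == "W":
            wickets += 1
        else:
            score += outcome
    return score

def project_score(score, wickets, balls_left, n_sims=10_000, seed=7):
    """Mean final total over thousands of simulated continuations."""
    rng = random.Random(seed)
    total = sum(simulate_innings(score, wickets, balls_left, rng)
                for _ in range(n_sims))
    return total / n_sims

# Example: 92/3 with 60 balls (10 overs) remaining.
print(round(project_score(92, 3, 60)))
```

Seeding the generator keeps paper runs reproducible, which is what makes the per-model audit log meaningful.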
- NBA engine · pregame + in-game
Lineup-aware pricing across sides, totals, and props — with back-to-back, rest, travel, and injury context baked into the prior. Market mapper surfaces where books disagree.
- Lineup + rest + injury priors
- First-5-minute CLV tracking
- Cross-book market mapper
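The CLV tracking above reduces to comparing the price you took against the closing price in implied-probability terms. A minimal sketch, assuming decimal odds; the function names are illustrative, not betting-core's API:

```python
def implied_prob(decimal_odds):
    """Implied probability of a decimal price (still includes the book's vig)."""
    return 1.0 / decimal_odds

def clv(bet_odds, closing_odds):
    """Closing line value: the implied-probability gap between the price taken
    and the closing price. Positive means you beat the close."""
    return implied_prob(closing_odds) - implied_prob(bet_odds)

# Took a side at 2.10; it closed at 1.95 — you beat the close.
edge = clv(2.10, 1.95)
print(f"{edge:+.3%}")
```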
- Market mapping & fair pricing
Book prices, model prices, and implied volatility normalized into one conversion graph so staking decisions are driven by expected value, not surface odds.
- Multi-book odds normalization + dedupe
- Kelly + fractional staking primitives
- Shared via the open glitch-edge-betting-core primitives
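The normalization and staking steps above can be sketched in a few lines. This is a simplified sketch, not betting-core's actual interface: it strips the overround multiplicatively (one of several de-vig methods) and applies classic fractional Kelly.

```python
def devig_two_way(odds_a, odds_b):
    """Strip the overround from a two-way market by proportional normalization."""
    raw_a, raw_b = 1.0 / odds_a, 1.0 / odds_b
    total = raw_a + raw_b  # > 1.0; the excess is the book's margin
    return raw_a / total, raw_b / total

def kelly_fraction(p, decimal_odds, fraction=0.25):
    """Fractional Kelly stake for win probability p at a decimal price.
    Classic Kelly is (b*p - q) / b with b = net odds, q = 1 - p.
    Returns 0 when there is no positive edge."""
    b = decimal_odds - 1.0
    full = (b * p - (1.0 - p)) / b
    return max(0.0, full * fraction)

# Book shows 1.90 / 1.90 (≈5.3% overround); model has the home side at 55%.
fair_home, fair_away = devig_two_way(1.90, 1.90)
stake = kelly_fraction(0.55, 1.90)  # quarter-Kelly → 1.25% of bankroll
```

Quarter-Kelly is the conventional hedge against model overconfidence: full Kelly assumes your probabilities are exactly right, which a shadow run has not yet proven.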
- Paper-first execution workflows
Every model trades in shadow before it touches real stakes. Promotion requires a risk-adjusted return threshold over a minimum sample, not a hunch.
- Shadow runs gated on sample + Sharpe
- Kill-switch + drawdown guardrails
- Per-model audit log, no black-box promotions
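A promotion gate of that shape can be sketched as a single check over the paper-run ledger. The thresholds below are illustrative placeholders; the real gates are scoped per engagement.

```python
import statistics

# Illustrative gates; the real thresholds are scoped per engagement.
MIN_SAMPLE = 300   # paper bets required before promotion is considered
MIN_SHARPE = 0.1   # per-bet Sharpe: mean return / stdev of returns

def may_promote(paper_returns):
    """True only when the shadow run clears both the sample and Sharpe gates."""
    if len(paper_returns) < MIN_SAMPLE:
        return False
    stdev = statistics.pstdev(paper_returns)
    if stdev == 0:
        return False  # degenerate run; never promote on zero variance
    return statistics.fmean(paper_returns) / stdev >= MIN_SHARPE
```

Gating on a risk-adjusted ratio rather than raw PnL is the point: a lucky streak inflates mean return, but it rarely survives the variance denominator over a real sample.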
Case spotlight
IPL CRR vs market: 186-innings backtest.
The cricket-engine CRR model was backtested ball-by-ball over 186 IPL innings against mid-innings book pricing. Mean absolute error landed at 0.41 rpo with the model out-edging the book by roughly +1.8% when staking was decided by fair-price deviation.
- MAE: 0.41 rpo
- Paper edge: +1.8%
- Innings: 186
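The headline MAE is just the mean absolute gap between projected and realized run rate across innings. A two-line sketch with made-up numbers, not the backtest data:

```python
def mae(predicted, actual):
    """Mean absolute error between projected and realized run rates (rpo)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Toy example: projected vs realized runs-per-over for four innings.
projected = [8.2, 7.5, 9.1, 6.8]
realized  = [8.6, 7.1, 9.8, 6.5]
print(round(mae(projected, realized), 2))
```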
30-day pilot · fixed scope
Move one metric in 30 days, or get your money back.
No six-month retainers, no "sports AI roadmap" PDFs. One scoped pilot, one metric, one number at the end. If it didn't move, you didn't pay.
- Week 1
Data audit & baseline backtest
Read-access to your data feeds (Cricsheet/exchange, NBA API, odds feeds) and any existing staking history. Within five business days you get one report: where signal is being left on the table, which engine moves the biggest number, and the single metric we will move over the pilot window.
- Week 2–3
Engine deploy & paper sim
Cricket engine, NBA engine, or a slice of both — shipped into your infra, emitting decisions to a shadow account. Every model runs paper-first with sample + Sharpe gates. No live stakes touch the book until the model has earned it in shadow.
- Week 4
Review the target metric or refund
We compare the target metric (paper edge vs close, MAE, CLV, decision latency — whichever we scoped) against Day-0 baseline. Moved? We scope Phase 2 month-to-month, no retainer trap. Didn't move? Pilot fee refunded in full, no questions.
Pilot fee quoted on the first call after we see your data setup. No NDA required to scope.
FAQ
The questions every founder asks on the first call.
Answered up front so you can skip the discovery dance and we can spend the call on your stack, not ours.
- Who owns the models and the data we generate together?
You do. Match-state captures, embeddings, model weights, and paper-run ledgers all live in infrastructure you control (your Postgres, your S3-compatible bucket, your GPU box). If you end the engagement tomorrow, every model trained on your data stays with you. Our contract has an explicit data + weights ownership clause — ask for it before you sign.
- Do you work with our direct competitors?
We cap concurrent engagements to protect quality and never onboard two direct competitors in the same sport and geography. If you're a cricket desk in South Asia and we're already running one, we'll tell you on the first call. The non-compete is two-way and written into the MSA.
- How is this priced?
The 30-day pilot is fixed-scope and fixed-fee — quoted on the first call once we've seen your data setup. Ongoing engagement after the pilot is month-to-month, priced by engine scope (cricket, NBA, or both) and whether we operate the paper-run infrastructure end-to-end. No 12-month retainer, no "minimum commitment".
- What do you need from us to start?
Read-access to whatever match feed and odds source you already subscribe to (Cricsheet exports, NBA API token, odds API keys), plus any historical staking ledger you want us to baseline against. One 30-minute call with whoever owns the book. That's enough for us to start the Week-1 backtest. Write-access to any live staking system is granted only after the audit is delivered and you've approved the build plan.
- Which AI models do you use, and does our data train third-party systems?
We run a mix: gradient-boosted match-state models (LightGBM, XGBoost), sequence models for in-game trajectory, and LLM routing (Claude + open-weight Llama / Qwen) for data ETL, annotation, and research. Latency-sensitive paths run on local CPU/GPU; reasoning workloads run on frontier models. Your data never trains third-party foundation models. Fine-tunes run inside your infra on your data only. We'll sign a no-training addendum with any cloud vendor you prefer.
When do we actually see results?
The Week-1 backtest surfaces the biggest signal gap immediately — you'll see where your current pricing is off vs. fair by Day 5. Paper-sim numbers compound over Weeks 2–3 as sample accumulates. Live promotion is gated on sample + risk-adjusted return thresholds, not a timeline. If the target metric has not moved by Day 30, the refund triggers automatically — you don't have to ask.
Under the hood
A stack built to survive vendor lock-in.
Open-source primitives for pricing and staking, proprietary ML for the signal, self-hosted data where the moat lives. No black-box "signals service", no SaaS that owns your edge.
- Python · LangGraph: engine orchestration + DAGs
- Postgres + pgvector: match state, embeddings, ledger
- LightGBM · XGBoost · torch: match-state ML + sequence models
- Cricsheet · NBA API · odds feeds: versioned datasets, auto-updated
- Paper-first execution: shadow → sample-gated → live
- Recorder tooling: live feed capture for retraining
- glitch-edge-betting-core: open-source pricing + staking primitives
- Claude + open-weight LLMs: annotation, ETL, research — no train leak
Talk to us
Tell us what you're trying to price.
We reply within one business day. If it's urgent, email us directly — support@glitchexecutor.com.
- No NDAs required to scope the first call.
- Fixed-scope pilot — 30 days, measurable outcome or we refund.
- You keep the data, the models, and the weights we train on it.