Methodology

How predictions and simulations are calculated. OPR is standard and well-documented elsewhere — this covers everything built on top of it.

OPR Computation

OPR (Offensive Power Rating) is a least-squares estimate of each team's individual contribution to its alliance's score. Given a set of matches, it solves the normal equations AᵀAx = Aᵀb, where each row of A marks which teams played on an alliance in that match and b is the vector of alliance scores.

// normal equations
AᵀA · x = Aᵀb
solved via Gauss-Jordan elimination with partial pivoting

All strength modes below use this solver on per-event match data pulled from the FTC API hybrid schedule. Scores are non-penalty (NP) only — foul points are excluded from b.
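
The solver described above can be sketched as follows. This is a minimal illustration, not the simulator's actual code: the `Match` shape and function names are assumptions, and AᵀA/Aᵀb are accumulated directly instead of materializing A.

```typescript
// One row per alliance appearance: which two teams played, and the NP score.
type Match = { teams: [number, number]; score: number };

function computeOPR(teamIds: number[], matches: Match[]): Map<number, number> {
  const n = teamIds.length;
  const idx = new Map<number, number>(teamIds.map((t, i) => [t, i] as [number, number]));
  // Augmented matrix [AᵀA | Aᵀb], built without materializing A:
  // each alliance row adds 1 to every (i, j) pair of its teams and the
  // alliance score to each team's entry of Aᵀb.
  const M = Array.from({ length: n }, () => new Array(n + 1).fill(0));
  for (const m of matches) {
    const rows = m.teams.map((t) => idx.get(t)!);
    for (const i of rows) {
      for (const j of rows) M[i][j] += 1;
      M[i][n] += m.score;
    }
  }
  // Gauss-Jordan elimination with partial pivoting.
  for (let col = 0; col < n; col++) {
    let pivot = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    [M[col], M[pivot]] = [M[pivot], M[col]];
    if (Math.abs(M[col][col]) < 1e-12) continue; // ill-conditioned: leave at 0
    const p = M[col][col];
    for (let c = col; c <= n; c++) M[col][c] /= p;
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const f = M[r][col];
      for (let c = col; c <= n; c++) M[r][c] -= f * M[col][c];
    }
  }
  return new Map(teamIds.map((t, i) => [t, M[i][n]] as [number, number]));
}
```

When the match data is consistent (scores are exact sums of team contributions) and AᵀA has full rank, the least-squares solution recovers each team's contribution exactly.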

Team Strength — Three Modes

There are three ways the simulator can determine each team's strength. Switch between them using the Season Best / Pre-Event / Post-Event toggle on the simulate page.

Season Best (default)

Fetches all events the team played in this season, computes OPR at each event individually, and returns the highest value. Teams with no season matches fall back to the event mean.

strength = max(OPR(event) for each event the team played this season)
OPR computed independently per event from that event's match data
Future data contamination. Season best includes events that happen after the one being simulated. A team that peaked in December will appear inflated in an October simulation. Switch to Pre-Event to eliminate this.

Pre-Event

For each team, finds their most recent event that ended before the simulated event's start date, computes OPR at that event, and uses it as strength. Teams with no qualifying prior events fall back to the event mean.

strength = OPR at the team's most recent prior event
only events ending before this event's start date

This is the most historically accurate mode. Teams playing their first event of the season get a 'No prior data' badge and fall back to the event mean.
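
The pre-event selection rule can be sketched like this. The `PlayedEvent` shape and date handling are assumptions (ISO date strings compare correctly as plain strings); the real data comes from the FTC API.

```typescript
// A team's season history: one entry per event played, with per-event OPR.
type PlayedEvent = { code: string; end: string; opr: number | null };

// Returns OPR at the most recent event that ended before `eventStart`,
// or null when the team has no qualifying prior event ('No prior data').
function preEventStrength(events: PlayedEvent[], eventStart: string): number | null {
  const prior = events
    .filter((e) => e.end < eventStart && e.opr !== null)
    .sort((a, b) => (a.end < b.end ? 1 : -1)); // newest first
  return prior.length > 0 ? prior[0].opr : null;
}
```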

Post-Event

Computes OPR using this event's own match results. Only valid after the event has played matches — before that, all teams have null strength and fall back to the mean. Useful for reviewing how a completed event played out relative to the actual performance levels observed there.

strength = OPR(this event's matches)
fallback strength
Mean of all known strengths at the event, or 80 if none are available. Used for teams with null strength in any mode.
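
The fallback rule above is a small function. A minimal sketch, assuming strengths arrive as a map with null for unknown teams:

```typescript
// Teams with null strength take the mean of all known strengths at the
// event, or the constant 80 when no strength is known at all.
function resolveStrengths(raw: Map<number, number | null>): Map<number, number> {
  const known = [...raw.values()].filter((v): v is number => v !== null);
  const fallback =
    known.length > 0 ? known.reduce((a, b) => a + b, 0) / known.length : 80;
  return new Map([...raw].map(([team, s]) => [team, s ?? fallback] as [number, number]));
}
```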

First-Event & Early-Season Predictions

All modes have reliability limits when teams have little data.

Season best, first event
Team has no season matches yet. Strength is null → falls back to event mean. They appear average regardless of actual quality.
Pre-event, first event
No prior events ended before this one. Same result: strength null → event fallback. Shown with a 'No prior data' badge in the standings.
Post-event, no matches
If the event hasn't played any matches yet, all OPRs are zero or null and every team falls back to the mean.
Few prior matches
OPR from only 1–2 events (small n) can be noisy. A team that consistently played with strong partners may appear inflated. Treat early-season estimates as rough.
Live event
Post-event mode updates as matches are played at the ongoing event. Pre-event ignores those results entirely. For live predictions, post-event mode is generally more useful once several matches have been played.
At a team's first event of the season there is no reliable signal regardless of mode. Strong rookies and weaker returning teams look identical. Widen your confidence interval significantly for any team tagged No prior data.

Match Score Prediction

The predicted score for an alliance is the sum of its two teams' NP OPRs. Win probability is derived from the score gap using a logistic (sigmoid) function.

predictedRed = NP_OPR(red₁) + NP_OPR(red₂)
predictedBlue = NP_OPR(blue₁) + NP_OPR(blue₂)
P(red wins) = 1 / (1 + exp(−(predictedRed − predictedBlue) / scale))
scale = max(14, fallbackStrength × 0.28)

The scale parameter controls how sensitive win probability is to score differences. A larger scale makes the function flatter — small gaps matter less. It grows with the typical scoring level at the event so the sensitivity stays proportional regardless of the season's game design.
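
The formulas above translate directly into code. A sketch with illustrative names:

```typescript
// Predicted alliance scores are sums of NP OPRs; win probability is a
// logistic function of the gap, with scale tied to the event's scoring level.
function predictMatch(
  red: [number, number],
  blue: [number, number],
  fallbackStrength: number,
): { red: number; blue: number; pRedWins: number } {
  const predictedRed = red[0] + red[1];
  const predictedBlue = blue[0] + blue[1];
  const scale = Math.max(14, fallbackStrength * 0.28);
  const pRedWins = 1 / (1 + Math.exp(-(predictedRed - predictedBlue) / scale));
  return { red: predictedRed, blue: predictedBlue, pRedWins };
}
```

Evenly matched alliances land at exactly 50%, and the probability saturates smoothly as the gap grows past a couple of scale units.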

Score Sampling (Monte Carlo Runs)

Each simulation run samples actual match scores from a normal distribution centered on the predicted score. This reflects the real-world variance in team performance.

sampledScore = max(0, Gaussian(predictedScore, σ))
σ = max(8, fallbackStrength × 0.16)

The standard deviation σ scales with the event's scoring level. Higher-scoring events have proportionally more variance. The minimum of 8 prevents near-zero spread at very low-scoring events. Scores are clamped to zero — no negative match scores.

The Gaussian sampler uses the Box-Muller transform to convert two uniform random values into a normally distributed sample:

u = 2·r₁ − 1, v = 2·r₂ − 1, s = u² + v²   (retry until 0 < s < 1)
sample = mean + u · √(−2·ln(s) / s) · σ
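
Putting the sampler together — polar Box-Muller draw, σ floor, and the zero clamp — looks roughly like this. Function and parameter names are illustrative; `rng` is any uniform generator over [0, 1).

```typescript
// Samples one match score: Gaussian around the predicted score with
// σ = max(8, fallbackStrength × 0.16), clamped to non-negative.
function sampleScore(predicted: number, fallbackStrength: number, rng: () => number): number {
  const sigma = Math.max(8, fallbackStrength * 0.16);
  let u = 0, s = 0;
  do {
    u = 2 * rng() - 1;
    const v = 2 * rng() - 1;
    s = u * u + v * v;
  } while (s === 0 || s >= 1); // rejection step of the polar method
  const gaussian = u * Math.sqrt((-2 * Math.log(s)) / s); // standard normal
  return Math.max(0, predicted + gaussian * sigma);
}
```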

Random Number Generation

All randomness uses a seeded, deterministic RNG so the same inputs always produce the same results. The seed is derived from the event code and season using FNV-1a hashing, then fed into a PCG generator.

// FNV-1a hash (seed text → integer)
hash = 2166136261
for each char: hash ^= charCode; hash = (hash × 16777619) mod 2³²

// PCG step (integer → [0, 1))
state += 0x6d2b79f5
word = ((state >> 22) ^ state) >> ((state >> 28) + 22)
return (word ^ (word << 2)) / 2³²
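
The seeding half of this is straightforward to reproduce; the FNV-1a code below matches the formula above exactly. For the generator step I substitute the well-known mulberry32 mixer (whose increment is the same 0x6d2b79f5 constant shown above) rather than guess the simulator's exact output function — treat it as a stand-in.

```typescript
// FNV-1a 32-bit hash: seed text → integer.
function fnv1a(text: string): number {
  let hash = 2166136261;
  for (let i = 0; i < text.length; i++) {
    hash ^= text.charCodeAt(i);
    hash = Math.imul(hash, 16777619) >>> 0; // × prime, mod 2³²
  }
  return hash;
}

// Deterministic generator seeded from a string. Same seed → same sequence.
function makeRng(seedText: string): () => number {
  let state = fnv1a(seedText);
  return () => {
    state = (state + 0x6d2b79f5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // → [0, 1)
  };
}
```

`Math.imul` keeps the multiplications in 32-bit integer space, which is what makes the hash reproducible across platforms.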

Using a deterministic RNG means two users running the same simulation see identical results. The seed changes between simulation runs so each run is independent.

Schedule Modes

There are two ways to generate the qualification schedule used in simulation:

API Schedule
Uses the official FTC API's published match schedule. Teams, alliances, and match order are exactly what was set for the real event.
Random Schedule
Generates a schedule from scratch using the event's team roster. Each round shuffles teams with Fisher-Yates and groups them into 2v2 alliances. This is useful for events without a published schedule or for exploring counterfactuals.
// Fisher-Yates shuffle (per round)
for i from n−1 down to 1:
    j = floor(rng() × (i + 1))
    swap(teams[i], teams[j])
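
One round of the random scheduler can be sketched as a seeded Fisher-Yates shuffle followed by grouping consecutive teams into 2v2 alliances. Function names are illustrative.

```typescript
// Fisher-Yates shuffle using an injected RNG (a copy, not in place).
function shuffleRound(teams: number[], rng: () => number): number[] {
  const order = [...teams];
  for (let i = order.length - 1; i >= 1; i--) {
    const j = Math.floor(rng() * (i + 1));
    [order[i], order[j]] = [order[j], order[i]];
  }
  return order;
}

// Consecutive groups of four become one 2v2 match: red pair, then blue pair.
function pairIntoMatches(order: number[]): { red: number[]; blue: number[] }[] {
  const matches: { red: number[]; blue: number[] }[] = [];
  for (let i = 0; i + 3 < order.length; i += 4)
    matches.push({ red: [order[i], order[i + 1]], blue: [order[i + 2], order[i + 3]] });
  return matches;
}
```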

The random scheduler also attempts to balance alliances by minimizing a penalty score that accounts for past partner/opponent repetition and red/blue strength imbalance:

penalty = partnerRepeat × 4 + opponentRepeat × 1.5 + |redStrength − blueStrength| × 0.08

Qualification Standings

After each simulation run, teams are ranked by the standard FTC tiebreaker rules:

1st
Wins (descending)
2nd
Tiebreaker score — sum of the losing alliance's score in every match the team played (descending)
3rd
Team number (ascending)

Tiebreaker score is computed within each run using the sampled (random) scores, not the predicted means — so it varies across runs and is probabilistic.
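
The three-level ranking above reduces to a single comparator. A sketch with an illustrative `TeamResult` shape:

```typescript
// Per-run standings: wins descending, tiebreaker score descending,
// team number ascending.
type TeamResult = { team: number; wins: number; tiebreaker: number };

function rankStandings(results: TeamResult[]): TeamResult[] {
  return [...results].sort(
    (a, b) => b.wins - a.wins || b.tiebreaker - a.tiebreaker || a.team - b.team,
  );
}
```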

Playoff Simulation

After qualification, the top 4 seeds form alliances and play a seeded single-elimination bracket with best-of-3 series.

Alliance selection
Seeds 1–4 each pick one partner. Selection is greedy: each captain picks the highest-NP-OPR available team that hasn't already been picked.
Bracket
Semifinal 1: Seed 1 vs Seed 4. Semifinal 2: Seed 2 vs Seed 3. Finals: semi winners.
Series format
Best-of-3 — first alliance to win 2 matches advances. Each game uses the same Gaussian score sampling as qualification.
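
The series logic is independent of how individual games are scored, so it can be sketched against any game-playing function (in the simulator, one that samples both alliance scores with the Gaussian model):

```typescript
// Best-of-3: play games until one alliance reaches two wins.
function playBestOf3(playGame: () => "red" | "blue"): "red" | "blue" {
  let red = 0, blue = 0;
  while (red < 2 && blue < 2) {
    if (playGame() === "red") red++;
    else blue++;
  }
  return red === 2 ? "red" : "blue";
}
```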

Output Statistics

All metrics are aggregated across N simulation runs (default 300, configurable 50–2000). Higher run counts reduce variance in the estimates.

Expected Wins
Mean wins across all runs.
Average Seed
Mean qual ranking across all runs.
1st Seed %
Fraction of runs where the team finished 1st in quals.
Top 4 %
Fraction of runs where the team was in the top 4 seeds (alliance captain eligible).
Semifinal %
Fraction of runs where the team reached the semifinals (as any alliance member).
Finalist %
Fraction of runs where the team reached the finals.
Champion %
Fraction of runs where the team won the event.
Avg Score For
Mean sampled alliance score across all qualification matches in all runs.

Probabilities are empirical frequencies — no closed-form formula, just counting outcomes over many runs. More simulations → more stable numbers.
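
Counting outcomes over runs is all the aggregation does. A sketch covering a few of the metrics, with an illustrative per-run record shape:

```typescript
// One simulation run's outcome for a single team.
type RunRecord = { seed: number; wins: number; champion: boolean };

// Empirical aggregation: means and outcome counts divided by run count.
function aggregate(runs: RunRecord[]) {
  const n = runs.length;
  return {
    expectedWins: runs.reduce((s, r) => s + r.wins, 0) / n,
    avgSeed: runs.reduce((s, r) => s + r.seed, 0) / n,
    firstSeedPct: runs.filter((r) => r.seed === 1).length / n,
    championPct: runs.filter((r) => r.champion).length / n,
  };
}
```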

Known Limitations

Future data (season best)
Default mode uses the team's best per-event OPR across the full season. Past-event simulations may reflect a team's later performance. Switch to Pre-Event to eliminate this.
Small-event OPR noise
OPR solved from only a few matches (e.g., a single 6-team event) is noisy and may not generalize. The normal equations become ill-conditioned when team co-appearances are sparse.
First-event teams
Teams with no prior season matches are indistinguishable from an average team in both Season Best and Pre-Event modes. Shown with a 'No prior data' badge in pre-event mode.
Alliance scoring model
Predicted alliance score = sum of two team strengths. Assumes additive, independent contributions — synergies and role specialization aren't captured.
Variance model
Score variance is a fixed proportion of the event's scoring level, not fit per team. Some teams are more consistent or volatile than the model assumes.

Data Sources

FTC API
Official FIRST Tech Challenge API. Provides event rosters, published schedules, and match results.
FTCScout
Community stats site. Provides NP OPR (tot) and component quick stats used as team strength.