A methodology to measure what no analysis tool has measured: the performance of two players as a tactical unit, not as two individuals in parallel. The DPI is the core metric of Sinergia and the foundation of its Coach Suggest system.
You have been playing ranked with your botlane partner for years. You know you play better together than alone. You feel it in every coordinated engage, in every ward placed before you even ask. But when you open OP.GG, U.GG or Mobalytics, what you see are two individual profiles in parallel. Their stats. Your stats. No stats about the two of you as a unit.
That is the problem Sinergia was born to solve.
When you search for "duo analysis" in any existing tool, what you find is one of these three things:
Side-by-side multisearch — two profiles next to each other. There is no data that truly belongs to the duo. It is like measuring an orchestra's performance by showing two musicians separately.
Synergy tier lists — global rankings of which champions work well together based on millions of games from any duo in the world. Useful as a general reference, useless for understanding your specific pair with your specific history.
Editorial guides — coaching content written about popular combinations. Valuable, but not personalized and not based on your real data.
The conceptual mistake of current tools is treating the individual player as the fundamental unit of analysis. For a premade duo, the correct unit is the pair.
This distinction has important consequences. A pair can have a mediocre individual winrate and an exceptional joint winrate. A player can have a high solo KDA and zero impact when playing with their partner. Real synergy is not measured in each player's stats, but in how those stats change when they are together.
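The duo-vs-solo delta described above is the basic building block of this kind of measurement. A minimal sketch, assuming hypothetical per-game stat records (the field names are illustrative, not Sinergia's actual schema):

```python
def synergy_delta(duo_games, solo_games, stat):
    """Difference between a player's average value for `stat` in games
    played with the partner vs games played solo. A positive delta means
    the stat improves when the partner is present.

    duo_games / solo_games: lists of per-game dicts (hypothetical shape).
    """
    duo_avg = sum(g[stat] for g in duo_games) / len(duo_games)
    solo_avg = sum(g[stat] for g in solo_games) / len(solo_games)
    return duo_avg - solo_avg

# Illustrative data: a support's vision score with and without their ADC.
duo = [{"vision_score": 42}, {"vision_score": 38}]
solo = [{"vision_score": 31}, {"vision_score": 35}]
print(synergy_delta(duo, solo, "vision_score"))  # 7.0
```

The same comparison applies to any per-game stat: the synergy signal is the change, not the absolute value.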
Adjusted Plus-Minus (APM), developed by Wayne Winston and Jeff Sagarin in the 1990s for basketball, measures the net impact of a player on the score while on the court, controlling for the quality of teammates and opponents. We adapt this concept to measure the net impact of the partner's presence on each player's performance.
Mobalytics' Gamer Performance Index (GPI) proved that it is possible to aggregate heterogeneous individual performance metrics into a single score that non-technical players can understand. The DPI extends this approach to the pair dimension.
Sarah Rudd's work on possession models in football and shot quality analyses in hockey establish the principle that the performance of a tactical unit cannot be inferred directly from the individual statistics of its components.
The Duo Performance Index is a composite score from 0 to 100 that quantifies the performance of a League of Legends player pair as a tactical unit, based exclusively on the real game history of that specific pair.
A DPI of 50 indicates the pair performs exactly as expected. Above 50, synergy is positive. Below 50, it is negative.
| Dimension | Weight | What it measures |
|---|---|---|
| Vision coordination | 20% | How the support's vision score and the ADC's control ward placement improve when playing together vs solo. A real duo coordinates on map control. |
| Cross kill participation | 30% | Percentage of ADC kills with support assist and vice versa. The kill correlation between both players in the same game. |
| Joint economic advantage | 30% | CS differential and gold differential of the botlane at minute 10 and 15 in duo games vs solo games. |
| Pair consistency | 20% | Winrate variance by champion combo. Difference between the first N and last N games (trend). A solid duo performs with multiple combinations. |
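The composite score is a weighted aggregation of the four dimensions. A sketch of that aggregation, using the weights from the table above; the dimension keys and example scores are illustrative, and each dimension is assumed to be pre-normalized to a 0-100 scale where 50 is the expected baseline:

```python
# Weights from the dimension table (they sum to 1.0).
WEIGHTS = {
    "vision_coordination": 0.20,
    "cross_kill_participation": 0.30,
    "joint_economic_advantage": 0.30,
    "pair_consistency": 0.20,
}

def dpi(scores):
    """Combine per-dimension scores (each normalized to 0-100, with 50 as
    the expected baseline) into the composite DPI. A missing dimension
    raises KeyError rather than silently skewing the result."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

example = {
    "vision_coordination": 60,
    "cross_kill_participation": 75,
    "joint_economic_advantage": 55,
    "pair_consistency": 50,
}
print(round(dpi(example), 2))  # 61.0
```

Because every dimension is centered on 50, a pair that performs exactly as expected on all four axes lands on a DPI of 50, matching the interpretation given above.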
| DPI | Interpretation | Recommended action |
|---|---|---|
| 85-100 | Exceptional synergy | Keep current combo and pool |
| 70-84 | Solid synergy | Optimize weak combos, keep the strong ones |
| 55-69 | Positive synergy | Identify the weakest dimension and work on it |
| 45-54 | Expected performance | Evaluate whether the duo adds value or is neutral |
| 30-44 | Mild negative synergy | Review coordination and combo selection |
| 0-29 | Significant negative synergy | Analyze systemic tilt or style incompatibility |
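Mapping a score to its band is a simple threshold lookup. A sketch using the cutoffs from the table above:

```python
# Band floors and labels, taken from the interpretation table.
BANDS = [
    (85, "Exceptional synergy"),
    (70, "Solid synergy"),
    (55, "Positive synergy"),
    (45, "Expected performance"),
    (30, "Mild negative synergy"),
    (0,  "Significant negative synergy"),
]

def interpret(dpi_score):
    """Return the interpretation label for a DPI score in [0, 100]."""
    for floor, label in BANDS:
        if dpi_score >= floor:
            return label
    raise ValueError("DPI must be in [0, 100]")

print(interpret(61))    # Positive synergy
print(interpret(42.5))  # Mild negative synergy
```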
A score without a recommended action is an academic data point. Sinergia's Coach Suggest is the system that translates the DPI into a specific combo recommendation for the next session, with explicit reasoning and two actionable tips.
The system operates on four priority levels, ensuring there is always a reasoned output regardless of the amount of data available:
Sinergia's Performance Picks are not based on champion winrate. They are based on an index of real LP profitability per game, calculated over each player's full season history.
The index is anchored to the player's current LP for greater precision: LP history is reconstructed backwards from the real current LP, reducing accumulation error. With 80+ games of history, the result is comparable to the accumulated LP shown by OP.GG.
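A minimal sketch of the backwards reconstruction idea. The Riot API does not expose historical LP, so the per-game deltas here are assumed averages (illustrative defaults, not Riot data), and the function signature is hypothetical:

```python
def reconstruct_lp(current_lp, results, avg_gain=22, avg_loss=18):
    """Estimate the LP trajectory by walking the match history backwards
    from the real current LP.

    results: newest-first list of "W"/"L" outcomes.
    avg_gain / avg_loss: assumed average LP gained per win / lost per loss.
    Returns the estimated trajectory oldest-to-newest, with len(results)+1
    points, ending at current_lp. Estimation error grows for older games.
    """
    lp = current_lp
    trajectory = [lp]
    for result in results:  # newest -> oldest
        # Undo each game: a win had added avg_gain, a loss removed avg_loss.
        lp = lp - avg_gain if result == "W" else lp + avg_loss
        trajectory.append(lp)
    trajectory.reverse()    # oldest -> newest
    return trajectory

print(reconstruct_lp(100, ["W", "L", "W"]))  # [74, 96, 78, 100]
```

Anchoring at the real current LP means the newest points are the most reliable, which is consistent with the limitation acknowledged below: the error accumulates toward the oldest games.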
Intellectual honesty requires explicitly acknowledging the limitations of the current methodology.
Sample size. With fewer than 20 shared games, the DPI has high variance. Trend analysis requires at least 10 games in each half of the history to be significant.
Estimated LP vs real LP. The Riot API does not return historical LP per game. Backwards reconstruction from current LP is a functional estimate but not real data. Error accumulates in older games.
The DPI does not capture communication. Two players who coordinate by voice will have a different DPI than two who play in silence, but the system cannot distinguish the cause. The DPI measures effects, not processes.
No patch weighting. When Riot releases a significant patch that changes a champion's or synergy's value, pre-patch games become unrepresentative. We currently do not apply temporal weighting by patch.
DPI v1.1 — Temporal weighting (recent games weigh more), patch adjustment for champion metrics, improved position filtering.
DPI v2.0 — Integration of map positioning data (Riot timeline endpoint), objectives coordination model, support roam timing analysis relative to ADC wave state.
DPI v3.0 — Machine learning model trained on Sinergia's corpus of analyzed duos. As data from thousands of real pairs accumulates, weights will be learned automatically instead of being fixed.