Over the weekend, one of ArenaNet’s Gameplay Programmers took the time to post some of the numbers behind the system Guild Wars 2 uses to establish WvW rankings. Here’s the short explanation:
Originally posted by HabibLoew (Source)
People often ask how WvW rankings are determined, so in this post I will outline exactly the system that we use.

The short, short summary is that after each battle we use the score differential between the worlds that fought each other, along with their previous ratings, to calculate new ratings. Once ratings have been calculated we re-sort the list of worlds and form new groupings of 3 for the next battle.
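In code, that re-sort-and-regroup step might look something like the sketch below. The world names, ratings, and dictionary layout are our own placeholders; the post doesn't describe how ArenaNet actually stores any of this.

```python
# A minimal sketch of the re-sort-and-regroup step described above.
# The world names, ratings, and dictionary layout are placeholders.
ratings = {"wA": 1820.0, "wB": 1795.0, "wC": 1760.0,
           "wD": 1710.0, "wE": 1688.0, "wF": 1650.0}

# Re-sort the list of worlds by their updated ratings, highest first...
ranked = sorted(ratings, key=ratings.get, reverse=True)

# ...then form new groupings of 3 for the next battle.
matchups = [ranked[i:i + 3] for i in range(0, len(ranked), 3)]
print(matchups)  # [['wA', 'wB', 'wC'], ['wD', 'wE', 'wF']]
```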
And if you’d like to see the math-filled version, keep reading!
Originally posted by HabibLoew (Source)
Far more details regarding the ratings calculation for the mathematically inclined:

WvW world ratings are calculated using the Glicko 2 rating system. Full details regarding Glicko 2 are available at http://www.glicko.net/glicko.html. How an algorithm is applied is often nearly as important as the algorithm itself, so here are the details of exactly how we use Glicko 2 in the context of each 3-world matchup. My explanation of these details assumes basic familiarity with the Glicko 2 algorithm.
Assume the worlds are wA, wB, and wC. In order to handle a 3-way battle we treat each match as having two battles for each world. So when calculating the new rating for wA the two battles are wA vs. wB and wA vs. wC. Likewise for wB and wC. Naturally, we do all the calculations before updating any of the ratings. Note that this usage is supported by Glicko 2.
We use a Tau of 0.6 and a k of 1.0.
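Expressed as code, the pairwise treatment might look something like this sketch. The `worlds` list and the way the constants are stored are illustrative assumptions; only the values Tau = 0.6 and k = 1.0 and the two-battles-per-world pairing come from the post.

```python
from itertools import permutations

# Constants quoted above; exactly how k enters the calculation isn't
# spelled out in the post, so it is only recorded here.
TAU = 0.6
K = 1.0

# The three worlds in one matchup (placeholder names).
worlds = ["wA", "wB", "wC"]

# Each world is treated as having fought two battles, one against each
# of the other worlds, for six (world, opponent) pairs per matchup.
battles = list(permutations(worlds, 2))
print(battles)
# [('wA', 'wB'), ('wA', 'wC'), ('wB', 'wA'),
#  ('wB', 'wC'), ('wC', 'wA'), ('wC', 'wB')]
```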
In standard Glicko 2, scores are represented as 0.0 for a loss, 0.5 for a draw, and 1.0 for a win. We use a slightly modified version that takes into account score differential. I’ll explain how using the wA vs. wB battle as an example. To calculate the Glicko 2 score for wA in the wA vs. wB matchup, we do the following:
wAPercent = wAScore / (wAScore + wBScore)
wAGlickoScore = (sin((wAPercent - 0.5) * Pi) + 1) * 0.5

where wAScore and wBScore are the raw scores from the end of the match.
That last transform is easiest to visualize as a graph.
We perform the same score calculation for each world and then plug those results into Glicko 2. This means that ratings change over time as a result of battle outcomes and that the rating for a given server reflects the history of that server’s performance.
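Here is that score calculation as a small, self-contained Python function. The raw end-of-match scores used below are made up; only the two formulas themselves come from HabibLoew's post.

```python
import math

def glicko_score(own_score: float, opponent_score: float) -> float:
    """Map two raw end-of-match scores to a Glicko 2 score in [0.0, 1.0]
    using the transform quoted above."""
    percent = own_score / (own_score + opponent_score)
    return (math.sin((percent - 0.5) * math.pi) + 1) * 0.5

# Hypothetical raw scores for the wA vs. wB battle (the post gives no sample data).
wAScore, wBScore = 210_000, 140_000

print(round(glicko_score(wAScore, wBScore), 3))  # 0.655 for wA
print(round(glicko_score(wBScore, wAScore), 3))  # 0.345 for wB (the two always sum to 1.0)
```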
Later in the thread, this update was also posted:
Originally posted by HabibLoew (Source)
Just a quick clarification. The formulas
wAPercent = wAScore / (wAScore + wBScore)
wAGlickoScore = (sin((wAPercent - 0.5) * Pi) + 1) * 0.5

make no assumptions about winners or losers. The first simply calculates a relative score percentage between the two teams involved. This is a number between 0.0 and 1.0. The second transforms the relative percentage into a Glicko 2 score using a sine wave, which has the effect of making large score differentials have a less than linear impact on the Glicko 2 score (see the linked graph). The result is still in the range 0.0 to 1.0. Effectively this means that as score differentials get larger and larger, the actual Glicko 2 score for a team approaches 0.0 or 1.0 more and more slowly.
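A quick numeric table makes that "less than linear" effect easy to see. This is our own illustration of the quoted transform, not ArenaNet code:

```python
import math

# The quoted transform, restated so this snippet stands on its own.
to_glicko = lambda percent: (math.sin((percent - 0.5) * math.pi) + 1) * 0.5

# Each additional 10% of relative score moves the Glicko 2 score by less
# and less as the differential grows toward a blowout.
for p in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    print(f"{p:.1f} -> {to_glicko(p):.3f}")
# 0.5 -> 0.500
# 0.6 -> 0.655
# 0.7 -> 0.794
# 0.8 -> 0.905
# 0.9 -> 0.976
# 1.0 -> 1.000
```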
We perform these operations pairwise on all the teams, so we are calculating 6 matches (two for each team), which fits Glicko 2’s one-way nature. This is because the rating changes are asymmetric.
Because the ratings of both teams matter when calculating the result of a match (a lower-rated team beating a higher-rated team results in more change than a higher-rated team beating a lower-rated team), if two teams are very close in score then there is generally very little change in their ratings. Again, I encourage you to read the Glicko 2 website for a more thorough explanation of that part of the process.
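For the curious, the sketch below uses the expected-score formula from the Glicko 2 paper (linked above) to illustrate that asymmetry. The ratings and rating deviations are made-up examples, and this shows only the per-battle "surprise" (score minus expectation), not ArenaNet's full rating update.

```python
import math

def g(phi: float) -> float:
    """Glicko 2 weighting factor for an opponent's rating deviation."""
    return 1.0 / math.sqrt(1.0 + 3.0 * phi * phi / math.pi ** 2)

def expected_score(rating: float, opp_rating: float, opp_rd: float) -> float:
    """Expected Glicko 2 score against an opponent, on the 0.0-1.0 scale."""
    mu = (rating - 1500.0) / 173.7178          # Glicko 2 internal scale
    opp_mu = (opp_rating - 1500.0) / 173.7178
    opp_phi = opp_rd / 173.7178
    return 1.0 / (1.0 + math.exp(-g(opp_phi) * (mu - opp_mu)))

# Upset: a 1500-rated world takes essentially all the points (score 1.0)
# from an 1800-rated world.
print(round(1.0 - expected_score(1500.0, 1800.0, 100.0), 2))  # ~0.84

# Expected result: the 1800-rated world does the same to the 1500 world.
print(round(1.0 - expected_score(1800.0, 1500.0, 100.0), 2))  # ~0.16

# Evenly rated worlds that split the points evenly (score 0.5) barely move.
print(round(0.5 - expected_score(1650.0, 1650.0, 100.0), 2))  # 0.0
```

The larger the gap between the actual score and the expected score, the bigger the rating change, which is why an upset moves the ratings far more than an expected win.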