I started thinking about this problem in a specific debate context (with a specific application in mind), before realizing that the application isn't so useful after all. Still, the math is quite interesting.
Here's the original question that got me thinking: is there a way to assign weighted wins -- round 1 wins count for so much, round 2 wins count for something different, and so on -- such that a tournament can produce a ranking consistent with the actual results, meaning that if team A beats team B and they finish with the same win-loss record, A must be ranked above B? (Thanks to Steve Gray for coming up with the original idea to use weighted wins this way.) The point values for each round must be decreasing for this to work. If team A beats team B in round 1 for 1 point, then all subsequent point values must be smaller, or else B would have the chance to tie or even surpass A with an equal number of wins (of course, B should surpass A if it wins more total rounds than A). I tested out a specific set of weighted win points that meets this condition: {1, 0.9, 0.89, 0.889, 0.8889, 0.88889}. I'll write more on why this particular set works later in this post. The results of the test follow:
Pick any point where you might compare two teams, and you can see that the higher-ranked team is always the one that won the earlier round(s). As you can see from the bottom row, it is possible to distinguish every one of the 64 possible pathways (2^6) that a team could take through the tournament (e.g., WLLWWL), and the results line up nicely in order -- that is, an earlier win always ranks a team higher (e.g., WWLWLL is ranked higher than WLLWWL). The earlier win has to rank higher, because that is what makes it impossible for a team to be ranked below someone they beat. NB: this only works if everyone debates within brackets. The bracket assumption fails for pull-up rounds, and the system would fail utterly for round robins.
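The full results table isn't reproduced here, but the claims are easy to verify by brute force. Here's a short Python sketch (my own check, not part of the original test) that scores all 64 pathways with the weights above and confirms the ordering properties:

```python
from itertools import product

# Weighted win values from the post: round 1 is worth the most,
# and each later round is worth slightly less.
WEIGHTS = [1, 0.9, 0.89, 0.889, 0.8889, 0.88889]

def score(path):
    """Total points for a win/loss pathway like 'WLLWWL'."""
    return sum(w for w, r in zip(WEIGHTS, path) if r == 'W')

paths = [''.join(p) for p in product('WL', repeat=6)]
scores = {p: score(p) for p in paths}

# 1) All 64 pathways get distinct totals.
assert len(set(scores.values())) == 64

# 2) Among pathways with the same number of wins, an earlier win always
#    ranks higher: score order matches left-to-right order with W first.
for wins in range(7):
    group = sorted((p for p in paths if p.count('W') == wins),
                   key=lambda p: -scores[p])
    assert group == sorted(group, key=lambda p: [r == 'L' for r in p])

# 3) More total wins always outrank fewer, no matter which rounds.
for k in range(6):
    best_k = sum(WEIGHTS[:k])            # highest-scoring k-win pathway
    worst_k1 = sum(WEIGHTS[-(k + 1):])   # lowest-scoring (k+1)-win pathway
    assert worst_k1 > best_k
```

All three assertions pass for this set, which is exactly what the chart shows: every pathway is distinguishable, and the ranking never places a team below someone they beat (within brackets).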
However, it is not the pull-up problem that scuppers this system (I think it's somewhat solvable for one-win pull-up rounds with half-points). The problem is that this system produces a ranking that is consistent with the actual win-loss results; it does not necessarily produce the most desirable ranking. True, teams are never ranked below someone they beat, but this system overdoes it: teams are ranked by the order in which they won rounds. What about an excellent 4-2 team with great speaker points who loses the first two random rounds to the top 4-2 teams? To the good, this excellent 4-2 team will be ranked below the top two 4-2 teams (because LLWWWW is worth fewer points than, for example, WLWWWL and LWWWLW); to the excess, this team will be ranked below EVERY 4-2 team who happened to win either of the first two rounds. It's throwing the baby out with the bath water. Furthermore, it makes no sense to use this kind of weighted wins as a third or fourth tie-breaker. If speaker points are the first tie-breaker, then it becomes possible to rank a team below someone they have beaten. It's an all-or-nothing solution, and no one would think that the rankings it produces are good.
The math behind this problem turns out to be more interesting than the originally intended application. Some patterns emerge in any set of numbers that works for weighted wins. In the set of numbers {a, b, c, d, e, f}, the terms must be decreasing, so a > b > c > d > e > f is the first condition. However, not just any set of decreasing terms will work. For example, if the weighted win points were {1, 0.9, 0.8, 0.7, 0.6, 0.5}, the system breaks down: for two 2-0 teams, a round 1 win plus a round 6 win totals only 1.5 points, yet it should beat a round 2 win plus a round 3 win, which totals 1.7 points. Therefore, the second, trickier condition is that the decreases must themselves decrease: while the first differences are negative (a decreasing set), the second differences must be positive (upward concavity). And the decreases must shrink quickly, because several later gaps can team up against a single earlier gap; each gap has to be large enough to outweigh any combination of later gaps that can appear on the other side of a comparison. Going back to my original set of numbers {1, 0.9, 0.89, 0.889, 0.8889, 0.88889}, the terms are decreasing, meeting the first condition, and each gap in {0.1, 0.01, 0.001, 0.0001, 0.00001} dwarfs the sum of all the gaps after it, meeting the second condition.
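The failing and passing comparisons above can be written out directly. A quick Python check (my own sketch of the arithmetic in this paragraph):

```python
# The evenly spaced set from the example above: the gaps are constant,
# so a later pair of wins can outscore an earlier one.
even = [1, 0.9, 0.8, 0.7, 0.6, 0.5]
a, b, c, d, e, f = even
assert a + f < b + c          # 1.5 < 1.7: rounds 1+6 lose to rounds 2+3

# The post's set shrinks each gap tenfold, so the earlier win dominates.
mine = [1, 0.9, 0.89, 0.889, 0.8889, 0.88889]
a, b, c, d, e, f = mine
assert a + f > b + c          # 1.88889 > 1.79
assert a + e + f > b + c + d  # 2.77779 > 2.679
```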
Looking at the chart, every unmarked place is one where the inequality holds for any set of numbers that meets only the first condition. For example, on the line of round 3, as long as a > b > c, then a+b > a+c > b+c; it's analytically true. Every marked place is one where a set of numbers meeting only the first condition might fail; the inequality is guaranteed only for sets meeting both conditions. For example, on the line of round 4, a+d must be greater than b+c only if the second condition is also met. I made up a term for a set that meets the conditions of this problem: an "inequality ordered set." The definition would be: a set of numbers such that, for any two equally sized subsets that share no terms, the subset containing the single largest term always has the largest sum. For example, a+f > b+c and a+e+f > b+c+d.
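For small sets, the definition can be checked exhaustively. A Python sketch (the function name `is_inequality_ordered` is just my label for this check):

```python
from itertools import combinations

def is_inequality_ordered(values):
    """Brute-force the definition: for any two equally sized subsets
    that share no terms, the subset containing the single largest term
    must have the larger sum."""
    n = len(values)
    for k in range(1, n // 2 + 1):
        for s in combinations(values, k):
            for t in combinations(values, k):
                if set(s) & set(t):
                    continue  # subsets must not share any terms
                if max(s) > max(t) and sum(s) <= sum(t):
                    return False
    return True

# The post's weighted-win set passes; the evenly spaced set fails.
assert is_inequality_ordered([1, 0.9, 0.89, 0.889, 0.8889, 0.88889])
assert not is_inequality_ordered([1, 0.9, 0.8, 0.7, 0.6, 0.5])
```

The brute force confirms the chart: with both conditions met, every marked comparison goes the right way.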
I wondered whether it is possible to create an infinitely long "inequality ordered set." Here are two possible ways to create such an infinite set: {1, (1/x), (1/x^2), ...} and {1, 1-(1/x), 1-(1/x)-(1/x^2), ...}. For the first kind, the smallest x that still works is about 2. For the second kind, the smallest x that still works is about 1.25. I'm working on writing proofs of these.
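As a numeric probe of the first construction (my own sketch, which only spot-checks one of the binding comparisons from the chart on a six-term prefix, not a proof about the infinite sets):

```python
def geometric(x, n=6):
    """First n terms of the first construction: {1, 1/x, 1/x^2, ...}."""
    return [x ** -i for i in range(n)]

# One of the binding comparisons from the chart: a+e+f versus b+c+d.
a, b, c, d, e, f = geometric(2.5)
assert a + e + f > b + c + d   # decay fast enough: the earlier win dominates

a, b, c, d, e, f = geometric(1.2)
assert a + e + f < b + c + d   # decay too slow: the ordering breaks
```

Sweeping x between these two values would bracket the threshold numerically, which is one way to sanity-check a claimed cutoff before proving it.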