Strength of Schedule in the 2016 AFL Fixture

It’s AFL fixture time again, and that means plenty of attempts to evaluate the unbalanced AFL fixture and work out who’s got the soft ride and who’s being screwed by a tough draw. This post is our analysis based on a number of Strength of Schedule calculations.

If you want the short version before we dive in:

  • North and Adelaide have it the toughest
  • Carlton, Essendon and Geelong have it the easiest
  • The fixture successfully gives the bottom 6 teams the strongest weighting on the easy side of a fair schedule, but GWS have an anomalously tough run for a team that finished 12th
  • Comparing 2015 to 2016, Port Adelaide, Geelong and Essendon should expect a bump of at least 1 expected win next year from just the fixture
  • Conversely, GWS should expect -0.9 wins and Hawthorn and Adelaide -0.8 wins compared to 2015 based on the fixture.

And the caveats:

  • These are all based on strength of opponent alone
  • Strength of Schedule makes a difference of about half an expected win between the hardest and the average fixture, and less than 1 win between the hardest and easiest draws
  • These assessments will turn out to be interestingly wrong – last year for example Hawthorn projected to have the 3rd hardest draw and it turned out to be the 3rd easiest

What is Strength of Schedule?

The Strength of Schedule measure is often used in US sports, particularly the NFL and college sports, to compare records where schedules are not perfectly even. Take NCAA Division I College Football, where each team plays around 12 games but the division contains 128 teams. To work out which teams are actually the best for the purposes of Bowl games (or the new College Football Playoff), the selection committee leans on strength of schedule as a tiebreaker for teams with the same, or similar, records. In the world of College Football, this can be worth millions of dollars.

The most basic way of determining SOS is to average out the winning rates of all teams that each team faces. This is the method that the NFL and the NCAA use in their formulations. Some of the more advanced statistical methods account for other factors, such as Home Ground Advantage and extra weighting towards stronger teams.

We’re sticking to the most basic formula, but will also use a couple of different inputs as well as straight win-loss records. All data is based on results from the 2015 AFL Season. We’ll first assess the fixture as a tool of competitive equalisation, and then go into discussing some different measures of fixture strength.

Does the fixture achieve competitive equalisation?

Strength of Schedule on win-loss is a straightforward measure: the average 2015 winning percentage of each team’s 2016 opponents. A team with a 50% winning record that played every opponent the same number of times would have a Strength of Schedule of exactly 0.500. Strength of Schedule tells us how far each club’s draw deviates from that ideal.

Of course, teams don’t play themselves, and that alone influences draw difficulty. If every team played each other once in a 17-week season, Fremantle’s opponents would have averaged 8.2 wins in 2015, while Carlton’s would have averaged 8.8. That translates to a fair draw of 0.484 for Fremantle and 0.519 for Carlton.
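For the code-inclined, here’s a minimal sketch of that calculation in Python. The team names and win rates are illustrative stand-ins, not the actual 2015 figures we used:

# Minimal sketch of win-loss Strength of Schedule.
# The win rates below are illustrative placeholders, not our actual inputs.
win_rate_2015 = {
    "Fremantle": 0.773,
    "Hawthorn": 0.727,
    "Carlton": 0.182,
    # ... the other 15 teams ...
}

def strength_of_schedule(opponents, win_rates):
    """Average win rate of a team's scheduled opponents.
    `opponents` has one entry per game, so double-up opponents appear twice."""
    return sum(win_rates[opp] for opp in opponents) / len(opponents)

def fair_baseline(team, win_rates):
    """The 'fair' draw for a team: the average win rate of the other
    17 sides, since a team never plays itself."""
    others = [rate for name, rate in win_rates.items() if name != team]
    return sum(others) / len(others)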

Here’s a chart looking at strength of schedule for each side based on the win-loss record of their opponents.

[Chart: Strength of Schedule assessment of fixture weighting]

The AFL’s fixturing goal is to give the sides who finished top 6 last year (post-finals) the hardest set of matchups, and the bottom 6 the easiest. We can measure the impact of each side’s double-up matches by comparing their actual Strength of Schedule with what a “fair” one would be.

Does the weighted AFL fixture “equalise” things competitively? No – it overcompensates relative to a balanced draw, and that is the actual goal. The fixture does not merely seek to handicap teams to the same level of difficulty; it seeks to promote “competitiveness” by reducing the worst sides’ exposure to difficult opponents.

We can see that the weighted fixture does effectively achieve its goal at the bottom end. The worst sides, which would otherwise have the hardest “fair” fixtures, get the biggest downward shift in draw difficulty via their double-ups. This leaves them with the weakest schedules outright, not just average ones.

At the top the outcomes are more mixed – GWS’s double-up opponents drag them further above their “fair” draw difficulty than top-six sides Fremantle and Sydney are dragged above theirs. Richmond, likewise, have a more strongly weighted schedule than Sydney.

Note that the impact of not playing one’s own team can be seen clearly here. Sydney have an easier overall Strength of Schedule than Collingwood or the Bulldogs, but those sides’ harder draws come simply from playing Sydney rather than playing Collingwood or the Bulldogs.

Other measures of strength

The above analysis is based on actual win-loss outcomes as the most basic way to measure Strength of Schedule. Let’s now move into a couple of other measures which try to account for the impact of luck, in the form of close games and goal-kicking accuracy.

While calculating wins via actual results is common sense, it may not be the fairest way to account for the abilities of each team. For example, Hawthorn in 2015 lost a lot of close games to finish third on the ladder, but few would argue that they weren’t the dominant team.

To compensate for the vagaries of close results, we’ve stolen an idea from the father of sabermetrics, Bill James, and tried to apply it to the AFL (to read more about Pythagorean expectation and the AFL, check out the excellent Matter of Stats on the subject).

James devised his formula to work out how teams “should” have performed, as opposed to how they actually did. By doing so, you can estimate how “lucky” a team was over the course of a season.

James did this by looking at runs scored against runs conceded for each team. In an AFL context, looking at “points scored” against “points conceded” for each side is the most appropriate approach. The logic is that the more you outscore opponents by, the more often you should win.

A basic rundown of the Pythagorean Expectation is available on Baseball Reference, or via MAFL. The formula we have used is as follows:

Win % = (Points For ^ 3.87) / ((Points For ^ 3.87) + (Points Against ^ 3.87))

We’ve borrowed our exponent from MAFL, who derived this number (3.87) in an earlier piece on the subject. We’ve used the classic James formula, rather than the later models that MAFL has relied on.
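The formula translates directly into code; here’s a sketch, with season totals that are made up purely for illustration:

def pythagorean_win_rate(points_for, points_against, exponent=3.87):
    """Classic James Pythagorean expectation, using the 3.87 exponent
    MAFL derived for AFL scoring."""
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)

# Made-up totals: a side outscoring its opponents 2200 to 1900
# across a season "should" win roughly 64% of its games.
print(pythagorean_win_rate(2200, 1900))  # ~0.64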

Thirdly, we looked at goal-kicking accuracy. Teams rarely sustain the same accuracy, for and against, over a series of years; if a team is unusually accurate or inaccurate one season, it tends to balance out a little the next. Our third measure tries to account for that.

To account for accuracy luck more comprehensively we’d need access to something like Champion Data’s expected accuracy, which compares goalkicking from each position on the ground with the expected accuracy from that position. We don’t have that, so we just use basic scoring shot data.

The formula is:

Win % = (Scoring Shots For ^ 3.87) / ((Scoring Shots For ^ 3.87) + (Scoring Shots Against ^ 3.87))
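The function sketched above applies unchanged; only the inputs swap from points to scoring shots (the totals here are again made up):

# Reusing pythagorean_win_rate with scoring shot counts instead of points.
print(pythagorean_win_rate(560, 510))  # ~0.59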

So who has the hard and easy draws?

Pulling the above together, we get the following Strength of Schedule across actual win-loss outcomes, and for two sets of expected win-loss outcomes based on scoring differentials and accuracy:

[Chart: Strength of Schedule of 2016 fixture]

Based on opponents, North Melbourne and Adelaide have the most difficult draws – Adelaide have it harder on actual win-loss record, but North have it harder on Pythagorean expected outcomes. The difference basically reflects that Hawthorn, on scoring dominance and accuracy luck, rate more as a 19-win team than a 16-win team, and North play them twice.

GWS also look like they have an anomalously difficult run for a team which finished 12th, as do Melbourne, especially based on the Pythagorean measures which rate their double-up opponent Hawthorn highly.

At the other end, St Kilda and Essendon get the easiest draws while Geelong and Port Adelaide, who finished 9th and 10th, shape as bolters in 2016.

Sydney and West Coast both sit in the bottom half of Strength of Schedule with draws of almost exactly 0.500, reflecting (as we saw above) that they’re good teams who don’t play themselves, and that their moderate burden from the weighted fixture doesn’t outweigh that.

What does this mean in terms of actual games?

Let’s just take a step back here. Strength of Schedule matters; it changes the expected outcomes for each team. But a common mistake is to treat each game as either a “win” or a “loss” rather than as a probability of each team winning, and thus to overestimate the impact of the fixture. If a team plays North Melbourne they might have a 30% chance of winning. If they play Brisbane they might have a 70% chance. The difference between those two opponents is not 1 win just because one opponent is easier; it’s 0.4 expected wins. Strength of Schedule doesn’t decide premierships, and it rarely even shifts ladder positions all that much.
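As a toy version of that arithmetic, using the hypothetical probabilities from the paragraph above:

# Expected wins are summed win probabilities, so swapping one opponent
# shifts the total by the gap in win probability, not by a full win.
p_beat_north = 0.30     # hypothetical chance against North Melbourne
p_beat_brisbane = 0.70  # hypothetical chance against Brisbane
print(round(p_beat_brisbane - p_beat_north, 1))  # 0.4 expected wins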

To illustrate that, we can add up the above Strength of Schedule measures into actual advantage over the course of the 22-game season and see that the impacts don’t look so bad:

[Chart: Strength of Schedule expressed in season win advantage]

Effectively, compared to a perfectly fair draw, North Melbourne and Adelaide are between 0.4 and 0.6 wins behind, while St Kilda and Essendon are 0.3 or 0.4 wins ahead. The gap between the hardest and easiest draws, based on opponents, is about 1 win.

The hardest possible draw (something like Carlton playing last year’s top 5 twice) would be about a 1.5 win disadvantage. The easiest possible draw would be around a 1.7 win advantage. Collectively that’s a range of 3.2 expected wins for mathematically possible draws.

While those differences are significant, draws like that couldn’t occur under the current AFL fixture rules. Within the constraints of an 18-team, 22-week season, the weighted fixture is pretty successful at minimising Strength of Schedule differences.

How did 2015 turn out?

The other difficulty with weighting the fixture to compensate for an uneven draw is that expectations based on the previous year’s ladder don’t always pan out.

We ran this analysis last year, and projected Geelong to have the hardest draw, followed by Port Adelaide. That sort of happened, but Port’s draw went from tough to hilariously brutal thanks to the Bulldogs’ and Adelaide’s improvement. Port Adelaide’s double-up opponents ended up finishing 1st, 3rd, 4th, 6th and 7th on the ladder. Given the Crows are a fixed opponent and the limitations of the AFL’s weighting policy, only replacing the Bulldogs with West Coast or Richmond could have made their draw harder.

Melbourne’s draw also became unexpectedly tough (6th hardest) due to improvement from the Bulldogs, GWS and, to a lesser extent, St Kilda.

On the other hand, Hawthorn’s draw, projected as the third toughest, turned out to be one of the easiest draws as Essendon, Carlton and Port Adelaide fell off various cliffs while Geelong also regressed.

To finish then, bearing in mind that all this analysis will turn out to be interestingly wrong, here’s a comparison of the actual 2015 Strength of Schedule and the projected 2016 Strength of Schedule to see who gets the most benefit heading into next year:

[Chart: Strength of Schedule impacts, 2015 and 2016]

From this we can expect Geelong and Port Adelaide to start confident of an improved 2016, and since they both just missed finals (finishing 9th and 10th), they will expect to push for a finals berth. Essendon and Gold Coast, too, look set to rebound a little.

Hawthorn theoretically present as likely to slide, as do Adelaide, North Melbourne and GWS. Remember that this is just the impact on expected wins from the strength of opponents, and plenty else will change between now and the end of next year. We, for one, won’t back Hawthorn to slide just because their fixture is harder.
