The WIAA Girls Basketball Tournament came to a close on Saturday, crowning state champions in each of the WIAA's five divisions. This post serves as a before-and-after, comparing what the models predicted before the tournament to what actually transpired on the hardwood.
First up, the state championship teams. The chart below shows the pre-tournament probability that each of the teams would win the state title. In many cases, a low probability reflects how difficult a team's tournament run was expected to be rather than expectations of poor performance. Cuba City earns the title of the least expected champion with only a 1.0% chance of winning.

Diving deeper, the chart below shows the least likely tournament runs. To qualify for this list, a team needed to win at least a regional championship and have one of the two least likely tournament runs in its division. As with the list above, these probabilities reflect the pre-tournament rating of the team, the expected difficulty of its path, and the length of its tournament run. As expected, there are several state champions on this list. However, the least likely tournament run belongs to sectional runner-up Eau Claire Immanuel Lutheran, which had only a 0.5% chance of becoming the sectional runner-up (or better) before the tournament began.
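A run probability like the 0.5% quoted above can be sketched as the product of per-game win probabilities along a team's bracket path, assuming each game is independent. The numbers below are illustrative placeholders, not the model's actual inputs:

```python
import math

# Hypothetical per-game win probabilities along one team's bracket path
# (regional semifinal, regional final, sectional semifinal, sectional final).
# Illustrative numbers only -- not the model's actual figures.
game_win_probs = [0.70, 0.45, 0.30, 0.12]

def run_probability(win_probs):
    """Probability of winning every game in the list, assuming independence."""
    return math.prod(win_probs)

# Becoming sectional runner-up (or better) means winning the first three
# games, so its probability is the product of those three win probabilities.
p_runner_up_or_better = run_probability(game_win_probs[:3])
print(f"{p_runner_up_or_better:.4f}")  # 0.70 * 0.45 * 0.30 = 0.0945
```

Longer runs multiply in more factors below 1, which is why deep runs by modestly rated teams end up with such small pre-tournament probabilities.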
Another view of similar data is shown below. This chart shows the 10 largest rating increases for teams during the tournament. The rating reflects the same unexpected wins as the prior chart but also incorporates margin of victory, a main driver of the model's ratings. Unsurprisingly, Cuba City received the largest rating increase.
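The post does not describe the model's internals, but a common way to let margin of victory drive rating changes is an Elo-style update with a diminishing-returns multiplier on the point margin (similar in spirit to FiveThirtyEight's NBA Elo). The sketch below is a generic illustration of that idea, not the actual model:

```python
import math

def elo_mov_update(rating, opp_rating, point_margin, k=20.0):
    """One Elo-style rating update where margin of victory scales the K-factor.
    Generic sketch only; the rating system described in the post may differ."""
    # Standard Elo expected score for the rated team.
    expected = 1.0 / (1.0 + 10 ** ((opp_rating - rating) / 400.0))
    actual = 1.0 if point_margin > 0 else 0.0
    # log(|margin| + 1): blowouts count more than squeakers, but not linearly.
    mov_multiplier = math.log(abs(point_margin) + 1)
    return rating + k * mov_multiplier * (actual - expected)

# An underdog winning big jumps sharply; the same win by 2 moves less.
big_win = elo_mov_update(1450, 1600, point_margin=18)
narrow_win = elo_mov_update(1450, 1600, point_margin=2)
```

Under this kind of scheme, a team that both upsets favorites and wins comfortably, as Cuba City did, compounds both effects into the largest rating increase.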
No review of tournament surprises would be complete without looking at the individual games. The following exhibit shows the least likely upsets that occurred during the tournament. Kettle Moraine takes the top spot here: their defeat of Oregon had a pre-game probability of just 9.1%.
And finally, it is time to review how the model performed during the tournament. The following exhibit only uses games between teams that were seeded in the same group (through sectional finals for division 1 and through sectional semifinals for divisions 2-5). The higher seed won only 81.52% of these matchups. The model picked the winner correctly in 86.02% of games, better not only than the seedings but also than the model's own expectations (85.13%). The main driver of the model's outperformance was its success picking upsets: in the 47 games where the model disagreed with the seedings, it was correct 33 times.
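Two of the figures above can be sanity-checked with simple arithmetic. The model's "own expectations" for pick accuracy are the average, over all games, of the probability it assigned to its favored side, max(p, 1 - p); and the upset-pick record of 33-for-47 works out to about 70%. The win probabilities below are made-up placeholders just to show the calculation:

```python
def expected_accuracy(win_probs):
    """The accuracy a model expects of its own picks: for each game, the
    favored side wins with probability max(p, 1 - p); average over games."""
    return sum(max(p, 1.0 - p) for p in win_probs) / len(win_probs)

# Hypothetical per-game probabilities, not the tournament's actual slate.
sample_probs = [0.90, 0.65, 0.55, 0.80, 0.30]
print(f"{expected_accuracy(sample_probs):.1%}")  # mean of 0.90, 0.65, 0.55, 0.80, 0.70

# Upset-pick record quoted in the post: 33 correct out of 47 disagreements.
upset_accuracy = 33 / 47
print(f"{upset_accuracy:.1%}")  # 70.2%
```

Beating both the seed baseline and its own expected accuracy suggests the model's probabilities were, if anything, slightly underconfident on this slate; winning 70% of its disagreements with the seedings is where that edge came from.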