
CRA poll pretty much nailed the results in New Brunswick. But was it just luck?


The NB election was two days ago, on Monday, September 22nd. The results were close for most of the night, in no small part due to a glitch in the system that prevented results from being reported for a couple of hours.

In the end, the Liberals regained power by winning a majority of the seats (27 out of 49) with 42.7% of the vote. The incumbent Conservatives got only 34.7% and 21 seats. The NDP won no seats with 13% of the vote, while the Green Party managed to get one seat with only 6.6% (if that doesn't scream the need for electoral reform... but let's move on).

I didn't cover this election simply because I did not have time. The lack of polls didn't help either. Indeed, we were left with pretty much only two polls during the last week: one by Forum done the day before the election and one by CRA conducted a little earlier. Forum had the two main parties tied at 40%, while CRA had the Liberals well ahead at 45% versus 36%. At the end of the day, CRA was of course a lot closer to the actual outcome. Forum actually didn't have either of the two main parties within the margins of error (although it was very close for the Liberals).

The CRA poll had an incredibly small number of respondents: just 333! It was at least conducted by phone (not IVR, actual phone calls). Still, this is a very small sample, and you have to wonder if CRA just got lucky to be so close to the actual result.

Now, let me be clear about what I mean by lucky. When you do random sampling, you can be lucky or unlucky in the sense that you can get a good or a "bad" sample. When we report the margins of error and say they're plus or minus 3%, 19 times out of 20, this is exactly what we mean: if you were to take 100 samples, the true results (the ones in the population) would be within the MoE in 95 of those 100 samples. So you have a 5% chance of drawing a sample where your results will be a lot further from the true, actual ones.

With only 333 respondents, the margins of error are very large. For the Liberals, who were polled at 45%, that means a margin of error of plus or minus 5.3 points. Not a very precise estimate, to say the least.
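For those curious, that figure comes from the standard formula: the 95% margin of error for a proportion p with n respondents is roughly 1.96 × √(p(1−p)/n). Here's a quick Python sketch that computes it, and also checks the "19 times out of 20" interpretation from the previous paragraph by repeated sampling (the seed and the sampling check are my own illustration, nothing CRA did):

```python
import math
import random

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# CRA poll: Liberals at 45% with only 333 respondents
print(round(100 * margin_of_error(0.45, 333), 1))  # ~5.3 points

# Sanity check of "19 times out of 20": draw many samples of 333 from a
# population at 42.7% and count how often the true share falls inside
# the sample estimate +/- its margin of error.
random.seed(1)
true_p, n, covered, trials = 0.427, 333, 0, 10_000
for _ in range(trials):
    est = sum(random.random() < true_p for _ in range(n)) / n
    covered += abs(est - true_p) <= margin_of_error(est, n)
print(covered / trials)  # should land close to 0.95
```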

So, let's assume that the CRA methodology had no bias and that, if repeated enough times, it would on average give us the true results of the election. The questions: what were the chances that a given poll would be so close to the true outcome? How likely was it that it would estimate the Liberals only 2.3 points off and the Conservatives only 1.3 points off? I ran 5000 simulations, which is equivalent to taking 5000 random samples of 333 respondents from a population where the actual proportions were the ones from the election (so 42.7% for the Liberals and 34.7% for the Conservatives). Here are some of the results (a code sketch of the simulation follows below):

- There was about a 35% chance that the poll would estimate the Conservatives within 1.3 points of the true result.

- A 58% chance of estimating the Liberals' vote share within 2.3 points of the true result.

- Together (being off by 2.3 points or less for the Liberals and by 1.3 points or less for the PC): only 25%.

The last one is interesting but too specific. It's too specific because the CRA poll would have been just as good if it had the PC off by 2 points but the Liberals off by only 1.3, for instance. So let's look at the odds of getting both parties within 2 points of the actual results. In this case, the chances are around 35%. Within 3 points? Around 60%.
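For anyone who wants to reproduce these numbers, here is a minimal sketch of the kind of simulation described above: draw 5,000 samples of 333 respondents from a population voting like the actual election result and count how often the sampled shares land within a given distance of the truth. This is my own quick reconstruction (the 3% "others" bucket, the random seed and the NumPy implementation are assumptions on my part), so the exact percentages will wobble a bit from run to run.

```python
import numpy as np

rng = np.random.default_rng(2014)

N_SIMS, N_RESP = 5000, 333
# Election result: Liberals, PC, NDP, Green, plus a residual "others" bucket
TRUE = np.array([0.427, 0.347, 0.130, 0.066, 0.030])

# 5000 random samples of 333 voters drawn from the true vote shares
shares = rng.multinomial(N_RESP, TRUE, size=N_SIMS) / N_RESP
lib_err = np.abs(shares[:, 0] - TRUE[0]) * 100  # absolute error in points
pc_err = np.abs(shares[:, 1] - TRUE[1]) * 100

print("PC within 1.3 pts:       ", (pc_err <= 1.3).mean())
print("Liberals within 2.3 pts: ", (lib_err <= 2.3).mean())
print("Both (2.3 Lib / 1.3 PC): ", ((lib_err <= 2.3) & (pc_err <= 1.3)).mean())
print("Both within 2 pts:       ", ((lib_err <= 2.0) & (pc_err <= 2.0)).mean())
print("Both within 3 pts:       ", ((lib_err <= 3.0) & (pc_err <= 3.0)).mean())
```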

So here you have it: with such a small sample, CRA was actually quite lucky to be that close to the actual results. I'm not implying that the CRA poll or methodology isn't sound. I'm simply talking about the "risk" you take when you sample so few people. Even with a perfect sampling method, you are still vulnerable to normal statistical variation. Of course, this post most likely exaggerates the problem because I do not account for the weights. Polling firms take a sample and, if it isn't perfectly and naturally representative of the population, they can use weights to correct (some of) this issue. Therefore, when I say there was only a 35% chance that CRA would get within 2 points of the actual results, this is most likely an underestimation. At the same time, let's not pretend that weighting can fix everything.

At the end of the day, congratulations to CRA for being by far the most accurate pollster in NB. If you could increase your sample size, however, that wouldn't hurt!

Even though they were quite off, a big thank you to Forum as well for actually polling at all! Say what you want about Forum, but this firm polls every single election, and multiple times. It's easy not to be wrong when you never poll (looking at you, Nanos!).