
Party Support in the 2015 Federal Election

As promised in my first post, this post uses the 2015 federal election to show how ThreeHundredThirtyEight.com will track support for political parties over time using the Dyad Ratio Algorithm, as calculated with Wcalc, James Stimson's program.

Essentially, the algorithm uses the differences between polls, along with their sample sizes, to estimate the varying level of support for issues or parties over time. When polls overlap in time, the algorithm can evaluate how they compare to one another. The multiple polls input into the program are then combined to create a trend line showing changes in support over time. One important note with this approach: for a poll series to contribute to the calculation, the polling firm needs to ask the question at least twice (and the more times the better).
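
As a rough illustration of the core idea (a deliberately simplified sketch, not Stimson's actual Wcalc implementation), each pair of readings of the same question implies a ratio of support between two dates, and chaining those ratios yields a relative trend. The poll readings below are hypothetical:

```python
# A simplified sketch of the dyad-ratio idea, not Wcalc itself.
# Each pair of readings of the same question implies a ratio of support
# between two dates; chaining the ratios gives a relative trend.
readings = [("2015-08-10", 33.0), ("2015-09-14", 31.0), ("2015-10-05", 26.0)]  # hypothetical series

trend = [1.0]  # support at each date, indexed to the first reading
for (_, earlier), (_, later) in zip(readings, readings[1:]):
    trend.append(trend[-1] * later / earlier)
print(trend)  # roughly [1.0, 0.94, 0.79] -- relative change over the campaign
```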

Before demonstrating the approach, a few other caveats should be covered. First, Wcalc offers a smoothing option, which is used in this analysis. Smoothing assumes that changes in opinion are generally gradual and that abrupt changes may be the result of sampling error. It has a particularly large impact at times when there are only a few polls, and typically less of an impact when many polls fall in a given time period.
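
To illustrate the intuition (using simple exponential smoothing, not Wcalc's actual routine), an abrupt poll-to-poll jump is damped toward the running estimate:

```python
# Simple exponential smoothing as an illustration of the smoothing
# intuition; this is not the routine Wcalc uses.
def smooth(series: list[float], alpha: float = 0.3) -> list[float]:
    smoothed = [series[0]]
    for value in series[1:]:
        # Each new reading moves the estimate only part of the way.
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

print(smooth([30.0, 31.0, 24.0, 30.5]))  # the outlying 24.0 is pulled toward the trend
```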

Secondly, since the Dyad Ratio Algorithm estimates support for each political party individually, the total support levels at any one point in time do not always add up to exactly 100%. When this occurs, the data has been reweighted so the total support across the parties always equals 100%.
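
The reweighting step is a simple rescaling; here is a minimal sketch (the party estimates below are hypothetical, not output from the model):

```python
# Rescale per-party estimates at one point in time so they sum to 100%.
def reweight(support: dict[str, float]) -> dict[str, float]:
    total = sum(support.values())
    return {party: 100 * value / total for party, value in support.items()}

# Hypothetical estimates that sum to 101.3 before reweighting.
estimates = {"LPC": 38.2, "CPC": 32.5, "NDP": 20.1, "GPC": 4.9, "BQ": 5.6}
print(reweight(estimates))  # each value scaled so the total is exactly 100
```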

When the Dyad Ratio Algorithm estimates each party's level of support during the 2015 federal election, the results can be graphed to show the changes over time. The resulting graph shows support for the NDP falling dramatically starting in late September, and support for the Liberal Party rising fairly steadily throughout the campaign.

The previous graph was created using data from the Wikipedia page Opinion polling in the Canadian federal election, 2015. That page also graphs the data, thanks to the work of Galneweinhaw. The Wikipedia graph is created using local regression, with confidence intervals at the 95% level shown in grey.

Opinion Polling during the 2015 Canadian Federal Election

The noticeable difference between the approaches is the additional smoothing in the Wikipedia graph. There, the NDP trend line is much clearer in its direction from the start of the campaign to the end. While this is a valuable approach for limiting the impact of outlier polls, the additional smoothing may be slower to pick up a genuine shift in support, instead treating a high or low poll as simply an outlier.

Indeed, both approaches have value when interpreting an election. ThreeHundredThirtyEight.com will use the Dyad Ratio Algorithm to provide observers with more information about the election. In addition, the Dyad Ratio Algorithm is especially valuable for estimating changes in public opinion over longer periods of time, where estimates must be built from significantly less polling data.

The next post will explore how the Dyad Ratio Algorithm performed at predicting the final vote totals for each political party in the 2015 election. If you would like to work with this dataset yourself, the raw data is available at http://doi.org/10.3886/E107121V1

Comparing the Regional Chair Survey to the Election Results

From October 16 to 17, 2018, ThreeHundredThirtyEight.com conducted a survey to assess support for the candidates in the Waterloo Region Chair race. The results showed Karen Redman in the lead, but with a large number of voters undecided. Ultimately, Karen Redman was successful in the election held from October 22 to 23, 2018. This post looks back to assess how accurate the survey was at predicting the election results.

In the original reporting of the race, we reported a margin of error of +/-4.25%. For simplicity's sake, a single value is typically shared when reporting a margin of error. However, the margin of error actually varies with the observed proportion: results close to 50% have higher margins of error than results close to 10%. Forum Research breaks down the rough margins in a handy table by sample size and observed proportion. For example, according to Forum Research's table, with a sample size of 300 the margin of error varies between 3.4% (at a 10% or 90% proportion) and 5.7% (at a 50% proportion).
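
That table follows from the standard margin-of-error formula, MOE = z × √(p(1 − p)/n), with z = 1.96 at the 95% level; a quick sketch reproduces the two figures above:

```python
import math

# Margin of error, in percentage points, for proportion p and sample size n
# at the 95% confidence level (z = 1.96).
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(0.10, 300), 1))  # 3.4 points at a 10% proportion
print(round(margin_of_error(0.50, 300), 1))  # 5.7 points at a 50% proportion
```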

In this post, a single margin of error per sample is reported, calculated at the 95% confidence level (i.e. the results are considered accurate 19 times out of 20). However, in the commentary assessing the accuracy of the results, the margins of error for the individual proportions were calculated using an online calculator.

The obvious place to start is the accuracy of the top-line results as reported on October 18, 2018. Here the results overall did quite well. Each candidate's result is within the margin of error except for Jan d'Ailly, who slightly outperformed: his margin of error was 2.8 percentage points, yet he received 9.7% of the vote, 3.0 percentage points above his 6.7% predicted support. The tracking error on this model was also quite good at 8.4 percentage points. The tracking error was calculated by subtracting each candidate's election result from their survey result and summing the absolute values of these differences.
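
A minimal sketch of that tracking-error calculation follows; apart from Jan d'Ailly's figures, which come from the post, the numbers are hypothetical since the full candidate breakdown is not restated here:

```python
# Tracking error: sum of absolute differences between survey and election
# results across candidates.
def tracking_error(survey: dict[str, float], election: dict[str, float]) -> float:
    return sum(abs(survey[name] - election[name]) for name in survey)

survey = {"Redman": 44.0, "d'Ailly": 6.7, "Others": 49.3}    # hypothetical apart from d'Ailly
election = {"Redman": 47.0, "d'Ailly": 9.7, "Others": 43.3}  # hypothetical apart from d'Ailly
print(tracking_error(survey, election))  # 3.0 + 3.0 + 6.0 = 12.0 points
```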

The reported results included leaning and decided voters. It is also possible to compare using decided voters only. Once again, all results except Jan d'Ailly's are within the margin of error. However, in this model the tracking error increases to 11.2 percentage points.

A model was also created to predict likely voters. This model does not perform as well: both Karen Redman and Jan d'Ailly fall outside the margin of error, and the tracking error increases to 15.3 percentage points. Interestingly, using only unlikely voters, all candidates' results are within the margin of error, although the small sample size for this group increases the margin of error. The tracking error amongst unlikely voters is 13.1 percentage points.

When the results of leaning and likely voters are broken down by city/township, they all fall within the margin of error. However, some of these sample sizes are very small, creating very large margins of error. It is worth noting that Karen Redman's Cambridge result was at the edge of the margin of error at the 95% level.

The tracking error was lowest in the townships at 2.7 percentage points, followed by Kitchener at 5.9 percentage points, then Waterloo at 9.3 percentage points, and then Cambridge at 18.5 percentage points.

One final comparison was made. The results reported publicly were weighted by age, gender, and city/township of residence. However, it is also possible to compare the unweighted survey results to the actual election results. This approach finds all of the results well within the margin of error, with a tracking error of 4.8 percentage points.
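
For readers curious what that weighting involves, here is a minimal sketch of cell weighting; the population and sample shares below are hypothetical, not the survey's actual weighting targets:

```python
# Weight = population share / sample share for each cell, so respondents
# from under-represented groups count for more. All shares hypothetical.
population_share = {"Kitchener": 0.45, "Cambridge": 0.25, "Waterloo": 0.20, "Townships": 0.10}
sample_share = {"Kitchener": 0.50, "Cambridge": 0.20, "Waterloo": 0.22, "Townships": 0.08}

weights = {group: population_share[group] / sample_share[group] for group in population_share}
print(weights)  # e.g. Cambridge respondents are weighted up (0.25 / 0.20 = 1.25)
```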

Overall, the survey was a fairly good predictor of the actual election results. Even the breakdown by city/township produced results that were a reasonable predictor. However, the likely voter model was a poor predictor of the election results, and it is fortunate that this model was not used. It is also interesting to note that weighting the results did not improve the predictive power of the survey.

Three Reasons Why the Regional Chair Poll May Not Predict the Election Results

Election polling results are interesting because we like to use them to predict an election. However, polls represent a snapshot in time, so extrapolating results to a future event may lead to faulty predictions. With respect to the Regional Chair race poll we released this week, the need for caution is even greater, as this was a single poll in a low-turnout election with a response rate of 0.2%. Three issues warrant particular consideration.

1) The Survey Could be a 1 in 20 Outcome

Public opinion surveys attempt to ascertain what the population believes about a topic by asking a small group of people. When reporting a single poll, a margin of error is typically given at the 95% confidence level, indicating a range of plus or minus a few percentage points within which the population's true belief falls. For example, in the poll for Regional Chair, 36.5% of respondents were undecided, with a range of plus or minus 4.3 percentage points. Statistically, this means the poll predicts that somewhere between 32.2% and 40.8% of voters were undecided, 19 times out of 20. This caveat of 19 times out of 20 is an important one: even if the poll is a perfectly random sample, 1 in 20 times the actual result is expected to fall outside the margin of error (i.e. 1 time in 20 the poll will simply be wrong).
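
A quick simulation makes the "1 in 20" caveat concrete. Assuming a true undecided share of 36.5% and a sample size of roughly 480 (an assumption consistent with a ±4.3-point margin, not a figure from the survey write-up), about 5% of random samples land outside the margin of error:

```python
import random

TRUE_P = 0.365   # assumed true undecided share
N = 480          # assumed sample size, roughly consistent with a +/-4.3 margin
MOE = 0.043
TRIALS = 10_000

misses = 0
for _ in range(TRIALS):
    # Draw one simulated poll of N respondents from the true population.
    sample_p = sum(random.random() < TRUE_P for _ in range(N)) / N
    if abs(sample_p - TRUE_P) > MOE:
        misses += 1
print(misses / TRIALS)  # roughly 0.05: about 1 poll in 20 misses
```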

The best defence against this problem is multiple polls on the same topic by multiple polling firms. A single poll showing a result may be an outlier; multiple polls all showing the same result are unlikely to be. Multiple polling firms tackling the same topic also decreases the likelihood that a bias is built into the sampling procedure (i.e. how people are selected to participate in the poll). Yet even when multiple polls show the same result, they may not be predictive of future events. The most recent presidential election in the United States provides a case study in the need for caution when extrapolating from polls to a future election.

2) The Undecided Voters

The survey revealed that 36.5% of voters had yet to make up their minds. These individuals were first asked who they would support if the election were held today, then asked if they were leaning towards a particular candidate. Asking vote preference twice is considered best practice and tends to capture even soft supporters of a candidate. The 36.5% who remained undecided after both questions are therefore individuals who are likely quite open to being persuaded by at least two of the candidates. An additional 8.7% said they would prefer not to share their preferred candidate, meaning we have no idea how they will vote. Finally, of those who indicated a preference, 8.0% were only leaning towards a candidate (i.e. when first asked who they would support, they said they did not know). Together (36.5% + 8.7% + 8.0%), these results indicate that over 50% of voters' preferences are either unknown or open to change. Undecided voters could therefore dramatically change the election results. How dramatically? Extrapolating from these numbers, and assuming the poll results are accurate and not an outlier, Karen Redman's support on Election Day could fall anywhere between 27.6% and 89.3%.

3) The Sample May Not Be Accurate

It is also possible that the results of the poll are off because the sampling method introduced an unknown error into the process. The sample of landlines was created by purchasing an electronic list of listed landlines from www.telephonelists.biz; landline numbers were then randomly sampled from this list. In addition, a list of likely cellphones and unlisted landlines was added to the sample. This list was created using data published at www.cnac.ca on area codes and exchanges (NPA and NXX). The data from the Canadian Numbering Administrator can be used to ascertain the first six digits of phone numbers that, when originally activated, belonged to someone who activated a phone within Waterloo Region. However, with number portability it is possible to keep a phone number when moving into or out of Waterloo Region. The survey revealed that 6% of respondents no longer live in Waterloo Region; these individuals were excluded from the final results. However, no correction was made to introduce people with cellphones originally activated outside of Waterloo Region. It is impossible to know what percentage of phone numbers were excluded from the survey because they were not in our sample, and it is possible that these individuals support different candidates than those included in the survey.
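
For the curious, here is a minimal sketch of how NPA-NXX prefix sampling works; the prefixes below are placeholders, not the actual list drawn from www.cnac.ca:

```python
import random

# Hypothetical Waterloo Region area-code/exchange (NPA-NXX) prefixes.
prefixes = ["519555", "226555"]

def random_phone_number(prefixes: list[str]) -> str:
    # The first six digits come from the prefix list; the last four are random.
    return random.choice(prefixes) + f"{random.randrange(10_000):04d}"

print(random_phone_number(prefixes))  # e.g. "5195550123"
```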

The second problem with respect to the sample is who chose to participate in the survey. The response rate was less than one-quarter of a percent. It is possible that the 99.8% of the population who did not complete the survey differ from those who did participate. A total of 86.5% of people called did not answer the phone, and it is not possible to know if these people are somehow different. Perhaps the reason they were all busy on a Tuesday and Wednesday evening makes them predisposed to a particular candidate; there is no way to know. Of the 13.5% of people who did answer the phone, only 1.5% completed the entire survey (13.5% × 1.5% ≈ 0.2%, the overall response rate). Again, it is not possible to know if these people are somehow different. Perhaps they exhibited a Shy Tory effect and were unwilling to participate because they did not want to admit their preferences to a (liberal) Conestoga College professor.