
Monday, 24 April 2017

How Opinion Polling Works

Which of these is most representative of public opinion at large? A heavily gamed voluntary poll of 160,000-odd people done at the behest of This Morning, which shows Jeremy Corbyn enjoying a commanding lead over Theresa May; or any of the recent spate of polls by professional polling companies, which consistently show very much the opposite on the basis of samples of between 1,000 and 1,800 people? I have to ask because lots of people have been pushing ITV's poll as more representative than anything YouGov can come up with. After all, it covers more people. The latter? Pah. It was founded by a couple of Tories and provides findings politically convenient for Jeremy Corbyn's opponents. If polls were free and fair they would show more support for Labour, because I know loads of people who support Labour.

If you happen to share these views, you're wrong. The methodology of opinion polling has been refined over decades of research, which is why pollsters and other researchers (including distinctly un-Tory sociologists like me) can make confident generalisations from seemingly small pools of people. The operation has two dimensions to it: building a representative sample, and testing whether the patterns found in it are statistically significant. Both deal in probabilities.

Before anything else, we need to start with the 'null hypothesis'. This is the assumption that there is no relationship between the two social phenomena we are looking at. The maths underpinning statistics is set up to confirm or refute this hypothesis: in the case of polling, tests of statistical significance tell us how confidently the claim that the results are down to chance alone can be rejected. Hence when a poll is compiled, characteristics reflective of the population at large are selected for. For a typical poll, the sample group of, say, 1,000 respondents represents in miniature the population at large, or the segment of the population the operation wishes to survey. If we don't do this, then a huge validity question mark hangs over any subsequent claims. A sample should approximate as closely as possible the age, income, gender, ethnicity, etc. profiles of the group or sub-group to be studied. If 30% of the population are over the age of 60, then that should be the case with the sample. If 10% are from a non-white ethnic background, that needs to be reflected too. I'm sure you get the picture. Selection, then, is never completely random but takes place within the parameters set by the research design. If you've never been contacted by a polling company, don't take it personally!
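
To make the quota idea concrete, here is a minimal sketch in Python of how a sample design turns population proportions into recruitment targets. The figures are made up for illustration; real quotas come from census and survey data, not from me.

sample_size = 1000

# Illustrative population profile (assumed figures, not from any real source)
population_profile = {
    "aged_60_plus": 0.30,   # 30% of the population is over 60
    "non_white": 0.10,      # 10% are from a non-white ethnic background
    "female": 0.51,
}

# Each quota tells the fieldwork team how many respondents of that kind
# the finished sample needs before it can stand in for the population
quotas = {group: round(share * sample_size)
          for group, share in population_profile.items()}

print(quotas)  # {'aged_60_plus': 300, 'non_white': 100, 'female': 510}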

We have our pool of demographically representative respondents, then, but how can we be sure the views of the sample are just as representative? This is where tests of statistical significance come in. These are mathematical procedures designed to establish how likely it is that the observed characteristics – in this case political opinions – are the product of chance (i.e. the null hypothesis holds) rather than a pattern of views that really exists “out there” in wider society. Polling companies compute these tests as a matter of course; you can usually find them by burrowing into the data tables they release along with their results. The tests ask a simple question: if the null hypothesis were true and there were nothing to find, how often would a result like the observed one turn up by chance alone? If the computed figure is 0.6, a result like ours would crop up by chance in around 60 of every 100 samples, so there is little reason to think we have found anything real. If it's 0.05, only five samples in a hundred would throw it up by chance, and so on. The lower the level of significance, the more confident researchers can be that the observed data reflect real proportions existing in real populations. When it comes to statements about samples, researchers typically use either 0.05 or 0.01 depending on sample size (0.01 for larger samples, 0.05 for smaller ones). That is, we can be 95% or 99% confident that observed patterns really do exist and are not an artefact of the maths.
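
As a hedged illustration of the sort of sum hiding in those data tables, here is a one-proportion z-test written out in Python. The 48% sample figure and the 44% benchmark are assumptions made for the sake of the example, not results from any actual poll.

from math import sqrt
from statistics import NormalDist

n = 1000          # sample size
p_hat = 0.48      # observed Conservative share in the sample (assumed)
p_null = 0.44     # null hypothesis: the 'true' share is 44% (assumed benchmark)

se_null = sqrt(p_null * (1 - p_null) / n)   # standard error under the null
z = (p_hat - p_null) / se_null

# Two-tailed p-value: how often a gap this big would appear by chance alone
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p = {p_value:.3f}")    # roughly z = 2.55, p = 0.011, below 0.05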

This isn't the only way of putting numbers on our uncertainty. One can also produce an 'interval estimate' which, instead of identifying the probability of sample patterns mirroring general patterns, looks at the error inherent in sampling. For instance, if 48% of our sample say they're going to vote Conservative, and such polls have done the rounds recently, how close to the real figure is this finding? This can be inferred by computing a standard error statistic. That means multiplying the Tory figure (48) by the non-Tory figure (52). This gives us 2,496, which is then divided by the sample size. Assuming a sample of 1,000, this equals 2.496. We then take the square root, which gives us roughly 1.58. This is all very well, but why? The standard error tells us the typical gap between a sample figure and the real one: the true figure will fall within about 1.6 points of the polling figure roughly two-thirds of the time. To reach the 95% level of certainty used in the previous significance test, we take roughly two standard errors either side, which is where the familiar 'margin of error' of about three points on a 1,000-strong sample comes from. Provided the sample is representative, we can say with 95% confidence that the proportion of people planning to vote Conservative lies within about three points of 48%. This, for example, is why pollsters in the lead up to the first round of the French presidential election found it so difficult to call: the four front runners were, at times, all within the margin of error of one another.
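
The same arithmetic takes a few lines of Python. It uses the illustrative figures above (48% Conservative, a sample of 1,000) and shows where both the standard error and the familiar margin of error come from.

from math import sqrt

n = 1000
p = 0.48   # observed Conservative share (illustrative)

se = sqrt(p * (1 - p) / n) * 100   # standard error in percentage points, ~1.58
margin_95 = 1.96 * se              # conventional 95% margin of error, ~3.1 points

print(f"standard error = {se:.2f} pts, 95% margin of error = +/- {margin_95:.1f} pts")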

Sometimes pollsters weight their samples in a particular direction. For example, rather than going for an accurate snapshot of the general population, they sometimes ensure older people are overrepresented and younger people underrepresented because, as we know, the old are much more likely to vote than the young. Likewise, people from low income backgrounds, those with lower levels of formal qualifications, and so on might be scaled down for exactly the same reason.
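
A toy sketch of turnout weighting, with invented likelihood-to-vote figures, shows how this reweighting shifts the headline numbers: respondents from groups that vote reliably end up counting for more.

# Hedged illustration only: the respondents and turnout probabilities are invented
respondents = [
    {"age_group": "18-24", "vote": "Labour"},
    {"age_group": "65+",   "vote": "Conservative"},
    {"age_group": "65+",   "vote": "Conservative"},
    {"age_group": "18-24", "vote": "Labour"},
]

turnout_weight = {"18-24": 0.55, "65+": 0.90}  # assumed probabilities of voting

# Each respondent contributes their turnout weight to their party's total
totals = {}
for r in respondents:
    totals[r["vote"]] = totals.get(r["vote"], 0) + turnout_weight[r["age_group"]]

share = {party: round(100 * w / sum(totals.values()), 1) for party, w in totals.items()}
print(share)  # the Conservative share rises above the unweighted 50/50 split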

There you have a very basic overview of polling. There are criticisms of significance testing, and in this age of Big Data there is a growing clamour suggesting that sampling of this sort may have had its day now that huge data sets are available (though, it has to be said, most of these are under the lock and key of public bureaucracies and private business). There are also specific criticisms one can make of polling companies. YouGov, for example, relies on a database of voluntary sign-ups. About 800,000 people have joined their UK panel, so while panel members are unlikely to mirror the general population, the company has enough data about their demographic characteristics and preferences to construct representative samples from them. However, they have got into murkier waters when they've tried polling members of organisations. For one, they have no hard data on the characteristics of those organisations' wider memberships, and so have difficulty generating representative samples. They also sometimes have very few panellists belonging to a given organisation. I can remember them conducting a poll on Jeremy Corbyn's support among trade union members and arriving at the CWU's result after asking just 50-odd people. The union has around 190,000 dues payers.
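
A back-of-the-envelope calculation, using the approximate margin-of-error formula from earlier and assuming a 50/50 split, shows why a 50-person sub-sample is so shaky compared with a full poll.

from math import sqrt

def margin_of_error(p, n):
    """Approximate 95% margin of error for a proportion, in percentage points."""
    return 1.96 * sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(0.5, 1000), 1))   # ~3.1 points for a standard poll
print(round(margin_of_error(0.5, 50), 1))     # ~13.9 points for 50 respondents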

As a mathematical discipline, statistics has two centuries of scholarship behind it. Polling might get it wrong occasionally, but again, that's because it deals with probabilities. Researchers and pollsters can learn from these mistakes; methods can be refined, techniques calibrated and improved. Unfortunately for the sceptics, rejecting polling because a leading firm is owned by Tories, because polls are used for self-serving political reasons, and because they show Labour plumbing the depths doesn't make them wrong. To pretend they must be wrong because they contradict your experience and views is naive cynicism. The problem is this gets us nowhere. Clinging to illusions only sets you up for a fall when reality crashes in.

If we want to change the world, we have to ask questions, analyse, think, and explain. If things aren't going our way, why? And on that basis, what are we going to do about it? That's the route to making things better because it's the only way.

15 comments:

  1. So you are choosing to ignore the testimony of those who have worked at YouGov who have made it plain that "they were driven to achieve certain results by management"?

  2. Yes, but how do we know if the polling companies are doing everything correctly?
    How have two of them come out with such different answers?
    Can you calculate the odds of them both using the same methods and getting a particular difference between their results?

    The description of naive cynicism on Wikipedia seems to suggest it is pathological not to assume that everyone is telling the truth and knows what they're talking about. I'll be sticking with naive cynicism for the time being.

  3. This post is an explanation of polling methods in general, not of YouGov or polling companies. If there is testimony of bad practice then feel free to share it.

    And Wikipedia has a definition of naive cynicism?

  4. Huh, so it does. I really didn't know it was a thing. That's what happens when you ignore social psychology literature.

  5. Very good stuff Phil, thanks.

    But it doesn't explain the ITV poll; it wasn't heavily gamed - there were no related campaigns until it got to around 75,000 people - and I haven't yet seen a reasonable explanation for it.

  6. Hi John, it was heavily shared on Facebook and Twitter among pro-Corbyn networks. I follow a few of those folks and was exhorted to vote on about a dozen occasions.

  7. The rather sinister Tom Watson is never more than six yards away from a free buffet. It's just about the only thing in UK politics you can feel 100% certain about.

  8. Fair enough on the stats, but opinion polls have made some massive wrong predictions recently. I think they don't allow for people just lying to them. What are the characteristics of that group? Are the liars randomly distributed as per other variables? For instance, are bloody-minded Kippers more likely to lie, or would it be anti-capitalists who hate polling companies? I've studied census returns while researching my own family history, and I noticed that my Irish immigrant ancestors lied consistently to the census takers, probably a cultural thing from growing up in an occupied country.

  9. Thanks Phil

    I'm sure I follow a lot of those on Twitter (though not FB) too - as I do you - and I didn't see anything till after 75,000 or so, which is when I started seeing the invites to join in. Which I think is odd.

    Nevertheless, does the outcome mean only that the left are more prolific/effective/relentless online? This would seem to fly in the face of the analysis of the last election, which - I think - said that the Tories had a much better operation.

    I think what I'm saying is - don't dismiss the ITV one out of hand. I'm not foolish enough to think it changes much overall, just that it's a different sample.

    Whilst we're on this, do/can polls take account of the 350,000 (mainly younger?) voters who have registered this past week? Or even of the various endorsements from grime/snooker stars, who are followed by around 1,500,000 people; are there ways in which polling, between now and the 8th June, catches up with whoever may be 'new' in the election?

  10. Why do people sign up to an online polling site, hand over personal info about themselves and do polls?

    Is there any way of finding out why people do online polls? Money? A chance of winning a cash prize or vouchers?

    If prizes and payment are the motivation, how would that affect the way people answer questions?

    I did a little research: I signed up to YouGov with a fake name and address and did a few polls. It's boring, but at the end you have a chance to win a £1,000 to £5,000 cash prize.

  11. Isn't the obvious question to ask: who has been closest in the past?
    Particularly on Brexit, which some polls seemed not to have forecast.

  12. The stuff about polls being wildly wrong has been greatly exaggerated lately, and people on the left have picked up on it in a way that does us no favours. It started out with mainstream journalists blaming the polls because they called things like Brexit and Trump wrong, when the real problem was that they had been reading the polling evidence in the light of what they considered common sense and conventional wisdom. Polls have been out a bit but not to anything like the degree that some people suggest. It would be far more useful for the left to emphasize that polls lately have been far more volatile than in the past (if you trusted the polls on Brexit a year out, you would have thought it was a foregone conclusion for Remain); it's not that they're wildly wrong about public opinion at the moment, but that doesn't mean it'll be the same in a year or two.

    Another point is that when you get away from straightforward questions about party preference etc., the way that questions are worded makes a big difference. All the questions about immigration, economic policy etc. need to be worded very carefully if they're not going to lead people one way or another. Take one issue of the last few days - if you asked people 'do you think Britain should retain the first-strike option as part of its nuclear deterrent against potential attack?', you'd probably get a very different answer than if you said 'do you think Britain should be willing to start a nuclear war?' (the second wording is a lot more neutral than the first, too). Focusing on this stuff is far more useful than talking as if the polls on party preference were totally unreliable.

  13. The interesting unreported thing about the polling is that about 20% of 2015 GE Labour voters have become don't knows or will-not-votes, which suggests that we need to get on the doorstep.

    I think you've rather underplayed the impact of the changes in weightings that have occurred since 2015. In particular, 'likelihood to vote' which elevates the Tory % because they tend to be older and more consistent about voting (only 15% of over 65s say they will vote Labour). Again, this means LP getting on the doorstep to get out the younger voters.

    I have little doubt that there has been a significant swing to the Tories because of former Ukip voters turning blue but it's not clear whether the weightings (added since 2015) underestimate the Labour vote. On face value, LP may have lost 5% points from 2015 but up until last autumn, there had been little change for Labour since the GE.

    It was also true that LP did significantly better (and the Tories significantly worse) in last year's local elections than had been predicted by the polls. However, local elections have always been notoriously bad predictors of GE results.

  14. I always wonder about the backgrounds of those who will answer pollsters' questions - are people who hang up on them, or ignore requests to take part, inherently anti-establishment and therefore left-leaning in voting intent? Would such people be less likely to vote at all?

    What statistical impact assessment has been factored into polls for the 'no response' pollee?

  15. There is work out there that looks at these sorts of issues, but I haven't got any to hand.

    Generally pollsters make a great deal of effort to make their sample representative. For instance, if 10% of the population are from a BME background they will continue contacting people until that quota of the sample is filled.


Comments are under moderation.