On Political Polls and Negative Rainfall

http://www.abc.net.au/news/2013-03-26/coalition-support-leaps-in-latest-newspoll/4593938 – Coalition support leaps in latest Newspoll

(Trigger warning: “smug bastard” photo of Tony Abbott).

Does it occur to anyone else that this headline is rather like “Phenomenal amount of rain didn’t fall yesterday”?

According to this poll, if somehow time were annihilated between today, Tuesday the 26th of March, and Saturday the 14th of September, the Coalition might very well win the election. Provided the only persons who voted were the people sampled in the latest Newspoll.

Does the above summation help anyone to see just how this makes the headline, and the article it accompanies, resemble someone talking earnestly about how much rain didn’t fall last night?

Poll watching is the great spectator sport among Australian journalists, and there are polls just about every week measuring how people feel about X, Y, or Z. What these polls leave out (and what they have to leave out) is that, due to the mechanism of our representative democracy, how we-the-voters feel about issues doesn’t matter most of the time. It only matters on one day every three years – on election day, when we get to cast our votes. The rest of the time, it’s just noise, and no amount of opinion polls showing how concerned we are by $ISSUE is going to change that fact.

Politicians could quite easily ignore polls (and I’d argue things would be a lot better for a number of them if they did) – but then, the polls aren’t being commissioned by the politicians. This is the big clue. The opinion polls are being commissioned by the media organisations, because as far as they’re concerned, opinion polls are an easy way to generate a lot of content out of the one result. They can write stories about the poll result itself. They can write stories about how the result was received by various political figures (making a huge fuss if said political figures don’t give the results the same degree of importance they do). They can write stories about what these results might “mean” for various political figures. They can make a lot of journalistic soup out of the one onion of a single poll result.

But this only works so long as they conceal the basic truth: opinion polls, as a statistical measure, aren’t measuring anything relevant. The poll that matters is going to be held on September the 14th; it’s going to have a sample size which encompasses the entire Australian voting population, and it will be far more representative than any media-sponsored poll could be. All the others before then? They’re just measuring how much rain didn’t fall yesterday.

Categories: media, parties and factions, Politics

10 replies

  1. “If an election was held tomorrow”. But an election isn’t being held tomorrow, so it’s basically fantasy, and all of these journalists are writing about something that isn’t happening and putting it on the front page.

  2. And I remember a time long ago (NZ in the ’80s and maybe even the ’90s) when opinion polls were reported with their margin of error. Which, strangely enough, sometimes negated the reported differences!

  3. I disagree; you say
    opinion polls, as a statistical measure, aren’t measuring anything relevant.
    I would say that opinion polls are a proxy measure: we know that people change their minds between when the poll is taken and when the election is held, and sometimes people lie about their voting intentions. However, to say they aren’t measuring anything relevant is a big call given the predictive power of, for example, Nate Silver’s (http://fivethirtyeight.blogs.nytimes.com/author/nate-silver/) work, and one I strongly disagree with.

  4. In order for an opinion poll to be a useful measure of anything, the people reading stories about it need to know a few things about the poll being conducted. We need to know the sample size (which generally isn’t reported for things like Newspoll). We need to know the questions which were asked by the poll takers (again, not reported) and how the poll was conducted (also not reported). We need to know the margin of error (as per Janet’s comment, and again, not reported). We need to know where their sample is being drawn from, and how representative this sample is (which, yet again, isn’t reported either). We need to know how the final figures being reported (generally as the answer to a question which isn’t being asked at the election anyway: we aren’t directly electing our Prime Minister; Julia Gillard and Tony Abbott aren’t standing for the same seat in the House of Representatives; heck, they’re not even contesting in the same blasted STATE) are calculated from the data they get. We need to know how many people refused to be sampled, how many people they got incomplete demographic data from, and a lot more information about the actual substance of the poll – NONE OF WHICH IS ACTUALLY REPORTED in the stories about it.
    So the actual, statistical value of the average Australian newspaper opinion poll is right up there, as I pointed out, with a measure of how much rain didn’t fall overnight. Or how many angels are capable of dancing on the head of a pin. Or even how many invisible pink flying elephants are capable of balancing on the top of the flagpole over Parliament House in Canberra.

  5. Just because the reader doesn’t know if the poll is statistically valid doesn’t mean that it isn’t valid. It just means you have to trust the newscaster more. And, many newspolls are conducted by specialists and academics on their behalf, so can be both representative and pretty accurate. For some news outlets, getting this right is a matter of significant pride.
    More importantly, regardless of its accuracy, it’s still politically important BECAUSE we proceed as if they are valid, and so it influences how we think and ultimately behave. So this week, staff at Sydney Uni are on strike, because management have walked away from negotiations with the Union. Management have chosen to do so – so it is claimed – because they think they will get a Liberal government in September (changing the balance of power between unions and management). The reason they did that is because of the data they are getting from polls like this. Similarly, many people will think twice about how they cast their vote because of these polls (what’s the point of throwing away a ballot – maybe I’ll vote strategically, etc.). Politicians also rely on them to gauge a sense of how their campaign is going, and they use them to adapt their campaign strategies. So, polls like these help us, rightly or wrongly, plan for the future.
    I get that it’s still a ‘media created event’ – they created the poll in order to have something to talk about – but that’s hardly unique to this situation. Moreover, the media is itself a political organ and indeed this is often construed positively as the freedom of the press is viewed as balancing the power of the state. Providing debate, discussion and yes opinion polls has long been considered part of the media’s role in shaping political discourse. They provide such things because we, the public, demand them and find them useful in making decisions.

  6. Actually the margin of error on these polls is sometimes mentioned. The report I heard on the radio this morning said something like ‘the first party preferences for the Labor Party have dropped by 7% which is outside the poll’s margin of error of 3%’
    What’s not clear is whether that is a confidence interval (and if so, at what confidence level) or a standard error.
    Opinion polls tend to select 1000 people. There is some fixed idea that this is a good number to have. It’s probably not a bad number if you don’t want to break your estimates down in any way (eg split by state, sex, age etc).
    The thing most likely to lead to significant bias in sampling is if they are using random digit dialling of landlines. Quite a large proportion of younger people only have mobiles these days, and younger people tend to vote more progressively.
    Still, much as I wish it were so, I’m not convinced that would be a substantial enough bias to swing the preferences by 20%…
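For what it’s worth, the ubiquitous ±3% figure for a sample of about 1000 drops straight out of the normal approximation for a simple random sample – a back-of-envelope sketch, not how any particular pollster actually weights its data:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n.

    p=0.5 maximises p*(1-p), giving the widest (most conservative) interval;
    z=1.96 is the normal critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1000 gives roughly the 3% figure quoted in poll reports.
print(round(margin_of_error(1000) * 100, 1))  # prints 3.1 (i.e. about ±3 points)
```

Note the margin shrinks only with the square root of the sample size, which is why pollsters settle around 1000: quadrupling the sample to 4000 only halves the margin.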

  7. Megpie71 – although a lot of the information you talk about is not reported (though the margin of error is mentioned fairly commonly), the pollsters do publish nearly everything you ask for (including, for example, the questions asked), and it is available on their websites.

  8. Newspoll has correctly predicted 56 of the last 56 state and federal elections. I checked the web story and it gives a sample size of 1136 and an MOE of 3%, contains the exact questions asked, says the survey was conducted by phone, and so on. This is done for every survey and always has been for as long as I’ve read the papers, and I’m somewhat long in the tooth now 😉 Other info may be found on the dedicated Newspoll website.
    “We need to know where their sample is being drawn from, and how representative this sample is (which, yet again, isn’t reported either).”
    Actually, all we need to know for stat purposes is the MOE.
    “we aren’t directly electing our Prime Minister; Julia Gillard and Tony Abbott aren’t standing for the same seat in the House of Representatives; heck, they’re not even contesting in the same blasted STATE”
    The survey asks “If a federal election for the House of Representatives were held today, which one of the following would you vote for? If “uncommitted”, to which one of these do you have a leaning?”
    Hence your objection is incorrect.
    Also, what Cheshire said. If you are going to *successfully* argue that polls only measure yesterday’s rainfall, then you’ll need to explain precisely how it is that Nate Silver and a number of other stats nerds managed to predict the last US federal election accurately, including each and every House of Reps seat, using *nothing other than historical poll results* and basic Bayesian methods.
    Sorry if I sound a little chippy, but this old gal taught stats for 30 years and doesn’t like to see her profession dismissed by someone who clearly hasn’t done any basic fact checking.
    Anyway, all the best and thanks for raising this topic, even though we don’t see eye to eye 🙂

  9. @Megpie71 I completely agree with you about the reporting of polls. The margin of error is sometimes mentioned, and I think that’s getting more common, but the sample size should be reported too, as well as the percentage of people who chose to respond, since a low response rate can be a significant source of bias. I would love to see more discussion of what these things mean in non-jargon terms.
    “They surveyed 900 people; of them, 42% would vote Labor, and we are pretty sure that means somewhere between 39% and 45% of all Australians would vote Labor.” Or something like that.
    @Miss Vicky Statisticians represent! (Med statistician here).
    Although “basic Bayesian methods” seems a bit harsh.
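The back-of-envelope interval quoted in that comment (900 people, 42%, “somewhere between 39 and 45%”) can be reproduced with the usual normal approximation – again a sketch assuming simple random sampling, which real polls only approximate:

```python
import math

# Normal-approximation 95% confidence interval for a poll proportion.
n, p = 900, 0.42                      # sample size and observed vote share
se = math.sqrt(p * (1 - p) / n)       # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{lo:.1%} to {hi:.1%}")        # prints 38.8% to 45.2%
```

Rounded to whole percentage points, that is the 39–45% range given above.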

  10. Sorry for the double post, but even though most people don’t vote for the leader (I live in neither Lalor nor Warringah), people do change their party vote on the basis of the leader, and the assumption that Australians are somehow voting for 150 independents is false. We don’t have direct elections, but we do have a party system, and in my last electorate I couldn’t tell you one thing my local member did, except that as a Labor member she supported Labor to form government – which, in the end, is why my preferences went to her.
