Media Circus: Woman Has A Baby Edition

One cynically wonders what the PR hacks/flacks will be scurrying to release to the press today while they can be guaranteed that it will be buried deep down inside the boring bits of the news pages during the #RoyalBaby mania.

What non-royal story has piqued your interest in the media lately? The ABC’s Election 2013 Policy page has some good links to more in-depth coverage of issues that aren’t making the headlines, for example.


As usual for media circus threads, please share your bouquets and brickbats for particular items in the mass media, or highlight cogent analysis or pointed twitterstorms etc in new media. Discuss any current sociopolitical issue (the theme of each edition is merely for discussion-starter purposes – all current news items are on topic!).


SotBO: The safe arrival of a much wanted baby is a happy time, and the Windsors should get to enjoy it just as much as any other family. They don’t really need me/us to witter on about it though.



Categories: media, parties and factions


20 replies

  1. The Rudd bounce may be over, with 2PP in the latest Newspoll being 48/52 Labor/Libs. Of course this is only one of the three polls, no election has been called yet that I know of, and who the hell knows what they actually ask in these things. I also heard that support for Labor’s asylum seeker policy is rising.
    I think it is time we stopped comforting ourselves that it is the ‘bogan’ vote that likes the asylum seeker policy. People from all walks of life support humane treatment and onshore processing for refugees, and people from all walks of life support the horrible situation we have now, or worse.

  2. Due to frequent investigations arising from a long string of anti-AGW claims against him, Michael Mann is currently the most vindicated scientist of this century, and he’s becoming weary of the claims continuing nonetheless: DC Court Bluntly Affirms Michael Mann’s Right To Proceed In Defamation Lawsuit Against National Review And CEI

  3. who the hell knows what they actually ask in these things

    They ask: ‘If a Federal Election for the House of Representatives was held today, which one of the following would you vote for? If “Uncommitted”, to which one of these do you have a leaning?’ Source: Newspoll latest polls (look for the 01/07/13 report). The 2PP is not asked directly though, they model it based on 2010 preference flows.
    I’ve been a respondent for Roy Morgan and their wording is very very similar.
    I don’t know how they validate the question for eg stability (ie, not getting different answers from the same person after an hour has passed), but it’s reasonably specific, not, eg, asking people to predict how they think they might feel at some future election date.

    • It’s often repeated (but I haven’t fact-checked since the last election) that pollster companies do not generally poll households without a landline, so that they’re missing a rather large demographic of under-30s who only have mobile phones whose voting intentions are thus not represented in the polls. Is it true that the polling companies haven’t caught up to the 21st century yet?
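    The 2PP modelling mentioned above (primary votes combined with previous-election preference flows) can be sketched in a few lines of Python. The primary votes and flow percentages below are made up for illustration; they are not Newspoll’s actual 2010 figures:

    ```python
    # Sketch of deriving a two-party-preferred (2PP) figure from primary
    # votes using previous-election preference flows. All numbers here are
    # illustrative, not Newspoll's actual inputs.

    primary = {"Labor": 0.35, "Coalition": 0.44, "Greens": 0.12, "Other": 0.09}

    # Assumed share of each minor party's preferences flowing to Labor
    # (in practice, based on the flows observed at the 2010 election).
    flow_to_labor = {"Greens": 0.80, "Other": 0.45}

    labor_2pp = primary["Labor"] + sum(
        primary[p] * share for p, share in flow_to_labor.items()
    )
    coalition_2pp = 1.0 - labor_2pp

    print(f"Labor 2PP: {labor_2pp:.1%}, Coalition 2PP: {coalition_2pp:.1%}")
    ```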

  4. Roy Morgan recruited me via door-to-door and came and did an in-person interview in my home using analog materials. (For anyone else they approach: it lasts for an hour or more and is mostly about brand recognition and such, depending on their clients. I found their recruitment process underestimated the time expenditure, although they did later tell me that I used/recognised more products than my interviewer was used to, so I may have taken more time.)
    They have since polled me once by SMS (to which I didn’t reply because I was distracted). I don’t recall either way whether I was asked if I had a landline phone as a requirement of being recruited; I don’t think so. They wanted me to be over 18, that I do recall. Their followups (about supplementary surveys) have been by mobile.
    They also wanted me to do a media diary (which is still done on paper), but I was traveling during the diary period and couldn’t participate.

  5. Newspoll at least have done some research into the effectiveness of mobile-only polls and claim that their data supports using landlines, mostly because of the ability to translate them into a household location. Topic paper for Newspoll clients: QLD Election 2012 results show landline telephone is still gold standard

  6. I note that topic paper doesn’t mention locations of polled voters and notes an obvious Coalition bias over and above the Coalition bias displayed by electors.
    Basically, they’ve put a misleading headline on a topic paper. Unsurprising, for a Murdoch publication, really…

  7. Queensland is also the only state in Australia to have anywhere near similar numbers of people living in the capital city and living outside it. Assuming people in rural areas are more likely to keep their landlines, then Queensland is the one place in the country where this effect will be least in evidence.

  8. Via Tanya Plibersek’s Twitter, Medicare billing is being changed so that procedures are no longer restricted to patients of a certain gender. I haven’t read any commentary on it yet, so don’t know how complete the change will be or whether there’s discrimination that’s been overlooked or similar.

  9. Mary – As I understand it, all procedures with a reference to gender will be rewritten to be gender neutral. A person identifying as a man (for example) should no longer face having to explain himself when claiming a rebate for a procedure that is currently coded as female.

  10. On the ‘mobiles v. landlines’ issue- statistically, this only matters if we believe that somehow mobile phone ownership, without landline ownership, is a predictor of voting behaviour. Now, the argument made here is that this phenomenon is more common among under 30s (which is presumed to be a predictor of voting behaviour), but as long as the number of under 30s who exclusively have mobiles is otherwise distributed similarly to under 30s with landlines (ie in terms of race, occupation, class, gender, other voting predictors) it makes no statistical difference.
    In the UK, with a population of 60 million and no compulsory voting (adding variables to the stats calcs), the sample they use to predict elections is just over 900 people. Because they have got their sampling method down almost perfectly, they can predict the election results FOR EACH MAJOR PARTY (not just the winner) to within a few percentage points.
    Mathematically, if you can get your sampling technique honed perfectly, then you can predict outcomes on any size of population with a sample of only 30. In reality, you need at least 50, as we just aren’t that good at picking perfect samples/asking the right question. Now, because you have multiple variables in large populations that need to be accounted for (ie gender, race, age etc) your desired sample size starts to expand to reflect that complexity within certain degrees of accuracy- but it doesn’t mean you need HUGE samples to be accurate. Therefore, as long as a statistical spread of under 30s have landlines (or there is some compelling reason for believing that mobile phone use predicts voting behaviour), there is no need to worry about the accuracy of these polls.
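    The point about sample size rests on the standard margin-of-error calculation, which depends on the size of the sample rather than the size of the population. A quick sketch (this is the textbook formula for a simple random sample; real polls also adjust for design effects):

    ```python
    # Rough illustration of why poll samples can be small: the 95% margin
    # of error for an estimated proportion depends on n, not the population.
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a proportion p estimated from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (100, 900, 1100, 10000):
        print(f"n={n:>5}: ±{margin_of_error(n):.1%}")
    ```

    With around 900–1,100 respondents the margin of error comes out at roughly ±3 percentage points, which matches the “within a few percentage points” claim above.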

  11. (or there is some compelling reason for believing that mobile phone use predicts voting behaviour

    Isn’t the criterion answering a landline rather than mobile use? We don’t answer ours as we have a machine and screen out the cold callers.

  12. Datapoint: I’m mobile-only and have been polled by Roy Morgan on the mobile. Don’t know about the other pollsters.

  13. On the ‘mobiles v. landlines’ issue- statistically, this only matters if we believe that somehow mobile phone ownership, without landline ownership, is a predictor of voting behaviour… Therefore, as long as a statistical spread of under 30s have landlines (or there is some compelling reason for believing that mobile phone use predicts voting behaviour), there is no need to worry about the accuracy of these polls.

    It seems sensible to be concerned about this unless evidence is presented otherwise, though. That is, the burden of (reasonable) proof is on people arguing the sample is still adequately representative when there’s reasonable evidence suggesting that the sampling is biased.
    And in any case I gather tigtog’s concern is not so much that the sample of under-30s who have landlines might not be representative of under-30s as a whole, as that under-30s might be being systemically under-recruited, resulting in a sample biased towards older voters, and age is predictive of voting patterns.
    So there are two cases that pollsters need to make to keep using landlines (assuming they are doing so; as this thread has discussed, Roy Morgan appears not to be):
    (a) mobile phone-only people as a population do not vote differently from landline phone-reachable people as a population
    (b) that they are correcting for possible age skew in recruitment
    (b) wouldn’t be too hard to do, I would think, since the census data (and maybe enrolment data, I’m not too familiar with it) should give the age demographics to recruit from. So I’m not too concerned with tigtog’s original objection either necessarily; I’m just not convinced by what I read your argument as: we shouldn’t be worried that the sample is biased unless someone makes a really strong case that it is. I think the burden of proof is in the other direction.

  14. But based on that argument, we should also be worried whether the pollsters remember to ask enough women, enough people with Chinese heritage, enough people aged over 65 – if we don’t trust them to be competent, why trust that they’re doing any of this right? Why only worry about young people?
    How I understand this works (and again I base this on my knowledge of the UK system) is that the pollsters know what they need in advance and then look for them in the population. So, they have a list that would look something like: 3 black women aged 60-65; 4 white men aged 18-25; 6 Chinese men aged 30-34 who bought the Jay-Z album (or whatever variables they find to be important). And then when they poll you, they should ask for some basic demographics (or ask for you by name, which means they got your demographics from elsewhere) to help meet their quotas. This is why they sometimes end a poll early- if they discover that you don’t fit the demographic they’re trying to cover. In some cases, I believe they use the same people repeatedly to make life easier. This is why I presume that their results would still be accurate, because they’re filling quotas, not taking a sample and then hoping it works for them.

  15. Why only worry about young people?

    Because there’s evidence that they are disproportionately mobile-only. Cf ACMA’s data (Figure 4): by eliminating mobile users, you eliminate 40% of 25–34 year olds from your recruitment pool, but only 3% of over 65s. I know well that the actual sample only needs to be small, but 40% is a large enough pool of people who are totally ineligible for recruitment that it seems reasonable to say “we want that 40% of 25–34 year olds to be demonstrably extremely similar to the other 60% in polling outcomes before we are confident that they don’t need to be eligible to be recruited.” (And now I am wondering about non-English speakers; I imagine polling companies have information on that too.) That’s for my concern (a). For (b), quotas are sufficient (and Crikey suggests they have age quotas); they just need to work harder to get their 25–34 quota.
    Another report from a polling company (relevant more to (a) than (b)): Pew reports that in the US 25% of adults cannot be reached by landline, and that there are noticeable differences in their politics, their wealth (mobile-only are poorer and less likely to be college graduates) and some of their lifestyle patterns (eg wireless net access higher in mobile-only, gun ownership in landline-accessible). It’s not all germane to this discussion perhaps, but likely relevant to non-political polling for products and such, and sometimes politically relevant.

  16. Here’s a study that seems to suggest that there’s a bias towards right wing voters in landline only populations. It’s based in Europe however so YMMV. And oddly, they don’t look at age.
    Age skewing within a sample can be corrected for by weighting after the survey is taken, but I have no idea if the polling companies do this (it’s not hard if you have access to Census totals).
    @Feminist Avatar – trying to balance your sample to that degree is really difficult when you’re only talking about 1000 people. Also quota sampling is dodgy, far more dodgy than taking a proper probability sample, regardless of the distribution of characteristics within the sample. This is because if you select a sample properly (which basically means you know the probability of selection of everyone in the population) then you can accurately estimate how wrong you might be. In quota sampling you cannot do this. You also open yourself to self-selection bias (where you’re more likely to end up sampling those people who are willing and eager to talk about your subject matter).
    Polls are not brilliant examples of the survey takers art, but for what they are trying to do (estimate a common characteristic at a broad level in a large population) they are adequate.
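    The weighting mentioned above (often called post-stratification) is straightforward to sketch. The population shares and sample counts below are invented for illustration, not real census or poll figures:

    ```python
    # Sketch of post-stratification weighting: after the survey, each
    # respondent is weighted so the sample's age profile matches the
    # census profile. All figures here are made up for illustration.

    population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
    sample_counts = {"18-34": 150, "35-54": 350, "55+": 500}  # skews old

    n = sum(sample_counts.values())
    weights = {
        group: population_share[group] / (count / n)
        for group, count in sample_counts.items()
    }

    # Under-represented groups get a weight above 1, over-represented below 1.
    for group, w in weights.items():
        print(f"{group}: weight {w:.2f}")
    ```

    Here each under-sampled 18–34 respondent counts double, while each over-sampled 55+ respondent counts as 0.7 of a response, so the weighted totals match the census age profile.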

  17. Kage, it’s unclear whether it applies to the PBS as well, or is MBS only.
    (There are also some items which have caveats ‘in females of childbearing age’; to change that would require some extraordinarily wordy definitions.)

  18. On the polling stuff- that Crikey explanation is pretty good. In case you’re interested, here are the sorts of questions pollsters ask- this is for the US (I can’t find a more local example with a quick google search- http://www.people-press.org/files/2011/01/demographic-questions-3-11.pdf). As you can see, they take into account things like education and occupation that are related to mobile phone/landline usage.
    Basically, they know they have to ask enough people that fall under particular demographics (widely defined), so they keep phoning until they get them and, yes, this means they might have to work hard to get groups that are under-represented (like young mobile phone users), but as long as they get the numbers, ultimately it doesn’t really matter. This sort of thing is also influenced by the timing of the phonecall (if you phone during the day, you’re more likely to get women; if you phone on a Friday night, you’re less likely to get young people etc), so they are having to take into account a whole bunch of ‘access’ issues. However, because they have a very large population to choose from and only need a relatively small number of people, it’s not vastly difficult.
    On the probability stuff, yes, quota sampling is dodgy, but they try to offset that by making random phonecalls and hitting their targets through that random process. (And, this is also the fault of the way I explained it, they don’t usually seek out ‘4 black women’ and then stop, although I understand that they sometimes do door to door surveys amongst groups which are hard to access. They have thresholds they need to get above, at which point they can stop making phonecalls; they might not always make it, but if they didn’t on a regular basis, they would have to adjust their polling method, which is possibly why some companies now call mobiles). They then adjust their full amount of responses by full population data to make it accurate.
    On the age thing, there is some debate amongst political economists on whether age is a useful predictor of voting behaviour (by itself). Some people argue for a generational effect (which lasts through life- young people are not necessarily more radical or liberal- I like this argument myself); other people think that class, occupation, race etc are better predictors. English-language ability is a genuine issue in the US; I wondered whether it was less so in Australia, as you have to be a citizen to vote and that requires basic English skills in order to pass the citizenship test?

  19. About 12 years ago I worked for a company and conducted phone surveys on a range of topics (depending on who hired us and what they wanted us to ask), and that included surveys on who a person was likely to vote for in an election. We conducted those surveys at regular intervals, but we usually weren’t told who we were conducting them for; my guess is that it was on behalf of one of the big polling companies.
    At that time we would be given a sheet of randomly chosen phone numbers from the phone book, from that particular electorate. We didn’t call mobiles then, but it was 12 years ago, and it was pretty uncommon to be mobile-only then, that might have changed now (but I’ve never been called on my mobile to be surveyed).
    We would ask enrolled voters which party or candidate they would vote for if the election were held today, and if they chose an independent or minor party, then we’d ask them to pick which major party they would direct their vote to on subsequent preference. If they didn’t pick one, you’d terminate the interview (anytime you terminated an interview you couldn’t use, you’d just tell them that was all the questions, thanks for taking part). There would be several other questions relating to what is important in the voter’s decision (economy, health, jobs, environment etc), and perhaps others that might relate to current issues.
    At the end we would ask for demographic information, including sex and age group, sometimes that also included further information such as employment status and income bracket.
    We would conduct surveys randomly until someone from the office would pop out and note that we had enough from particular age groups or sex, then it was more tricky because you might know you only needed a male aged 18 to 39, but you weren’t supposed to ask specifically for them, so you’d shift the order of your questions so the demographic stuff was up second or third, then terminate the interview if they weren’t suitable.
    Surveys were only ever conducted in English, with the exception of one particular survey where approximately a quarter of our targeted respondents were Vietnamese. On that occasion we had our one Vietnamese colleague translate the survey, and conduct those in Vietnamese. That survey was not to do with voting. Voting surveys would have excluded non-English speakers at that time. Indeed, even if our one Vietnamese speaking colleague had managed to call a non-English speaking Vietnamese person, she would not have been able to conduct the interview in Vietnamese, as it was a rule that surveys only be done in English (to keep consistency, we could not alter the script at all).
