This piece of “reportage” from the SMH is typical of the coverage of a paper summarising research done by obstetricians and published in the Medical Journal of Australia:
Private birth has benefits for babies
Babies born to women in private hospitals are less likely to need resuscitation at birth or admission to intensive care than those born in a public hospital, a national study has found.
Obstetricians say the study, published in the Medical Journal of Australia, debunks widespread criticism of the high intervention rates for women in private hospitals.
The authors, from the Australian National University and University of NSW, say the findings challenge the long-held orthodoxy that increased rates of obstetric intervention, such as caesarean and induction, are “bad” for women and their babies.
Only as an afterthought does the SMH article mention that the study is comparing apples and oranges (although the information they highlight is actually more like comparing apples with cheese).
Neonatal death rates were 1 in 1000 in private hospitals and 3 in 1000 in public hospitals. The study took into account that public hospitals deal with a higher proportion of riskier births. Younger women, smokers, indigenous women, rural women and women with medical conditions such as hypertension or diabetes are more likely to be cared for under the public system.
Women booked to deliver in a public hospital because serious adverse outcomes were predicted were excluded from the study.
The second paragraph above seems to have been included to make it look like the populations of public and private patients have actually been matched for this study, when in fact only one of many criteria that make the two populations different is mentioned. How exactly did the study “take into account” the population differences other than by excluding the “predicted serious adverse outcomes”? Did they match for socioeconomic status, which is kinda crucial given that middle-class and wealthier populations are healthier than poorer populations anyway?
It doesn’t surprise me one little bit that a middle-class and wealthier population is going to have healthier babies, because they have the privilege of discretionary income to devote to supplementary aspects of pregnancy, while most people who don’t have private healthcare do not (one of the reasons they don’t have private healthcare, in fact).
To bring Bayes’ Theorem actually into the post: the researchers have taken a narrowly sampled sub-population already predisposed to having healthier babies, compared them to a far more broadly sampled population not generally sharing that same predisposition, and having discovered that their subpopulation does indeed have healthier babies, have then concluded that it is actually something they are doing to this subpopulation that is making a crucial difference. They have ignored nearly all the prior and conditional probability factors in their analysis of two populations.
For example, say that we have 100 women each from private and public hospital populations.
- In the private hospital population, no births are excluded from this study due to predicted serious adverse birth outcomes, because these women are receiving frequent prenatal health checks and any such cases have already been referred to the specialty team at the public teaching hospital.
- In the public hospital, as they have indicated, there are a certain number of predicted serious adverse birth outcomes referred prior to delivery, so that perhaps 5/100 births are excluded due to this.
- In the public hospital, many women have not been having regular prenatal health checks due to affordability or remoteness issues, so often no adverse birth outcome has been predicted even though they belong to an at-risk population as described above – “younger women, smokers, indigenous women, rural women and women with medical conditions such as hypertension or diabetes”. These women without predictions of trouble are not excluded from this study – let’s say 5/100 are only discovered to be having adverse birth outcomes after they have started delivery.
- Of the 100 women in the private hospital, a large number will be undergoing planned pregnancies where they built their bodies up for gestation while pre-pregnant by taking supplements such as folic acid and quitting smoking and drinking. These women are in the lowest risk category for birth complications.
- Of the 100 women in the public hospital, many of the poorest women will be undergoing an unplanned pregnancy (due to difficulties obtaining affordable reliable contraception) and will not have had the opportunity to stop drinking/smoking or take dietary supplements before the time of conception, and may have continued drinking/smoking for weeks or even months afterwards until they discovered that they were pregnant. Again, because they are less likely to have frequent prenatal checks, they are less likely to become “predicted adverse birth outcomes”: their adverse outcome will be a nasty surprise, and will be included in the study. Let’s say another 5/100 births fall into this category.
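The hypothetical numbers above (which are my illustrative assumptions, remember, not study data) can be sketched as a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope sketch of the hypothetical 100-vs-100 comparison.
# All rates are illustrative assumptions from the post, not study data.
private_births = 100
private_adverse = 0         # high-risk cases already referred elsewhere

public_births = 100
public_excluded = 5         # predicted adverse outcomes, excluded from study
public_unpredicted = 5 + 5  # adverse outcomes only discovered at delivery

# After the exclusion, the public cohort still contains the unpredicted
# adverse outcomes; the private cohort contains essentially none.
public_in_study = public_births - public_excluded   # 95 births
public_rate = public_unpredicted / public_in_study
private_rate = private_adverse / private_births
print(f"public: {public_rate:.1%}, private: {private_rate:.1%}")
# public: 10.5%, private: 0.0%
```

Even with only “predicted” cases excluded, the public cohort still carries all the unpredicted high-risk births, so the two groups were never comparable to begin with.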
I’m sure you can all think of a few other aspects where the public hospital population has a higher prior probability of adverse birth outcomes no matter what intervention obstetricians put in place.
Sure, I’ve built in some assumptions above, and if my suggested rates are way off I’d be grateful for better information (especially if the ratio of unpredicted to predicted serious adverse outcomes is horribly inverted). But I hope you can see that excluding only “predicted” serious adverse outcomes, while presenting the two populations as directly comparable, ignores a glaring disparity: one population contains a large number of higher-obstetric-risk women who don’t come onto the hospital’s radar until they present for delivery, while the other population does not.
Have peer review committees just given up on actually including a statistician these days? Or do the statisticians need to do more sociology classes?
There are still middle-class women who have an ideological commitment to public healthcare, and who have all the health advantages that discretionary income provides. Let’s see a direct comparison of their birth outcomes with the private hospital birth outcomes, shall we? And just to placate the feminists, let’s also include maternal health outcomes in our study, beyond just the incidence of perineal tears (which tend to heal better than perineal cuts anyway) – how have the middle-class public hospital mothers who had low-intervention births recovered two weeks, four weeks, six weeks after birth, compared to the private hospital mothers recovering from their higher rates of abdominal surgery?
Categories: ethics & philosophy, gender & feminism, health, media, medicine, skepticism
The World Today yesterday had a piece which gave the Midwives’ association good airtime.
I read the SMH article yesterday, but couldn’t put my finger on why it seemed wrong to me, although I did notice that unsurprisingly obs found that ob intervention was just fine and what is everyone complaining about…
I also thought that the study was largely saying ‘baby is fine, outcome for mother not so important’.
I knew the Midwives wouldn’t miss those glaring methodological flaws!
To clarify before some new reader comes in and objects, the Hoydens believe that evidence-based obstetric intervention in birth complications is a good thing. It’s just that there’s an awful lot of routine intervention which is not evidence-based at all, and this study appears designed to bolster that income-generating status quo.
“How exactly did the study ‘take into account’ the population differences other than by excluding the ‘predicted serious adverse outcomes’?”
I am not familiar with how medical stats are constructed, but am assuming that the adjusted odds ratio is some sort of multivariate analysis, which suggests that smoking, regional location, age etc. are in fact controlled for (table 3). Many of these things are correlated with SE status, so it may take out some of the effect. There is no acknowledgement of income/wealth as an omitted variable.
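For what it’s worth, here’s a toy sketch of what “adjusting” an odds ratio for a confounder actually does, using the simple stratified Mantel-Haenszel method (the numbers are invented for illustration and have nothing to do with the actual study, which presumably used a multivariate model):

```python
# Toy illustration of crude vs adjusted odds ratios (invented numbers).
# Each stratum is (public_adverse, public_ok, private_adverse, private_ok);
# within each stratum the public and private outcome odds are identical,
# but the public system sees far more of the high-risk stratum.
strata = [
    (80, 720, 10, 90),   # high-risk stratum, mostly public patients
    (4, 196, 18, 882),   # low-risk stratum, mostly private patients
]

# Crude odds ratio: collapse the strata and ignore the confounder.
a = sum(s[0] for s in strata); b = sum(s[1] for s in strata)
c = sum(s[2] for s in strata); d = sum(s[3] for s in strata)
crude_or = (a * d) / (b * c)

# Mantel-Haenszel adjusted odds ratio: pool the within-stratum ratios.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
mh_or = num / den

print(f"crude OR: {crude_or:.2f}, adjusted OR: {mh_or:.2f}")
# crude OR: 3.18, adjusted OR: 1.00
```

In this toy case the crude figures make the public system look three times worse, and the whole effect vanishes once the risk mix is stratified out – which is exactly why the choice of which factors to adjust for matters so much.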
The paper itself is here: http://www.mja.com.au/public/rop/robson/rob10880_fm.html#Box1
I didn’t read it because of the unwritten but obvious bias in the headline. I’m glad, however, that someone has, and has taken them to task.
The SMH is becoming more atrocious by the day/hour/minute with its attention-grabbing, specious article headlines that, when one actually reads the articles in full, do not stand up to any kind of scrutiny and bear only scant relationship to the ‘advertised’ content. They are, I am convinced, amoral. What a pity for us that they hold such tremendous power at their fingertips.
Thanks for the link to the article. Below table 3 they list their adjusted factors:
ETA: note that they don’t detail how they decided what the adjustment for each factor should be, or based on what properly collected information, if any.
The paper also gives more detail on who was excluded: ALL privately insured patients who gave birth in private hospitals, not just those booked due to predicted serious adverse outcomes i.e. every single privately insured woman who chose to go to a public teaching hospital centre of obstetric excellence was excluded from the public hospital figures. The subjects excluded due to this criterion are nearly 5% of all births – enough to be kinda statistically significant, dontyathink?
That’s probably enough to virtually knock out any chance of income parity amongst the subjects right there.
Medical doctors are notoriously bad at constructing comprehensive statistical analyses unless they actually include a statistician on the research team. With Bayesian analysis in particular, when doctors have been tested on calculating the probability that a positive test result indicates a real medical condition, only about 15% have got the correct number (Casscells, Schoenberger, and Graboys 1978; Eddy 1982; Gigerenzer and Hoffrage 1995; and many other studies). One would hope that professors at universities would do better than a typical GP, but considering they have not credited a statistician with crunching their numbers, who knows?
Sadly, Bayesian maths concepts are too hard for normal humans. In order to properly understand them, you have to go through a special induction process that renders you unfit to be a properly functioning person (see also under microbiologists).
The study is risible, and I blame the MJA for publishing it. It appears custom-designed to undermine the Maternity Services review.
It will get plenty of airtime, and people pushing for evidence-based and woman-centred care will have to divert a lot of their energy into pointing out the flaws to a court of public opinion that has made up its mind. Because you can have a hundred studies pointing to the safety of low-intervention birth, but the one, no matter how flawed, that purports to show the opposite will have people flocking to it, because it reinforces their prejudice and it reinforces the power and control of the medicoindustrial complex – we’ve seen this all before with the Bastian study.
Not controlling for SES nor for the health status of the mother (except for two specific illnesses), nor for previous obstetric outcomes, nor for antepartum complications (eg congenital abnormality, antepartum haemorrhage, etc) – I’m disgusted that it passed review. Then there is the issue of under-reporting (particularly of perineal trauma) in private hospitals.
There may be an issue with undertrained staff or problematic ratios in public hospitals. But this study isn’t being played that way: it’s being played as an obstetricians vs midwives smackdown, which is _not_ what this data actually represents.
Robson has a very clear agenda. He rails repeatedly about the “orthodoxy” of positions that critique unnecessary interventions and non-woman-centred care. He pointedly ignores research that doesn’t support his prejudice. You can read his previous work here: “Throwing out the baby with the spa water?” I find this rather ominous:
And then his colleague, following up a Letter to the Editor on that paper, says:
In other words: ‘Your wishes are important to us, right up to the point where you say “No”.’
@ Oz Ozzie,
I only did undergraduate stats as a background subject, but even I could get a holistic sense of Bayes’ Theorem. I’m scarcely an expert, but the huge unfactored difference in populations here just screams its presence.
I understand logistically that the data on socioeconomic status simply may not be part of the National Perinatal Data Collection (NPDC) database, but to fail to mention it in their paper as a potential confounding factor is an egregious omission.
@tigtog. Congratulations. Still, observationally, it’s hard to get Bayesian concepts. My observation base is steadily growing…
Now, if you have 20 predictions with p value <0.05 in your paper, what are the chances that you have at least one that is simply due to statistical chance? If you add more predictions, do you need to drop your p value limit? As you publish more papers, what then?
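(That first question is easy enough to sketch out, assuming the 20 tests are independent of each other:)

```python
# Chance of at least one spurious "significant" result among k
# independent tests, each run at significance level alpha:
#   P(at least one false positive) = 1 - (1 - alpha)**k
alpha = 0.05
for k in (1, 5, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests: {p_any:.1%}")
# 20 independent tests give roughly a 64% chance of at least one
# "significant" finding by luck alone.
```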
There was a fun thread about this on the Aus/NZ stats email list late last year.
@ Oz Ozzie:
Ha – now you’re getting complicated! My critique of this paper isn’t even necessarily to do with how they assigned their p values in their adjusted ratios (beyond my own expertise), it’s to do with how their study doesn’t seem to adequately address prior probabilities in the actual design stage of their analysis.
For a bit of fun, I’ll reproduce the sort of question that only 15% of doctors calculated correctly, because it includes a basic Bayesian element, but is not horrendously complicated at all. Now this is one question that I personally would have got correct as an undergrad:
Oz, you can probably look at that and immediately predict where most folks will go wrong, which involves treating two populations with separate probabilities as if they have the same probability. This is exactly the same mistake that the authors of the paper appear to have made in not adequately factoring significant factors into their adjusted ratios and in excluding subjects based on assumptions rather than data about their risk status.
Yes, about a 1/8 chance she has cancer. Humans are not good at false positives. On the general subject of the paper, the conceptual design is more flawed than the statistical issues, but that’s been commented on, so I didn’t bother. Generally, if the conceptual design is flawed, I take it for granted that the methodology will also be screwed.
The article I read (given to me by my private obstetrician…) had a comment from Robson that they didn’t control for SES and that SES does have a huge impact on pregnancy/birth.
Christ knows if I’d tried to do the last four months in my old job, on my old wage, I wouldn’t be nearly as healthy as I am now. I got to take sick leave without hassle when I was throwing up. I got to ease off on certain schedules. I got to eat whatever I thought I could keep down, not just whatever I could afford. I still managed to lose weight though. Not enough to worry, since I’ve got padding, but enough that vitamin intake is of concern. Because you can be fat (like lots of low SES people, including women) and lacking in nutrients (because the same reason you’re fat is the same reason your body doesn’t work well). So SES affects maternal health, including mental health, and that impacts birth outcome.
My ob. didn’t mention the under-reported perineal trauma either.
The SMH article gives the game away: “The study was released early online in anticipation of the Federal Government’s maternity services review, which could come out as early as this week.”
And am I being too cynical in noting that every other MJA article is frustratingly paywalled, but this one is freely available to all to visit the site? The AMA ostensibly pushing a political agenda through its journal? Never!!
tigtog: what’s the answer to the question?
Oz Ozzie says 1/8, I say about 8%. Those numbers are really not the same!
(I’m in my first year of a maths degree – we’ve just started looking at Bayes’ stuff… so if I’ve got it wrong, I’ll be sad!)
Rachel, you’ve got it right. Most people find it easier to actually get it right if the rates are expressed as x per 10,000 people rather than as a straight percentage, because then one can always double-check one’s work by making sure that the total is still 10,000. (An old trick my maths teacher in Year 9 taught me.)
So 100 women from 10,000 who have routine screening will actually have breast cancer (datum 1). Of those women, 80 from the 100 will receive a positive mammography test result (datum 2). Of the remaining 9,900 women who do not have breast cancer, 950 of them will also receive a positive mammography test result (datum 3), making a total of 1030 positive test results (datum 4) but only 80 of those women who actually have cancer (back to datum 2). Therefore the probability of any one woman with a positive mammography test result actually having cancer is 80/1030, which is 7.8% or about 1/13.
The two most common mistakes made are to ignore that there is a population of women with cancer and a population of women without cancer pre-existing the application of the mammography (and the mammography itself changes nothing about the rate of cancer) and then to overlook the false positive rate.
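For anyone who wants to check the counting above, here it is as a short script using the same figures (1% prevalence, 80% true positive rate, 950 false positives among the 9,900 women without cancer):

```python
# Bayes check for the mammography example, per 10,000 women screened.
population = 10_000
with_cancer = 100                    # 1% prevalence
true_positives = 80                  # 80% of the 100 with cancer
false_positives = 950                # out of the 9,900 without cancer
all_positives = true_positives + false_positives   # 1,030 positive tests

# The probability a positive result means actual cancer is the share
# of positives that come from the with-cancer group.
p_cancer_given_positive = true_positives / all_positives
print(f"{p_cancer_given_positive:.1%}")
# 7.8%
```

The per-10,000 bookkeeping also makes the double-check trivial: 80 + 20 + 950 + 8,950 still sums to 10,000.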
Rachel you can check your working here. FWIW (not much! I boggle anew each time I work through a problem) I think you are right.
My faith in my ability to count is restored 🙂
I’ve never heard of the x per 10,000 people thing before, but it kind of makes sense. Your teacher was obviously a misunderstood genius, tigtog!
Also, perhaps this shouldn’t come as a great surprise to me, but you do your maths differently. I really had to think about what I’d done – all of the data was labeled in a way I’m not used to. It was interesting though – thanks, Su!
He taught the x per 10,000 idea as a back-up for whenever we weren’t sure that we’d applied first principles correctly. I wrote it out the way I did above to be clear to those who aren’t working with maths notation all the time, so I’m not surprised that it ended up being a very different method from what you’re being taught as first principles using proper Bayesian notation. (It’s also an awfully long time since I actually did a stats calc myself, so it saved me some confusion).
Getting more technical, there’s the prior probability of 1% breast cancer incidence, then there’s the two conditional probabilities that have to be taken into account (true positive rate for those with cancer, false positive rate for those without cancer) – you only get the correct answer if you use all these figures and apply them in the proper order.
Which is where the study in the MJA misses out – their study doesn’t include relevant figures in their calculations. They do talk about SES in the discussion section of the paper and how they couldn’t factor for it because it wasn’t included in the database, and they do mention that their exclusions for potential bias towards predicted adverse outcomes may have in fact introduced a different bias to the results, but it’s a caveat right towards the end and that point has not been highlighted at all in any MSM coverage that I’ve seen.
Problems with the one-sided contributions so far, if any of you are actually interested in what lessons may be learned from the paper:
1. One of the authors is Prof Liz Sullivan, a respected public health epidemiologist working at the Uni NSW Perinatal and Reproductive Epidemiology Research Unit, and a strong advocate for the public health system. I don’t think she would risk her professional reputation by co-authoring the paper if she felt that it was methodologically unsound. She certainly doesn’t have the conflict of interest that you presume the lead author has.
2. Many of the SES and demographic risk factors increase the risks for private patients ie older mothers, higher proportion of first time mothers, and significantly higher proportion of previous caesarean delivery.
The conclusion of the authors – that the often-quoted assumption that populations undergoing increased interventions receive no benefit from those interventions does not hold up – is pretty modest, and is supported by the evidence presented.
Devil’s Advocate: Liz Sullivan is the third author. We do not know how much input she has had into the way the statistics have been presented and used.
What we do know is that the press release from her university on this study looks very, very different from the way it’s being spun elsewhere. You can read that press release here. It includes this opener:
The adjusted odds ratios on perinatal outcome are worse for the public hospitals than the unadjusted odds ratio. That seems strange if they are also saying that public hospitals get more high risk patients.
It may be that risk factors for maternal age and first time mothers outweigh the risk factors in smoking, diabetes, remoteness and indigeneity.
However, one of the factors they are adjusting for in the odds ratios is method of birth. This may be a major factor in making the adjusted odds ratio for the public hospitals worse than the unadjusted odds ratio.
How can they adjust for method of birth and then say the results show that private hospitals have better results because they choose more interventions?