By Dave Andrusko
I’m guessing that were I to run a LexisNexis search, I’d find an endless stream of stories about a recent study purporting to “prove” that women do not regret their abortions, each parroting the study’s conclusions down to the last syllable. Indeed, having read the original study in Social Science & Medicine, it’s fair to say that some news accounts went even further in “proving” that it is absurd to think that women might regret taking their baby’s life. (Not to mention the responses of the study’s lead author to media inquiries. Freed of the pretense of objectivity, she laid into pro-life legislation with a partisan’s passion.)
As I wrote yesterday, the only story I encountered that gave critics a fair shake appeared in the Washington Post.
Then, lo and behold, a friend forwarded me a story today from (of all places) Mother Jones which did an exquisite job pointing out the study’s flaws.
Do read Kevin Drum’s story for yourself. The headline is likely tongue in cheek and/or was written by someone else: “That Recent Abortion Study Is . . . Maybe Not the Final Word.” Clearly it isn’t the final word, as Drum methodically points out.
The single most important consideration paralleled what we wrote on Wednesday. After quoting a passage from a story written by Robin Abcarian, a long-time abortion apologist for the Los Angeles Times, Drum writes
“Abcarian doesn’t mention the real criticism of Rocca’s study: namely that it included only women who agreed to participate in the first place. In other words, her sample is self-selected, not random.”
(Not mentioned, but right up there among the things that make the study so shaky, is that women who did NOT abort were not interviewed at all, as Dr. Randall K. O’Bannon, NRLC Director of Education & Research, observed.) UCSF epidemiologist Corinne Rocca, the lead author, offered a painfully lame excuse, which Drum was too kind to really bash.
Basically, Rocca says that 38% [self-selected] enrollment is good enough and that
We have no reason to believe that women would select into the study based on how these emotions would evolve over three years.
Drum cuts her lots of slack and then cuts to the core of the study’s [non-]representativeness:
It’s true that longitudinal studies, by definition, include only people who agree to be part of the study in the first place. If you’re studying some concrete physical phenomenon like lead poisoning or the effect of a new drug, that’s probably OK. But this is precisely why longitudinal studies aren’t generally useful for assessing things like emotional states, which can easily affect participation in unexpected ways. Rocca says “we have no reason to believe” that happened here, but that’s a pretty lackadaisical approach to legitimate criticism. I can think of half a dozen reasons off the top of my head why emotional states might affect participation. Maybe people who feel guilt are less likely to want to be reminded of it periodically for the next five years. Maybe introverts are less likely to participate. Maybe women with traditional upbringings are less likely to participate. Etc. This is an endless list, and “no reason to believe” mostly suggests that nobody bothered looking.
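To make Drum’s point concrete, here is a minimal simulation sketch. It is my own illustration, not anything from Drum or from Rocca’s study, and the regret and enrollment rates in it are invented for the example: if women who feel regret are even modestly less likely to enroll, a self-selected sample will understate the true rate of regret.

```python
import random

random.seed(0)

# Illustrative assumptions only (NOT figures from the study):
# - 30% of the full population of women experiences regret.
# - Women who feel regret enroll at a lower rate than those who don't.
TRUE_REGRET_RATE = 0.30
ENROLL_IF_REGRET = 0.19      # chosen so overall enrollment lands near 38%
ENROLL_IF_NO_REGRET = 0.46

population = 100_000
enrolled_total = 0
enrolled_regret = 0

for _ in range(population):
    regrets = random.random() < TRUE_REGRET_RATE
    enroll_p = ENROLL_IF_REGRET if regrets else ENROLL_IF_NO_REGRET
    if random.random() < enroll_p:
        enrolled_total += 1
        enrolled_regret += regrets

print(f"True regret rate in population:  {TRUE_REGRET_RATE:.0%}")
print(f"Overall enrollment rate:         {enrolled_total / population:.0%}")
print(f"Regret rate among enrollees:     {enrolled_regret / enrolled_total:.0%}")
```

Under these made-up numbers, roughly 38% of the simulated population enrolls, yet the regret rate measured among enrollees comes out at about half the true rate. That is exactly the kind of distortion a self-selected sample can produce, and “we have no reason to believe” it happened is not evidence that it didn’t.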
Drum is no pro-lifer, which makes his analysis harder for pro-abortionists to explain away. Indeed, first he writes
None of this means the study’s conclusions are wrong. My own guess, based on other research, is that it’s basically correct.
Only to conclude (honestly)
That said, I just don’t see how you can claim to get any kind of reliable results from an extremely non-random sample like this. This is true both for studies we agree with and those we don’t.