
Analysis of the APA Task Force on Abortion and Mental Health by Dr. Priscilla Coleman

Critique of the APA Task Force on Abortion and Mental Health
Priscilla K. Coleman, Ph.D.
Bowling Green State University
August 13, 2008

This document is not copyrighted and may be distributed or quoted directly without the author's permission.

The charge of the APA Task Force on Abortion and Mental Health was to collect, examine, and summarize peer-reviewed research published over the last 17 years pertaining to outcomes associated with abortion. Evidence described below indicates an extensive, politically motivated bias in the selection of studies, in the analysis of the literature, and in the conclusions derived by the Task Force. Rather than bringing light to a complex literature, the misleading report carries enormous potential to hinder scientific understanding of the meaning of abortion in women's lives. The report should be recalled and, at a minimum, the conclusion changed. There are sufficient data in the world's published literature to conclude that abortion increases the risk of anxiety, depression, substance use, and suicide. At this juncture, the APA cannot be trusted to provide an accurate assimilation of the information.

Problematic Features of the Report Substantiated in this Critique

1. The conclusion DOES NOT follow from the literature reviewed.
2. When comparing reviews of the literature, there is selective reporting.
3. Avoidance of quantification.
4. Biased selection of Task Force members and possibly reviewers.
5. The power attributed to cultural stigmatization in women's abortion-related stress is unsupported.
6. The selection criteria resulted in dozens of studies indicating negative effects being ignored.
7. Methodologically based selection criteria, as opposed to geographic locale, should have been employed and consistently applied.
8. Shifting standards of evaluation were applied to studies based on their conclusions' fit with a pro-choice agenda.

The conclusion (quoted below) DOES NOT follow from the literature reviewed

"The best scientific evidence published indicates that among adult women who have an unplanned pregnancy the relative risk of mental health problems is no greater if they have a single elective first-trimester abortion than if they deliver that pregnancy."

The Task Force also notes, "Rarely did research designs include a comparison group that was otherwise equivalent to women who had an elective abortion, impairing the ability to draw conclusions about relative risks." They are essentially basing the final conclusion of the entire report on one study, Gilchrist et al. (1995), which has a number of flaws that are ignored. The three studies that I authored or co-authored using unintended pregnancy delivered as a comparison group indicated that abortion was associated with more mental health problems. A few flaws of the Gilchrist study are highlighted below:

1. The response rate was not even provided.
2. There were very few controls for confounding third variables. The comparison groups may very well have differed systematically with regard to income, relationship quality including exposure to domestic violence, social support, and other potentially critical factors.
3. On page 247 the authors report retaining only 34.4% of the termination group and only 43.4% of the group that did not request a termination at the end of the study. The attrition rate is highly problematic, as are the differential rates of attrition across the comparison groups. Logically, those traumatized are less likely to continue in a study.
4. No standardized measures for mental health diagnoses were employed, and the evaluation of the psychological state of patients was reported by general practitioners, not psychiatrists. The GPs were volunteers, and no attempt was made to control for selection bias.

When comparing reviews of the literature, there is selective reporting

The report's review of Bradshaw and Slade (2003) ignores this statement from the abstract: "Following discovery of pregnancy and prior to abortion, 40–45% of women experience significant levels of anxiety and around 20% experience significant levels of depressive symptoms. Distress reduces following abortion, but up to around 30% of women are still experiencing emotional problems after a month." Also ignored from Bradshaw and Slade (2003) is the following: "The proportion of women with high levels of anxiety in the month following abortion ranged from 19-27%, with 3-9% reporting high levels of depression. The better quality studies suggested that 8-32% of women were experiencing high levels of distress." Coleman is quoted from testimony given in South Dakota rather than from the two reviews she has published in prestigious peer-reviewed journals. There is a claim that other reviews, such as those of Coleman and a very strong quantitatively based review by Thorp et al. (2003), are incorporated, but the conclusions of these reviews are avoided entirely.

Avoidance of quantification

The authors of this report avoid quantifying the number of women likely to be adversely affected by abortion. This is an odd omission of potentially very useful summary information. There is consensus among most social and medical science scholars that a minimum of 10 to 30% of women who abort suffer from serious, prolonged negative psychological consequences (Adler et al., 1992; Bradshaw & Slade, 2003; Major & Cozzarelli, 1992; Zolese & Blacker, 1992). With nearly 1.3 million abortions performed each year in the U.S. (Boonstra et al., 2006), the conservative 10% figure yields approximately 130,000 new cases of mental health problems each year. In the report the authors note, "Given the state of the literature, a simple calculation of effect sizes or count of the number of studies that showed an effect in one direction versus another was considered inappropriate." What? Too few studies to quantify, yet a sweeping conclusion can be made?
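To make the arithmetic behind these figures explicit, the following is a minimal back-of-the-envelope sketch using only the numbers cited above (the 10 to 30% range and roughly 1.3 million abortions per year); it is illustrative only and adds no data beyond those sources.

```python
# Back-of-the-envelope sketch using the figures cited above (Adler et al., 1992;
# Bradshaw & Slade, 2003; Boonstra et al., 2006). Illustrative only.
annual_abortions_us = 1_300_000           # approximate annual U.S. abortions
low_rate, high_rate = 0.10, 0.30          # cited range of serious, prolonged negative outcomes

low_estimate = annual_abortions_us * low_rate     # conservative estimate
high_estimate = annual_abortions_us * high_rate   # upper end of the cited range

print(f"Estimated new cases per year: {low_estimate:,.0f} to {high_estimate:,.0f}")
# -> Estimated new cases per year: 130,000 to 390,000
```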
Biased selection of Task Force members and reviewers

No information whatsoever is provided in the report regarding how the Task Force members were selected. What was done to ensure that the representatives do not all hold similar ideological biases? What was the process for selecting and securing reviewers? How many were offered the opportunity? Did any decline? How was reviewer feedback incorporated into revising the document? Very minimally, from this reviewer's vantage point. Disclosure of this information is vital for credibility and accountability purposes.

The power attributed to cultural stigmatization in women's abortion-related stress is unsupported

Few well-designed studies have been conducted that support this claim. In fact, many studies indicate that internalized beliefs regarding the humanity of the fetus; moral, religious, and ethical objections to abortion; and feelings of bereavement/loss often distinguish those who suffer from those who do not (see Coleman et al., 2005 for a review).

The selection criteria resulted in dozens of studies indicating negative effects being ignored

According to the report, "The TFMHA evaluated all empirical studies published in English in peer-reviewed journals post-1989 that compared the mental health of women who had an induced abortion to the mental health of comparison groups of women (N=50) or that examined factors that predict mental health among women who have had an elective abortion in the United States (N=23)." Note that the second type of study is conveniently restricted to the U.S., resulting in the elimination of at least 40 studies. As a reviewer, I summarized these and sent them to the APA. The rationale offered (cultural variation) is insufficient to justify focusing exclusively on U.S. studies of this type. Introduction of this exception allowed the Task Force to ignore studies such as a large Swedish study of 854 women one year after an abortion, which incorporated a semi-structured interview methodology requiring 45-75 minutes to administer (Soderberg et al., 1998). Rates of negative experiences were considerably higher than in previously published studies relying on more superficial assessments. Specifically, 50-60% of the women experienced emotional distress of some form (e.g., mild depression, remorse or guilt feelings, a tendency to cry without cause, discomfort upon meeting children), 16.1% experienced serious emotional distress (needing help from a psychiatrist or psychologist or being unable to work because of depression), and 76.1% said that they would not consider abortion again (suggesting indirectly that it was not a very positive experience).

Methodologically based selection criteria, as opposed to geographic locale, should have been employed and consistently applied

If the Task Force members were interested in providing an evaluation of the strongest evidence, why weren't more stringent criteria employed than simply the publication of empirical data related to induced abortion, with at least one mental health measure, in peer-reviewed journals in English, on U.S. and non-U.S. samples (for one type of study)? Employment of methodological criteria in selection would certainly have simplified the task of evaluation as well. Sample size, sample characteristics and representativeness, type of design, employment of control techniques, discipline published in, and so on are logical places to begin, as sketched below. I am shocked not to see the development of criteria that reflect knowledge of this literature.
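As a purely hypothetical illustration of how such methodological criteria could be operationalized and applied uniformly, a simple scoring scheme might look like the sketch below. The specific criteria names, weights, and cut-off are assumptions chosen for illustration, not anything proposed in the report or in this critique's sources.

```python
# Hypothetical illustration only: the criteria, weights, and threshold below are
# assumptions used to show how methodological (rather than geographic) inclusion
# criteria could be applied uniformly to every candidate study.

CRITERIA_WEIGHTS = {
    "representative_sample": 2,    # probability or population-based sample
    "adequate_sample_size": 1,     # large enough for stable estimates
    "prospective_design": 2,       # longitudinal follow-up rather than a one-shot survey
    "standardized_measures": 2,    # validated diagnostic instruments
    "controls_for_confounders": 2, # prior mental health, income, exposure to violence, etc.
    "low_attrition": 1,            # acceptable, comparable retention across groups
}

def study_score(features: dict) -> int:
    """Sum the weights of the criteria a study satisfies."""
    return sum(w for name, w in CRITERIA_WEIGHTS.items() if features.get(name))

def include_study(features: dict, threshold: int = 6) -> bool:
    """Apply the same cut-off to every study, regardless of country of origin."""
    return study_score(features) >= threshold

# Example with made-up feature flags for a hypothetical study:
example = {"representative_sample": True, "adequate_sample_size": True,
           "prospective_design": True, "standardized_measures": True,
           "controls_for_confounders": True, "low_attrition": False}
print(study_score(example), include_study(example))  # -> 9 True
```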
Shifting standards of evaluation were applied to studies based on their conclusions' fit with a pro-choice agenda

There are numerous examples of studies with results suggesting no negative effects of abortion being reviewed less extensively and stringently than studies indicating adverse effects. Further, the positive features of the studies suggesting abortion is a benign experience for most women are highlighted, while the positive features of the studies revealing adverse outcomes are downplayed or ignored. All of the studies showing adverse effects were published in peer-reviewed journals, many in very prestigious journals with low acceptance rates. Clearly, then, these studies have many strengths that outweigh their limitations. The same standards and criteria are simply not applied uniformly and objectively in the text, and I could write pages pointing out examples of this blatantly biased survey of the literature. A few examples are provided below.

a. The Medi-Cal studies are sharply criticized for insufficient controls; however, with the use of a large, socio-demographically homogeneous sample, many differences are likely to be distributed evenly across the groups. Moreover, the strengths of these studies include the use of actual claims data (diagnostic codes assigned by trained professionals), which eliminates the problems of simplistic measurement, concealment, recruitment, and retention, all of which are serious shortcomings of many post-abortion studies. The authors of the Medi-Cal studies also removed all cases with previous psychological claims and analyzed data over an extended time frame, with repeated measurements enabling more confidence regarding the causal question.

b. Results of the Schmiege and Russo (2005) study are presented as a superior revision of the Reardon and Cougle (2002) study, yet none of the criticism that was publicly leveled against the Schmiege and Russo study on the BMJ website is described. I contributed to this Rapid Response dialogue and I reiterate a few of my comments here: "The analyses presented in Table 3 of the article do not incorporate controls for variables identified as significant predictors of abortion (higher education and income and smaller family size). These associations between pregnancy outcome and depression are troubling since lower education and income and larger family size predicted depression (see Table 4). Without the controls, the delivery group, which is associated with lower education and income and larger families, will have more depression variance erroneously attributed to pregnancy resolution. Among the unmarried, white women, 30% of those in the abortion group had scores exceeding the clinical cut-off for depression, compared to 16% of the delivery group. Statistical significance is likely to have been achieved with the controls instituted. This group is important to focus on as unmarried, white women represent the segment of the U.S. population obtaining the majority of abortions. Failure to convey the most scientifically defensible information is inexcusable when the data set contains the necessary variables. I strongly urge the authors to run these analyses. Curiously, in all the comparisons throughout the article, the authors neglect to control for family size without any explanation."
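The omitted-confounder concern raised in (b) can be illustrated with a small, purely hypothetical simulation. The variable names, effect sizes, and distributions below are invented for illustration and are not drawn from the Schmiege and Russo data; the point is generic: when a variable such as education predicts both group membership and the outcome, leaving it out of the model shifts outcome variance onto the group indicator.

```python
# Hypothetical simulation of omitted-variable (confounder) bias. All numbers are
# invented for illustration; nothing here is drawn from the studies discussed above.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

education = rng.normal(0.0, 1.0, n)                      # confounder
# Group membership depends on the confounder (higher education -> group 1).
group = (education + rng.normal(0.0, 1.0, n) > 0).astype(float)
# Outcome depends on the confounder but NOT on group membership (true group effect = 0).
outcome = -0.5 * education + rng.normal(0.0, 1.0, n)

def ols_coef(X, y):
    """Return OLS coefficients for a design matrix X that includes an intercept column."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
naive = ols_coef(np.column_stack([ones, group]), outcome)[1]
adjusted = ols_coef(np.column_stack([ones, group, education]), outcome)[1]

print(f"group effect without the confounder controlled: {naive:+.3f}")
print(f"group effect with the confounder controlled:    {adjusted:+.3f} (true effect is 0)")
```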
c. Fergusson and colleagues' (2006) study had numerous positive methodological features: (1) it was longitudinal in design, following women over several years; (2) it employed comprehensive mental health assessments using standardized DSM-III-R diagnostic criteria; (3) estimated abortion concealment rates were considerably lower than in previously published studies; (4) the sample represented between 80% and 83% of the original cohort of 630 females; and (5) the study used extensive controls. Variables that were statistically controlled in the primary analyses included maternal education; childhood sexual abuse; physical abuse; child neuroticism; self-esteem; grade point average; child smoking; history of depression, anxiety, and suicidal ideation; living with parents; and living with a partner. Very little discussion in the report is devoted to the positive features of this study, while its limitations, which are few compared to most published studies on the topic, are emphasized.

d. Attrition as a methodological weakness is downplayed, because the studies with the highest attrition rates (those by Major et al.) are also the ones that provide little evidence of negative effects; they are embraced despite attrition as high as 60%. Common sense suggests that those who are most adversely affected are the least likely to want to think about the experience and respond to a questionnaire. Research indicates that women who decline to participate or neglect to provide follow-up data are more likely to be negatively impacted by an abortion than women who continue participating (Soderberg, Anderson, Janzon, & Sjoberg, 1997).

Suffice it to say, there is clear evidence of bias in reporting, in keeping with the rather transparent agenda of discrediting studies showing negative effects regardless of their true methodological rigor. I strongly recommended evaluating only studies that met stringent inclusion criteria and then summarizing the studies in table format in such a way that the reader can quickly note the strengths and limitations of every study in a non-biased manner. Picking and choosing particular criteria from a large assortment of methodological criteria to evaluate various studies is inappropriate, suggestive of bias, and obscures the informative literature that is currently available. Lack of uniform application of evaluation standards creates a warped perception of the relative contributions of the studies.

The following quote from the editors of the Canadian Medical Association Journal (CMAJ) would have been insightful to the Task Force members as they incorporated feedback and endeavored to produce a report in keeping with their charge of objective assessment:

"The abortion debate is so highly charged that a state of respectful listening on either side is almost impossible to achieve. This debate is conducted publicly in religious, ideological and political terms: forms of discourse in which detachment is rare. But we do seem to have the idea in medicine that science offers us a more dispassionate means of analysis. To consider abortion as a health issue, indeed as a medical "procedure," is to remove it from metaphysical and moral argument and to place it in a pragmatic realm where one deals in terms such as safety, equity of access, outcomes and risk–benefit ratios, and where the prevailing ethical discourse, when it is evoked, uses secular words like autonomy and patient choice." (CMAJ, 2003, p. 169)