Monthly Archives: December 2013

Quack is as quack does: How modern pharmaceutical companies have become quacks.

Alicia M. Alexander is a second-semester senior at Grinnell College. She studied Psychology with a focus in Neuroscience. Outside of her coursework, she enjoys going on adventures, listening to vinyl records, and snuggling with her personable cat Waffles.

Americans have become increasingly wary of pharmaceuticals, the pharmaceutical industry, and the relationships between pharmaceutical companies and doctors. While dubious marketing practices continue to result in marketing settlements, more and more books by investigative journalists, such as Jacky Law’s Big Pharma, are hitting the market to expose the pharmaceutical industry’s influence on health care. As direct-to-consumer (DTC) advertising continues to shape the public’s perceptions of health, illness, and cures, these ethically questionable promotions signal the re-emergence of yesterday’s patent medicine peddlers.

While there are many definitions of “quacks,” this analysis is specifically concerned with marketing behavior [1] undertaken for promoter profit rather than public good. Quacks are people who define normal life problems in medical terms in order to create a market in which consumers will purchase the advertised product, or cure, in the hope of regaining health. It is important to clarify that medications are not bad in and of themselves; rather, it is the presentation of such medications for profit that is under examination here. Drugs such as insulin, vaccines, antibiotics, and antihistamines (to name just a few) have saved millions of lives since their discovery and are undeniable medical advancements.

The object of interest, therefore, is the direct-to-consumer advertising of drugs as cures for problems that are not inherently medical. Because such advertising techniques are critical in shaping the public’s perception of health and illness, as well as the relationship between doctor and patient, several ethical considerations accompany DTC marketing. Big pharma, however, has seemingly ignored these in the interest of company profit. These are the all-too-familiar characteristics of the quacks that have plagued America for centuries.

Quackery was introduced to Colonial America through English nostrum imports during the 18th century [2]. Originally, simple ads listing the names of the imported goods appeared in newspapers. The real advertising was aimed directly at consumers through elaborate, removable labels wrapped around distinctively packaged bottles. As the American Revolution erupted, trade became more restricted, imports were greatly reduced, and the distinctive bottles were refilled with American counterparts. Though British imports resumed after the war, they were noticeably more expensive than local remedies. Whereas before the war there had been no incentive for elaborate newspaper advertisements, since the proceeds would mainly support overseas producers, the postwar shift toward promotional pamphlets and medicine shows signaled a new American interest in profit.

Rooted in theatrical performance [3], quacks appealed to potential consumers by catering to folk beliefs, attracting crowds through singing, and reassuring people that their products would cure their ills. These were salesmen of the most charming variety, who boosted their legitimacy by using medical terminology to describe everyday problems. By the early 19th century, their advertisements produced most of the income for newspapers. With the advent and boom of radio in the early 20th century, companies could continue to strengthen their marketing relationship with the media through DTC advertising, a relationship profitable to both parties. “In 1934, radio grossed $72,887,000 in advertising, more than 80% of which went to the advertising of drugs, foods, and other convenience items” [4]. Adjusted for inflation, that is roughly $1.24 billion today.

After slogans such as “the quickest way to a woman’s lips is in her ears” [4] quickly proved true, the Standards of Practice for Radio Broadcasters of the United States of America issued a regulation for radio advertisements in 1937. Before this point, the 1906 Pure Food and Drug Act had attempted to promote public safety by requiring manufacturers to list active ingredients on their labels; while the act brought ingredients into awareness, it did not regulate advertising. The new radio regulation meant that broadcasters could report only the name of the program sponsor rather than air DTC advertisements. By 1938, Congress was beginning to define which drugs could be used and who could administer them. The Federal Food, Drug, and Cosmetic Act of 1938 brought safety to the forefront of marketing regulations by requiring proof of product safety prior to advertising; further drug restrictions followed.

The period between 1951 and 1970 was marked by increasing legislation on drug safety, drug restriction, and the drug information provided to consumers. Drug advertising had been directed at physicians in journals rather than at consumers. Then Reagan took office, and the 1980s began the shift from “the golden age of doctoring” to an “increasingly buyer-driven system” [5]. With this change in attitudes toward wealth, pharmaceutical companies – like other big industries – embraced the pursuit of fortune. Patents, intended as a financial incentive for pharmacological research [6], became a method of accruing profit rather than producing better pharmaceuticals as companies teamed up with taxpayer-funded university research departments. The Hatch-Waxman Act of 1984 [see page 25] extended patent life from an average of 8 years in 1980 to an average of 12 years by 1993, and subsequent acts throughout the 1990s increased the average patent life to almost 16 years, assuring an increase in company revenue.

Because pharmaceutical companies – assuming they fund their own research – recoup their expenditures once drugs are patented and approved for marketing, patent expiration is a revenue killer. Once generic drugs can be marketed and sold, developer profit drops sharply; Zantac sales, for example, decreased by 90% within four years of the generic release [see p. 245]. Three components, then, contribute to product sales. First, brand-name recognition becomes important for continued selling, because patients as consumers will recognize the brand and request it from their physicians. Second, DTC advertising is a way to familiarize patients with that name. Finally, marketing drugs for multipurpose use keeps them on the market.

The need for marketing regulations in the 1930s illustrates the power of advertising over consumer demand. Setting those regulations aside in the late 20th century, the Food and Drug Administration Modernization Act (FDAMA) expanded the previously reined-in freedom to advertise medication [5]. As patient-consumers are bombarded with advertisements that frame everyday situations around suggestions of possible medical explanations, these non-medical moments become illnesses to be treated with the marketed product. Now that regulations are no longer as strict as they were earlier in the 20th century, today’s advertisements are frighteningly similar to the advertisements of the proclaimed cure-alls of America’s past. It is time for the public to inquire and the government to reflect on medicine’s historical contexts.


  1. Porter, Roy. Health for Sale: Quackery in England, 1660-1850. Manchester, England: Manchester University Press, 1989.
  2. Young, James Harvey. The Medical Messiahs: A Social History of Health Quackery in Twentieth-Century America. Princeton, NJ: Princeton University Press, 1967.
  3. Anderson, Ann. Snake Oil, Hustlers and Hambones: The American Medicine Show. Jefferson, NC: McFarland & Co., 2004.
  4. Fowler, Gene, and Bill Crawford. Border Radio. Austin, TX: Texas Monthly Press, 1987.
  5. Conrad, Peter. The Medicalization of Society: On the Transformation of Human Conditions into Treatable Disorders. Baltimore: Johns Hopkins University Press, 2007.
  6. Brekke, Kurt R., and Odd Rune Straume. “Pharmaceutical Patents: Incentives for Research and Development or Marketing?” Southern Economic Journal 76, no. 2 (2009): 351-374.


A Tale of Two Diseases: ADHD and Neurasthenia

Jeanette Miller is completing her undergraduate studies in English at Grinnell College in Grinnell, IA. She enjoys writing poetry, listening to folk music, and drinking an occasional cup of tea.

Consider two diseases: Disease A and Disease B. Children with Disease A are described as being “excitable” and “precocious,” at risk of being “overstimulated.” Thus, they are unable to balance “academic, intellectual, and physical growth” (Schuster, 116). Children suffering from Disease B, on the other hand, are “active, restless, and fidgety” and have difficulty “sustaining attention to tasks, persistence of effort, or vigilance” (Barkley, 57). At first glance, the symptoms of the two diseases in children seem oddly similar. Yet these are two distinct diseases that have never overlapped in time. The former, Neurasthenia, was popularized in the nineteenth century and diagnosed primarily in adults but often in children. The latter, Attention-Deficit/Hyperactivity Disorder, or ADHD, did not enter the mass consciousness until the latter part of the twentieth century and is diagnosed primarily in children but often in adults. If a child with ADHD were to travel backwards in time seeking psychiatric help, my money is on a Neurasthenia diagnosis.

Neurasthenia was the “popular disease” of the nineteenth century. Characterized by a lack of nerve energy, its symptoms ranged wildly from depression to hyperactivity to indigestion. Since almost anything abnormal could be attributed to Neurasthenia, the disease became an umbrella term. ADHD gained momentum more than a century later. Generally understood today as a lack of inhibition in the cerebral cortex, its symptoms include hyperactivity, inattentiveness, distraction, and failure to complete tasks effectively. Both definitions thus attempt to characterize abnormal or unproductive behaviors as treatable diseases.

I am not suggesting that the symptoms of ADHD are fabricated; the symptoms and their effects on people’s lives are very real. Yet, many of these same symptoms were ascribed to an entirely different disease in the previous century. The fact that physicians in different eras have pathologized this set of symptoms speaks to the arbitrariness of the diagnosis and the significance of the broader social context. Or, as David Schuster puts it in his book on Neurasthenia, “Depression, irritability, insomnia, lethargy, indigestion, and pain—these have long been part of what it means to be human.” It wasn’t until the 19th century that these “unfortunate but entirely normal aspects of life…had begun to represent something: the intolerable symptoms of disease.” (1)

The instability of diagnostic categories doesn’t make the lived experience of these ailments any less real. Try telling a depressed person that their disease isn’t real because it wasn’t a diagnostic category until 1980—actually, please don’t. Instead, ask why these diseases have been constructed in such divergent ways over the past couple of centuries. Neurasthenia was considered the American Disease, conceived of as a side effect of a rapidly industrializing, schedule-oriented society. While proponents of Neurasthenia looked to the outside world to explain physical and emotional abnormality, the ADHD framework does the opposite.

So, where do we put the blame? Neurasthenia over-emphasized external causes for abnormality, while ADHD emphasizes internal causes to explain deviant behavior. Perhaps, as Conrad argues in his groundbreaking piece “The Discovery of Hyperkinesis: Notes on the Medicalization of Deviant Behavior,” this reflects a broader shift in society’s “individualization of social problems” (19). By casting Hyperkinesis, or ADHD, as an illness, we ignore the possibility that the behaviors in question are means of adaptation rather than an illness, diverting attention from “the family and school and from seriously entertaining the idea that the ‘problem’ could be in the structure of the social system” (19). And isn’t it easier to blame ADHD on “a glitch in the brain that could be tweaked with stimulant drugs” (Smith 98) rather than on problems in the school or family?

Somewhere between Neurasthenia and ADHD, society shifted in its understanding of abnormal behaviors. The blame shifted from the society to the individual, or more specifically, from society to an isolated brain dysfunction—a dysfunction that can only be treated with medication. We treat the symptoms without fully understanding their origin. Reducing complicated behavior patterns to neurological abnormalities may prevent us from seeing how these “problems” fit into the broader experience of the individual.

And the irony behind all of this? By placing the blame on the individual, the individual is removed from the equation. Disease, as we understand it, has become individual yet impersonal—and thus is approached with a limited perspective.


Conrad, Peter. “The Discovery of Hyperkinesis: Notes on the Medicalization of Deviant Behavior.” Social Problems 23, no. 1 (October 1975): 12–21.

Schuster, David G. Neurasthenic Nation: America’s Search for Health, Happiness, and Comfort, 1869-1920. New Brunswick, NJ: Rutgers University Press, 2011.

Smith, Matthew. Hyperactive: The Controversial History of ADHD. Reaktion Books, 2013.



Triage and Trauma Medicine in United States Military History

Amanda Snodgrass is an English major at Grinnell College in Iowa who has an interest in biological chemistry, genetics, and the zombie apocalypse. She has a chocolate cocker spaniel named Buddy and a hedgehog named John. She is a Lord of the Rings, Harry Potter, and BBC Sherlock buff. Live long and prosper.

Military medicine has come a long way since the Civil War. Gone are the days of willy-nilly amputations and a shot of whiskey as anesthetic. Today, military surgeons have better training, more experience, and better tools to save the lives of the soldiers defending our country’s interests. The journey to today’s triage and trauma surgery was dangerous and often uncoordinated. Despite the nation’s originally messy medicine, the United States’ wartime experience has produced a modern, flexible form of military medicine.

During the Civil War, Jonathan Letterman, head of the medical staff of the Army of the Potomac (1862), assembled a small group of medical doctors to work with specific regiments in the Union Army. He introduced the Ambulance Corps, a group dedicated specifically to medical purposes; previous ambulance groups had been disorganized, understaffed, and frequently commandeered by non-medical officers. Letterman assigned each regiment litter bearers and doctors, but, as Kyle Wichtendahl notes, because these regiments were so divided, doctors who “worked for their unit only” were “either swamped with casualties or idle.” One of the most memorable aspects of Civil War medicine is the number and type of amputations performed in the battlefield hospitals, done to lessen the chances of infection from bullets lodged in limbs. Both amputation practice and the Ambulance Corps were developed to help the wounded reach hospitals and survive.

World War I brought new types of weapons, which called for new medical practices. A Belgian doctor, Antoine De Page, created a five-step evacuation system, adopted by the US, that allowed the injured to be removed from the trenches and taken to newly established field hospitals for care.

  1. Remove the injured under cover of darkness: This had to happen at night because the German lines were so close, sometimes only 500 yards away.
  2. Casualty Clearing Station (CCS): De Page found that clearing wounds of all debris and dead tissue aided healing.
  3. Ambulances: The American Red Cross and the YMCA provided drivers’ training so that the injured could be evacuated quickly over potholes and past road bombs.
  4. Postes Avances des Hospitaux du Front: These mobile hospitals, set up several miles behind the lines, cared for the intermediately injured, those who could not travel farther but could be saved.
  5. Pre-Established Hospitals: The injured were then evacuated to hospitals along the coasts of France and Belgium.

Machine gun, shrapnel-flinging shell, and poisonous gas injuries brought soldiers by the hundreds, even thousands, to the CCSs, leading doctors, surgeons, nurses, and anesthetists to form an assessment order that gave priority to the most injured who could still be saved, in order to conserve supplies and energy. This evacuation technique had to be adapted for World War II, as soldiers were no longer stuck in trenches as they had been in World War I.

World War II brought many of the same wounds, giving doctors the chance to develop better ways of healing. Doctors began inspecting the open wounds they worked on, searching for debris, drainage, or edema. If a wound was clean, it would be closed and allowed to heal; if not, it would be treated and observed before closing. The fighting in North Africa was so bloody that the US Army called for the creation of a blood bank. The creation of surgical outfits in the military helped develop new forms of medical aid in the wars to come in the East.

The Korean and Vietnam Wars adapted to their landscapes by creating MASH units, or Mobile Army Surgical Hospitals, deployed throughout Korea and Vietnam. Helicopters became a vital form of transportation for the critically wounded, making the MASH units the first and often only stop for treatment of the general wounded population. According to Robert Love, Tom H. Brooking, and Darryl Tong, specialty surgeries, like use of the newly developed artificial kidney, were performed in pre-established hospitals in large cities. The MASH units began as quick-moving 60-bed mobile units but grew to 200-bed fixed hospitals over the course of the war.

Between the Persian Gulf War and the current war in the Middle East, the United States Army found that rapidly moving fronts no longer allowed for MASH units and thus developed five levels of care: “Level I, front line first aid; Level II, FST [Forward Surgical Teams]; Level III, CSH [Combat Support Hospitals], which is similar to civilian trauma centers; Level IV, surgical hospitals outside the combat zone; and Level V, major US military hospitals.” The response time of evacuation personnel dropped with this framework of care: those injured in Iraq or Afghanistan could be evacuated to a Level II or Level III hospital within 30 to 90 minutes, depending on the situation.

From rampant amputations and disorganized doctors to detailed practices and defined levels of care, US military medicine has changed greatly, each step improving medical practice not only in the US but around the world and saving thousands of lives.


Love, Robert, Tom H. Brooking, and Darryl Tong. “The Management of Maxillofacial Trauma during the Korean War – A Coming of Age of a Specialty.” Journal of Military and Veterans’ Health 19, no. 2 (2011): 10-14.

Manring, M.M., Alan Hawk, Jason H. Calhoun, and Romney C. Andersen. “Treatment of War Wounds: A Historical Review.” Clinical Orthopaedics and Related Research 467, no. 8 (2009): 2168-2191.

Wichtendahl, Kyle. “Dr. Jonathan Letterman: Father of Modern Emergency Medicine.” Civil War Museum – Gettysburg & Antietam Battlefield (accessed September 30, 2013).


Post-traumatic Stress Disorder: The End of a Journey or the Start of a New One?

Marissa Yetter is currently a second year student at Grinnell College. She intends to declare a major in Psychology with a concentration in Neuroscience. She chose this topic to research based on an interest in PTSD and the current research being done to find new and better treatments. She also developed a general interest throughout the course in the development of psychiatry as a legitimate medical field.

On February 6, 2013, The New York Times published an article by James Dao about a team of researchers in the psychiatry department at NYU who had recently begun a five-year study hoping to find biomarkers (concrete biological signals) linked to the existence of post-traumatic stress disorder (PTSD). These markers would provide a more concrete and definite diagnosis for PTSD and could reduce the number of cases that go undiagnosed or are wrongly diagnosed. It has been estimated that over the past decade, half a million war veterans have been diagnosed with PTSD or a traumatic brain injury [1]. But how many of these diagnoses missed something or were wrong? We have no way of knowing whether this number, already shockingly large, should be higher or lower. It is clear, however, that PTSD is a significant problem in our society today, afflicting thousands of soldiers returning from war. It is fascinating that what stands in our way now is finding physical markers to ensure the effects felt by a patient are indeed caused by PTSD, when the condition was once not even recognized as a legitimate mental disorder. Could this biological research be the final step in the long, drawn-out development of PTSD over nearly two centuries?

Evidence of psychological war trauma began to surface in the era of the American Civil War in the 1860s. The Civil War was one of the first conflicts to draw attention to lasting emotional and mental trauma as well as physical trauma. Many soldiers’ reactions to the horrors of combat were simply brushed aside as cowardice or malingering, but it soon became clear that these emotional tolls had a lasting effect on men, plaguing them even after their service was over and making it difficult to readjust to normal life [2]. Shell shock and hysteria became the more commonly used terms for these lasting psychological effects in World War I, the next major instance of combat trauma seen by the U.S. [3]. In the later years of American involvement in World War I, however, psychiatrists began to take more of an interest in classifying these symptoms, labeling them traumatic neurosis, and by the end of the war, military psychology had become its own field of medicine, marking an important step in the rise and recognition of PTSD [4].

By the end of the Second World War and the beginning of the Vietnam War, the criteria for and understanding of PTSD as its own disorder were at long last being established and put into use. Once again, this push forward was driven by the amount of war trauma seen in these two major conflicts. The medical progress that came out of World War II in diagnosing and treating PTSD was accompanied by social changes that led to wider understanding and acceptance of the condition, enabling veterans for the first time to take a stand and lobby the government for care and compensation for psychological injuries sustained while serving in the war [5]. Perhaps the peak of this political and social change surrounding psychological war trauma came with the inclusion of PTSD in the newest version of the DSM, still the most widely used and accredited diagnostic tool in American psychiatric practice today. A full list of symptoms and criteria for diagnosis was published in the DSM-III in 1980, the final step in solidifying the disorder as a legitimate and treatable condition. Many would argue that the story of PTSD ends here. But what of the potential for new findings on biological markers for PTSD?

Research has shown that veterans with PTSD have decreased activity in the frontal cortex of the brain, which is associated with memory deficits. This contributes to a loss of the ability to extinguish or inhibit conditioned fear responses, making it hard to let go of the fearful memories acquired in combat [6]. Genetic studies have also identified a gene linked to a predisposition to PTSD, making some people more susceptible than others to the disorder. The military can now use this gene as a marker for which soldiers are at higher risk of psychological combat trauma [7]. These findings are also being used to modify and improve treatment methods for PTSD.

There is still much room for error and overlap in a DSM diagnosis, and it may be that many of our diagnoses remain inaccurate or incomplete, leading to the mistreatment of the disorder. Much recent discussion stems from the concern that we are now over-diagnosing PTSD: an estimated thirty percent of soldiers returning home from Afghanistan and Iraq are being treated for PTSD, a rate that has increased significantly [8]. Perhaps the discovery of biological indicators that can aid in the diagnosis of PTSD is the only way to determine whether this figure is accurate, ending the medical journey of PTSD’s development. Or perhaps this biological research is the start of a whole new journey. It is yet unknown.


  1. James Dao, “Study Seeks Biomarkers for Invisible War Scars,” The New York Times, February 6, 2013.
  2. John Talbott, “Combat Trauma in the American Civil War,” History Today 46, no. 3 (1996): 41-47.
  3. Peter Leese, Shell Shock: Traumatic Neurosis and the British Soldiers of the First World War. New York: Palgrave, 2002.
  4. R. Levandoski, The Medical Discourse on Military Psychiatry and the Psychological Trauma of War: World War I to DSM-III [thesis]. University of North Carolina at Chapel Hill, 2010.
  5. G. C. Lasiuk and K. M. Hegadoren, “Posttraumatic Stress Disorder Part II: Development of the Construct Within the North American Psychiatric Taxonomy,” Perspectives in Psychiatric Care 42, no. 2 (2006): 72-81.
  6. Society for Neuroscience, “Biomarker for PTSD and Why PTSD Is So Difficult to Treat,” ScienceDaily, November 11, 2007, accessed December 6, 2013.
  7. Elaine Schmidt, “UCLA Study Identifies Genes Linked to Post-traumatic Stress Disorder,” UCLA Newsroom (2012).
  8. Jamie Reno, “The Hero Project,” The Daily Beast, October 10, 2013.


Victims of Forced Sterilization (Then and Now)

Chelsie Salvatera is a sociology major on a pre-medical track at Grinnell College. She currently holds leadership positions in several campus organizations: Sociology Student Educational Policy Committee, Philippine United Student Organization and the Young Gifted and Black Gospel choir. Her professional interests include medicine and public health, specifically minority health and health disparities in the U.S.

When the words “forced sterilization” come up in conversation, we tend to think immediately of an inhumane and disturbing procedure that took place many decades ago. More specifically, between 1909 and 1964, during the eugenics movement, 20,000 men and women in California were forcibly sterilized. The movement in the United States aimed to “better” or “improve” the genetic characteristics of society through sterilization and selective breeding. American eugenicists sterilized those they deemed “unfit” or biologically defective and who might theoretically bring financial burdens to the state due to their mental, physical, or behavioral problems. Because the movement aimed to preserve white, native-born Americans’ social, economic, and political power, poor women, disabled women, and women of color became the targets of coerced sterilization [1].

Unfortunately, serious concern about forced sterilization remains relevant even today. A report from The Center for Investigative Reporting documents the sterilization, from 2006 through 2010, of 148 women at the California Institution for Women in Corona and Valley State Prison for Women in Chowchilla. While sterilizations in California are allowed only with the use of state funds and approval from top medical officials in Sacramento, no physician working in the prisons requested such approval. Moreover, most women prisoners claimed that they were coerced into undergoing the sterilization procedure, tubal ligation.

Christina Codero, 34, a former inmate at the Valley State prison, describes the pressure to be sterilized from medical professionals: “As soon as he found out that I had five kids, he suggested that I look into getting it done. The closer I got to my due date, the more he talked about it. He made me feel like a bad mother if I didn’t do it.” The physician seemed to make assumptions about Codero’s economic burdens to justify her supposed need for sterilization. Whether or not doctors favor tubal ligation for women prisoners who may not be financially stable, they are not authorized to control women’s reproductive rights. The mass sterilization of California women prisoners reflects the struggles over the reproductive rights of women of color and poor women in the ’60s, ’70s, and ’80s.

Specifically, the medical profession’s role in the California prison cases is quite similar to its role with Mexican women in the 1970s. Due to Mexican women’s ethnic and economic vulnerability, Rickie Solinger, author of Pregnancy and Power: A Short History of Reproductive Politics in America, argues that doctors consistently “defined [Mexican women] as undeserving reproducers, as inappropriate for ‘membership in the national community’, and as potential mothers of ‘future undesirable citizens.’” [2] In the case of Madrigal v. Quilligan, Los Angeles County Medical Center doctors were accused of sterilizing Mexican women without their full understanding of, or consent to, what the procedure encompassed. In 1973, Guadalupe Acosta, a poor Mexican woman living in Los Angeles, gave birth to a child suffering from brain damage. The child did not survive, and her doctor sterilized her without her permission. The doctor, meanwhile, blamed her husband for giving consent to terminate Acosta’s reproductive capacity. Her husband denied the charge, yet medical authority prevented anything from being resolved.

Indeed, the language and cultural barriers confronting women of Mexican origin left them more vulnerable to abuses in the health care system. Doctors were convinced that sterilization offered these women an optimal solution to diminish their economic burdens; the procedure was also seen as an “easy” fix for women thought incapable of using effective contraception. In such cases, medical professionalism played a major role in controlling the reproductive rights of women of color.

The lack of sensitivity among the general public, the medical profession, and the state concerning forced sterilization and the reproductive rights of marginalized populations is problematic. During the reproductive rights movement, women of color and white women formed various feminist organizations and groups to create a community for discourse and to increase political, social, and economic awareness of reproductive issues. As a nation, we should by now have progressed further in preventing the denial of women’s reproductive rights.

The fight for women’s reproductive rights continues even today. Legal contraception is widely debated, especially with regard to family planning; some Congressmen have argued against requiring women’s health care plans to cover contraception. Strangely, illegal or forced sterilizations persist, yet there is little or no current debate about how they affect women and their right to bear children. Controlling women’s reproduction through legal means is treated as unacceptable, while forced sterilization carried out without consent within institutions, such as the prison system, is seen by some as not so harmful.


1. Jennifer Nelson, Women of Color and the Reproductive Rights Movement. New York: New York University Press, 2003.

2. Rickie Solinger, Pregnancy and Power: A Short History of Reproductive Politics in America. New York: New York University Press, 2005.

Comments Off on Victims of Forced Sterilization (Then and Now)

Filed under Uncategorized

A Brief History of Federal Intervention in Public Health

Lena Parkhurst is a third-year English and Spanish major at Grinnell College. She enjoys biking, movies and sleep. Although George Washington’s work with inoculation was pretty snazzy, William Howard Taft is still her favorite president.

Medical rights, never exactly an uncontroversial topic in the United States, have become an especially touchy subject with the implementation of the Affordable Care Act. While both sides argue over the legitimacy of federal control and the morality of mandatory healthcare, it’s worth taking a look at when medicine and the federal government first began to mix. The mandatory inoculation of Continental troops, beginning in the 1770s, was one of the first instances in which the American federal government interfered with patients’ medical rights. These inoculations, meant to counteract the scourge of smallpox, met resistance from individuals and colonies alike. The compulsory inoculation of Continental troops helped shape healthcare in today’s America. For better or for worse, the smallpox inoculations set the precedent of federal control superseding patient choice in order to ensure community wellness.

During the Revolutionary War, illness was a constant threat to the well-being of the American troops. Some historians estimate that, at the height of the troops’ collective ill health, 30-35% of American soldiers were incapacitated by illness.[1] Smallpox, one of the most highly contagious of the colonial diseases, was especially feared. The disease spread through physical contact and was contagious before any symptoms appeared. Moreover, smallpox proved fatal in somewhere between 15% and 50% of cases and left survivors with severe disfigurements.[2] In short, smallpox was the perfect disease to quickly destroy an army, leaving high mortality and desertion rates in its wake. General Washington knew that one of the major hurdles he faced was ensuring that his troops were properly safeguarded against the pox. Unfortunately, the only known preventative measure was the risky and highly controversial act of inoculation.

Smallpox inoculation was a contentious topic at the time, for several understandable reasons. To begin with, the concept of smallpox inoculation was “repulsive and offensive to many colonists.”[3] No one wanted to put a terrifying disease into their body or introduce it to their community. Moreover, inoculation was common among minority groups, such as Native Americans, which made the new technology seem uncivilized to many colonists. Even if a white colonist overcame their suspicion and fear, the inoculation still carried painful risks: death or another smallpox outbreak could easily follow.

Washington’s uphill battle to convince troops of the necessity of inoculation was further complicated by the nature of the Continental Army. American troops were “semiprofessional” soldiers with short enlistment terms and greater freedoms than the contemporary military.[4] Away from their homes and the colonies whose medical authority they might have accepted, soldiers had little reason to submit to mandatory inoculations. Nonetheless, Washington persevered, splitting his persuasive efforts between the Continental Congress and colonial governors and emphasizing the vital importance of inoculation to the war effort. In 1776, Washington received permission from the New England authorities to inoculate a portion of his troops.[5] After this successful trial run in New England, Washington expanded the program and implemented mandatory inoculations for all his troops. It is important to note that Washington, and through him the federal government, asked for permission from the colonies before beginning inoculations.[6] While America’s new federal government took control of the army’s health, its power was still subordinate to the colonies and their own interest in securing colonial wellness.

Throughout US history, the federal government has inscribed its power – positively and negatively – upon the bodies of Americans. The exercise of this power began with General Washington’s mandatory smallpox inoculations. These inoculations did more than help Washington win the Revolutionary War; they also set an important precedent for the federal government’s control over patients’ rights. The federal government’s imposition of mandatory healthcare in the 1770s set a standard for personal and public health that continues today.

[1] Ann M. Becker, “Smallpox in Washington’s Army: Strategic Implications of the Disease During the American Revolutionary War” The Journal of Military History 68.2 (2004), 393.

[2] Ibid., 385.

[3] Ibid., 387.

[4] “Army, U.S.” In The Oxford Companion to American Military History, edited by John Whiteclay Chambers. Oxford Reference. Web. 6 Dec. 2013.

[5] Becker, 388.

[6] Vincent J. Cirillo, “Two Faces of Death: Fatalities from Disease and Combat in Principal Wars, 1775 to Present.” Perspectives in Biology and Medicine 51.1 (2008): 121-133.

Comments Off on A Brief History of Federal Intervention in Public Health

Filed under Uncategorized

Morality, Capitalism, and The Limitless Unconscious: The Development of Psychoanalysis in the United States

Rachael Morgan is a student of Psychology and Russian at Grinnell College with a particular interest in divergent behavior. After her education, she plans to follow in her grandmother’s footsteps and pursue a career in therapy, concentrating on human sexuality and gender identity.

Entering a therapist’s office today and looking around, you’re likely to see a variety of strange objects. Amid the usual comforting sofas, potted plants, and boxes of tissues, you might find a hand-held maze, erotic art, drawing paper and molding clay, or lamps to mimic natural sunlight. Placed to spark thought and stimulate the unconscious, these items are the result of recent theoretical developments and treatment methods in clinical psychology. Born from Freud’s psychoanalytic method and shaped by the social, economic, and scientific climate of the United States, talking therapy has gained validity in the medical community. Yet for a mentally ill patient 200 or even 20 years ago, seeking treatment would have been a radically different experience.

Until the mid-nineteenth century, there was virtually no understanding of psychiatric disease. The mentally ill were considered immoral “lunaticks,” possessed by the devil, and were sequestered in cruel madhouses away from the public eye. The earliest efforts in the United States at actual treatment, rather than confinement, of the mentally ill were asylums founded by religious groups. Although these early institutions had little scientific basis for their methods of care, they represented the first consistent effort to alleviate patients’ suffering rather than relying on sedatives to control their behavior. By the late nineteenth and early twentieth centuries, the somatic style of care was used to treat patients exhibiting abnormal psychological symptoms. The somatic style was based on the idea that mental diseases originated in the nervous system, and that the correct stimulation of nerves could improve a patient’s “mental hygiene.”[1] In effect, impressive treatments like shock therapy satisfied patients’ desire for tangible physical evidence of their medical care, like spasming muscles, even if the treatments did not significantly affect their mental distress. Results, however, were lacking; despite advances in clinical technology and psychological research, the recovery rate for mentally ill patients dropped 50% between 1870 and 1910.[2]

In the last decades of the nineteenth century, a theory called parallelism gained prominence among medical professionals. Proponents of parallelism argued that the mental state and the physiological nervous state were distinct; that is, lapses in “moral behavior” caused mental illness. The further a patient’s behavior deviated from the social norm, for example by indulging in overly sexual behavior, the greater the degree of their mental illness. The result? A population of morally “superior” people somehow not getting better despite their idyllic notion of civilized morality.[3] This social climate set the stage for the rise of psychoanalysis in the United States.

Freud’s psychoanalytic method at the turn of the century focused on tapping into the unconscious and cleaning psychological dirt from the mind. But what about psychoanalysis made it so popular, particularly in the United States? Americans’ continuing fascination with mental and moral hygiene was certainly satisfied by the idea that their unconscious could be cleansed, and since everyone had some mental dirt needing cleansing, psychotherapy became a comfortingly medicalized practice that somewhat reduced the stigma of seeking psychological care.[4]

Some historians argue that capitalism, too, was significant to Americans’ affinity for psychoanalysis; the limitless unconscious became the new “American frontier,” a bottomless source of material that would never run dry, assuring the continued services of a therapist.[5] Yet just as psychological services today are generally limited to those who can afford both the time commitment and the high costs of therapy, the extensive time required for talk therapy made psychoanalysis a tool with which only wealthy Americans could periodically cleanse their minds and increase their own market value.

The United States’ love affair with psychotherapy peaked between 1920 and 1960, after which less costly and less time-consuming therapeutic methods gained popularity.[6] Throughout the development of psychological care in the United States, social and economic factors drove change as much as scientific development did. Although therapy has since evolved into a far more empathetic practice than Freud’s original psychoanalytic method, many of the tools for emotional development found in therapists’ offices today are rooted in the early American ideals of psychoanalysis and the limitless unconscious.


[1] Lilian R. Furst, Before Freud: Hysteria and Hypnosis in Later Nineteenth-Century Psychiatric Cases (Lewisburg: Bucknell University Press, 2008), 21.

[2] Nathan G. Hale, Freud and the Americans: The Beginnings of Psychoanalysis in the United States, 1876-1917 (New York: Oxford University Press, 1971), 78.

[3] Ibid., 56.

[4] Philip Cushman, Constructing the Self, Constructing America: A Cultural History of Psychotherapy (Cambridge: Da Capo Press, 1996), 152.

[5] Ibid., 150.

[6] Russell A. Dewey, Introduction to Psychology (Belmont: Wadsworth, 2004). Accessed December 3, 2013.

Comments Off on Morality, Capitalism, and The Limitless Unconscious: The Development of Psychoanalysis in the United States

Filed under Uncategorized

The American Depression: Reliance on Antidepressants

Kaya Matson is a senior biology major and neuroscience concentrator at Grinnell College. In her experience with neuroscience, she has always been struck by how little we know about the brain. Humans have roughly 100 billion neurons, with each neuron making between 1 and 10,000 connections with other neurons; by some estimates, the brain contains more connections than there are stars in our galaxy. Given how limited our knowledge of the nervous system is, we make remarkable assumptions in treating mental illnesses such as depression. In this blog post, Kaya took the opportunity to explore the roots of antidepressants in the United States in order to understand the prevailing, uninformed trust in antidepressants.

Based on the number of Americans currently treated for mental illness, it seems as though we are in the midst of a raging epidemic. Record numbers of people are being treated for mental illnesses such as depression, with “10 percent of all Americans over the age of 12… on antidepressants.” Over the last 25 years there has been a 350 percent jump in youth mental illness, a rise that coincidentally aligns with the introduction of the popular antidepressant Prozac. Antidepressants are seen as the cure for the “chemically imbalanced” brain. However, antidepressants may not be as effective as advertised.

In 2008, Dr. Irving Kirsch of Harvard University conducted a meta-analysis of the effectiveness of antidepressants. He and his colleagues found no significant difference in depressive scores between patients taking an antidepressant and those taking a placebo. Are the effects of antidepressants simply the result of a placebo effect? Despite this evidence contesting their efficacy, antidepressants continue to be prescribed.

Most psychiatrists have shifted from “talk therapy” to drugs as the dominant mode of treatment. In Unhinged: The Trouble With Psychiatry—A Doctor’s Revelations About a Profession in Crisis, Dr. Daniel Carlat describes treating a grieving woman whose father died in a car accident while she was behind the wheel. After an hour of consultation, Dr. Carlat prescribes the antidepressant Zoloft and the tranquilizer Klonopin and refers her to a social worker, but does not offer her psychological counseling:

Carol saw me for medications, and saw a social worker colleague for therapy. Her symptoms gradually improved, but whether this was due to the medications or the therapy, or simply the passage of time, I cannot say… We have convinced ourselves that we have developed cures for mental illnesses like Carol’s, when in fact we know so little about the underlying neurobiology of their causes that our treatments are often a series of trials and errors.[1]

Carol is not alone; even though 10% of Americans over the age of 12 are on antidepressants, fewer than one-third of them have seen a physician in the last year. The dominant treatment for depression, then, is a prescription (and only a prescription).

Depression was not always seen as a chemical imbalance. It was not until the modern “psychiatric revolution” that mental illnesses, including depression, were thought to be “caused by chemical imbalances in the brain that can be corrected by specific drugs.” Before the appearance of Prozac and similar medications, depression was characterized in the nineteenth and twentieth centuries as melancholic or nonmelancholic: conditions attributed to biological as well as social and environmental causes.[2] The singular term “major depression” was not defined until the 1980 edition of the Diagnostic and Statistical Manual (DSM), after the rise in popularity of antidepressant medications, which were used to treat all forms of depression regardless of the root cause.

The transition to a single label for major depression came with a single prescription for it. In the 1950s, antidepressant drugs such as MAOIs were found to increase levels of the neurotransmitter serotonin in the brain, and so it was postulated that depression is caused by too little serotonin.[3] The most popular of these antidepressants, the Selective Serotonin Reuptake Inhibitors (SSRIs), prevent the reuptake of serotonin by the neurons that release it, so the neurotransmitter remains in the synapse for a longer period, extending the period of excitation. Instead of developing a drug to treat an abnormality, an abnormality was postulated to fit a drug. Given how little is known about the brain, and in particular the depressed brain, the “chemical imbalance hypothesis” may be a step too far; by similar logic, one could argue that fevers are caused by too little aspirin. The main difficulty with this theory is that after decades of trying to prove the chemical imbalance hypothesis, researchers have still come up empty-handed.

The history of the treatment of depression in the United States has resulted in a reliance on antidepressant medications despite their increasingly evident ineffectiveness. Relying on antidepressants alone is a problem because depression is not just an individual problem; it is also a social one, often exacerbated by outside factors. The people most likely to become depressed are poor, unemployed, and undereducated.[4] By limiting their treatment to an antidepressant, their chances of relief are limited to a placebo effect. Although it may be strong, the placebo effect does not alleviate suffering caused by poverty, unemployment, or the loss of a loved one. Given recent evidence concerning antidepressants, we should ask: if antidepressants are ineffective, why are they still the most common, and often the only, form of treatment for depression?

*Those who are taking antidepressants right now should continue taking their medications. This blog post is not intended as a source of medical information and should not replace a visit to a doctor or emergency room. This blog post only expresses the opinion of the author and is intended to raise historical questions. 


1. Daniel J. Carlat, Unhinged: The Trouble with Psychiatry—A Doctor’s Revelations About a Profession in Crisis (New York: Free Press, 2010), 4-5.

2. Laura D. Hirshbein, American Melancholy: Constructions of Depression in the Twentieth Century (New Brunswick, N.J.: Rutgers University Press, 2009), 50.

3. Daniel J. Carlat, Unhinged: The Trouble with Psychiatry—A Doctor’s Revelations About a Profession in Crisis (New York: Free Press, 2010), 40.

4. Ibid., 35-36.

Comments Off on The American Depression: Reliance on Antidepressants

Filed under Uncategorized

Insight into Victorian Era Medicine: Context to Where We Stand Today Regarding The Female Physician

Kayleigh Kresse is a third-year undergraduate student at Grinnell College on the pre-med track, working towards a Bachelor of Arts Degree in Biological Chemistry. At the college, she is currently a co-leader on the Student on Health-Oriented Tracks committee, co-founder and co-leader of the Philippine United Students Organization (PUSO), golfs on the women’s varsity golf team, and is involved with the YGB Gospel Choir.

In 2000, women made up 24% of the medical profession, and in 2003, more than half of U.S. medical school applicants and matriculants were women. These statistics are the result of two centuries of social reform and health movements led by female activists who believed that a woman’s role in society consisted of more than her obligations and duties as a housewife.

Up until the nineteenth century, it was held that a woman’s occupation in the household was to heal and nurture, in order to maintain a healthy and sound environment for the family. Women used the holistic techniques of folk medicine to carry out this traditional role of caretaker.

However, with “increased leisure accruing from technological advances,” women began more often to occupy themselves vocationally.[1] Elizabeth Blackwell became the first woman to graduate from medical school in 1849, setting the wheels in motion for the opening of a few all-female medical schools and a slow but steady acceptance of women into coeducational medical schools by 1880.

In the nineteenth century, even as the number of women medical graduates increased from the hundreds to the thousands, female subordination was still perpetuated by the emphasis on the biological differences between men and women. It was thought that if women should be allowed to practice medicine, it was because only they possessed the virtues of female sentiment (e.g., maternalism, spirituality, and nurture) that male physicians could not provide. Mary Putnam Jacobi, who received an M.D. from the Woman’s Medical College of Pennsylvania in 1864, was the first to assert that permitting women to pursue medicine was not a matter of whether women had unique female qualities of use to the field, but a matter of what is just and equal.[2] To justify equal opportunity in the medical field, she championed the similarities between the sexes through biological evidence, thus dismissing the supposed relationship between biological features and societal niche.

Despite the change cultivated by Elizabeth Blackwell and Mary Putnam Jacobi in the late Victorian era, women still faced many obstacles. One common argument held that women would discredit and diminish the professionalism of medical practice because they were “poorly trained” and had “inadequate schooling.”[3] Another claim, made by Dr. E. H. Clarke in 1873, was that extra schooling was detrimental to women because it “[sapped] the energy needed for the normal development of the reproductive organs,” and the woman could become sterile.[4] Yet another theory was that a woman’s experiences in medicine could strip her of her soft, nurturing characteristics and her instincts as caretaker and moral compass. Fearing this, men maintained that they themselves were the brutal and uncultivated counterparts who needed women to remain soft and nurturing in order to soften their edges. A final, and particularly weak, argument was that men’s own sexual self-control “would be gravely shaken by close contact with women students and physicians.”[5] Men claimed they could not stay focused in the operating or examining room because of uncontrollable sexual urges in the presence of a woman. Overall, while women were hailed as morally superior and domestically powerful, they were labeled inferior and weak in the medical profession.

While progress was made, the ideologies of nineteenth-century women doctors still did not drift far from mainstream thinking. They accepted that gradual change was all that could be expected of a public shaped by patriarchal standards and worked toward slowly creating a positive image of the female physician. With the exception of Mary Putnam Jacobi, whose ideas were radical at the time, most accepted gender stereotypes, asking only for more gender equality in the workplace. The popular argument at the time was that women could be competent doctors and still maintain their femininity.[6]

Today, we are finally reaching a point at which women’s capabilities and intelligence are recognized in the professional sphere, as the numbers of women applying to and attending medical school are higher than ever before. While gender roles have not been entirely rethought, we are observing a shift away from the cultural mandate of the housewife, thanks to the various reform movements and the evolution of attitudes over the past two centuries that have secured more freedoms and opportunities for women to become medical professionals. But while we have come a long way, it is crucial to keep a critical lens trained on the long road ahead in the ongoing fight for gender equality in medicine.

[1] Morantz-Sanchez, Regina Markell. Sympathy and Science: Women Physicians in American Medicine. New York: Oxford University Press, 1985, p. 56.

[2] Bittel, Carla. Mary Putnam Jacobi & the Politics of Medicine in Nineteenth-Century America. Chapel Hill: University of North Carolina Press, 2009.

[3] Morantz-Sanchez, p. 70.

[4] Morantz-Sanchez, p. 54.

[5] More, Ellen S., Elizabeth Fee, and Manon Parry. “New Perspectives on Women Physicians and Medicine in the United States, 1849 to the Present.” In Women Physicians and the Cultures of Medicine, edited by Ellen S. More, Elizabeth Fee, and Manon Parry. Baltimore: Johns Hopkins University Press, 2009, p. 3.

[6] Morantz-Sanchez, p. 61.

Comments Off on Insight into Victorian Era Medicine: Context to Where We Stand Today Regarding The Female Physician

Filed under Uncategorized

The Rise of the Male Obstetrician: Childbirth as a Medical Procedure

Clara Kirkpatrick is a senior Art History major at Grinnell College who specializes in 20th Century European Art. She intends to study Graphic Design at the graduate level and eventually work as a designer in New York City.

Contemporary discourse surrounding childbirth reveals a great deal about the transformation of birthing practices over the past 300 years. Because women give birth and men do not, we might imagine the historical discourse surrounding pregnancy to be female-centric, perhaps even dominated by womanly concerns. Instead, the discourse surrounding birth has quite remarkably transformed into a highly medical conversation that seems to ignore the woman’s role in birth.

Today, a Google search of the word “childbirth” results in a series of articles that resemble instruction manuals and advertisements for the “newest” and “safest” ways to deliver your baby. Articles with titles like “Car Mechanic Invents Technology to Ease Childbirth” litter the web, further solidifying the idea that birth cannot occur naturally but must be directed by (usually male) medical personnel. What is arguably the most natural process in the world has been transformed into a kind of technological, medical competition. Natural childbirth is considered abnormal, and birth outside of the hospital is considered masochistic.[1]

Until the 18th century, birth was an entirely female process. Female family, friends, and midwives gathered around the bed of the childbearing woman before, during, and after birth, comforting and helping her through the trying and often dangerous process.[2] The past hundred years have seen the majority of women perform the subservient role of patient or nurse while the male doctor dominates the birthing room; only in the past two decades have women obstetricians gained a significant presence in the field. Most women in the United States no longer feel comfortable laboring at home—instead, new mothers generally feel safer and better cared for in a hospital maternity ward. How did birth become a medicalized, male-dominated event? What caused this essential shift?

Historian Jacqueline Wolf explains in Deliver Me from Pain: Anesthesia and Birth in America that despite the support women provided to each other during and after labor, the process of childbirth remained a source of extreme fear and panic for most women until the twentieth century. This fear was not without reason—many women fell ill or died due to complications sustained during birth. The widespread perception of women (mainly white and upper class) as weak and fragile creatures contributed to this fear as women were considered unable to bear the stress of daily life in a modern environment, let alone the incredible pain of giving birth.[3]

This widespread fear of giving birth created an environment conducive to the rise of male obstetricians and hospitalized births. Fear drove women who had the means to search for safer, easier, and more comfortable ways of giving birth. Many poor women and women of color continued to give birth at home, either because they could not afford the hefty hospital fees or because they were untouched by the fear of home birth.[4] Judith Walzer Leavitt notes that, out of intellectual insecurity, many women decided to follow male birthing advice, despite the fact that these men could not understand birth the way a midwife who had given birth could. The incredible anxiety surrounding birth at this time outweighed women’s desire for social births. Thus, hospitalized, male-led birth became the new trend.

Though women increasingly delivered their babies in hospitals under the care of a male doctor, survival rates did not improve as quickly as expected. A 2006 New Yorker article entitled “The Score: How Childbirth Went Industrial” cites a 1933 New York Academy of Medicine study of 2,041 maternal deaths: “There had been no improvement in death rates for mothers in the preceding two decades; newborn deaths from birth injuries had actually increased. Hospital care brought no advantages; mothers were better off delivering at home.” Women’s belief that they would be safer giving birth in a hospital under the care of licensed, experienced male doctors (as opposed to at home with female midwives) proved to be largely untrue.

Nevertheless, women increasingly placed their unwavering faith in male doctors, undergoing experimental treatments during birth, such as taking opium to avoid pain and fear.[5] Once childbirth was in the hands of obstetricians, there was no turning back. Though obstetrics has improved exponentially since the opium days, the history of the field reveals a disturbing rise to power that initially did more harm than good and cemented childbirth in the medical realm.

[1] Jacqueline Wolf, Deliver Me from Pain: Anesthesia and Birth in America (Baltimore: Johns Hopkins University Press, 2012), 11.

[2] Leavitt, Brought to Bed, 36.

[3] David G. Schuster, Neurasthenic Nation: America’s Search for Health, Happiness, and Comfort, 1869-1920 (New Jersey: Rutgers University Press, 2011), 10.

[4] Leavitt, Brought to Bed, 36.

[5] Leavitt, Brought to Bed, 39.

Comments Off on The Rise of the Male Obstetrician: Childbirth as a Medical Procedure

Filed under Uncategorized