Psychopathology and Psychotherapy

In introducing the scientific study of mind and behavior, we have focused primarily on adaptive behavior, and the normal mental processes that underlie it. There have been occasional references to cases of brain insult, injury, and disease, for the light they shed on normal mental life. Now we wish to examine mental illness in its own right: 

  • abnormal and maladaptive behavior;
  • the disordered mental processes that underlie it;
  • and interventions that treat and prevent mental illness.

The term psychopathology is, obviously, derived from two Greek roots:

  • psycho, from the Greek psyche, soul, referring to the mind;
  • pathology, from the Greek pathos, suffering, referring to disease or illness.

The term makes it clear that mental illness is analogous to physical illness. Just as physical illness involves abnormalities of bodily structure (anatomy) and function (physiology), so mental illness involves abnormalities of mental structure and function -- abnormalities of cognition, emotion, and motivation -- that result in abnormal, deviant behavior.


Defining Psychopathology

It has not proved easy to define psychopathology in the abstract.

By analogy with the concept of pathology in medicine, psychopathology may be defined as abnormalities in mental structures, processes, and states that give rise to abnormal, deviant behavior. But the concept of abnormality implies an opposite construct of normality, from which abnormality deviates. So what do we mean by normality?

Normal mental and behavioral functioning is characterized by:

  • Accurate and Efficient Cognition: Normal people generally see the world the way it is, remember things the way they happened, think clearly, and communicate comprehensibly. And beyond cognition, they tend to have feelings and desires that are appropriate to the situation.
  • Self-Awareness: Normal people generally are aware of their thoughts, feelings, and desires, and of their behavior and its impact on other people.
  • Self-Control: Normal people generally are able to control their impulses and emotions, and to delay gratification.
  • Self-Esteem: Normal people generally think reasonably well of themselves.
  • Social Relations Based on Affection: Normal people generally treat others with respect, and not like objects.
  • Productivity and Creativity: Normal people are generally productive at work, at play, and in their family lives; and although most of us can't become great artists, we are nevertheless able to create things on our own.
(Note: I am pretty certain that I derived this list of features of normality from an early edition of a textbook in abnormal psychology, either one by Richard Bootzin or one by Gerald Davison and John Neale, but I can no longer identify the source precisely.)

But having defined a sort of prototype for "normality", what do we mean by deviance?

Deviations from normality can be defined in various ways:

  • Deviance from Statistical Norms: By statistical convention, a score is "abnormal" if it lies more than 2 standard deviations above or below the population mean.
    • This frequency criterion is certainly objective, but it has some problems attached to it -- not the least of which is the problem of estimating population means for all the various mental characteristics on which people might deviate.
    • There is also the problem of what to do about positive deviations. An IQ of less than 70 is more than 2 standard deviations below the mean IQ of 100, and (if other factors are also present) can lead an individual to be classified as intellectually disabled (what used to be known as "mental retardation"). An IQ of more than 130 is also more than 2 standard deviations from the mean, and can lead an individual to be classified as a "genius"; but while intellectual disability is a form of mental illness, we usually don't think of genius that way. A further problem is that even negative deviations are not necessarily signs of mental abnormality. For example, a person who is more than 2 standard deviations below the mean on Extraversion might be merely shy. (A minimal numerical sketch of this criterion appears after this list.)
  • Deviance from Social Norms: Every group, organization, and society imposes certain expectations and demands on its members, and some people simply don't do what they are supposed to do. Given that human experience, thought, and action take place in an expressly social context, this compliance criterion may well be useful for deciding which deviations we should pay attention to, but it also has its problems. 
    • Norms vary across societies. In the former Soviet Union, political dissidents could be classified as mentally ill, and confined to mental hospitals, simply for disagreeing with their government. 
    • Norms also vary across epochs within societies. When I began my graduate studies, in 1970, homosexuality was listed in the official Diagnostic and Statistical Manual of Mental Disorders. Then, in 1973, the American Psychiatric Association took a vote and decided that it would no longer classify homosexuality as a mental illness. 
      • One may agree with the vote (as I do), but the essentially political process by which the status of homosexuality was changed should give us pause. If we are looking for an objective standard by which to evaluate deviance, we want one that is constant across groups. The length of a foot or a yard doesn't vary from Denmark to Ghana -- why should the criteria for mental disorders be any different? 
  • Personal Distress: Mental illness is usually manifested in symptoms that create problems for the patient, and cause considerable concern. This subjective criterion may be important in leading the patient to seek the help of a professional, but it too has a couple of problems. 
    • People's self-perceptions are not always accurate. Some people believe they are ill when they are not; but more important in the present context, some mentally ill people do not believe that they are mentally ill, and resist diagnosis and treatment. This is a particular problem in schizophrenia and the personality disorders.
    • Even when people's self-perceptions are accurate, we would not want to substitute self-diagnosis for an objective assessment by a trained professional. We don't let patients self-diagnose cancer and heart disease -- why should we allow them to self-diagnose depression and anxiety disorder?
  • Maladaptiveness: Mental illness often leads people to engage in behaviors that are harmful to themselves and others. For example, people with depression may be at elevated risk for suicide. People with antisocial personality disorder, by definition, engage in antisocial behaviors. Normal mental function is by definition adaptive, because the purpose of the mind is to aid the organism's adaptation to its environment, so a harmfulness criterion is helpful in diagnosing mental illness. On the other hand:
    • Not all maladaptive behavior is a sign of mental illness. Criminal behavior is maladaptive, harmful to the people against whom the crime is perpetrated, and harmful to the criminal when he or she is caught and punished. But we do not label all criminal behavior as the product of mental illness. In fact, the insanity defense is attempted in only a very small minority of criminal cases, and it is successful in only a very small minority of these.
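
The statistical criterion can be made concrete with a short numerical sketch (a hypothetical illustration in Python; the IQ mean of 100, standard deviation of 15, and 2-standard-deviation cutoff are the conventional values discussed above, and the function names are my own):

    # Sketch of the "deviance from statistical norms" criterion.
    # Assumes the conventional IQ scale: mean = 100, standard deviation = 15.

    def z_score(score, mean=100.0, sd=15.0):
        """Express a score as standard deviations from the population mean."""
        return (score - mean) / sd

    def statistically_deviant(score, mean=100.0, sd=15.0, cutoff=2.0):
        """True if the score lies more than `cutoff` SDs above or below the mean."""
        return abs(z_score(score, mean, sd)) > cutoff

    for iq in (59, 71, 100, 131):
        print(iq, round(z_score(iq), 2), statistically_deviant(iq))
    # 59 and 131 are flagged (more than 2 SDs from the mean); 71 is not,
    # even though it sits just above the conventional IQ-70 cutoff.

Note that the check is symmetric: it flags the "genius" range just as readily as the intellectually disabled range, which is precisely the problem with the frequency criterion noted above.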

The Insanity Defense


In 1981, John Hinckley attempted to assassinate President Ronald Reagan: one of his gunshots actually hit Reagan, and others seriously injured James Brady, Reagan's press secretary, a Secret Service agent, and a District of Columbia policeman.  Hinckley's motive was a desire to impress Jodie Foster, an actress, with whom he was infatuated.  At his trial, in 1982, a jury found him not guilty by reason of insanity.  For more than 30 years afterward, he was confined to St. Elizabeth's Hospital, a federal facility in Washington, D.C. (more on his eventual release below).

Up until the late 18th century, the mentally ill were treated little differently than criminals.  It wasn't until after the French Revolution that Philippe Pinel "freed the insane from their chains", and his protege Jean-Etienne Dominique Esquirol formally distinguished between insanity, mental deficiency, and criminality.  As medicine developed further, however, psychiatrists began to understand that, in certain instances, criminal behavior could be a product of insanity.  If that were the case, then the "criminal" could not be held morally and legally responsible for his or her criminal acts.

At least since the 17th century, English common law has required that a criminal act (actus reus, or "guilty act") be accompanied by criminal intent (mens rea, or "guilty mind").  There is no criminal liability for injuries committed involuntarily (e.g., because of a reflex, or while sleepwalking). 

The formal insanity defense has its beginnings in 1843, when Daniel McNaughton tried to kill Robert Peel, the British prime minister (he shot and killed Peel's secretary instead).  At his trial, McNaughton testified that he believed that the British government was plotting against him, and he was acquitted of murder.  The McNaughton Rule allows the insanity defense if the defendant (1) did not know what he was doing at the time or (2) did not know that his actions were wrong (because of his delusional belief, McNaughton thought he was defending himself).

In the United States, the next advance in the insanity defense was the Durham Rule, or "product test", adopted in 1954, which states that "... an accused is not criminally responsible if his unlawful act was the product of mental disease or defect".  This "product test" was overturned in 1972, largely because its ambiguous reference to "mental disease or defect" places undue emphasis on subjective judgments by psychiatrists, and can easily lead to a "battle of the experts".

Many states now adopt a version of guidelines set out by the American Law Institute in 1962, which allows the insanity defense if, by virtue of mental illness, the defendant (1) lacks the ability to understand the meaning of their act or (2) cannot control their impulses.  This is sometimes known as the "irresistible impulse test".

Other states allow for a compromise verdict of "guilty but mentally ill", resulting in commitment to a mental institution for treatment, rather than incarceration in a prison for punishment.

In whatever form, the insanity defense requires both that the defendant meet the criteria for some psychiatric diagnosis and that his ostensibly criminal act be attributed to his mental illness. 

Hinckley clearly met these criteria, but the insanity defense is rarely successful.  It has been estimated that it is invoked in only about 1% of criminal trials, and it succeeds in fewer than 25% of those cases.  And while commitment to a mental hospital is arguably better than incarceration in a prison, there is a definite downside.  Prison terms lapse, and prisoners can be released or paroled.  But commitment to a mental hospital can be forever -- until the relevant medical authorities can persuade a judge that their patient's illness has been resolved. 

  • Lynette "Squeaky" Fromme, who attempted to assassinate President Gerald Ford in 1975, was sentenced to life in prison and paroled in 2009.
  • Hinckley remained in St. Elizabeth's until 2016, when he was, effectively, paroled to house arrest.  He now lives with his aged mother, and cannot travel more than 30 miles from her house without supervision.

The links between psychology and the law go far beyond the insanity defense.  Cognitive psychologists have studied the problems created by the unreliability of eyewitness testimony, and social psychologists have studied how juries, and individual jurors, arrive at verdicts of guilty or not guilty.

For a recent survey of the relations between neuroscience and the legal system, see "Neuroscience and the Law: Don't Rush In" by Jed Rakoff, a prominent Federal District Court Judge (New York Review of Books, 05/12/2016).

Each of these definitions has certain assets and liabilities. Taken together, these two lists of definitions -- of normality and of deviance -- comprise a kind of "prototype" of the "typical" case of mental illness. Not every mentally ill person will lack all the criteria of normality, or display all the criteria of deviance. But most mentally ill people will display some or most of them, so that the mentally ill are related to each other by a principle of family resemblance.


Syndromes of Mental Illness

In actual practice, mental illnesses are not identified by abstract conceptual definitions of mental abnormality and deviance, but rather in terms of various syndromes characterized by particular signs and symptoms.


The Diagnostic Nosology

I identify nine (9) major categories of mental illness.  Warning: these groupings differ somewhat from such "official" classifications as the Diagnostic and Statistical Manual of the American Psychiatric Association, but the overlaps are clear.

1.  Organic Brain Syndromes, in which there are gross impairments in mental function resulting from known insult, injury, or disease in the central nervous system.

  • Alzheimer's disease is a clear example: here the patient suffers memory loss and other aspects of dementia resulting from plaques and tangles in cortical tissue.
  • Other examples are the amnesic syndrome (as in Patient H.M.), associated with damage to the hippocampus and related areas,
  • and the various forms of aphasia associated with damage to Broca's and Wernicke's areas.

2.  Developmental Disorders, in which there is an abnormal pace of development in one or more mental functions, dating from birth.

  • The classic example is intellectual disability, in which the individual shows subnormal levels of mental function (as indexed by an IQ less than 70), in degrees ranging from mild to profound, accompanied by an inability to meet the demands of his or her environment.
    • Henry H. Goddard, an early authority on intelligence, classified what was then known as "mental retardation" into three subcategories -- moron, idiot, and imbecile -- based strictly on IQ test scores.  (Illustration: Mental Defectives in Virginia, a 1915 Virginia state government report.)
    • More recent practice has abandoned these offensive terms, but, more importantly, it assesses intellectual disability not just in terms of test scores, but also in terms of the individual's ability to cope with environmental demands.  If people with low IQs can get along effectively in their environment, there is no reason to classify them as intellectually disabled.
      • DSM-5 assesses the severity of intellectual disability from mild to profound, taking account of the individual's ability to adapt in the social and practical as well as the purely intellectual domains.
      • Similarly, the American Association for Intellectual and Developmental Disabilities (formerly the American Association for Mental Retardation) takes account of how much environmental support the individual needs, from intermittent to pervasive.
  • Another example is autism, a disorder characterized by a severe inability to relate to, and communicate with, other people. Autism is now often referred to as autism spectrum disorder, an umbrella term that covers conditions like Asperger's syndrome as well.
    • Traditionally, classical autism was characterized by three criteria: impairment in social interaction; impairment in social communication (language); and restricted, repetitive, and stereotyped patterns of behavior, interests, and activities. Asperger's syndrome was used for patients who displayed impairments in social interaction but not impairments in language.  The 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) collapses social interaction and social communication into a single criterion, a change that has been very controversial.
  • Attention deficit hyperactivity disorder (ADHD), a relatively new syndrome, is typically diagnosed in children. Originally, it was thought that children would "outgrow" ADHD with time, but in fact many children with ADHD grow up to be adults with ADHD, and the syndrome is now diagnosed and treated in adults as well.
    • Originally, the syndrome was called "hyperactivity" or "hyperkinetic disorder".
    • The syndrome entered the diagnostic nomenclature as Minimal Brain Dysfunction (MBD) in DSM-II (1968) -- "minimal" because there was no physical evidence of any brain damage.
      • Following an argument by Virginia Douglas (1972), a Canadian psychologist, that the basic problem in "hyperkinesis" was insufficient attention, the syndrome was renamed Attention Deficit Disorder (ADD) in DSM-III (1980).
      • In DSM-III-R (1987), the diagnosis was expanded to Attention-Deficit Hyperactivity Disorder (ADHD).

Intellectual Disability and the Death Penalty

In 2002, in the case of Daryl R. Atkins, a convicted murderer with an IQ of 59 (Atkins v. Virginia), the United States Supreme Court prohibited the execution of intellectually disabled prisoners. However, the Court did not provide guidelines for determining who is intellectually disabled.  Instead, the Court left it up to individual states to determine the standards by which intellectual disability is diagnosed -- provided that they are "informed by the medical community's diagnostic framework" (Hall v. Florida, 2014).  Accordingly, some two years later, Atkins still faced the death penalty ("New Challenge for Courts: How to Define Retardation" by Adam Liptak, New York Times, 03/14/04). 

One problem is the way intellectual disability is defined in the psychiatric nosology: subnormal IQ plus a demonstrated inability to meet environmental demands. The reasonable approach would be to adopt the standards set by the DSM, or the similar standards promoted by the AAIDD.  But even so, low test scores can be faked, and judgments of adaptiveness are inherently subjective. These problems are compounded by the theory, promoted by some prosecutors and other proponents of the death penalty, that certain murders (such as the one Atkins was convicted of committing) require highly sophisticated planning, and therefore are beyond the capacity of the intellectually disabled.  Even "low IQ" is problematic: an IQ below 70 is two standard deviations below the mean, but all such measurements are somewhat unreliable.  Should someone with an IQ of 71 be put to death just because he or she scored a point or two above an arbitrary cutoff on an imperfectly reliable test?

A 2004 ruling by the Texas Court of Criminal Appeals (Ex parte Briseno) essentially adopted Virginia's reasoning, effectively ruling intellectual disability out in cases where a crime entailed "forethought, planning, and complex execution of purpose". The ruling poses an interesting "Catch-22": a defendant's crime can be used to impeach the claim that he or she is intellectually disabled!   As it happens, the standards in Texas, whose public officials have an inordinate fondness for the death penalty, are not only outmoded; they also include various stereotypes about the intellectually disabled, which have no scientific basis.  In fact, Judge Cochran, writing for the Texas CCA, referred to Lennie Small, the "retarded" character in John Steinbeck's novel, Of Mice and Men (1937) -- she actually talked of "the Lennie standard" ("Supreme Court to Consider Legal Standard Drawn from 'Of Mice and Men'" by Adam Liptak, New York Times, 08/22/2016).  The effect of these standards, if that is what they are, is to severely limit those who would qualify as intellectually disabled. 

Moore v. Texas, a case brought before the Supreme Court in 2016, challenged the Texas standards in the case of a man convicted in 1980 of a murder committed in the course of a robbery.  Moore's death sentence was overturned by a lower court, which used modern medical standards to determine that he was intellectually disabled, and thus could not be executed.  The Texas CCA reversed that decision, expressly criticizing the lower court judge for applying contemporary scientific standards instead of the ones set out in earlier decisions by the CCA.  These standards, in the CCA's view, more closely reflect the beliefs about intellectual disability held by ordinary Texans -- scientific evidence and medical standards be damned.  In 2017, the Supreme Court overturned the Texas decision by 5-3, ruling that the Texas court had relied too heavily on IQ scores -- not to mention outmoded stereotypes about the intellectually disabled.  The Court advanced a three-point standard for identifying the intellectually disabled:

  1. "Subaverage intellectual functioning", meaning IQ scores lower than "approximately 70".
  2. Lack of fundamental social and practical skills.
  3. Presence of both of these conditions before the age of 18. 

Three justices dissented, on the grounds that Moore's two "reliable" IQ scores were both over 70 -- high enough to permit his execution.

3.  Psychoses, in which there are gross impairments in reality testing. Psychoses are often labeled as "functional", meaning that they have no organic cause. However, these disorders are almost certainly "organic" in nature, and as their underlying brain pathology becomes known they may well be shifted to the category of organic brain syndromes.

  • Schizophrenia, characterized by disordered language and thought processes.
    • Very early medical texts, going back to the Egyptian Book of Hearts and the Chinese Yellow Emperor's Classic of Internal Medicine, describe illnesses resembling modern-day schizophrenia. 
      • In 1809, Philippe Pinel described a "premature dementia" in young patients.
      • In 1896, Emil Kraepelin renamed the disorder dementia praecox, or "early dementia", distinguishing the disorder from the "senile dementia" associated with aging.
    • The term schizophrenia was introduced by Eugen Bleuler (1911), who distinguished among five subtypes:
      • Simple
      • Hebephrenic (childlike)
      • Catatonic (immobile)
      • Paranoid (delusional)
      • Chronic Undifferentiated
    • Schneider (1959) characterized schizophrenia in terms of a set of "first-rank" symptoms
      • Auditory hallucinations, particularly voices speaking to the patient (arguing or giving instructions) or about the patient (commenting on the patient's actions).
        • Schneider considered other hallucinations to be "second-rank" symptoms.
      • Experience of one's mind or body being controlled.
      • Thought disorder:
        • That one's thoughts are being heard aloud.
        • Thought withdrawal.
        • Thought insertion.
        • Thought broadcasting.
      • Delusional perceptions, in which an actual stimulus event (not a hallucination) is given a bizarre interpretation.
  • A variety of affective disorders, primarily affecting emotional functioning (as their name implies), including
    • major depressive disorder (also known as unipolar depression),
    • bipolar disorder (formerly known as manic-depressive illness),
    • and pure mania.

4.  Neuroses, a set of syndromes that share anxiety as a primary symptom. These are also "functional" in nature, but in contrast to the psychoses there is less question of organic involvement; rather, they are commonly attributed to the patient's experiential history of social learning.

  • A variety of phobic disorders, entailing excessive, unwarranted, and irrational fears of specific objects or situations, such as snakes and spiders, heights, open spaces, or public places.
  • In contrast, anxiety disorder is characterized by a free-floating state of apprehension and worry, unattached to any object.
    • Sudden, unexpected waves of anxiety are characteristic of panic disorder.
    • For an excellent personal account of anxiety disorder, see My Age of Anxiety: Fear, Hope, Dread, and the Search for Peace of Mind by Scott Stossel.
  • Obsessive-compulsive disorder (OCD) is characterized by recurring, unwanted ruminations about certain events (past or future), often accompanied by overt behaviors intended to reduce the impact of these events, or the likelihood that they will occur.
  • As its name implies, post-traumatic stress disorder occurs in some individuals who have been exposed to high levels of stress, such as soldiers on a battlefield, victims of sexual assault and other violent crimes, and victims of natural disasters such as earthquakes and hurricanes (for a review, see Rosen & Lilienfeld, 2008).
    • The syndrome was first recognized in World War I: the term "shell shock" first appeared in the Lancet, a British medical journal, in 1915.  As its name implies, it was originally attributed to a kind of concussion, caused by artillery shells exploding near the soldier, and the resulting "commotion" in the brain.  Affected soldiers received a war ribbon and a disability pension.  However, cases began to be diagnosed in soldiers who had been nowhere near exploding ordnance, leading psychiatrists to shift their thinking.  Now, "shell shock" was called "war neurosis", and viewed as reflecting a "nervous breakdown" or neurasthenia -- emotional shock, if you will, rather than concussion or "commotional" shock.  Instead of recovering from their putative brain injuries in hospitals, they were now sent to convalesce in mental hospitals.  (See "the Shock of War" by Caroline Alexander, Smithsonian, September 2010).
      • War neurosis was diagnosed in World War II, as well, where it was the subject of an important documentary film, Let There Be Light, directed by John Huston -- a film that the War Department censored, worried that the public would balk at learning about the psychological damage wreaked by warfare.  
      • And, in retrospect, war neurosis occurred before World War I, as well. 
        • In the Civil War, it was known as "irritable heart", or "Da Costa's syndrome" (see "PTSD: The Civil War's Hidden Legacy" by Tony Horwitz, Smithsonian Magazine, 01/2015).
      • War neurosis, redefined as PTSD, surfaced with a vengeance during the Vietnam War (it was originally called "post-Vietnam syndrome"), and again in the post-9/11 wars in Afghanistan and Iraq (along with an epidemic of concussive head injuries caused by improvised explosive devices, or IEDs).
      • At roughly the same time, mental health professionals began to appreciate the effects of trauma off the battlefield, especially in victims of sexual assault and of childhood sexual and physical abuse.  PTSD formally entered the diagnostic nosology with the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III), in 1980 (Scott, 1990).
      • One positive result of the wars in Iraq and Afghanistan is that soldiers suffering from PTSD can now receive disability benefits -- even if they did not directly experience the trauma in question.  Just being in the vicinity, apparently, was traumatic enough.  This policy change, however, only served to increase the controversy over the diagnosis.  Contributing to the debate was the proposal that veterans with PTSD receive a medal, analogous to the Purple Heart, to recognize their war injuries -- mental injuries, rather than physical injuries, but injuries just the same.
    • Usually, when we think of PTSD, we think about victims of violent crime, or war, or natural disaster.  And that's how psychiatrists usually think about it, too.  The Diagnostic and Statistical Manual of Mental Disorders (DSM) defines stress as experiencing, or witnessing, "actual or threatened death or serious injury, or a threat to the physical integrity of self or others".  But as we discussed earlier, "stress" is defined psychologically as any event which challenges the organism's current level of adaptation.  Divorce, being laid off, or losing a close friend are examples.  Exposure to unpredictable and uncontrollable aversive events is inherently stressful.  Even positive events can be stressful, though they rarely play a role as instigators of PTSD.  
    • Documented exposure to trauma, as defined in DSM, is necessary for the diagnosis of PTSD.  But it is not sufficient.  Most people -- perhaps as many as 95% -- who are exposed to trauma do not develop PTSD (Bonanno, 2011).  This is true even when the exposure is prolonged or severe.
    • However, some clinicians diagnose PTSD even in the absence of such documentation, in patients who "have the symptoms" of PTSD.  Sometimes, the clinician assumes that the patient has "repressed" or "dissociated" his memory for the traumatic event.  It's as if they figure, "They have the symptoms, so they must have been traumatized".  But this is backwards reasoning -- technically, the error of asserting the consequent discussed in the lectures on Thought and Language.  In fact, patients can show some of the symptoms of PTSD even in the absence of exposure to trauma: for example, hyperarousal is symptomatic of anxiety disorder, and poor sleep is symptomatic of depression.  Just because someone is anxious and/or depressed doesn't mean that they've been traumatized. They might be anxious or depressed for some other reason. This error in reasoning was, in my view, largely responsible for the "epidemic" of claims of childhood sexual abuse that arose in the 1980s.
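
The backwards reasoning can be illustrated with a little arithmetic (a hypothetical sketch in Python, using Bayes' rule; all of the probabilities below are invented for illustration and are not estimates from the clinical literature):

    # Why "they have the symptoms, so they must have been traumatized" is
    # backwards reasoning: even if symptoms are common GIVEN trauma, trauma
    # need not be likely GIVEN symptoms.  All numbers are hypothetical.

    p_trauma = 0.30                     # assumed base rate of trauma exposure
    p_symptoms_given_trauma = 0.60      # assumed: hyperarousal, poor sleep, etc.
    p_symptoms_given_no_trauma = 0.25   # assumed: anxiety or depression from other causes

    # Total probability of showing the symptoms (law of total probability)
    p_symptoms = (p_symptoms_given_trauma * p_trauma
                  + p_symptoms_given_no_trauma * (1 - p_trauma))

    # Bayes' rule: probability of trauma given the symptoms
    p_trauma_given_symptoms = p_symptoms_given_trauma * p_trauma / p_symptoms

    print(round(p_trauma_given_symptoms, 2))   # about 0.51 with these numbers

With these made-up numbers, roughly half of the people showing the symptoms were never traumatized at all; the symptoms, by themselves, do not establish the trauma.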

5.  Psychosomatic Disorders (also known as psychophysiological disorders) involve actual damage to some organ innervated by the autonomic nervous system, usually associated with psychological stress.

  • So-called psychosomatic ulcers are the classic example: here, peptic ulcers of the stomach (gastric ulcer) or small intestine (duodenal ulcer) occur in the context of high levels of psychological stress.
  • "Type A" behavior -- very high levels of stress, usually self-imposed through a regime of high activity levels, aggressiveness, and competitiveness -- has been associated with increased risk of coronary heart disease.
  • Other instances of anatomical damage or physiological malfunction may also be stress-related, as when stress leads to a breakout of acne or a temporary disruption of the menstrual cycle. Of course, both acne and menstrual disruptions can have purely physical causes, but sometimes they do occur under conditions of stress (like prom night).

6.  Somatoform Disorders are characterized by physical complaints that have no organic basis. In this respect they are similar to the dissociative disorders, except that the symptoms mimic conditions arising outside the nervous system.

  • In hypochondriasis, the patient is excessively concerned with the risk or threat of disease.
  • Somatization disorder (also sometimes known as Briquet's syndrome, or just plain "hysteria") involves multiple, constantly changing physical complaints.
  • Somatoform pain disorder is characterized by constant, frequent complaints of pain in various body parts, in the absence of any evidence of a physical condition that could cause this pain.
  • Body dysmorphic disorder is characterized by an excessive concern that particular features of the body, such as one's nose or ears, are "not right". Individuals with body dysmorphic disorder are commonly found in the waiting rooms of plastic surgeons, many of whom are only too happy to have them as returning customers. Somehow, though, the problem never seems to get fixed.
  • A controversial case is irritable bowel syndrome, in which the patient experiences abdominal pain, cramping, bloating, diarrhea, and constipation. Aside from these symptoms, physical examination doesn't typically reveal inflammation or other damage to the colon or other parts of the gastrointestinal system -- leading some authorities to suggest that it is a form of somatization disorder. On the other hand, it could be a physical illness, perhaps stress-induced, whose underlying pathology is still unknown. Or, it could be a straightforward physical illness, whose etiology has nothing to do with stress.
  • Something similar could be said about chronic fatigue syndrome (CFS), where the patient suffers from profound exhaustion, disordered sleep, and pain in the muscles and joints -- sometimes so severe that the patient cannot get out of bed, or engage in his or her normal physical activities. Again, the frequent absence of physical findings has led some authorities to suggest that CFS is a somatoform disorder -- "all in the patient's head", or perhaps depression masquerading as a physical illness. But, as with IBS, this assertion is highly controversial, and there are reputable medical researchers who suspect that CFS stems from an underlying, if still unknown, viral infection, or perhaps a form of autoimmune disease.

7.  Dissociative Disorders, including conversion disorders, in which there is a disruption of conscious awareness and control.

  • In the dissociative disorders, the dissociation affects conscious awareness of identity and autobiographical (episodic) memory; these include
    • dissociative amnesia (also known as functional or psychogenic amnesia),
    • dissociative fugue (also known as psychogenic fugue), and 
    • dissociative identity disorder (also known as multiple personality disorder).
  • In the conversion disorders, traditionally collected under the rubric of hysteria, the dissociation affects
    • sensory-perceptual awareness (as in psychogenic or functional blindness, deafness, or anesthesia),
    • and/or the voluntary control of action (as in psychogenic or functional paralysis).

The dissociative and conversion disorders sometimes mimic the effects of damage to the peripheral or central nervous systems, but in these syndromes there is no evidence of brain insult, injury, or disease.

8.  Personality Disorders (e.g., borderline personality, antisocial personality or psychopathy) are deeply ingrained -- longstanding, inflexible, and pervasive -- patterns of maladaptive behavior which typically develop in adolescence. In contrast to the psychoses and neuroses, whose symptoms are "ego-dystonic" (experienced as alien and unwanted), the symptoms of personality disorders are "ego-syntonic" -- experienced as part of the person's normal personality.

  • In antisocial personality disorder (also known as psychopathic personality disorder, psychopathy, or sociopathy), the person engages in a pattern of incorrigible antisocial behavior.
  • In borderline personality disorder (BPD), the person experiences a blurring of the boundaries between self and other, difficulty managing affect, etc.  The term "borderline" was coined by Adolph Stern to label patients who seemed to fall in the cracks between neurosis and psychosis.  It was brought into the official diagnostic nomenclature by John Gunderson, who identified six characteristic features of the disorder (for more on BPD, see "The Long Shadow of Trauma" by Diana Kwon, Scientific American, 01/2022; but be wary of Kwon's hypothesis that trauma lies at the origin of BPD as well as PTSD; there's a kind of "trauma industry" among some mental health professionals, which sees trauma everywhere and as the cause of everything):
    • Intense emotions, especially anger and/or depression;
    • Impulsive behavior;
    • Brief episodes of psychosis;
    • Chaotic interpersonal relationships;
    • Illogical, loose, or bizarre thinking (itself a characteristic of schizophrenia, with which BPD patients were often formerly diagnosed);
    • Outward appearance of normality (which, of course, in psychoanalytic thinking, only shows how abnormal the person is!).

Psychopathy

The classical term for antisocial personality disorder is psychopathy. The syndrome was first described by Philippe Pinel, the French physician who famously freed the insane from their chains, around 1800, as mania without delirium, because the patient did not display delusions or other typical signs of psychosis. Later, Benjamin Rush, an American physician who pioneered in the medical treatment of the mentally ill, characterized the same syndrome as moral derangement, because violent and other antisocial behavior featured so prominently in the cases he observed. This term was replaced by moral insanity, and then by psychopathy.

The classical clinical description of "primary psychopathy" was provided by Hervey Cleckley's book The Mask of Sanity (1941):

  • intelligent
  • charming
  • unreliable
  • dishonest
  • irresponsible
  • self-centered
  • emotionally shallow
  • lacking empathy
  • lacking insight.
These and other features were developed by Robert Hare, a Canadian psychologist, into the Psychopathy Checklist (PCL), which has become the standard instrument for assessing psychopathy. Hare's book, Without Conscience: The Disturbing World of the Psychopaths Among Us (1993), is essentially an update of Cleckley's book, reviewing a considerable body of scientific research on the syndrome.

(Cleckley was also co-author, with Corbett Thigpen, of The Three Faces of Eve, a classic case study of multiple personality disorder that was made into an Oscar-winning film starring Joanne Woodward.)

There are also "secondary" or "neurotic" psychopaths, whose antisocial behavior occurs in the context of conflict and anxiety. One classic case of neurotic psychopathy is Robert Lindner's Rebel Without a Cause: Hypnoanalysis of a Criminal Psychopath, which -- I swear this is true -- was made into the famous movie starring James Dean, Natalie Wood, and Sal Mineo.

Cleckley's characterization of psychopathy is summarized by John Seabrook in "Suffering Souls: The Search for the Roots of Psychopathy" (New Yorker, 11/10/2008, p. 67):

"Beauty and ugliness, except in a very superficial sense, goodness, evil, love, horror, and humor have no actual meaning, no power to move him," Cleckley wrote.... The psychopath talks "entertainingly,"... and is "brilliant and charming," but nonetheless "carries disaster lightly in each hand." Cleckley emphasized his subjects' deceptive, predatory nature, writing that the psychopath is capable of "concealing behind a perfect mimicry of normal emotion, fine intelligence, and social responsibility a grossly disabled and irresponsible personality." This mimicry allows psychopaths to function, and even thrive, in normal society.

In the 1930s, the alternative label sociopath was coined by G.E. Partridge, and psychopathy entered the first edition of the DSM as sociopathic personality. In the 2nd edition of the DSM, the syndrome was renamed antisocial personality disorder.

  • Actually, psychopathy is not quite the same thing as antisocial personality disorder.  Only a minority of individuals with the diagnosis of antisocial personality disorder are psychopaths, as defined by Cleckley and Hare.

Psychopaths would seem to be excellent candidates for the insanity defense -- after all, they suffer from a particular mental disorder which, by its very definition, disposes them to antisocial and criminal behavior.  Nevertheless, psychopathy is generally excluded from the insanity defense, on other grounds.  Not the least of these is that most psychopaths, when asked, will freely admit that their conduct is immoral, illegal, or unethical.  So they do appreciate the difference between right and wrong.  And in other respects, they can appear quite intelligent and charming.  They don't hallucinate, and they're not delusional.  So, to all outward appearances, they would seem both to understand the difference between right and wrong and to be able to conform their conduct to societal rules.  But they don't, and that makes them look like criminals rather than mentally ill.  And they're treated as such: it's been estimated that as many as 1/4 to 1/3 of prisoners in American jails are psychopaths.

9.  Behavioral Disorders consist of specific maladaptive behaviors that occur in the absence of signs of any associated mental disorders (e.g., psychosis, neurosis, or personality disorder).

  • Alcoholism and alcohol abuse are widely recognized forms.
  • Drug addiction and other forms of substance abuse are also classified under this label.
  • Addictions to sex, gambling, and other activities are also recognized as behavioral disorders. Whether these are "real" addictions, like the physical addiction caused by some drugs, is a matter of some controversy.

(10.)  In addition to these forms of mental illness, there are more mundane problems in living (a phrase coined by T.S. Szasz, a famous critic of psychiatry, in his book The Myth of Mental Illness). These include:

  • marital stress
  • sexual dysfunction
  • adjustment problems
  • stress reactions
  • vocational quandaries.

These problems don't remotely resemble mental illness, but they can be extremely distressing to the people involved. Accordingly, they are often treated by mental health professionals, including counseling psychologists as well as clinical psychologists, psychiatrists, and clinical social workers.


Culture-Specific Syndromes

Schizophrenia, depression, and anxiety disorder, like cancer, heart disease, and measles, are found everywhere -- though their incidence and precise manifestation can vary from culture to culture.  While it may seem puzzling at first, the existence of culture-specific syndromes (described below) only underscores the point that the individual's mind and behavior exist in, and are shaped by, a sociocultural context, which is why psychology is both a biological and a social science.

In addition, there are certain forms of mental illness that are encountered only in particular cultures.  For example:

  • Latah, observed in Southeast Asia and Malaysia, is characterized by sudden, extreme startle reactions, loss of behavioral control, and profanity.
  • Ataque de nervios, observed in Latin America, is characterized by shouting, tremors, cursing, feelings of loss of control, and extremely high levels of fear, and can be accompanied by interpersonal violence or suicidal behavior.
  • In koro, observed in Southeast Asia and Africa, the person is obsessed by the idea that his genitalia are shrinking and disappearing.
  • In amok, observed in Malaysia, men (mostly) withdraw and brood, and then erupt in a bout of uncontrolled violence -- hence the English phrase, "running amok".
  • In 2-D love, some Japanese men (again, mostly) develop romantic infatuations with animated characters (anime). 

These syndromes are rarely seen in western developed countries -- except, perhaps, among recent immigrants from these regions. 

An interesting recent case is uppgivenhetsyndrom ("resignation syndrome"), which has been diagnosed among refugee children in Sweden who face deportation -- only in Sweden (at least so far), and only in refugee children (not adults).  These children (and, for that matter, their parents) are under constant, prolonged stress -- first from the conflict that made them refugees in the first place, then from the difficult migration from their home country through Europe to Sweden, and then from the uncertainties of refugee life:  How long will they be able to stay?  How will they live while they are here?  When will they be able to return home?  What will things be like when they get there?  In fact, some refugees are denied asylum, even in a country like Sweden, and it's in these children that uppgivenhetsyndrom is diagnosed.  A typical patient appears to be unconscious, even comatose: “totally passive, immobile, lacks tonus, withdrawn, mute, unable to eat and drink, incontinent and not reacting to physical stimuli or pain.”  However, they are not in a coma.  Their reflexes are normal, as are cardiovascular signs such as pulse rate and blood pressure.  In fact, they show no signs of neurological or any other physical illness.  For this reason, even though these children are obviously under a great deal of stress, uppgivenhetsyndrom is not a stress-related psychophysiological disorder, precisely because there is no evidence of any organic damage -- as you would find in extreme cases of Selye's General Adaptation Syndrome (discussed in the lectures on "The Biological Bases of Mind and Behavior").  In our terms, uppgivenhetsyndrom appears to be a culture-specific form of somatization disorder.  Cases started appearing in the early 2000s, and by 2005 more than 400 cases had been diagnosed.  The children typically recover if their families are permitted to stay in Sweden, especially if they (and their families) also receive psychotherapy aimed at their underlying state of fear and hopelessness (Bodegard, Acta Paediatrica, 2005; see also "The Apathetic" by Rachel Aviv, New Yorker, 04/03/2017, which also discusses culture-specific syndromes in general).  

Cultural differences can also work in reverse, preventing "universal" illnesses from being recognized.  The Hmong people of Laos recognize a condition known as quag dab peg -- literally, “the spirit catches you and you fall down”; it is treated through religious rituals.  In the West, this same condition is known as epilepsy, and is usually treated quite effectively with drugs.  For an excellent treatment of this problem among Hmong refugees in the United States, see The Spirit Catches You and You Fall Down: A Hmong Child, Her American Doctors, and the Collision of Two Cultures by Anne Fadiman (1997).

Although there are some culture-specific forms of mental illness, for the most part the major psychiatric syndromes are considered universal.  In Crazy Like Us: The Globalization of the American Psyche (2010), Ethan Watters argues that the DSM has become a kind of cultural export, shaping non-Western views of mental illness and its treatment.  Watters suggests that this is a bad thing -- a kind of intellectual colonialism.  On the other hand, nobody complains when other aspects of Western science and medicine are exported to non-Western countries, as in the case of treatments for HIV/AIDS, Ebola, or Zika.  It's likely that schizophrenia, depression, anxiety, and other major mental illnesses are, indeed, universal.  But still, non-Western countries likely have something to teach us about prevention and treatment.  For example, epidemiological studies have found that the prospects for recovery from schizophrenia are much better in some cultures than in others -- suggesting that, when it comes to mental illness, it's not all in the genes and neurotransmitters.


Structure and Function

In many ways, mental illnesses are analogous to the physical illnesses diagnosed and treated by physicians and other medical professionals. Just as physical illness stems from abnormalities in bodily structure or function -- a weak heart valve, or bacterial infection, or whatever -- so mental illness stems from abnormalities in mental structure or function -- a defect in the system for affect regulation, perhaps, or just acquiring, through learning, some maladaptive belief or expectation.

  • Abnormalities in cognition are prominent in Alzheimer's disease and other forms of dementia, and in schizophrenia.
  • Abnormalities in emotion are prominent in the anxiety disorders, and in the affective disorders.
  • Abnormalities in motivation are prominent in psychopathy.


The Medical Model of Psychopathology

In fact, the language of medicine pervades our discussion of psychopathology. Thus, we have:

  • mental patients,
  • with acute mental illnesses,
  • associated with a particular etiology, course, and prognosis,
  • treated in mental hospitals,
  • which also have rehabilitation programs for the chronically mentally ill, and
  • programs of mental hygiene to prevent mental illness from occurring in the first place.

Mental illness is diagnosed by

  • symptoms, or publicly observable manifestations of psychopathology (as when a patient complains about being depressed), and
  • signs, manifestations of psychopathology that are identifiable by a trained professional (perhaps by the results of formal psychological testing).

These symptoms and signs of mental illness may be grouped into

  • syndromes, or clusters of symptoms that tend to occur together;
  • diseases, which are syndromes whose underlying cause is known; and
  • illness, which is the subjective experience of disease.

Mental illnesses run a particular time course:

  • There is an acute phase, between the onset of the illness and its remission (whether the illness is treated or not).
  • If the illness does not remit, the patient proceeds to the chronic phase.
  • Prognosis refers to the likelihood that the patient will improve, or achieve remission (with or without treatment).
  • Relapse refers to a return of symptoms after a patient has shown some improvement.
  • Recurrence refers to a new acute episode of illness after a patient has achieved remission.

These analogies are one aspect of the medical model of psychopathology.

Beyond these analogies, the medical model also has implications for the nature of mental illness. However, these implications are frequently misunderstood. It is commonly believed that the medical model ascribes mental illnesses to organic causes: that every psychiatric syndrome is ultimately an organic brain syndrome. As Ralph Gerard, one proponent of this viewpoint, once put it:

"Behind every twisted thought there lies a twisted molecule".

Similarly, Eric Kandel, the psychiatrist who won the Nobel Prize for his studies of long-term potentiation in Aplysia, discussed in the lectures on "Learning", has stated that "All mental processes are brain processes, and therefore all disorders of mental functioning are biological diseases....  The brain is the organ of the mind.  Where else could [mental illness] be if not in the brain?" (quoted by Kirsten Weir in "the Roots of Mental Illness", Monitor on Psychology, 06/2012). 

This "somatogenic" view of mental illness is quite popular, but it is not what the medical model is about. All the medical model asserts is that mental illness has natural causes. According to the medical model, the causes of mental illness may be biological in nature, or they might be psychosocial in nature. All that matters is that they are natural causes that can be ascertained through the methods of empirical science -- namely psychology and related fields. By extension, the medical model holds that mental illness can be treated and prevented by methods derived from scientific research.


Misunderstanding the Medical Model

However, there are considerable misunderstandings abroad about the nature of the medical model -- including misunderstandings perpetrated by many writers of introductory textbooks in psychology. For example, the 4th edition of Gleitman's Psychology (1995, p. 722), the book that I have used most often in teaching introductory psychology, described the medical model as follows:

Some authors endorse the medical model, a particular version of the pathology model [which assumes that symptoms are produced by an underlying pathology, and that the main goal of treatment is to discover and remove this pathology], that assumes... that the underlying pathology is organic. Its practitioners therefore employ various forms of somatic therapy such as drugs. In addition, it takes for granted that would-be healers should be members of the medical profession.

Many other introductory textbooks (as well as texts in abnormal and clinical psychology) have similar passages. For the most part, they are intended to distinguish an ostensibly somatogenic medical model from the psychogenic models associated with cognitive and behavioral therapy, or to distinguish the profession of psychiatry, with its emphasis on drugs and other physical treatments, from clinical psychology, with its emphasis on behavioral interventions. This common association of the medical model with somatogenic theories and biological treatments reflects a deep misunderstanding, and what follows here is an attempt to give an alternative perspective on this issue, based on Siegler and Osmond's (1974b) sociological analysis of the medical model, Models of Madness, Models of Medicine (see also Shagass, 1975).

Interestingly, the Osmond of Siegler & Osmond is Humphrey Osmond (1917-2004), a pioneering LSD researcher who (in 1957) coined the word psychedelic to describe the effects of that and other hallucinogenic drugs. Osmond gave mescaline to Aldous Huxley, who wrote The Doors of Perception about the experience -- a book from which the rock group The Doors, led by Jim Morrison, took their name. Initially, Osmond thought that LSD would serve as a laboratory model (see below) of schizophrenia (or, at least, of schizophrenic hallucinations), but he later focused his attention on the potential of LSD and other psychedelic drugs to treat alcoholism and promote "transcendent" alterations in consciousness (Osmond's obituary appeared in the New York Times, 02/22/04).

According to Siegler and Osmond, the history of thinking about mental illness can be traced in terms of three major models of psychopathology. The supernatural model prevailed before the 18th-century Enlightenment. It assumes that psychopathology reflects possession of the individual by demons; by implication, the proper response to psychopathology is exorcism. The moral model, which prevailed in the late 18th and early 19th centuries, assumes that psychopathology -- or, more precisely, abnormal behavior -- is deliberately adopted by the individual, much in the manner of criminal behavior; by implication, the proper response to psychopathology is confinement and other forms of punishment. The medical model, which began to emerge in the 19th century, assumes only that psychopathology is the product of natural causes that can be identified by the techniques of empirical science. By implication, the proper response to psychopathology is diagnosis according to a scientifically validated system, and attempts at cure or rehabilitation by means of scientifically proven methods. Contrary to the popular view, the medical model does not assert that psychopathology is the product of an abnormal biological condition, or that it should be treated only with drugs or surgery. Rather, the medical model is centered on particular rules regulating two primary social roles: the doctor and the patient.

To illustrate the differences between these models, consider the 1973 decision by the American Psychiatric Association to "de-list" homosexuality as a mental illness. As Charles Silverstein (2011) has noted -- he was one of the psychologists who persuaded the psychiatrists to change their position -- at the time, "homosexuality was considered a crime, a sin, and a mental pathology". So, in that case, homosexuality fell under all three models: the supernatural model (it was considered to be a sin, the work of the Devil), the moral model (it was considered to be a crime, a willful antisocial act), and the medical model (it was considered to be an illness, a sexual pathology).

The doctor (who does not have to be a physician, or even hold a doctoral degree) possesses a special kind of authority called Aesculapian (after Aesculapius, the Greek god of medicine). Aesculapian authority is a combination of three other kinds of authority recognized by sociologists: sapiential authority, by virtue of the doctor's special knowledge and expertise; moral authority, by virtue of the doctor's concern for the afflicted individual; and charismatic authority, by virtue of the afflicted person's faith that the doctor will be of help. Note that doctors lack structural authority: they cannot enforce their prescriptions, resulting in a markedly low rate of compliance. The doctor's role is to investigate the disorder at hand, by means of procedures that might be unpleasant, intrusive, or even frightening. On the basis of this investigation the doctor makes a diagnosis, informs the afflicted person about the nature of his or her problem, absolves the patient of blame (it is critical to medical ethics that people are not blamed, and thus punished, for their illnesses), and finally creates the conditions for the afflicted person to return to health and his or her proper role in society.

The patient enacts his or her part by taking on the sick role: he or she must seek help from the doctor, and cooperate with treatment; in return, the patient is exempt from some or all of his or her responsibilities during treatment. Note that a doctor's order has supreme authority in society -- it can exempt the person from jury duty, military service, and final examinations. It has this power by virtue of our society's implicit adoption of the medical model and the sick role. However, patients cannot remain in the sick role forever: they must leave it eventually, either by recovering or dying.

A special case is when the illness is chronic, and nothing more can be done to achieve a cure. Under these circumstances the role relationships change. It is the responsibility of the doctor to remove the sick role, and confer the impaired role on the afflicted patient. At this point the patient must leave the hospital and active treatment. What once was an illness is transformed into a handicap; and the doctor is replaced by a rehabilitation specialist. Patients are no longer absolved from their responsibilities: they must return to some socially productive activity, do things for themselves, and cope with their handicaps as well as possible.

What has just been described is what Siegler and Osmond (1974) call the clinical medical model, which is one of many different versions. All versions of the medical model assume that disease is the product of natural causes, and that the proper response is scientifically based treatment. However, they differ in terms of their role relationships. In the clinical medical model, the goal is to cure disease in an individual, and the role relationships are doctor and patient.

  • In the public health medical model, the goal is to control diseases that cannot be managed on an individual basis. Its focus is on prevention of disease in a population, rather than cure of an individual; in fact, its prescriptions for public health may damage some individuals, and the public health official may decide to permit some diseases to occur, perhaps for economic reasons. Note that the role relationships differ in the public health medical model. The doctor is replaced by the public health official, who has structural as well as sapiential authority -- he or she has the power of the law and the courts to enforce "doctor's orders", and to force us to fluoridate our water, or be immunized against smallpox and polio. And the patient is replaced by the citizen, who by his or her vote can place limits on the public health official's authority to act.
  • In the scientific medical model, there is no direct interest in intervention (prevention or cure), but interest only in the acquisition of scientific knowledge about the nature of disease. Again, the role relationships change. The doctor is replaced by the investigator who has only sapiential authority. The investigator has no obligation to cure and prevent disease, and in certain circumstances may even inflict disease (or allow it to occur) as part of a controlled experiment. The patient is replaced by the subject who volunteers his or her services. Subjects are under no obligation to participate in research, and do so only when they are compensated in some way for their services. Subjects have rights that patients and citizens do not: they must be protected from harm, and must be assured that the procedures to which they are subjected are worthwhile; their only responsibility is to honor their commitment to the study.

So much detail has been devoted to the medical model because it has been subject to so much misunderstanding -- and also because it gives us the opportunity to unite two social sciences, psychology and sociology, at least for a moment. The interested reader should reflect on the implications of the medical model(s) for understanding psychopathology (its nature, causes, treatment, and prevention), and also on the proposition that many of the abuses frequently attributed to mental health professionals -- such as the confinement of mental patients in the back wards of mental hospitals, without any active treatment -- actually represent violations, not expressions, of the medical model.

Excerpted from Kihlstrom, J.F. (2002), "To honor Kraepelin...: From symptoms to pathology in the diagnosis of mental illness". In L.E. Beutler & M.L. Malik (Eds.), Alternatives to the DSM (pp. 279-303). Washington, D.C.: American Psychological Association.


Diagnosis as Categorization

The diagnosis of mental illness is an act of categorization in which patients (or their illnesses) are assigned to categories based on the same feature-matching process we use to categorize other objects.

  • The patient's symptoms and signs serve as features.
  • The clinician compares the patient's symptoms and signs to those that are associated with various diagnostic categories, as listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association. The DSM is the "official" list of mental illnesses recognized by the psychiatric profession in America, and has been adopted by other helping professions, such as clinical psychology and clinical social work, as well.
  • The patient's illness is diagnosed in terms of the illness that most closely fits his or her symptoms.
  • Sometimes a patient receives more than one diagnosis, a situation known as comorbidity. For example, anxiety disorder is often "comorbid" with depressive disorder.
  • There is a natural linguistic tendency to confuse the patient with the illness -- that is, to refer to "schizophrenics" and "depressives" instead of "patients with schizophrenia" or "patients with depression". This transformation of a category of illness into a category of people is politically incorrect, and a source of great annoyance to many mental patients and their families. But we can't really help engaging in such linguistic shorthand, any more than we can help referring to people as "Asians" or "Hispanics" -- or, for that matter, "extraverts" or "homosexuals".
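To make the feature-matching idea concrete, here is a minimal sketch in Python. The category names and symptom lists are hypothetical, invented purely for illustration (they are not actual DSM criteria), and real diagnosis involves much more than counting overlapping features.

    # A toy sketch of diagnosis as feature matching.  The "categories" and
    # symptom lists below are hypothetical, not actual DSM criteria.
    CATEGORIES = {
        "Syndrome A": {"insomnia", "fatigue", "low mood", "anhedonia"},
        "Syndrome B": {"worry", "restlessness", "insomnia", "muscle tension"},
    }

    def best_match(patient_features):
        """Return the category whose features overlap most with the patient's."""
        scores = {name: len(patient_features & features)
                  for name, features in CATEGORIES.items()}
        return max(scores, key=scores.get), scores

    patient = {"insomnia", "fatigue", "worry"}
    diagnosis, scores = best_match(patient)
    print(diagnosis, scores)   # the best-fitting category, plus all match scores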

[Figure: 19th Century Psychiatric Diagnosis]

Formal psychiatric diagnosis essentially began with a French physician, Jean-Etienne Dominique Esquirol (1772-1840), who drew a fundamental distinction between the insane, the mentally deficient (today we use the term intellectually disabled), and the criminal. In the 19th century, Emil Kraepelin (1856-1926) divided the psychoses into two major categories -- dementia praecox (early dementia, or what we now call schizophrenia) and manic-depressive illness (what we now call affective disorder). And a little later, Pierre Janet (1859-1947) did for the neuroses what Kraepelin had done for the psychoses, dividing them into two major categories -- hysteria (including what we now call the dissociative and conversion disorders) and psychasthenia (including anxiety disorders and some forms of depression).

[Figure: Growth of the Psychiatric Nosology]

Since the 19th century, the number of recognized mental illnesses has grown markedly. The first edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-I), published in 1952, listed only about 100 different syndromes. The fourth edition, DSM-IV, published in 1994, listed almost 300. The current edition, DSM-5, published in 2013, is discussed in detail below.

Whatever its edition, DSM is essentially a catalog of mental illnesses, with a list of the symptoms characteristic of each. Diagnosis is essentially a feature-matching process that asks whether a patient has the symptoms associated with a particular syndrome or disease. In other words, diagnosis is an act of categorization, so it is interesting to look at what kind of categories the diagnostic categories are.


Diagnostic Categories as Proper Sets

In the past, the diagnostic categories of mental illness were at least tacitly construed as proper sets, in which sets of symptoms served as defining features of a syndrome -- singly necessary and jointly sufficient to define an illness (such as schizophrenia), or to classify a person (such as a schizophrenic) as having that illness.

  • For example, traditionally mental illnesses were classified as organic (associated with demonstrable brain insult, injury, or disease) or functional (occurring in the absence of obvious brain damage).
    • The functional mental illnesses were also characterized as psychotic (featuring a loss of reality testing) or neurotic (featuring symptoms of anxiety).
      • The functional psychoses were classified (by Emil Kraepelin, the 19th-century psychiatrist mentioned above) into two broad groups, dementia praecox (what we now call schizophrenia, featuring symptoms of cognitive disorder) and manic-depressive illness (featuring symptoms of affective disorder).
      • The neuroses were also classified (by Pierre Janet, another psychiatrist of the late 19th and early 20th century) into two broad groups, psychasthenia (syndromes such as anxiety disorder, obsessive-compulsive disorder, hypochondriasis, and "neurotic" depression, in which the patient is aware of what is wrong) and hysteria (syndromes such as psychogenic amnesia, blindness, or paralysis, in which the patient suffers a constriction in awareness).

[Figure: Hierarchical Organization of Psychopathology]

In this way, the traditional psychiatric nosology formed a conceptual hierarchy with superordinate categories (organic vs. functional, psychotic vs. neurotic) at the top. Subordinate categories were then created by adding symptoms (such as loss of reality testing or problems with anxiety) as defining features.


The Case of Schizophrenia

The nature of traditional psychiatric diagnosis, as a perfectly nested hierarchy of proper sets, is exemplified by the work of Eugen Bleuler, a Swiss psychiatrist who in 1911 redefined dementia praecox as "the group of schizophrenias".

  • Bleuler accepted Kraepelin's classification of "dementia praecox" as a functional psychosis; but he did not believe, as Kraepelin's name implied, that the syndrome was merely a form of dementia that occurred relatively early in life, as opposed to the "senile dementia" associated with old age.
  • Instead, Bleuler believed he had discovered a new form of illness characterized by a discordance among basic mental faculties of cognition, emotion, and motivation -- hence his label, schizophrenia.
  • In Bleuler's view, all schizophrenics shared four symptoms in common -- his "Four As":
    • associative disturbance, manifested in a disorganization of the logical structure of thought ("thinking crookedly"), neologisms (made-up words), "word salad" consisting of loose, "clang", and chained associations, and a tendency toward over-inclusiveness in categorization;
    • anhedonia, an inability to experience positive emotions -- and, more generally, blunted or inappropriate affect (emotional responses that are not correctly tuned to the situation);
    • ambivalence, a lack of initiative and a diminished motivation to comply with others' wishes; and
    • autism, withdrawal from others and a general detachment from reality.
  • There were also four subtypes, characterized by additional defining symptoms:
    • simple, as described above;
    • hebephrenic, characterized by childlike demeanor;
    • catatonic, characterized by immobility; and
    • paranoid, characterized by delusions.

Bleuler clearly construed the 4 As as defining features of schizophrenia:

  • Every schizophrenic patient displayed all four symptoms in one form or another.
  • Every patient who displayed all four symptoms was a schizophrenic.

The boundaries between schizophrenia and manic-depressive illnesses were clear: you could have one illness or the other, but not both. And the boundaries between schizophrenic subtypes were also clear: you could be hebephrenic or catatonic, but not both.
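Here is a minimal sketch, in Python, of what the classical, proper-set view amounts to computationally: the diagnosis applies if and only if all of the defining features are present. The feature names follow Bleuler's "Four As" as described above; the patient records are hypothetical.

    # The classical (proper-set) rule: defining features are singly necessary
    # and jointly sufficient.  The patient records here are hypothetical.
    FOUR_AS = {"associative disturbance", "anhedonia", "ambivalence", "autism"}

    def meets_proper_set_definition(symptoms):
        # The diagnosis applies if and only if ALL defining features are present.
        return FOUR_AS <= set(symptoms)

    print(meets_proper_set_definition(
        ["associative disturbance", "anhedonia", "ambivalence", "autism", "delusions"]))  # True
    print(meets_proper_set_definition(["anhedonia", "autism"]))  # False: "partial expression"

Patients showing "partial expression" or "combined expression" of symptoms, discussed below, are exactly the cases this all-or-none rule cannot handle.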

A similar hierarchy of syndromes developed around manic-depressive illness.


Problems with the Diagnostic Categories

On this view, the diagnostic categories were proper sets, recognized by symptoms that were singly necessary and jointly sufficient to define the diagnosis. And early editions of the DSM were at least implicitly structured around this conceptualization, in terms of the "textbook cases" they used to characterize each syndrome.

However, the traditional view quickly encountered problems of a sort that is familiar from the critique of the classical view of categories as proper sets (discussed in the lectures on Thought and Language). The simple fact was that very few patients actually resembled the textbook descriptions of the various syndromes.

  • Partial Expression: Many patients displayed some but not all of the symptoms that defined a particular syndrome. In the case of schizophrenia, this led to the introduction of new syndromes such as schizoid personality disorder, schizotypal personality disorder, and paranoid personality disorder. There were also many different forms of depression.
  • Combined Expression: Many patients displayed defining symptoms of several different categories. In the case of schizophrenia, again, this led to the introduction of new syndromes such as pseudoneurotic schizophrenia, pseudopsychopathic schizophrenia, and schizoaffective disorder. The term borderline personality disorder was introduced to cover patients who displayed the symptoms of both psychosis and neurosis -- they were literally "on the border" between these major diagnostic categories.


Psychiatric Syndromes as Fuzzy Sets

Accordingly, for the third edition of DSM (DSM-III), published in 1980, the diagnostic system was reformed to take into account a new understanding of the structure of natural categories offered by cognitive psychology. Under this "revisionist" view:

  • The diagnostic categories were construed as fuzzy sets, rather than proper sets: there is no clear boundary that distinguishes schizophrenia or anxiety disorder from other forms of mental illness.
  • Symptoms are considered to be characteristic rather than defining features, only probabilistically associated with various syndromes: delusions may be highly likely to occur in schizophrenia, but they do not define schizophrenia because they are also observed in other syndromes.
  • Specific instances of the categories share only a family resemblance, resulting in a great deal of heterogeneity among patients carrying the same diagnosis.
  • Each syndrome is represented by a "prototypical" patient who has many, but not necessarily all, of its characteristic symptoms.

Consider, for example, several prominent psychiatric diagnoses, as listed in DSM-5 (2013):

Schizophrenia may be diagnosed in the presence of any two "characteristic symptoms". Positive symptoms entail the presence of something normally absent, like delusions. Negative symptoms entail the absence of something normally present, such as appropriate emotional responses.

One patient may have delusions and hallucinations while another may have catatonic behavior and flat affect, but both are "schizophrenics".

DSM-5 also abandons the Bleulerian subtypes, such as hebephrenic and catatonic schizophrenia, in favor of a distinction between acute and chronic schizophrenia, and whether the patient has had multiple episodes prior to the current one.  Some clinicians and researchers promote a distinction between Type I schizophrenia, where positive symptoms predominate, and Type II schizophrenia, dominated by negative symptoms.


Similarly, Major Depressive Disorder can be diagnosed after observing any five of a large number of symptoms. Note that depression can be diagnosed even if the person doesn't feel depressed (!), so long as he or she manifests diminished interest or pleasure in most or all daily activities.


All patients with Anxiety Disorder must display abnormal levels of anxiety, but this singly necessary symptom is not sufficient to make the diagnosis. Other symptoms must also be present, but none of these additional symptoms is necessary for the diagnosis.

Similarly, the diagnosis of Post-Traumatic Stress Disorder requires that the patient have been exposed to traumatic levels of stress, so this is another instance of a singly necessary defining feature. The exposure may take the form of directly experiencing, or "just" witnessing, the trauma, or of learning about a trauma which occurred to a significant other. The exposure can also be repeated or prolonged, not just a single episode. But none of the other symptoms is necessary to make the diagnosis. The patient must show some "intrusion" symptoms, as well as symptoms of avoidance, negative alterations of cognitions or mood associated with the trauma, and marked alterations in arousal and reactivity, so these criteria also have some of the qualities of necessary symptoms. But there are lots of different ways to display these features; no particular manifestation is necessary.
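The logic just described -- a singly necessary criterion plus "any k" of a longer list of characteristic symptoms -- can be sketched in a few lines of Python. The symptom names and threshold below are hypothetical illustrations, not actual DSM-5 wording.

    # A polythetic ("fuzzy set") diagnostic rule: one necessary criterion plus
    # any k of the remaining characteristic symptoms.  The symptom names and
    # the threshold are hypothetical, not actual DSM-5 criteria.
    def meets_polythetic_rule(patient_symptoms, necessary, characteristic, k):
        patient = set(patient_symptoms)
        if necessary not in patient:           # the singly necessary feature
            return False
        return len(patient & set(characteristic)) >= k   # "any k" of the rest

    characteristic = {"intrusive memories", "avoidance", "negative mood",
                      "hypervigilance", "sleep disturbance", "irritability"}
    print(meets_polythetic_rule(
        ["trauma exposure", "intrusive memories", "avoidance", "hypervigilance"],
        necessary="trauma exposure", characteristic=characteristic, k=3))   # True

Two patients can satisfy the same rule with almost entirely different symptom profiles, which is exactly the heterogeneity noted above.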

A statistical technique called network analysis can be used to show the relations among the various categories of psychopathology. Denny Borsboom, a Dutch psychologist, and his colleagues created a simple spreadsheet showing the co-occurrence of all the symptoms listed in DSM-IV -- that is, how many times two symptoms, such as insomnia and fatigue, are listed as characteristic symptoms of the same syndrome (such as sleep disorder or depression). They obtained a graph in which each symptom is represented by a node, and nodes are connected whenever two symptoms are characteristic of the same disorder. They then colored each of the nodes according to the major category of mental illness in which each symptom occurred. The resulting graph shows the extent of symptom overlap in DSM-IV.
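A minimal sketch of that procedure, using Python and the networkx library, looks something like this. The disorder-to-symptom table is hypothetical, invented for illustration; Borsboom and colleagues, of course, worked from the full DSM-IV.

    # Building a Borsboom-style symptom network.  The disorder-to-symptom
    # table below is hypothetical, not taken from DSM-IV.
    import itertools
    import networkx as nx

    disorders = {
        "Major Depression": {"insomnia", "fatigue", "low mood", "poor concentration"},
        "Generalized Anxiety": {"worry", "restlessness", "insomnia", "fatigue"},
        "Sleep Disorder": {"insomnia", "daytime sleepiness"},
    }

    G = nx.Graph()
    for disorder, symptoms in disorders.items():
        # Connect every pair of symptoms listed under the same disorder.
        for a, b in itertools.combinations(sorted(symptoms), 2):
            G.add_edge(a, b)

    # Symptoms shared across disorders (insomnia and fatigue here) become
    # highly connected nodes -- the symptom overlap the DSM-IV graph revealed.
    print(sorted(G.degree, key=lambda pair: -pair[1]))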

[Figure: Symptom Overlap in DSM-IV]

Regardless of diagnostic category, there is no expectation that all of the symptoms listed under a syndrome will be present in any particular case. In principle, we can observe all possible combinations of characteristic symptoms. Diagnosis, then, is a matter of judgment under uncertainty. Studies of the diagnostic process by Nancy Cantor and her associates show that the certainty with which a diagnosis of schizophrenia is made, for example, will be a function of the number of the patient's symptoms that are highly characteristic of schizophrenia, and the number of symptoms that are more characteristic of other diagnostic categories. In this system, "textbook cases" serve as category prototypes, and there is explicit recognition of heterogeneity among actual patients.


DSM-5

The Diagnostic and Statistical Manual of Mental Disorders (DSM) was first published in 1952, with a second edition appearing in 1968. Both manuals were as much literary productions as scientific ones. They were heavily influenced by Freudian psychoanalysis, centered on the distinction between psychoses and neuroses, and didn't really include clear criteria for making various diagnoses. All that changed with DSM-III in 1980 (a revised edition, DSM-III-R, came out in 1987), and DSM-IV in 1994. In these editions, serious effort went into producing checklists of symptoms by which the various disorders could be reliably diagnosed.

The fifth edition of DSM, known as DSM-5, retains the "fuzzy set" structure of the diagnostic categories, while sometimes revising the criteria for specific disorders.

  • For example, in DSM-IV, the diagnosis of schizophrenia required the presence of any two "characteristic symptoms", whether positive or negative.  In DSM-5, the diagnosis requires that at least one of these "characteristic symptoms" be a "positive" symptom such as delusions, hallucinations, disorganized speech, or grossly disorganized behavior.
    • Moreover, it abandons the Bleulerian subtypes of simple, hebephrenic, catatonic, and paranoid schizophrenia.
  • DSM-IV diagnosed autism based on three criterial symptoms: impairments in social interaction, impairments in social communication (language), and restricted, repetitive, and stereotyped patterns of behavior, interests, and activities. DSM-5 lumps the two kinds of social impairments together, so that Asperger's syndrome joins classic autism in a single category of autistic spectrum disorder.
    • The implication is that autism and Asperger's syndrome differ only in degree, quantitatively, not in kind, qualitatively. Many in the autism/Asperger's community disagree, including Temple Grandin, whose vigorous critique-from-the-inside of DSM-5 is included in The Autistic Brain: Thinking Across the Spectrum (2013, written with Richard Panek).

However, DSM-5 abandons certain other features of previous editions.

  • Previous editions were arranged hierarchically, with major sections devoted to the psychoses, mood disorders, anxiety disorders, and the like.  DSM-5 is organized developmentally, with the earliest chapters covering disorders that first appear in childhood.  
  • Reflecting increasing cultural diversity, DSM-5 pays more attention to "culture-specific" syndromes, as well as cultural factors that may affect diagnosis.
  • Earlier editions made liberal use of a subcategory Not Otherwise Specified (NOS), as in "Psychotic Disorder -- Not Otherwise Specified", for use with patients who displayed some features of psychosis, but not enough to enable a more specific diagnosis. Previously, a remarkably high proportion of psychiatric patients received diagnoses containing the NOS suffix.  This was especially true of the personality disorders and pervasive developmental disorder (autism). NOS allowed psychiatrists too much leeway to diagnose anyone with a mental illness, without being specific as to precisely what illness he or she had: the suffix is now gone.
  • At the same time, DSM-5 also allows practitioners to assess a disorder's degree of severity in the particular patient -- much as medical disorders are rated as mild, moderate, or severe. So, a patient could meet all the criteria for schizophrenia, even though each of the relevant symptoms was present only to a mild or moderate degree.
  • Previous editions classified patients according to several "axes". 
    • Axis I referred to the mental disorder itself, such as schizophrenia or bipolar disorder. 
    • Axis II referred to background personality characteristics, and included the personality disorders themselves, as well as intellectual disability. 
    • Axis III concerned general medical conditions that might relate to diagnosis or treatment. 
    • Axis IV referred to socioeconomic factors, such as poverty, that might be relevant to diagnosis or treatment. 
    • Axis V consisted of a global assessment of functioning.
Aside from Axes I and II, none of these dimensions got much use in real-world, everyday diagnosis, and so they were dropped.

Some of the changes in DSM-5 are highly controversial.

  • Although it eliminates the NOS subcategory for specific mental disorders such as schizophrenia, it includes a new category of Unspecified Mental Disorder, specifically for patients who "do not meet the full criteria for any mental disorder" (p. 708).
    • And it also revives NOS in another guise, in new categories such as Unspecified Schizophrenia Spectrum Disorder and Unspecified Attention-Deficit/Hyperactivity Disorder.
  • DSM-5 collapses Autistic Disorder and Asperger's Syndrome into a single category of Autistic Spectrum Disorder. The implication is that Asperger's Syndrome differs from Autistic Disorder only in terms of severity. But autism and Asperger's Syndrome don't just differ quantitatively, in terms of severity. They also differ qualitatively, in that patients with Asperger's Syndrome have language abilities that patients with Autistic Disorder simply don't have. Moreover, people with Asperger's Syndrome have an interest in interpersonal relations, even if they lack the skills to build or maintain them. Classifying a child with Asperger's Syndrome as "autistic" may mean that the child will not receive optimal treatment.
  • As another example, DSM-5 eliminates the "bereavement exclusion" for the diagnosis of Major Depressive Episode. Previously, people who showed signs of depression immediately following the loss of a loved one (spouse, child, parent, pet, friend, etc.) were not diagnosed as "depressed" unless their depression continued for more than two months after the onset of symptoms. But now, patients can be diagnosed as "depressed" even though they are still mourning a loss.
  • The diagnosis of bipolar disorder has come to be used more frequently with children, a trend that is highly controversial in itself.  Compounding the controversy, DSM-5 introduces a new diagnostic category of disruptive mood dysregulation disorder -- which, critics suggest, might apply to any child who has frequent temper tantrums.
    • Bipolar disorder used to be a form of what Kraepelin called manic-depressive illness, a broad category which also included syndromes of "pure" mania and depression.  In classic MDI, patients experienced alternating periods of mania and depression, usually separated by a symptom-free interval.  By the time that DSM-III was published, however, bipolar disorder was considered to be distinct from depression; and there were so few cases of mania that it was virtually written out of the diagnostic manual.  But DSM-5 now acknowledges a number of subtypes: Bipolar I disorder (BP I) includes patients with episodes of severe mania, regardless of whether they have episodes of depression; patients with Bipolar II (BP II) experience milder episodes of mania (what is known as hypomania), along with episodes of depression. Then there is a separate category for unipolar depressive disorder; but, so far, there is still no recognition of unipolar manic disorder.  For more on the status of mania and manic-depressive disorder, see "The Undiscovered Illness" by Simon Makin (Scientific American, 03/2019).
  • In addition to anorexia and bulimia, DSM-5 introduces the opposite side of the eating-disorder coin, binge-eating disorder, applied to anyone who eats to excess at least once per week.
  • Hoarding is now a full-fledged mental disorder in its own right, not just one possible symptom of obsessive-compulsive disorder.

Some commentators have expressed the fear of diagnostic inflation -- that these kinds of changes may make it possible to diagnose anyone with a mental illness -- or, at least, that DSM threatens to "pathologize normality" by classifying normal mental states, like bereavement (which can resemble depression) or even shyness, as mental illnesses.  (Benjamin Wolman, a leading psychoanalytic psychotherapist, once wrote a book entitled Call No Man Normal.)  Over-diagnosis, of course, inevitably leads to over-treatment.

In addition, some of those responsible for formulating the DSM-5 criteria had unacknowledged links with the pharmaceutical industry, raising the possibility that increasing the number of people who can be diagnosed with some form of mental illness, however mild, will effectively expand the market for psychiatric medications.  At the very least, these affiliations might have biased the Manual's developers toward biological diagnoses and treatments.  At the worst, they raise the possibility that Big Pharma might encourage clinicians to formulate new diagnoses that expand the market for available psychotropic drugs -- what Marcia Angell, the former editor of the New England Journal of Medicine, has called a "patent-extending game" (see her book, The Truth About Drug Companies: How They Deceive Us and What We Can Do About It, 2004).

  • In 1993, the editors of DSM-IV considered, but rejected, a proposal for a new diagnosis of Pre-Menstrual Dysphoric Disorder (PMDD), a severe form of Premenstrual Syndrome (PMS) experienced by some women of childbearing age; previously, it was known as Late Luteal Phase Dysphoric Disorder (LLPDD).  Meanwhile, Eli Lilly, the pharmaceutical company that makes Prozac, a drug used in the treatment of depression, was facing the loss of its patent on that drug; it repackaged Prozac as Sarafem (changing little more than the color of the pill), and began recommending the "new" drug for the treatment of this "new" disorder.  PMDD was given official recognition in DSM-5.  Many commentators entertain the suspicion that PMDD was proposed as part of a strategy to legitimize extending the patent on Prozac.
  • Aricept is a drug commonly prescribed for symptom relief in cases of mild to moderate Alzheimer's Disease (AD): it doesn't reverse, or even slow, the progression of the disease, but it does provide temporary relief from some symptoms.  In 1999, some researchers began to talk about Mild Cognitive Impairment (MCI) as a "prodromal" or incipient phase of Alzheimer's disease.  Diagnosis of MCI requires objective evidence of memory impairment -- worse than normal cognitive aging, but not as severe as AD.  But in 2013, some researchers began touting a new syndrome of Subjective Memory Impairment (SMI) to cover cases where the patient (or his family) complains about memory impairment, but does not perform poorly on objective memory tests.  Obviously (just think about the normal bell-shaped curve for a moment) there are more people with MCI than with AD, and probably more people with SMI than MCI -- thus potentially expanding the clientele for drugs like Aricept.
    • If college students take Ritalin as a "smart drug", why shouldn't their grandparents take Aricept for the same purpose?

Other commentators worry that many DSM-5 diagnoses simply lack validity to begin with.  In an important early paper, Robins and Guze (1970) outlined the criteria for establishing the validity of any diagnostic category:

  1. Distinguish the target diagnosis (e.g., schizophrenia) from other possible diagnoses (e.g., bipolar disorder).
  2. Predict performance on laboratory tests involving psychological and biological variables.
  3. Correlate with family history of mental disorders.
  4. Predict the course of the illness.
  5. Predict the outcome of treatment.

By these standards, more than 40 years later, many DSM diagnoses do not fare too well.  For example, the American Psychiatric Association proposed to "field-test" DSM-5 before its actual publication, to make sure that clinicians could use it reliably to make diagnoses.  Unfortunately, DSM-5 failed its field trials, in that the rate of disagreement between two clinicians, applying the same criteria to the same patient, was unacceptably high.

This state of affairs has led some investigators to propose alternatives to the DSM.

  • Some theorists have proposed that diagnosis move from a categorical structure to a dimensional structure.  In such a system, patients would be assessed along continuous dimensions representing various aspects of cognition, emotion, motivation, and behavior -- much as personality assessment is guided by the Big Five dimensions of personality.  Of course, the precise nature of these dimensions is something for future research to establish.
  • Other theorists have proposed moving toward a biological classification -- perhaps in terms of neurotransmitter activity.  This assumes that all mental illnesses involve underlying biological abnormalities -- an assumption which isn't self-evidently true.  Some mental illnesses, such as phobias and certain types of depression, may reflect maladaptive social learning by patients whose neural functioning is essentially normal.
  • Other theorists, including yours truly, have proposed that the diagnostic categories be based on the results of laboratory tests -- but laboratory tests of psychopathology, not of pathological anatomy and physiology.  What these tests might look like is foreshadowed in the next section, on Experimental Psychopathology.

For now, though, DSM-5 is what we've got.  The Social Security Administration won't pay disability benefits, insurance companies won't pay for treatment, and courts won't consider the insanity defense, in the absence of an official psychiatric diagnosis, and DSM is how psychiatrists officially diagnose patients.  It's as simple as that.


Research Domain Criteria

One possible alternative diagnostic system, embraced by the National Institute of Mental Health, is the Research Domain Criteria (RDoC).  Instead of employing traditional diagnostic categories, such as schizophrenia and bipolar disorder, RDoC classifies mental processes into several domains, constructs, and sub-constructs, with each construct and sub-construct hypothetically linked to a different neural circuit in the brain. 

  • For example, negative emotionality includes fear, which is linked to the amygdala. 

Here is a list of the domains and sub-domains, as listed in a document published by NIMH in 2011.

  • Negative Valence Systems
    • Acute Threat ("Fear")
    • Potential Threat ("Anxiety")
    • Sustained Threat
    • Loss
    • Frustrative Nonreward (i.e., frustration due to the withholding of reinforcement for previously reinforced behaviors)
  • Positive Valence Systems
    • Approach Motivation
      • Reward Valuation
      • Effort Valuation/Willingness to Work
      • Expectancy/Reward Prediction Error
      • Action Selection/Preference-Based Decision Making
      • Initial Responsiveness to Reward
      • Sustained Responsiveness to Reward
      • Reward Learning
      • Habit
  • Cognitive Systems
    • Attention
    • Perception
      • Visual Perception
      • Auditory Perception
      • Olfactory/Somatosensory/Multimodal Perception
    • Declarative Memory
    • Language Behavior
    • Cognitive (Effortful) Control
      • Goal Selection
      • Updating
      • Representation and Maintenance
      • Response Selection, Inhibition or Suppression
    • Working Memory
      • Active Maintenance
      • Flexible Updating
      • Limited Capacity
      • Interference Control
  • Systems for Social Processes
    • Affiliation and Attachment
      • Attachment Formation and Maintenance
    • Social Communication
      • Reception of Facial Communication
      • Production of Facial Communication
      • Reception of Non-Facial Communication
      • Production of Non-Facial Communication
    • Perception and Understanding of Self
      • Agency
      • Self-Knowledge
    • Perception and Understanding of Others
      • Animacy Perception
      • Action Perception
      • Understanding Mental States
  • Arousal and Regulatory Systems
    • Arousal
    • Circadian Rhythms
    • Sleep and Wakefulness

So, for example, some patients now carrying a diagnosis of autism (or autistic spectrum disorder) might instead be diagnosed as having deficits in the cognitive system underlying the ability to understand mental states.  And, presumably, that disorder would be treated.  How?  Presumably by altering the underlying brain circuitry (or chemistry).  The idea is not all that different from treating high blood pressure or high cholesterol.  You go into the lab for a test; out comes the diagnosis and a prescription for statins or a beta-blocker or whatever.

Following the medical model, it's important to get beyond surface symptoms and signs to underlying pathology.  DSM-5 doesn't do that: its diagnoses are based solely on signs and symptoms.  Diagnosis in the rest of medicine is of diseases, and the RDoC is a step in this direction.  At the same time, we can worry about the underlying assumption, which is that each element of underlying pathology is linked to a dysfunction in some brain circuit (Insel, "Faulty Circuits", Scientific American, April 2010).  It's not at all clear that a faulty brain circuit must be involved in every instance of mental illness.  Moreover, the identification of such circuits is years, decades, maybe centuries away.  The whole project of mapping the circuits of the brain -- the raison d'être of the Human Connectome Project and of the BRAIN Initiative announced by President Obama in 2013 -- begins with extremely simple nervous systems, such as aplysia (the sea slug studied by Eric Kandel and others), and then moves up to the mouse, and only later to the human -- at which point the process of linking neural circuits to the various domains can begin.  Not in my lifetime, nor yours, nor your grandchildren's.

Note, however, that the whole enterprise depends on that initial list of domains.  It looks pretty comprehensive, but who says it's the right list?  What if another list would be better?  What if something important is missing?  If the behavioral analysis is wrong, identifying underlying circuits will be misleading. 

In the meantime, diagnosis needs to move beyond signs and symptoms to the identification of underlying pathology through laboratory tests -- underlying psychopathology.  Exactly how to do that isn't entirely clear.


The Hierarchical Taxonomy of Psychopathology

A more workable system, perhaps, abandons categorical diagnoses, even probabilistic, fuzzy-set ones, for a dimensional scheme similar to the Big Five structure that undergirds much research on personality traits.  This is the Hierarchical Taxonomy of Psychopathology, otherwise known as HiTOP.  HiTOP begins at the very lowest level of description, with a long list of symptoms and signs (anxiety, hallucinations, delusions) and traits (manipulative, unstable, antisocial) observed in patients with various mental disorders.  It then takes data from a large (really large) number of patients and calculates the correlations among these features.  Using techniques similar to the factor analysis and cluster analysis discussed in the lectures on Methods and Statistics, it then identifies which features tend to co-occur, progressively extracting primary factors, then secondary factors (factors of primary factors), tertiary factors (factors of secondary factors), and so on.  The general idea is that the resulting clusters of co-occurring (or correlated) symptoms constitute syndromes, and related syndromes constitute spectra, super-spectra, and so forth.  Here's what the HiTOP model looked like, circa 2022:

[Figure: Official HiTOP Working Model]



HiTOP has been subject to an extensive public-relations effort, which has begun to make some inroads into clinical practice.  For example, you now hear a lot of clinicians talk about "internalizing" and "externalizing" syndromes, where once you would have heard about "anxiety" and "psychopathy".  The important feature of HiTOP, however, is that it is derived empirically from the actual co-occurrence of various symptoms.  It is not generated from the head of one person, as was the case with Bleuler's "4 As" or the distinction between neurosis and psychosis.  Only time will tell whether HiTOP will prove more useful than the DSM.  Because it stays closer to clinically observable signs and symptoms, however, the wager here is that it will, at least, prove more useful than RDoC.
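To illustrate the "factors of factors" logic on which HiTOP rests, here is a toy simulation in Python, using principal components as a crude stand-in for factor analysis. The data and symptom names are simulated for illustration; this is not the actual HiTOP procedure or dataset.

    # Toy illustration of hierarchical "factors of factors", with principal
    # components standing in for factor analysis.  All data are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # A shared higher-order liability ("internalizing") drives two clusters.
    internalizing = rng.normal(size=n)
    fear = 0.7 * internalizing + rng.normal(size=n)
    distress = 0.7 * internalizing + rng.normal(size=n)

    symptoms = np.column_stack([
        fear + rng.normal(scale=0.5, size=n),      # panic
        fear + rng.normal(scale=0.5, size=n),      # phobic avoidance
        distress + rng.normal(scale=0.5, size=n),  # worry
        distress + rng.normal(scale=0.5, size=n),  # low mood
    ])

    def first_component_scores(X):
        """Score each person on the first principal component of X."""
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ vt[0]

    # Level 1: a "factor" within each cluster of co-occurring symptoms.
    fear_factor = first_component_scores(symptoms[:, :2])
    distress_factor = first_component_scores(symptoms[:, 2:])

    # Level 2: the primary factors are themselves correlated, pointing to a
    # higher-order spectrum (the sign of a principal component is arbitrary,
    # so report the absolute correlation).
    print(abs(np.corrcoef(fear_factor, distress_factor)[0, 1]))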


The Future of Psychiatric Diagnosis

For a while, the National Institute of Mental Health was so fed up with the system represented by DSM that it discouraged new grant proposals on topics like "schizophrenia" and encouraged investigators to focus instead on the kinds of functional and biological categories represented by the RDoC.  It didn't last.

For now, though, and for the foreseeable future, psychiatric diagnosis is going to be based on symptoms and signs, and organized by something very much like DSM-5.  Still, diagnosis can be informed by laboratory research in experimental psychopathology, which seeks to identify the disordered mental processes that underlie the traditional diagnostic categories.  This line of research has a long and distinguished history, going back to Emil Kraepelin's studies of schizophrenia using Donders's reaction-time methodology.  While the RDoC claims to be "agnostic" about the traditional diagnostic categories (actually, far from being agnostic, it rejects them), experimental psychopathology takes them as a starting point, and tries to identify the pathological mental processes that underlie them.  One result of this work should be the more fine-grained classification, moving beyond superficial symptoms and signs, that the RDoC seeks -- but without the excessive biologizing.

For critiques of psychiatric diagnosis and DSM, see:


  • The Selling of DSM (1992) and Making Us Crazy (1997) by Herb Kutchins and Stuart Kirk.
  • They Say You're Crazy (1996) by Paula Caplan.
  • "The Epidemic of Mental Illness: Why?" and "The Illusions of Psychiatry" an essay-review of several books critical of psychiatry by Marcia Angell (New York Review of Books, June 23 and July 14, 2011).
  • "Head Case: Can Psychiatry Be a Science?" by Louis Menand (New Yorker, March 1, 2010).
  • The Book of Woe: The DSM and the Unmaking of Psychiatry (2013) by Gary Greenberg, who doesn't seem to like diagnosis at all (he also wrote another book, entitled Manufacturing Depression).
  • Saving Normal: An Insider's Revolt Against Out-of-Control Psychiatric Diagnosis, DSM-5, Big Pharma, and the Medicalization of Ordinary Life (2013) by Allen Frances (who led the task force that prepared DSM-IV, but thinks that DSM-5  takes an approach to diagnosis that has become outmoded).
  • "Three Approaches to Understanding and Classifying Mental Disorder: ICD-11, DSM-5, and the National Institute of Mental Health's Research Domain Criteria (RDoC)" by Lee Ana Clark et al. (Psychological Science in the Public Interest, 2017).  A comprehensive overview of contemporary approaches to psychiatric diagnosis, emphasizing differences between DSM and RDoC.
  • "Read the Label" by Manvir Singh, reviewing several recent books on psychiatric diagnosis in general, and on autism, sociopathy, and borderline personality disorder in particular; he also has comments on RDoC and HiTOP (New Yorker, 05/13/2024).


Experimental Psychopathology

Current diagnostic practices classify illness based on surface symptoms, but the symptoms are not the disease. Rather, symptoms are assumed to be caused by some underlying disease process -- pathology.


Beyond Surface Symptoms to Underlying Pathology

In medicine, symptoms are not the disease, and progress in medical understanding is marked by a shift in focus from symptoms to underlying disease processes -- to underlying pathology revealed by laboratory analyses of structure and function whose results are interpreted in the light of a theoretical understanding of normal function.

[Figure: Types of 'Fever']

Consider the example of fever, manifested by chills (a symptom about which patients complain) and increased body temperature (a sign, indicated by a thermometer). Well into the 19th century the medical nosology included a large number of different "types" of fever, based on the symptoms accompanying the fever and the circumstances under which the fever occurred. In fact, Benjamin Rush, the pioneering American physician of the Revolutionary War era, thought there was only one disease -- fever.  But nobody diagnoses and treats fever anymore, because fever is viewed as a symptom of an underlying infection. Physicians may try to reduce a patient's fever, as an emergency measure, but most of their efforts are devoted to identifying the underlying infection (through blood tests) and then treating it (for example, with antibiotics).

  • In "Rocky Mountain spotted fever" the patient presents with chills and elevated body temperature, as well as a petechial rash beginning on the wrists and ankles; the illness was first diagnosed in the Rocky Mountain area of the United States. Initially, treatment focused on bring the fever down with cold compresses, or else simply letting the fever "run its course". Now, however, a physician who suspects that a patient suffers from this disease will order a laboratory test to look for serum antibodies to the Rickettsia rickettsii virus, which is transmitted by a wood tick found in the western United States. Treatment entails finding the tick and removing it from the patient's skin, followed by a course of antibiotics to eliminate the infection. This treatment, applied in a timely fashion, results in a complete cure, not just symptom relief. Similarly, efforts at preventing the disease are aimed at eliminating the tick from the environment through insecticide sprays.

The Way We Diagnose Now (in the Rest of Medicine)

Lisa Sanders, a physician on the faculty of the Yale School of Medicine, occasionally contributes a column to the New York Times Magazine in which she shows how a puzzling medical diagnosis was resolved through the interpretation of laboratory tests that went beyond the usual symptoms, signs, and history. Here are a few examples from her 2003 columns:

  • A middle-aged man with a history of diabetes, hypothyroidism, and bone marrow dysfunction leading to anemia complained of pain in the hips and buttocks and difficulty walking; he was comfortable so long as his legs were not bearing weight. An initial diagnosis of sciatica, based on the patient's symptoms, didn't quite fit. X-rays were negative, as were blood tests for infection or destruction of muscle tissue. However, an MRI revealed an obstruction in an artery leading to his thigh, a complication of diabetes -- a condition similar to a heart attack, producing "ischemic" muscle pain (09/07/03).
  • A young man with morbid obesity (5'7", 350 pounds) was admitted to the Intensive Care Unit complaining of difficulty breathing and extreme fatigue. A test showed that the oxygen in his blood was only 88% of normal. Blood tests suggested an infection in the lungs, while X-rays revealed pneumonia. The patient's obesity was preventing him from clearing carbon dioxide from his blood, and the buildup was causing his sleepiness. The disease, hypoventilation syndrome, is also known as Pickwickian syndrome, after an obese, chronically sleepy character in Charles Dickens's Pickwick Papers (09/21/03).
  • A plump woman in her 50s was taken to the emergency room after a fall. She complained of chronic back pain. An X-ray revealed a collapsed vertebra in her lower spine, and the mottled appearance of surrounding vertebrae suggested cancer. In fact, further physical examination, confirmed by further tests, revealed a cancer which had metastasized from her breast to her ribs, hips, and spine (10/19/03).
  • A young woman complained of looking like a man, and of being mistaken for one. She presented with a somewhat "masculine" appearance, including facial hair, bearing, and voice. Her menstrual periods were irregular. She had no signs of a hormonal abnormality such as congenital adrenal hyperplasia (discussed in the lecture supplements on Psychological Development). Physical examination revealed male-pattern baldness, and patches of darkened skin often associated with high levels of circulating insulin. In the absence of a test that could definitively confirm the diagnosis, the patient was treated with medication that would improve her response to insulin, and thus lower her levels of circulating insulin. In response, the patient lost some weight and regained menstrual regularity, consonant with the diagnosis. But the patient was also referred to an endocrinologist for further laboratory tests of hormonal dysfunction.
  • A man in his 50s, being treated for high cholesterol but with no other health problems, complained of shortness of breath, but had no other symptoms. The absence of fever or cough ruled out pneumonia. Evaluations of cardiovascular function ruled out heart disease. A course of antibiotics had no effect, ruling out infection. A bronchoscopy was negative for both infection and cancer. The pattern of negative findings led to a diagnosis of interstitial lung disease (ILD) by default -- it was the only remaining possible cause of the patient's symptoms. In fact, a review of the bronchoscopy results did reveal a white blood cell anomaly consistent with ILD, which may have been triggered by the Lipitor the patient was taking to control his cholesterol levels. The patient's condition improved when he was taken off the Lipitor and put on anti-inflammatory steroids -- an outcome consistent with the diagnosis of ILD.

These examples illustrate how diagnosis is done in advanced, scientific medicine:

  • The patient's symptomatic complaints, and signs revealed by physical examination, lead the physician to generate a hypothesis -- that the patient has a particular disease, which accounts for the observed signs and symptoms.
  • This diagnosis is then confirmed or disconfirmed through laboratory tests.
    • If the tests are positive, treatment proceeds in accordance with the confirmed diagnosis.
    • If the tests are negative, a new hypothesis is generated, and new tests are performed.
  • If no definitive tests are available, a diagnosis may be supported retrospectively through the patient's response to treatment.
    • If the patient responds positively, then the diagnosis is provisionally confirmed.
    • But if the patient does not respond positively, a new diagnosis may be hypothesized, and a new treatment ordered in an attempt to confirm the hypothesis.

In any event, the patient's response to treatment is assessed by laboratory tests: if the diagnosis is correct, and the treatment is working, the patient's scores will improve on the very tests that generated the diagnosis in the first place.

On occasion, the correlation between symptoms and disease may be so high that no tests are ordered. But in most cases, medical diagnosis is based on laboratory tests, not presenting symptoms, and the success of treatment is confirmed by laboratory tests as well.
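The hypothesize-test-treat cycle described above can be summarized schematically in a few lines of Python. This is purely illustrative; the candidate diagnoses and test results are stubs loosely based on the sciatica case above.

    # A schematic sketch of the diagnostic cycle: work through candidate
    # diagnoses, ordered from most to least likely, until a test confirms one.
    def diagnose(candidates, run_lab_test):
        for hypothesis in candidates:
            if run_lab_test(hypothesis):   # a positive test confirms the diagnosis
                return hypothesis
            # a negative test rules this hypothesis out; try the next one
        return None                        # no definitive diagnosis

    # Stubbed-in results loosely modeled on the first case above:
    results = {"sciatica": False, "arterial obstruction": True}
    print(diagnose(["sciatica", "arterial obstruction"], results.get))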

In medicine, pathology affects bodily functions through lesions in anatomical structures (e.g., a skull fracture or lung cancer), abnormalities in physiological functions (e.g., hypertension or sinus arrhythmia), or infection by micro-organisms (e.g., the influenza virus; in addition to viral infections, there are also infections by bacteria, fungi, chlamydiae, rickettsiae, and mycoplasmas). Modern medical diagnosis is based on evidence of these pathological conditions, as revealed by objective laboratory tests.

There are parallels in the medical model of psychopathology:

  • Abnormalities in Mental Structures and Processes, roughly analogous to the anatomical and physiological abnormalities in physical medicine. In these cases of psychological deficit, something has gone wrong with the patient's basic mental apparatus -- the cognitive, emotional, and motivational systems governing the person's experience, thought, and action. Presumably, there is also some underlying biological abnormality contributing to the abnormal mental function.
  • Abnormalities in Mental States (thoughts, feelings, and desires) constructed by mental structures and functions (and their biological substrates) that are essentially intact. These abnormal mental states -- beliefs, expectations, thoughts, feelings, and desires -- are the mental analogues of infections.

In medicine, underlying pathology is revealed by laboratory analyses interpreted in the light of our understanding of normal structure and function. In psychopathology, such laboratory research is known as experimental psychopathology. Experimental psychopathology tries to identify the causes of the patient's manifest symptoms. It is basic research on psychopathology, as opposed to applied research on diagnosis and treatment, and it comes in two major forms:

  • laboratory studies of psychological deficit and
  • laboratory models of psychopathology.


Attentional Deficits in Schizophrenia

Studies of psychological deficit in schizophrenia have a long history, dating back to Kraepelin's experimental work in Wundt's laboratory, where he used Donders's reaction-time technique to measure the speed of mental processes in patients with dementia praecox (early dementia), the syndrome later renamed schizophrenia. One highly prominent theory of schizophrenia holds that the psychological deficit underlying the patient's presenting symptoms affects the attentional system. According to this theory, schizophrenics suffer from a breakdown in selective attention that renders them highly distractible and unable to filter out irrelevant ideas. The patient shows thought and language disorder because he cannot keep track of what he is thinking and saying; he withdraws socially to shut out this chaos of stimulation. And, in fact, laboratory tests confirm that patients with schizophrenia have a number of difficulties with attention.

The information-processing deficits in schizophrenia appear to begin at the very beginning of the information-processing sequence.  Think of the multi-store model discussed in the lectures on memory.  In that model, stimulus information is briefly held in modality-specific sensory registers, from which it is extracted into short-term or working memory for further processing, and then encoded into long-term memory.  Studies of information-processing in schizophrenia have focused on the sensory registers, working memory, and attention.  If things go wrong at these earliest stages of information processing, lots of other things will go wrong as well. 

[Figures: Shadowing Omissions; Shadowing Intrusions; Recall of Attended Words]

Consider, for example, a study by Wishner and Wahl (1974) of dichotic listening in schizophrenia (there are other studies of this topic, but I chose this one because Wishner was a teacher of mine in graduate school and Wahl was a graduate-student colleague). In this paradigm, different messages are played through earphones to the two ears, and the subject is instructed to shadow one message, repeating its words aloud as they are spoken, and to ignore the other. There is little or no impairment when normal subjects shadow a single message, with no distracting message coming over the other ear. When the distracting channel is added, however, subjects begin to make shadowing errors -- omissions of parts of the target message and intrusions of the irrelevant message -- and their recall of the target message also suffers. Wishner and Wahl found that schizophrenics made more shadowing errors than controls -- more omissions and more intrusions, especially when the target message was played at a relatively fast speed -- and also showed poorer recall of words from the target message, even when no distractor was present. These results are consistent with the hypothesis that schizophrenics suffer from an attentional deficit.

[Figures: Backward Masking; Retrieval from Iconic Memory]

Comparable findings were obtained in a study of visual attention by Saccuzzo and Shubert (1981), employing the backward-masking paradigm. In this procedure, an array of digits or letters is presented very briefly, so that there is not much opportunity for it to register in a very short-term memory store known as iconic memory (see the lecture supplements on Memory). Presentation of the array is followed by a "masking" stimulus that effectively displaces the array from iconic memory, preventing it from being further processed in "primary", "short-term", or "working" memory. Therefore, identification of the elements in the array requires highly focused attention, so that the subject can process the information from the array into primary memory before its representation disappears from iconic memory. In the experiment, subjects had to search for the letter T in an array of As, or for the letter A in an array of Ts: if the target is present, they simply say "yes" and another trial begins. The investigators varied the "stimulus-onset asynchrony" (SOA), or the interval between the onset of the array and the onset of the mask -- essentially, the amount of time that a representation of the array resides in iconic memory. They found that schizophrenics were generally poorer at target detection than controls, especially at longer SOAs. They concluded that it takes longer for schizophrenics to transfer information from iconic to primary memory. By virtue of this slower rate of information processing, schizophrenics miss a lot of what goes on around them, and are more distractible.
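The logic of the backward-masking result can be captured in a toy model: the longer the SOA, the more items can be transferred from iconic to primary memory before the mask arrives, and a slower transfer rate (the hypothesized schizophrenic deficit) lowers detection across the board. The numbers below are invented for illustration and are not Saccuzzo and Shubert's data.

    # A toy model of backward masking.  Transfer rates and SOAs are made-up
    # numbers for illustration, not data from Saccuzzo and Shubert (1981).
    ARRAY_SIZE = 8   # letters in the display

    def items_encoded(soa_ms, ms_per_item):
        """Items moved from iconic to primary memory before the mask erases the icon."""
        return min(ARRAY_SIZE, soa_ms // ms_per_item)

    def detection_probability(soa_ms, ms_per_item):
        """Chance the target was among the encoded items (target location is random)."""
        return items_encoded(soa_ms, ms_per_item) / ARRAY_SIZE

    for soa in (40, 80, 120, 200):
        control = detection_probability(soa, ms_per_item=20)   # faster transfer
        patient = detection_probability(soa, ms_per_item=40)   # slower transfer
        print(f"SOA {soa:3d} ms: control {control:.2f}, patient {patient:.2f}")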

In recent years, much attention has focused on working memory in schizophrenia.  You’ll remember from an earlier lecture that working memory enables an individual to maintain information in an active state, over short periods of time, while it is being manipulated by other cognitive processes.  This information may be extracted from perception or retrieved from memory.  As such, working memory is critical for a wide variety of cognitive processes, such as selective attention, including both focusing on needed information and inhibition of irrelevant information.  A problem in working memory can permeate far into cognitive function, affecting memory, reasoning, problem-solving, and language.

Working memory is often studied with variants of the Sternberg task, discussed at length in the lectures on Methods and Statistics.  In the Sternberg task, a subject is asked to memorize a small set of items, and then to search that set for a particular target.  It’s different from the Sperling task: in the Sperling task, subjects have to inspect a study set presented as a visual array, whereas in the Sternberg task they have to hold a representation of the study set in working memory while they search it.  Sternberg’s basic finding, you’ll remember, was that response latency increased linearly with the size of the search set, indicating a serial search through the memory set.  An experiment by Paul Metzak and his colleagues compared schizophrenic patients with a group of normal control subjects who had been matched with the patients on such demographic variables as age and education.

Like Sternberg, Metzak found that accuracy decreased, and response latency increased, with the size of the study set.  But this variable had a greater effect on the performance of the schizophrenic patients than on the normal subjects; performance deteriorated especially at the largest set size.  It’s as if the patients were overwhelmed by the information they had to process -- maybe because they’re processing information so slowly, as in the Saccuzzo experiments on iconic memory.
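A minimal simulation of the Sternberg procedure shows why set size matters: with a serial scan, each additional item in the memory set adds a fixed comparison time. The parameter values are invented for illustration, not taken from Sternberg's or Metzak's data.

    # A minimal simulation of the Sternberg memory-search task, assuming a
    # serial scan with a fixed comparison time per item.  Parameter values
    # are made up for illustration.
    import random

    BASE_RT_MS = 400   # encoding + response time
    COMPARE_MS = 40    # time per comparison in working memory

    def trial_rt(set_size, exhaustive=True):
        study_set = random.sample(range(10), set_size)   # memorize some digits
        probe = random.choice(study_set) if random.random() < 0.5 else -1
        comparisons = 0
        for item in study_set:                 # serial search of working memory
            comparisons += 1
            if not exhaustive and item == probe:
                break                          # self-terminating variant stops early
        return BASE_RT_MS + COMPARE_MS * comparisons

    for set_size in (2, 4, 6):
        mean_rt = sum(trial_rt(set_size) for _ in range(2000)) / 2000
        print(f"set size {set_size}: mean RT ~ {mean_rt:.0f} ms")

In the exhaustive version every item is compared on every trial, so mean RT grows strictly linearly with set size; the self-terminating variant produces a shallower slope on target-present trials.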

Working memory consists of several different components.  First, there are modality-specific buffers which maintain information in an active state.  These elements are similar to the traditional concept of short-term memory, and there appears to be no problem with them in schizophrenia.  Then there is a central executive, which actually guides various information-processing tasks, manipulating and transforming information held by the buffers.  Here is where the psychological deficit in schizophrenia appears to be located.  Other research shows that there is a particular problem with a component of the central executive that represents and maintains contextual information that is relevant to current tasks – information about the task being performed, other information that has been recently processed, and what’s coming next.  This central executive appears to be mediated by the dorsolateral prefrontal cortex.  Interestingly, this region of the brain is part of a system modulated by dopamine, and the antipsychotic drugs used in the treatment of schizophrenia are antagonists of this neurotransmitter.    

Smooth Pursuit Eye Movements

Studies of dichotic listening, backward masking, and working memory illuminate schizophrenic deficiencies in central mechanisms of attention, but attention is also regulated by peripheral mechanisms -- as when we turn our heads or eyes to shift our attention from one object to another. Holzman and his colleagues have studied the peripheral mechanisms of attention with an eye-tracking paradigm in which subjects are asked to hold their head in a fixed position and move their eyes to track a target moving smoothly across a screen; they then examine the subjects' pursuit eye movements (PEMs) as they follow the target. Some aspects of these eye movements are not consciously perceptible to the subject, but can be recorded by an electrooculogram (EOG) similar to that used in sleep research, or by special infrared devices.

Holzman et al. have found that in contrast to the smooth pursuit eye movements (SPEMs) characteristic of normals, schizophrenics show eye movements that are more jagged. About 70% of schizophrenic patients show anomalous PEMs, but PEM anomalies also show up in relatives of patients who are not themselves diagnosed with schizophrenia. Accordingly, Holzman has suggested that abnormal PEMs might be a biological marker for schizophrenia, perhaps indicating an underlying malfunction in the frontal lobes of the brain, particularly in those centers involved in the peripheral control of attention. Interestingly, Holzman and his colleagues have found that abnormal PEMs are related to various aspects of thought disorder, as measured in patients' verbal protocols. However, abnormal PEMs are associated with thought disorder across a wide variety of diagnoses, so this aspect of attentional deficit may not be specific to schizophrenia.

Most experimental studies of schizophrenia focus on various aspects of cognitive function, such as attentional deficits, but schizophrenia can involve problems with emotional function as well. Consider anhedonia, one of the classic "Four As" of schizophrenia. Many schizophrenics show flat or blunted affect, or else affect that is inappropriate to the situation. But that's their display of affect. Recall Lang's multiple-systems view of emotion, according to which emotional responses have three different components: a subjective feeling state, overt behavior, and covert physiological response.

In one experiment, Kring and Neale (1998) showed emotional (positive and negative) and neutral films to schizophrenic patients and controls, and measured all three components of emotional response. In terms of facial expressions, schizophrenics were, indeed, less reactive than controls -- in the frequency, intensity, and duration of their expressions. But they were actually more reactive when their covert physiological responses were measured by a psychophysiological index known as skin conductance. The self-reported emotional responses of schizophrenic patients were somewhat muted, compared to controls, but both patients and controls showed the same pattern of response to positive and negative films. So, schizophrenics do not appear to be emotionally responsive in terms of their facial expressions. But in terms of their subjective feeling states, and their physiological response to emotional stimuli, they appear to be experiencing emotion -- even if they aren't showing it in their behavior.


Attention-Deficit Disorder

The problem with diagnosing mental illness based on symptoms is illustrated by attention deficit hyperactivity disorder (ADHD), which is typically diagnosed in children based on such symptoms as failure to pay attention in school or play, running around the room, climbing on things, fidgeting and squirming, and the like. But adults with ADHD don't necessarily do these things. Instead, they display other, more age-appropriate symptoms such as difficulties in "wrapping up" the final details of a project, getting and keeping things in order, remembering appointments, and the like. Both sets of symptoms may have a common origin in some attentional dysfunction, and perhaps a common standard for diagnosis, applicable to children and adults alike, could be achieved by the development of laboratory tests that assess attentional functions directly.


Laboratory Models of Psychopathology

In addition to studies of psychological deficit, it is sometimes possible to create laboratory models of psychopathology, inducing in normal individuals a syndrome that, in at least some respects, resembles some form of actual mental illness. Laboratory models are rarely if ever exact replicas of mental illnesses, down to the last detail; rather, they usually mimic one or more characteristic symptoms of some syndrome. In a sense, laboratory models constitute theories of how symptoms arise in actual patients, because they are based on the assumption that the causative agents that produce symptoms in the lab parallel those that are present in the real world outside the laboratory. Thus laboratory models can be used to test proposals concerning the origins, treatment, and prevention of mental illness. As such, they can be evaluated on a number of dimensions:

  • behavioral (i.e., descriptive features of symptoms and syndromes);
  • cause;
  • cure;
  • prevention; and
  • underlying biological structures and processes.


Anxiety Disorders

Like studies of psychological deficit, laboratory models of psychopathology have a history that goes fairly far back in time -- but this time, to Pavlov's laboratory rather than Wundt's. In some early studies of discrimination learning, dogs were conditioned to salivate to a circle or an ellipse, and then the axes of the stimulus were progressively changed, so that the circle became more elliptical, or the ellipse more circular. The result was that, at some point, the dogs became distressed -- seemingly anxious -- a phenomenon that became known as experimental neurosis. One explanation (proposed by Sue Mineka and myself) was that this increase in anxiety occurred because the animals could no longer predict the onset of the food US. Unpredictability causes anxiety (hold this thought, because in a little while we'll discuss another laboratory model that indicates that uncontrollability causes depression).

In addition, conditioned fear has served as a laboratory model of phobia, while conditioned avoidance has served as a laboratory model for obsessive-compulsive disorder.

  • In conditioned fear, the animal becomes afraid of a previously neutral stimulus that has been paired with an aversive event, such as a tone that predicts shock.
    • In 1920, John B. Watson (who gave us the doctrines of behaviorism) and his student Rosalie Rayner showed that they could condition phobia-like levels of fear of furry things in an infant, known to us as "Albert B.", or more colloquially "Little Albert", by making a loud noise whenever he came into close proximity to an otherwise-harmless white rat. (Link to a video of the Little Albert study.)
      • The case was controversial, not just on the obvious ethical grounds that they were inducing fear in a child who was not fearful before, but also because they didn't engage in an extinction procedure that would have eliminated Albert's fear response.
      • Inevitably, questions are often asked about what happened to Little Albert.  We don't know for sure, because Watson left academia shortly thereafter (following an affair with Rayner, which led to his divorce) -- he turned to a career in advertising with the firm of J. Walter Thompson, for whom he invented the concept of the "coffee break" as part of a campaign for Maxwell House.  When he died, his wife, following his instructions, burned all of his papers.
        • In 2009, "Little Albert" was plausibly identified as Douglas Merritte, son of a wet nurse at Johns Hopkins Hospital. A 2012 article also suggests, based on films of Albert, that he was not exactly "normal" to begin with: he may have suffered from a form of hydrocephalus, an accumulation of cerebrospinal fluid in the brain. Douglas died when he was six years old, presumably of the consequences of his neurological condition (Beck et al., 2009; Fridlund et al., 2012).
        • But in 2014, another group of investigators pointed to another child, William Albert Barger, who was born at about the same time as Merritte and who died in 2007 (Powell et al., 2014).
        • But in the absence of any followup documentation, we'll never know for sure.
      • In 1958, Joseph Wolpe reported a series of studies of conditioned fear and its extinction in cats, which laid the scientific foundation for systematic desensitization, an early form of behavior therapy.
    • Research by Sue Mineka (already described in the lectures on Learning) on the acquisition of snake fear in lab-reared rhesus monkeys showed that conditioned fear could be acquired vicariously, without any direct aversive experience with the feared object, which offered an explanation for why many phobics report not having had unpleasant encounters with the objects of their phobia.
    • Although in principle we can acquire phobic-like fears of just about anything, including teddy bears, in practice the phobias encountered clinically are restricted to a relatively narrow range of objects: heights, darkness, being stared at by other people, things that crawl or slither -- you get the idea. It's been suggested that, like snake fear in lab-reared rhesus monkeys, clinical phobias represent highly prepared fear responses, part of our evolutionary heritage.
  • In much the same way, avoidance learning can serve as a laboratory model of obsessive-compulsive disorder.
    • Many compulsions -- like constantly checking the door to see if it is locked -- seem to reflect the patient's attempt to avoid or prevent some undesirable outcome. And, of course, the behavior is reinforced by the fact that the undesirable outcome never occurs. So the patient continues doing it. Similarly, avoidance responses are very difficult to extinguish -- precisely because they succeed so well that the animal never learns that the shock has been turned off!
For a personal account of OCD, as well as good coverage of the scientific literature, see The Man Who Couldn't Stop: OCD and the True Story of a Life Lost in Thought by David Adam (2015).  Adam himself fell into the clutches of OCD after having unprotected not-quite sexual intercourse as a college student.  He became obsessed with the idea that he might contract HIV-AIDS, and went to great lengths to ward off that outcome.


Learned Helplessness and Depression

Of particular interest is the development of learned helplessness as a laboratory model of depression. As discussed earlier (in the lectures on Learning), learned helplessness was initially observed in studies designed to test Mowrer's "two-factor" theory of avoidance learning. In avoidance learning, a conditioned stimulus (such as a tone) is followed by prolonged shock as an unconditioned stimulus. If the animal makes a certain conditioned response (such as moving from one side of the shuttlebox to the other) after the onset of the US, it can turn the US off; termination of the US constitutes reinforcement of an escape response. If it makes the CR after the onset of the CS, but before the onset of the US, the US is prevented from occurring at all; this reinforces an avoidance response. Martin Seligman and his colleagues discovered that prior exposure to inescapable shock interfered with escape and avoidance learning. In their analysis, prior experience with inescapable shock taught the animal that shock was uncontrollable, and this learning generalized to the new situation of the shuttlebox. The learned helplessness experiments underscore the principle that in instrumental conditioning the organism is learning to control its environment.

The animals in these experiments learned to be helpless, but Seligman and his colleagues also observed that they looked and behaved as if they were "depressed" (if you've ever seen a sad dog, you know what they meant). This led Seligman to propose that learned helplessness is a laboratory model for some forms of depression -- i.e., that some people become depressed by virtue of a history of uncontrollable aversive events in their lives.

Subsequent research by Seligman and others, including Steven Maier, Lyn Abramson, and Lauren Alloy, identified a number of parallels between learned helplessness and depression:

  • Symptoms: depression, like LH, is characterized by symptoms of passivity, negative expectations, lack of aggression, and loss of appetite and sexual interest; in both learned helplessness and depression, these symptoms dissipate with time.
  • Cause: some depressives, like animals in the LH experiments, have histories of life experiences from which they learned that outcomes were independent of their behavior.
  • Cure: depression, like LH, can often be overcome by treatments that change the patient's beliefs and expectations (this is what cognitive therapy explicitly tries to do). Just as electroconvulsive therapy (ECT) can successfully treat depression, so electroconvulsive shock (ECS) can alleviate LH. Just as depressed patients respond to antidepressant drugs, so helpless animals respond positively to drugs that stimulate the norepinephrine system.
  • Prevention: individuals who are relatively invulnerable to depression often have a life history characterized by many mastery experiences; similarly, animals can be "inoculated" against LH by giving them prior experience with control.
  • Biological Substrates: animals who have undergone LH treatments give evidence of norepinephrine depletion.

The learned helplessness model of depression, as originally formulated, is not complete, and nobody claims that LH underlies all forms of depression. But Abramson and Alloy have suggested that a certain type of depression, which they label helplessness depression, is caused by the kinds of experiences modeled in the LH experiments.

Abramson, Alloy, and their colleagues subsequently modified Seligman's theory with the hopelessness theory of depression. They argued that the experience of uncontrollable aversive events was not enough to make people (or dogs, for that matter) depressed. Sometimes, people (and dogs) respond to uncontrollable aversive events with anger, instead.

Abramson and Alloy proposed that uncontrollability led to depression only when the individual made a certain causal attribution concerning the uncontrollability. They argued that the explanations that people make for various events vary on certain dimensions:

  • Internal vs. external -- does responsibility for the event lie with the person himself, or with some external agent?
  • Stable vs. variable -- does the event always work out that way, or is the outcome sometimes different?
  • Global vs. local (or specific) -- is it everything that goes this way, or just some specific thing?

They proposed that uncontrollability causes depression only when the individual makes an internal, stable, global attribution for the helplessness -- as if to say, I can't control this thing, it's my fault that I can't control it, I can never control this or anything else. If you thought like this, you'd get depressed, too, and that's the point.
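To make the theory's logic concrete, here is a minimal toy sketch -- my own illustration, not a published formalization -- in which an uncontrollable bad event is flagged as depressogenic only when the explanation for it is simultaneously internal, stable, and global.

```python
# A minimal toy sketch of the hopelessness theory's logic (illustrative only).
from dataclasses import dataclass

@dataclass
class Attribution:
    internal: bool    # "it's my fault" vs. some external agent
    stable: bool      # "it will always be this way" vs. a one-time fluke
    global_: bool     # "everything goes this way" vs. just this one thing

def depressogenic(event_uncontrollable: bool, a: Attribution) -> bool:
    """True when, per the theory, the event should promote depression."""
    return event_uncontrollable and a.internal and a.stable and a.global_

# The same uncontrollable failure, explained in two different ways:
print(depressogenic(True, Attribution(internal=True,  stable=True,  global_=True)))   # True
print(depressogenic(True, Attribution(internal=False, stable=False, global_=False)))  # False
```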

Abramson and Alloy pointed out that, in contrast to non-depressed people, depressed people are often starkly realistic about their inability to control events -- a characteristic of depressive realism that they contrasted to the illusion of control that is characteristic of non-depressed thought. That is, non-depressed individuals often have an unrealistically elevated sense of control (which is why many of us think we can control chance events by picking "lucky" lottery numbers), while depressed individuals are often quite realistic about the prospects.

Abramson and Alloy identified this pessimistic attributional style as a risk factor for what they called hopelessness depression. They and their colleagues also constructed a personality questionnaire, the Attributional Style Questionnaire (ASQ) to identify people who might be "at risk" for depression, based on this aspect of cognitive style.

Seligman, for his part, took off from his studies of helplessness and depression to focus on the other side of things, and proposed that positive psychology should focus on the sunny side of life, and the positive characteristics that enabled people to be resilient in the face of unpleasant circumstances. In particular, Seligman has proposed that giving experiences of control to children who are at risk for depression (by virtue of their pessimistic attributional style) may help these children avoid actual episodes of depression.

In the domain of the psychoses, amphetamine psychosis can serve as a laboratory model of schizophrenia. High doses of amphetamines, which increase dopamine activity in the brain, lead to psychological symptoms similar to those found in acute schizophrenia -- hallucinations, thought disorder, and paranoid delusions. The behavioral parallels between amphetamine psychosis and schizophrenia are one source of the dopamine hypothesis of schizophrenia, discussed below.


Hypnosis and "Hysteria"

While most laboratory modeling has focused on various anxiety disorders, some investigators -- again, beginning in the late 19th and early 20th century -- noticed a phenotypic similarity between some of the phenomena of hypnosis (e.g., suggested blindness, deafness, and analgesia; suggested paralysis; posthypnotic amnesia) and some of the characteristic symptoms of "hysteria", a cluster of syndromes now known as the dissociative and conversion disorders.

  • In both hypnosis and "hysteria", pseudoneurological "symptoms" occur in the absence of brain insult, injury, or disease.
  • Both hypnosis and "hysteria" affect explicit, conscious perception, memory, and action (like conscious perception or recall), while largely sparing their implicit counterparts (like priming).

This has suggested that understanding the mechanisms of hypnosis might help us to understand these forms of mental illness as well. In fact, it's been proposed that both the dissociative and conversion disorders result from a division of consciousness that prevents certain percepts and memories from being represented in conscious awareness.


Linking Laboratory Models to Psychological Deficits

Sometimes, laboratory models and studies of psychological deficit go hand-in-hand. Consider experimental research on psychopathy, or antisocial personality disorder. One feature of psychopathy is that these individuals tend not to be responsive to aversive stimulation. On experimental tests, psychopaths often show a failure of avoidance learning, and they also show a failure to respond to punishment. Gorenstein and Newman (1980) observed a similar pattern of behavior in laboratory rats that had surgical lesions in a subcortical area of the brain known as the septum. Rats with septal lesions do not freeze when they are punished, they have difficulty with passive avoidance learning (i.e., learning not to do something in order to avoid punishment), and they also have difficulty with delay of gratification. All of these phenomena, of course, closely resemble the characteristic symptoms of psychopathy. This has led to a theory that, by virtue of some kind of brain disorder, psychopaths, like septal rats, are unable to suppress habitual behaviors in order to avoid the aversive consequences of those behaviors.

Experimental research by Newman and others has further traced the psychological deficit in psychopathy to problems in linking attention to the reward system.  In one experiment, subjects played a computerized card game: if they turned over a card they would gain a point if it was a face card, and lose a point if not.  The deck was arranged so that 9 of the first 10 cards were face cards, then 8 of the next 10, 7 of the next 10, and so on, and they could stop whenever they wanted.  Of course, with the deck arranged in this manner, the subjects began accruing lots of points, and then started losing.  Normal subjects generally stopped playing after about 50 cards, while they were still ahead; but the psychopaths continued playing, losing everything they had won -- and more.
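Here is a toy sketch of the arithmetic of that task. The block structure (9 winning cards in the first 10, 8 in the next 10, and so on) is taken from the description above; everything else, such as the ordering of cards within a block, is my own assumption, made only for illustration. Cumulative winnings rise over the early blocks and then fall, so quitting around card 50 leaves a player ahead, while playing to the end wipes out the gains.

```python
# A toy sketch of the card task's payoff structure (details hypothetical where unstated).
def build_deck():
    """Return a 100-card deck of point values: +1 for a face card, -1 otherwise."""
    deck = []
    for wins in range(9, -1, -1):              # 9, 8, ..., 0 face cards per block of 10
        deck += [+1] * wins + [-1] * (10 - wins)
    return deck

def net_score_if_stopping_after(n_cards, deck):
    return sum(deck[:n_cards])

deck = build_deck()
for n in (10, 30, 50, 70, 100):
    print(f"stop after {n:3d} cards: net score {net_score_if_stopping_after(n, deck):+d}")
# stop after  50 cards -> +20  (roughly where non-psychopathic subjects quit, still ahead)
# stop after 100 cards -> -10  (the perseverative pattern: everything won, and more, is lost)
```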

Newman's latest theory of psychopathy implicates attention rather than reward per se.  The general idea is that psychopaths have difficulty updating working memory with new information, once their attention has been engaged.  They can focus attention, but they can't disengage and shift it very easily.

Still, there does seem to be something missing in psychopaths' emotional lives.  They don't accurately pick up on other people's facial and vocal expressions of emotion, especially fear; and they don't respond normally to words with positive or negative emotional connotations.

Returning to Newman's septal rats, Kent Kiehl (2006) has used fMRI to record brain activity in psychopaths during various types of tasks.  He has found deficits in a system of brain areas known as the paralimbic system:

  • the anterior cingulate, which is important for decision making and other aspects of executive control;
  • the amygdala, which generates emotional responses, especially fear;
  • the orbitofrontal cortex, important for learning under conditions of reward and punishment;
  • the posterior cingulate, involved in emotional processing;
  • the insula, which mediates perception of pain and other bodily states;
  • the temporal pole, involved in the integration of emotion and perception.

For more on psychopathy, see "Inside the Mind of a Psychopath" by Kent A. Kiehl and Joshua W. Buckholtz, Scientific American Mind, September-October 2010.


Linking the Laboratory and the Clinic

We're going to get back to the clinic in a moment, but the point of this whole exercise is that the clinical enterprise of understanding and treating mental illness is not divorced from the sort of basic research that goes on in university laboratories. We can use various research paradigms of the sort discussed earlier in this course -- classical and instrumental conditioning, dichotic listening, and the like -- to understand the basic mechanisms by which mental illness occurs -- as well as ways in which we might more effectively treat mental illness and prevent it from occurring in the first place.

By virtue of basic laboratory research we can move beyond the surface signs and symptoms of mental illness to understand its underlying pathology. And by better understanding its underlying pathology, we will be able to formulate better theories of the causes and cure of mental illness, and better tools for diagnosis, treatment, and prevention.

In fact, laboratory research helps us to identify two different ways that mental illness can occur.

  • Some forms of psychopathology reflect psychological deficits -- disruptions affecting basic psychological functions.
    • Schizophrenia seems to involve malfunctioning of the attention system.
    • Autistic children (and adults) appear to lack a theory of mind.
    • Major forms of depression may be caused by a defect in the systems regulating positive and negative affect.
    • Attention-deficit disorder also, obviously, seems to involve a malfunctioning of the attention system -- though presumably a different malfunction from the one implicated in schizophrenia.
  • Other forms of psychopathology reflect maladaptive social learning, in the absence of any particular psychological deficits. The mind is working OK, but the person has somehow acquired maladaptive knowledge, expectations, and beliefs.
    • In phobia, the person has learned to fear an object that is not, objectively, fearsome.
    • In obsessive-compulsive disorder, the person performs an avoidance response that isn't, objectively, necessary.
    • In hopelessness depression, the person makes inappropriate causal attributions for unpleasant events.
    • Some of the psychophysiological disorders appear to reflect the effects of environmental stress on internal organs supplied by the autonomic nervous system -- effects that might not occur if the person learned how to handle stress better.


The Biology of Mental Illness

Where mental illness appears to reflect maladaptive social learning, we generally assume that the architecture of the individual's basic mental structures and processes is largely intact, as are the neural substrates of that mental architecture. However, where mental illness appears to reflect an underlying psychological deficit, there are good reasons to think that the neural substrates of that mental architecture are malfunctioning as well -- that is, that the mental disorders are ultimately neurological disorders. This idea is expressed in Ralph Gerard's old adage that there can be "no twisted thought without a twisted molecule".


The idea that mental illnesses have underlying biological causes goes back at least as far as the 19th century -- which is to say, it is almost as old as scientific medicine and scientific psychology themselves. In fact, the history of psychiatry and clinical psychology may be characterized as a cycle in which prevailing views alternate between "somatogenic" theories that mental illness is due to biological causes (i.e., brain insult, injury, or disease) and "psychogenic" theories that mental illness is due to environmental causes, and that the biology of the nervous system is no more relevant to mental illness than it is to normal mental and behavioral functioning.


From Somatogenesis to Psychogenesis and Back Again

The earliest scientific theories of mental illness were neurological theories, based on the assumption, mostly unproven, that patients' symptoms were due to lesions or infections affecting brain tissue. 

With the emergence of Freudian psychoanalysis, in the late 19th and early 20th centuries, the predominant theory of mental illness shifted from somatogenic to psychogenic.  Put briefly, Freud and his followers taught that mental illnesses, particularly the neuroses, had their origins in conflict and defense.  Psychoanalysis, both Freudian and neo-Freudian, dominated psychiatry well into the 1950s.

Another important influence on American psychiatry was Adolph Meyer, who argued that mental patients' problems had their origins in their life histories, not their biology.

The pendulum began to shift back toward somatogenesis in the 1950s, with the introduction of the first psychotropic ("mind-moving") drugs: Thorazine (1954), a "major tranquilizer" used in the treatment of schizophrenia; Miltown (1955), a "minor tranquilizer" used to treat anxiety; and Marsilid (1957), a "psychic energizer" used in the treatment of depression.  These drugs were discovered more or less accidentally, but it was soon learned that they (and other drugs like them) altered the levels of certain neurotransmitters in the brain.  This led to the hypothesis that schizophrenia, depression, and other forms of major mental illness were caused by abnormal levels of these substances. 

Let us just note, in passing, that the logic of this inference is far from airtight.  In fact, it's a variant on the logical error of affirming the consequent, discussed in the Lecture Supplement on Thinking.  The logic seems to go something like this.

  1. Thorazine relieves the symptoms of schizophrenia.
  2. Thorazine decreases dopamine levels in the brain.
  3. Therefore, schizophrenia must result from excessive dopamine levels in the brain.

The problem is that the efficacy of a treatment says nothing about the cause of the illness.  Nobody thinks that a lack of aspirin causes fever to occur.
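Schematically -- this is my rendering, with hypothetical propositions standing in for the substantive claims -- the drug evidence gives us only the consequent, and the inference to the cause follows the invalid pattern rather than the valid one:

```latex
% Assumes amsmath and amssymb are loaded.
% Hypothetical propositions, used only to display the form of the argument:
%   P = schizophrenia is caused by excess dopamine activity
%   Q = dopamine-blocking drugs relieve schizophrenic symptoms
\[
\mbox{Valid (modus ponens):}\quad
\frac{P \rightarrow Q \qquad P}{\therefore\; Q}
\qquad\qquad
\mbox{Invalid (affirming the consequent):}\quad
\frac{P \rightarrow Q \qquad Q}{\therefore\; P}
\]
% Observing Q (the drugs work) does not license the inference to P.
```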

And, as it happens, the evidence for the dopamine theory of schizophrenia, and similar theories, is not completely convincing.  There has never been any convincing demonstration that, prior to treatment, schizophrenics or depressives actually suffer from any kind of chemical imbalance in their brains.  Still, theories about chemical imbalances remain very popular.  

In fact, over the past 100 years or so, biological causes have been uncovered for a number of mental illnesses that had previously been attributed to environmental causes, because their biological causes were unknown at the time they were first described. In this way, some "functional" mental illnesses have been reclassified as "organic" in nature.

  • The first of these was general paresis, a form of dementia associated with syphilis, a venereal disease; it is now known to be caused by infection with the syphilis spirochete, and can be cured by timely administration of antibiotics.
  • Alzheimer's disease, once labeled as presenile dementia, is now known to be associated with the buildup of plaques and tangles in brain tissue.
  • Autism, a developmental disorder once known as Kanner's syndrome, was once attributed to poor parenting -- and especially to so-called "refrigerator mothers" (it's always the mother, isn't it?) who failed to display emotional warmth toward their children. This was the theory proposed by Bruno Bettelheim, a prominent psychoanalyst, and is now known to be completely wrong (the score is now "Psychoanalysis Zero", while scientific Psychology is batting much closer to 1000). Kanner himself, who first described the syndrome in 1943, believed that it was caused by bad parenting. These beliefs did not change until autism was diagnosed in the first child of Bernard Rimland, a biologically oriented psychologist, and his wife. They knew that they hadn't been bad parents, not least because their child had been autistic since birth -- that is, before they had even had a chance to be bad parents. In 1964, Rimland published Infantile Autism, a classic monograph on the subject, in which he proposed that autism had an organic cause. Although we still do not know what the precise causes of autism are, we are now quite certain that they lie not in the environment, or in the patients' childhood experiences, but rather in the malfunctioning of certain brain systems.


Current Biological Approaches to Mental Illness

According to the dopamine hypothesis of schizophrenia, the symptoms of schizophrenia, and their underlying psychopathology, are caused by excess activity of dopamine, a neurotransmitter substance. Evidence for the dopamine hypothesis includes:

  • increased levels of dopamine metabolites in schizophrenic patients;
  • the positive effects of antipsychotic medications, many of which operate to reduce dopamine activity.

In fact, autopsy studies, and also some brain imaging studies, indicate that there are increased levels of dopamine metabolites in schizophrenic patients, which is consistent with the dopamine hypothesis.  Moreover, phenothiazine drugs, which are used in the treatment of schizophrenia, block the neural receptors for dopamine, preventing dopamine released into the synapse from acting on post-synaptic neurons.  Both lines of evidence -- elevated dopamine metabolites in patients, and the therapeutic effects of dopamine-blocking medications -- are consistent with the dopamine hypothesis of schizophrenia.

But another piece of evidence comes from a laboratory model of schizophrenia known as amphetamine psychosis.  Solomon Snyder and his colleagues, working at the National Institute of Mental Health, found that the administration of certain drugs known as amphetamines can produce some of the symptoms of psychosis, particularly schizophrenia, in rats and other laboratory animals.  They also noticed that habitual heavy use of amphetamines by humans can produce some of the symptoms of schizophrenia as well -- particularly hallucinations, thought disorder, and paranoid delusions.  Amphetamine drugs such as Benzedrine (amphetamine), Dexedrine (dextroamphetamine), and Methedrine (methamphetamine) have the effect of increasing dopamine activity in the brain.  In these ways, amphetamine psychosis mimics at least some of the symptoms usually associated with schizophrenia, and this laboratory model -- first studied in rats, then in monkeys, and then in humans who abuse amphetamines recreationally -- provides further support for the dopamine hypothesis of schizophrenia.

According to the monoamine hypothesis of depression, the symptoms of depression, and their underlying psychopathology, are caused by lowered levels of another class of neurotransmitters, the monoamines, which include norepinephrine and serotonin. Evidence for the monoamine hypothesis includes:

  • decreased levels of monoamine metabolites in depressed patients;
  • the positive effects of antidepressant medications, many of which operate to increase the levels of monoamines in the brain; in particular, the selective serotonin reuptake inhibitors (SSRIs) increase the availability of serotonin by preventing its premature reuptake by presynaptic neurons.

Is Depression Adaptive?

Depression is so frequent (it has been called "the common cold of psychiatry"), and has been around for so long (Robert Burton published The Anatomy of Melancholy in 1621), that some evolutionary psychologists have suggested that, counter-intuitively, a tendency toward depression might actually be an adaptive trait. According to one argument, the ruminative thinking that is one of the characteristic symptoms of depression facilitates problem-solving. The only problem is that depressives don't solve their problems -- they just stay depressed.

Actually, that's just one of the problems with the evolutionary argument. Here are some others:

  • It's not clear that clinically depressed patients are all that good at thinking and problem-solving. Most of the research either involves experimentally manipulated emotional states, or states of mild depressed mood that aren't anywhere near clinical severity.
  • Moreover, the evolutionary argument assumes that depression is somehow a response to some kind of instigating psychosocial event -- i.e., whatever problem the patient is confronting in his or her environment. But sometimes depression just happens, and it's difficult or impossible to find an instigating event of sufficient magnitude to cause such a severe change in mood.
  • Even assuming that there is such an event, clinically significant depression is typically a chronic condition -- you'd think that, if depression helped people solve the problems that set their depression off in the first place, it wouldn't come back so readily.
  • And finally, a significant proportion of depressed patients -- not all, or even a majority, but enough -- commit suicide (or attempt it). And suicide is definitely maladaptive (though I suppose that evolutionary psychologists could make up a just-so story about the adaptiveness of suicide, as well!).

This points out the problem with the adaptationist fallacy that lies at the heart of so many speculations by evolutionary psychologists. Depression exists as a human psychological trait; human psychological traits, like human physical traits, are products of natural selection; therefore, depression must be adaptive in the Darwinian sense. A similar argument has been made about homosexuality (which, of course, isn't a mental illness), and even about grandmothers (who aren't necessarily mentally ill either!). But the whole thing begins with a fallacious assumption, which is that every human trait, whether biological or psychological, must be adaptive.

It should be understood that these are only hypotheses about the underlying biology of these syndromes, and that these hypotheses are surely incomplete.

And sometimes biological hypotheses are just wrong. In 1998, a paper published in Lancet by Andrew Wakefield, a British physician, suggested a link between autism and the childhood vaccine for measles, mumps, and rubella (MMR). The resulting concern led to a worldwide reduction in MMR vaccination, as parents hoped to prevent their children from getting autism. The methodology of the study was subsequently criticized, and 10 of Wakefield's 12 co-authors retracted the paper -- as did the journal in which it was originally published -- and Wakefield lost his license to practice medicine in Britain. In 2011, an investigation published in the British Medical Journal asserted that Wakefield's paper was not just methodologically weak but actually fraudulent. In fact, subsequent, better-designed studies show that there is no evidence for a role of MMR or any other childhood vaccination in autism. Nevertheless, as of 2011, Wakefield (who has relocated to the US) continued to assert such a link, and worldwide MMR vaccination rates have never returned to their pre-1998 levels -- increasing the risk of childhood diseases that are entirely preventable.

For an excellent introduction to the role of neurotransmitters in mental illness, and the basics of psychopharmacology, see Drugs and the Brain (Rev. Ed., 1996) by Solomon Snyder, one of the deans of psychopharmacology.  Because the pharmaceutical industry is constantly developing new products, the information on specific drugs is necessarily a little dated.  But the basic ideas haven't changed much, and neither has the underlying neuroscience.  It's a good place to start.


Etiology of Mental Illness

Somatogenic and psychogenic theories of mental illness are, first and foremost, theories about the role of nature and nurture, and we have now learned that the proper formulation of nature-nurture questions is not "Which is right?" but rather "How do nature and nurture interact?". The etiology of mental illness is no exception.


Genetics of Mental Illness

One place to look for the origins of psychopathology is in the genes: perhaps certain forms of mental illness, or at least risk factors for them, are passed through families through genetic inheritance.  We know that, for many diagnoses, having a family member with mental illness increases the risk for mental illness in other family members.  Of course, this effect could be environmental as well as genetic.

The study of the genetic basis of mental illness began long before Mendel and research on fruit flies, as the superintendents of 19th-century asylums for the insane or "feeble-minded" traced the family histories of their patients, seeking evidence that mental illness was inherited (and, not incidentally, laying the intellectual foundations for the pseudoscience of eugenics).  For an historical account of this research, see Genetics in the Madhouse: The Unknown History of Human Heredity (2018) by Theodore M. Porter. 

The genetic contribution to mental illness can be assessed by means of the twin-study method described in the lectures on Psychological Development: by comparing the similarity of MZ and DZ twins, we can estimate the contributions of genetics, the shared environment, and the nonshared environment.  When it comes to personality characteristics, such as the Big Five personality traits, similarity is measured by means of the correlation coefficient.  In psychiatric genetics, similarity is more commonly measured by means of the concordance rate -- that is, the probability that two twins will have the same psychiatric disorder.  The calculations for heritability differ a little, but the underlying logic is the same:

  • If a mental illness is completely inherited, the concordance rate for MZ twins should be 100%, and for DZ twins should be 50%.
  • To the extent that the MZ concordance rate is less than a perfect 100%, there is a contribution from the nonshared environment.
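
As a back-of-the-envelope sketch of that logic, here is Falconer's classic approximation, treating MZ and DZ twin similarity as simple correlations. Real psychiatric-genetic analyses first convert concordance rates to liability-scale correlations, so the numbers below (which are hypothetical) are purely illustrative, not publishable estimates.

```python
# Falconer's crude decomposition of phenotypic variance from twin similarity.
def falconer(r_mz, r_dz):
    """Return rough estimates of heritability (a2), shared environment (c2),
    and nonshared environment (e2) from MZ and DZ twin similarity."""
    a2 = 2 * (r_mz - r_dz)    # genetic contribution
    c2 = r_mz - a2            # shared environment = 2*r_dz - r_mz
    e2 = 1 - r_mz             # nonshared environment (plus measurement error)
    return a2, c2, e2

# Hypothetical similarity values, chosen only for illustration.
a2, c2, e2 = falconer(r_mz=0.50, r_dz=0.30)
print(f"a2 = {a2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")   # a2 = 0.40, c2 = 0.10, e2 = 0.50
```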

This table reflects our best understanding of the heritability of major forms of mental disorder, as of 2012.  The heritability coefficients vary widely, from a low of .37 for Major Depressive Disorder to a high of .80 for Autism Spectrum Disorder and .81 for Schizophrenia.  Note, however, two points about this table:

  • There are many forms of mental illness that are not represented here -- either because they have not been studied, or because existing research has yielded low heritability coefficients.
  • None of the heritability coefficients is a perfect 1.0.  Even for autism and schizophrenia, genetics is not destiny, and there are plenty of families with an autistic or schizophrenic family member where the rest of the family has no significant mental illness.

So, as is generally the case, the origins of mental illness are not to be found in the genotype alone.  Rather, they will have to be found in gene-by-environment interactions (GxE) of the sort discussed under the heading of epigenesis in the lectures on Psychological Development.

Still, once researchers have established a significant level of heritability for a mental illness, it makes sense to start searching for the genes responsible (for a review, see Duncan et al., 2014).  Before the 21st century, this was not really possible.  We didn't know enough about the human genome, and we didn't have the technology.  And even with the mapping of the human genome, and the availability of (relatively) inexpensive technology, the task is daunting:

  • There are some 20,000-25,000 candidate genes -- not to mention all the "intergenic" and "intronic" regions in the genome that are not, technically, genes.
  • To obtain reliable results, researchers typically need huge sample sizes. 

Until recently, researchers had to propose, on the basis of some theory, what genes to look for.  So, for example, if they were interested in schizophrenia, they might look at genes that are involved in the production and metabolism of the neurotransmitter dopamine; for depression, they might look for genes involved with the production and metabolism of the neurotransmitters norepinephrine or serotonin.  This is known as the candidate gene strategy.  However, advances in technology have permitted researchers to go on "fishing expeditions" in which they cast their nets more widely, over thousands or millions of candidate genes and their variants, in what are known as genome-wide association studies (GWAS).  These studies have begun to yield results -- and, interestingly, they have been turning up evidence of genes involved in mental illness other than those "candidates" hypothesized by various biochemical theories of mental illness!

It's pretty clear that the search for "the gene" that is "for" schizophrenia, or any other form of mental illness, is going to be complicated (Duncan et al., 2014). 

  • In the first place, there's almost certainly no such single gene, not for any form of mental illness.  Instead the genetic contribution is more likely to be polygenic in nature, consisting of the accumulation of many different genes.
    • For example, there are at least 40 different genetic loci associated with height.
    •  A recent GWAS identified more than 100 different genetic loci, and more than 8,000 genetic variants on these loci, associated with increased risk for schizophrenia.
  • And these genes, themselves, are not "for" mental illness in any sense.  Any single gene will have different effects, depending on its local (physical) environment.
    • For example, one of the gene loci associated with schizophrenia also plays a role in the auto-immune system. 
  • Genetic influences may be pleiotropic, meaning that a single genetic variant may have more than one phenotypic effect. 
    • For example, some genetic loci are associated with both schizophrenia and bipolar disorder, suggesting that they may well be risk factors for major mental illness (or psychotic disorders) generally.
  • Some genetic factors may be rare variants, present in only a very small number of individuals. 
  • And then there is the problem of missing heritability -- that is, the difference between the proportion of variance in some phenotype (such as schizophrenia) that is explained by heritability, and the proportion explained by specific genetic variants. 
    • For example, we know that approximately 80% of population variability in height is accounted for by genetic factors; but the specific genetic loci "for" height discovered so far account for only about 5% of population variance -- leaving most of the genetic contribution unexplained.
  • And finally, there are the problems of epigenetics discussed in the lectures on Psychological Development.


Once we've determined that there is, in fact, a genetic contribution to some mental illness, the next step is to determine what genes are involved.  I say "genes", plural, because it's clear that there's no single gene "for" schizophrenia, like the gene for blue or brown eyes presented in standard elementary accounts of Mendelian genetics.  Rather, much as with intelligence, the genetic basis for schizophrenia is most likely to involve the cumulative effects of many genes -- dozens, perhaps hundreds.

Recent advances in understanding the genetics of schizophrenia offer interesting insights into the genetic contribution to mental illness more generally.  These advances have been made possible by the Human Genome Project, which in 2003 delivered a map of the human genome, indicating the location of each of our roughly 22,500 genes on our 23 chromosome pairs.  Then began the process of determining the function of each of these genes.  Over the years, gene-mapping has become less expensive and time-consuming, so it is now possible to search the genomes of large numbers of individuals for genes that are associated with various illnesses, employing such techniques as candidate gene association, common variant association, and copy number variation.

This isn't a course in genetics, and if you want details of each of these methods, there's an excellent, highly accessible account of this work in "Runs in the Family" by Siddhartha Mukherjee (New Yorker, 03/28/2016), and a more technical survey in "Genome-Scale Neurogenetics Methodology and Meaning" by McCarroll, Feng, and Hyman (Nature Neuroscience, 2014).  This discussion is largely drawn from these sources.

With data available on a very large number of individuals, investigators are able to identify associations between schizophrenia (and other forms of major mental illness, such as bipolar disorder and autism) and various portions of the human genome.  This task has been undertaken by a group known as the Psychiatric Genomics Consortium (PGC).  The "Manhattan plot" from one such study (so named because it looks like the New York City skyline), involving almost 37,000 patients and more than 110,000 controls sampled from 20 countries, shows which specific loci on each chromosome are significantly associated with schizophrenia (Sekar et al., Nature 2016).  Of course, with 22,500 genes, a number of these associations could appear just by chance, so the investigators employed appropriate statistical corrections.
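To give a sense of the scale of that multiple-comparison problem, here is a minimal sketch. The study's actual correction procedure isn't detailed here; Bonferroni is shown only as the simplest case, and the 1,000,000 figure is the conventional rough count of independent common variants behind the usual genome-wide significance threshold.

```python
# Why uncorrected p < .05 is meaningless when hundreds of thousands of loci are tested.
alpha = 0.05

n_genes = 22_500
print(f"per-test threshold for one test per gene: {alpha / n_genes:.2e}")      # ~2.22e-06

n_variants = 1_000_000
print(f"genome-wide significance threshold:       {alpha / n_variants:.0e}")   # 5e-08

# Expected number of false positives at an uncorrected .05 across a million tests:
print(f"expected false positives, uncorrected:    {alpha * n_variants:,.0f}")  # 50,000
```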

These sorts of findings would be expected if, as most theorists believe, major mental illness is fundamentally a disorder of the central nervous system. But other findings were more surprising.  For example, the strongest association (the tallest "skyscraper" in the Manhattan plot described above) was on Chromosome 6, in a region known as the major histocompatibility complex (MHC), whose genes are linked to the immune system.  But how do we get from the immune system to schizophrenia?  Here's one prominent theory (Sekar et al., Nature, 2016): the MHC association appears to be driven largely by variants of the complement component 4 (C4) genes; complement proteins tag synapses for elimination, and the idea is that overactive, C4-mediated synaptic "pruning" during adolescence may contribute to the onset of schizophrenia.

Understand: I'm not saying that this theory is the neuroimmunological clue to the secret of the origins of schizophrenia.  It might very well be wrong -- just like most past hypotheses concerning the genetic origins of schizophrenia.

Another multinational group of researchers, known as the Brainstorm Consortium, performed a GWAS of 25 different neurological and psychiatric disorders, involving more than 250,000 patients and more than 750,000 healthy controls -- thanks mostly to European countries whose national health systems maintain large databases (Anttila et al., Science, 2018; see also Gandal et al., Science, 2018).  As had proved to be the case in previous, smaller-scale studies, the psychiatric syndromes showed substantial genetic overlap (the only exception was post-traumatic stress disorder, which by definition has a substantial environmental cause).  That is to say, a number of genetic markers were identified for each syndrome, but the same markers tended to appear from one syndrome to another.  By contrast, the 15 neurological syndromes studied, including Alzheimer's disease and Parkinson's disease, showed much more distinct genetic profiles. And there was little overlap between the psychiatric conditions and the neurological ones.  What to make of this isn't clear.  One possibility is that there is a genetic component to risk for mental illness in general, and other factors, whether biological or environmental, determine which specific mental illness a person will suffer.  Another possibility is that the GWAS method, and genetics in general, isn't going to tell us much about the etiology of mental illness.

At the same time, other researchers have turned their attention to the environment, and in particular to those features of the environment that interact with one's genetic heritage.

For more on the search for the genetic basis of schizophrenia, including a plea that researchers spend more time searching for environmental factors that might interact with genetic predispositions, see "Schizophrenia's Unyielding Mysteries" by Michael Balter, Scientific American, 05/2017.


The Diathesis-Stress Model


The origins of psychopathology (its etiology) may be viewed within the framework of the diathesis-stress model of psychopathology, proposed initially by Meehl (1962) and Rosenthal (1963), and elaborated more recently by Monroe and Simons (1991) and Belsky and Pluess (2009). According to the model:

  • Diathesis represents a predisposition toward a specific breakdown in normal mental functioning. Its source may lie in the person's biological (genetic-biochemical) endowment, experiential history of social learning, or both. The diathesis renders the person vulnerable to, or at risk for, some specific form of psychopathology (not psychopathology in general). Every person achieves a more or less successful adaptation to this genetic or psychosocial "inheritance".
  • Stress refers to any event (or series of events) which challenges the person's current level of adaptation to the diathesis. Again, stress factors may be either biological or psychosocial in nature.
  • The interaction of diathesis and stress precipitates an acute episode of mental illness -- what used to be called a "nervous breakdown".
  • Looking backward from the acute episode, we can examine the patient's level of premorbid adjustment, or what is sometimes called premorbid personality. In medicine, the term "premorbid" refers to the patient's status before he or she became ill.
    • Individuals with good premorbid personality have "inherited" relatively little diathesis, whether through genes or social learning, or made a relatively successful adjustment to a relatively high level of diathesis.
    • Individuals with poor premorbid personality have "inherited" a relatively high amount of diathesis, or made a relatively unsuccessful adjustment to a relatively low level of diathesis.

The diathesis-stress model of psychopathology is a special case of the person-by-situation interaction, where diathesis is an attribute of the person and stress is an attribute of the environment.

In principle, diathesis and stress factors could combine in a number of ways.

Diathesis-Stress Independence (Additive Model): In an additive model, diathesis and stress are independent of each other, and the likelihood of an acute episode is simply a function of the sum of diathesis and stress factors. Following Lewin, we might symbolize this situation as E = f(D + S).

Diathesis-Stress Interaction (Multiplicative Model): In a multiplicative model, diathesis and stress truly interact, so that the combination is especially potent: following Lewin, it would be expressed as E = f(D x S).

  • For individuals carrying substantial levels of diathesis, relatively little stress would be required to precipitate an acute episode of mental illness, and the individual would likely show relatively poor premorbid adjustment.
  • On the other hand, catastrophic levels of stress would likely produce an acute episode even in individuals who carry little or no pre-existing diathesis, and who would show relatively good premorbid adjustment.
  • If diathesis levels are within normal limits, an acute episode would occur as a function of stressors in the individual's life.
  • If stressors are within normal limits, an acute episode would occur as a function of the individual's level of diathesis.
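To make the contrast between the two combination rules concrete, here is a toy numerical sketch; the numbers and the simple linear "load" scale are my own illustrative assumptions, not part of the model as published. In the additive model, the impact of extra stress is the same at every level of diathesis, whereas in the multiplicative model the same stress increment matters more the more diathesis a person carries.

```python
# A toy numerical contrast between E = f(D + S) and E = f(D x S).
low_stress, high_stress = 0.5, 1.5

for d in (0.2, 1.0, 2.0):                                        # hypothetical diathesis levels
    delta_additive = (d + high_stress) - (d + low_stress)        # always 1.0
    delta_multiplicative = (d * high_stress) - (d * low_stress)  # grows with d
    print(f"D = {d:.1f}: extra load from the same increase in stress -> "
          f"additive {delta_additive:.1f}, multiplicative {delta_multiplicative:.1f}")
# Additive: a given increase in stress adds the same load regardless of diathesis.
# Multiplicative: the same increase weighs more heavily on those carrying more diathesis.
```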

Note that diathesis factors are specific to particular forms of mental illness. In theory, some particular diathesis predisposes an individual to schizophrenia, while other specific diatheses would be relevant to depressive disorder, anxiety disorder, etc. In this way, if stress precipitates an acute episode of mental illness, that illness will take a specific form.


Diathesis and Stress in Theory and Practice

Schizophrenia. The diathesis-stress approach was first articulated in the context of schizophrenia. With respect to diathesis, there is a clear genetic component to schizophrenia. Compared to a base rate of about 1% in the population at large, the concordance rate for schizophrenia is clearly elevated among relatives. In monozygotic twins, the concordance rate is about 38%; for dizygotic twins, 8%. Risk for schizophrenia is also increased if a first-degree relative (father, mother, brother, sister) is schizophrenic. Among adoptees, risk is increased if one's biological parent has schizophrenia, but not if one's adoptive parent has schizophrenia. These figures are consistent with the proposition that people can inherit a predisposition to schizophrenia. But note that the concordance rate is far from a perfect 100% (in fact, it appears to be far from even 50%), suggesting that genes are not solely determinative. Any difference between monozygotic twins must be due to the unshared environment, and that is where differences in stress probably come into play. In any event, schizophrenia appears to occur as the product of the combination of a shared genetic diathesis and an unshared environmental stress.

Rosenthal (1963) originally got his idea about diathesis and stress from the Genain Quadruplets -- identical quadruplet girls, born in 1930 into a family with a history of mental illness, three of whom were hospitalized for schizophrenia on at least one occasion. "Genain" is a pseudonym, derived from the Greek words for "dire birth", intended to protect the identity of the girls and their family. Because they were studied intensively at the National Institute of Mental Health, they are known as Nora, Iris, Myra, and Hester. What was particularly interesting about the quadruplets was that not all of them fell ill -- at least, Myra was never actually hospitalized. Thus, schizophrenia is not purely a result of genetics -- or else these genetically identical sisters would all have had identical outcomes. Instead, Rosenthal hypothesized that a genetic diathesis interacted with some environmental stress to precipitate schizophrenia in some of the children, but not in all.

Another family challenged by mental illness was the Galvins -- Mimi, Don, and their 12 children, six of whom -- all boys born between 1945 and 1965 -- ended up with a diagnosis of schizophrenia.  The whole sad story is told by Robert Kolker, a journalist, in Hidden Valley Road (2020).  The post-World War II interval was the heyday of psychoanalysis, with its theory that the illness was caused by cold, domineering "schizophrenogenic" mothers who themselves suffered from a "perversion of the maternal instinct".  An entirely bogus theory, like the rest of psychoanalysis, but that didn't prevent Mimi from shouldering the blame; nor did it lead to treatment that might actually have helped the boys.  Then came the pharmacological revolution, and the boys were so filled with antipsychotic medication that they suffered massive side-effects.  The other six siblings -- four boys and two girls -- escaped the illness, but not really -- because they grew up in an often chaotic household (on Hidden Valley Road) that the parents could barely hold together.  What results, in Kolker's hands, is a compelling depiction of what severe mental illness can do to a family.  And another picture of the complexities of diathesis-stress theory.  Wherever the boys' schizophrenia came from, the presence of six mentally ill children in a single family can't help but have piled on the stressors.

One environmental stressor that has been implicated in schizophrenia is socioeconomic status: schizophrenia is rare, affecting less than 1% of the population, but it is more likely to be observed in individuals with relatively low socioeconomic status. One theory is that the stresses of lower-class living interact with a genetic diathesis for schizophrenia, resulting in the higher incidence -- an idea known as sociogenesis. However, careful epidemiological studies have shown that low SES follows, rather than precedes, the onset of schizophrenia. That is, schizophrenia occurs in all socioeconomic strata, but when it happens to upper-class individuals, they tend to drift down to lower socioeconomic strata -- a phenomenon known as social drift.

However, the failure of the sociogenic hypothesis does not rule out environmental contributions to schizophrenia. Other environmental influences that have been causally linked to schizophrenia include:

  • Coping failures, including losses and frustrations of various sorts. Loss, frustration, and coping failure do not by themselves cause schizophrenia, but they appear to be the sorts of things that can precipitate an episode of schizophrenia in someone who is at risk for it.
  • Expressed emotion: Patients who have recovered from an episode of the illness, and then are discharged into a home environment in which criticism and other negative affect is directed at them, or in which family members and others become overly emotionally involved with them, are more likely to relapse and have another episode.
  • Lack of Social Support: Schizophrenia is often associated with a poor prognosis, and the expectation that schizophrenia is a chronic disease from which patients never recover.  But it turns out that it's possible for schizophrenics to make a pretty good re-entry into ordinary, everyday life -- especially if they had made a good premorbid adjustment before their initial episode.  One key to this recovery is medication, for symptom control.  But another key, equally important, is social support.  If schizophrenic patients learn to cope with their residual symptoms, and they receive the support and encouragement of family, friends, neighbors, and coworkers, the prognosis for successful recovery is actually pretty good (think about John Nash, subject of A Beautiful Mind by Sylvia Nasar, who won the Nobel Prize in Economics).  If social support can lead to successful recovery, the implication is that social support might have prevented the initial episode in the first place.

[Figure: Communication Disorder and Thought Disorder]

One environmental stressor that has received a great deal of research attention is deviant communication: vague and fragmented verbal exchanges, especially on the part of family members. A longitudinal research project known as the Finnish Adoptive Family Study of Schizophrenia examined the long-term outcome of "high risk" children who were born to 167 women hospitalized for schizophrenia and control children born to 202 women hospitalized for other illnesses. On medical advice, the children of these women were given up for adoption. In one study, Wahlberg, Wynne, and their colleagues tested the adoptive families for signs of communication deviance, and then tested the adopted children themselves (known in medical terminology as "probands", because their family history gives them an elevated probability of becoming ill) on an index of thought disorder characteristic of schizophrenia.

  • When there was little communication deviance in the adoptive family, the high-risk probands of the schizophrenic women showed little evidence of thought disorder.
  • But with increased levels of communication deviance in the adoptive family, the incidence of thought disorder in the high-risk probands progressively increased -- but no such increase was seen in the low-risk (control) probands.

This is exactly the kind of person-environment interaction anticipated by the diathesis-stress model: the combination of high genetic risk (being the child of a schizophrenic mother) and high environmental stress (being exposed to communication deviance in one's adoptive family) leads to increased occurrence of schizophrenic symptoms. No such trend occurred, however, in children who were not already at risk for schizophrenia.

Now look carefully at the graph. Note that the level of communication deviance in the adoptive families of control children "maxes out" at 8 units, while the level observed in the adoptive families of the high-risk probands goes as high as 10 units. Perhaps, by some stroke of bad luck, these high-risk probands were adopted into families who were carrying more than their fair share of schizophrenic diathesis. More likely, the high-risk probands contributed to the high levels of communication deviance observed in their adoptive families -- communication deviance that might not have been present, but for the probands themselves. Perhaps this is another instance of the person creating the environment to which s/he responds.

Mood Disorder. Studies of bipolar and (especially) unipolar affective disorder show the same patterns: clear evidence for a genetic diathesis, but equally clear evidence for an unshared environmental stress (we cannot calculate the contributions of genetics, shared environment, and nonshared environment from concordance rates in exactly the same way we can from twin correlations, but the logic is the same).
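For readers who want to see the twin-correlation logic spelled out, here is a minimal sketch of Falconer's classic ACE decomposition, which applies to twin correlations rather than concordance rates. The correlations are invented for illustration; they are not estimates from any study discussed here.

```python
# A minimal sketch of Falconer's ACE decomposition, using made-up twin
# correlations purely for illustration.

def ace_from_twin_correlations(r_mz: float, r_dz: float) -> dict:
    """A = 2(rMZ - rDZ); C = 2rDZ - rMZ; E = 1 - rMZ."""
    a = 2 * (r_mz - r_dz)      # additive genetic variance ("heritability")
    c = 2 * r_dz - r_mz        # shared (family) environment
    e = 1 - r_mz               # nonshared environment, plus measurement error
    return {"A": round(a, 2), "C": round(c, 2), "E": round(e, 2)}

print(ace_from_twin_correlations(r_mz=0.50, r_dz=0.20))
# {'A': 0.6, 'C': -0.1, 'E': 0.5} -- a negative C estimate is read as roughly
# zero, leaving genes plus the nonshared environment to carry the explanation.
```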

Ulcers. The diathesis-stress approach is also relevant to psychosomatic ulcers, where lesions in the lining of the gastrointestinal system (that's what peptic ulcers are) occur in people who are under high levels of stress.

  • Gastric ulcers affect the lining of the stomach. 
  • Duodenal ulcers affect the lining of the small intestine (or duodenum).
  • There are also esophageal ulcers, which don't concern us here.

The psychosomatic nature of peptic ulcers has recently been discounted by some physicians, who note the presence of a particular bacterial infection, Helicobacter pylori, in the stomachs and small intestines of as many as 80% of ulcer patients. But while almost everyone who suffers from ulcers is infected with H. pylori, not everyone infected with H. pylori has ulcers. In fact, H. pylori is also found in the gastrointestinal systems of 70% of patients who do not have ulcers! What makes the difference? Plausibly, stress. Part of the stress response is to secrete acid into the stomach, to aid in digestion; when that acid doesn't encounter food, it eats away at the lining of the stomach or duodenum, creating ulcers. Infection by H. pylori increases the gastrointestinal system's vulnerability to ulceration: it is a genuine diathesis factor. And prolonged stress-related autonomic activation (see the discussion of Selye's general adaptation syndrome in the lecture supplements on the Biological Basis of Mind and Behavior) can interact with the bacteria to make ulcers even more likely to occur.

The diathesis-stress hypothesis is supported by a laboratory model of ulcers developed by Steven Maier (one of the investigators involved in the discovery of learned helplessness). In this model, rats are infected with H. pylori bacteria, and then are exposed to stress in the form of unpredictable and uncontrollable shock. This combination is particularly likely to produce ulcers, compared to conditions in which neither factor, or only one of the two, is present.

Phobias. In experimental psychopathology, phobias are a classical example of psychopathology acquired through learning -- particularly, fear conditioning. As such, phobias would seem to be a case of all stress and no diathesis: the stress is the anxiety that accompanies exposure to the feared object. So, if a person has a negative encounter with a snake, he or she will come to fear snakes. In this conditioning theory of phobia, the snake is a CS that predicts unpleasant consequences.

This is a fine theory, so far as it goes, but it has two problems.

One problem is that people with phobias don't always, or even usually, have histories of negative experiences with the objects of their fears. Readers who have phobias concerning snakes, for example, might ask themselves what snakes have ever done to them. Once in a while a snake phobic has been bitten by a snake, but not too often. Instead of resulting from direct experience with the phobic object, it is more likely that the snake phobia has been acquired through social learning or vicarious conditioning. That is, people become afraid of snakes because they know other people who are afraid of snakes. We learn to fear what other people fear, without having frightening experiences ourselves.

The second problem is that people don't always acquire phobias following association of an object with negative consequences. To use an example from Seligman (the same theorist who proposed the learned helplessness model of depression), when we have a bout of food poisoning we don't become afraid of the crockery and cutlery; we become afraid of the food. And not just any food we may have eaten; we tend to become afraid of things like Lima beans and cream sauces. In fact, clinical phobias are largely limited to a relatively small number of situations: open spaces, high places, the gaze of other people, and wriggly, slimy things. According to Seligman, we are prepared by evolution to easily and quickly acquire conditioned fear responses to these sorts of objects and situations. In this view, the diathesis in phobia is a set of "prepared" associations, a part of the organism's evolutionary heritage, which predispose the individual to acquire intense fears even with minimal exposure. And the stress is a negative event. The stressful event can result in phobic levels of fear, but only by virtue of these prepared associations.

A laboratory model of phobias incorporating both social learning and preparedness has been studied by Mineka and her colleagues in research described in the lecture supplement on Learning. Mineka and her colleagues showed that observational learning was sufficient to produce intense conditioned fear in monkeys who themselves had no experience of negative consequences in association with the CS: they learned to fear what other monkeys feared. But Mineka et al. also found that observational learning didn't produce fear of just anything. Through vicarious learning, monkeys acquired conditioned fear responses to snakes, but not flowers. According to the preparedness argument, a disposition to fear snakes is built into monkeys by evolution, and can produce full-blown snake-fear even with little or no direct experience.

The Dunedin Studies. The interaction of a biological diathesis with environmental stressors can be illustrated by two studies by Avshalom Caspi, Terrie Moffitt, and their associates, based on data collected in the "Dunedin Multidisciplinary Health and Development Study". In this project, longitudinal data was collected from a "birth cohort" of 1,037 children (roughly half of them males) born near Dunedin, New Zealand, and tested approximately every two or three years from ages 3 to 26 (actually, they're still being followed).

[Figure: MAOA, Maltreatment, and Adolescent Conduct Disorder]

In one study, Caspi et al. (2002) examined the role that the MAOA gene played in adolescent conduct disorder. This gene, located on the X chromosome, codes for monoamine oxidase A, an enzyme that metabolizes many different neurotransmitters; deficient MAO-A activity has been linked to increased aggression in both laboratory mice and humans. Caspi et al. also explored the role of stress in conduct disorder -- particularly a history of childhood and adolescent maltreatment, which some theorists have proposed initiates an intergenerational "vicious cycle of violence" in which maltreated boys become maltreating fathers, producing maltreated boys who also become maltreating fathers. In fact, subjects with high levels of MAOA activity showed a relatively low incidence of conduct disorder, regardless of their history of maltreatment. However, subjects with low levels of MAOA activity, who also had a history of severe maltreatment, showed a very high incidence of conduct disorder. The low-activity form of the MAOA gene is a diathesis which interacts with severe maltreatment to produce conduct disorder.

[Figure: 5-HTT, Life Stress, and Depression]

In another study, Caspi et al. (2003) examined the role of the 5-HTT gene in major depressive disorder. This gene, located on chromosome 17q11.2, comes in two forms, "short" (S) and "long" (L), yielding three genotypes: SS, SL, and LL. Caspi et al. also explored the role of life stress in depression, by counting the number of stressful events occurring in the life of each subject between ages 21 and 26 (in psychology, a "stressful" event can include getting married as well as getting divorced). Subjects with the LL genotype showed a relatively low incidence of depression, regardless of their history of life stress. However, subjects with at least one copy of the "short" allele (SS or SL), combined with a history of many stressful events during the previous five years, showed a much higher incidence of depression.

[Figures: 5-HTT, Social Support, and Behavioral Inhibition; 5-HTT, Social Support, and Rated Shyness]

Similarly, Fox et al. studied the role of the 5-HTT gene in pathological shyness -- children and adults who are severely withdrawn. Their study actually involved multiple assessments of children's temperament -- how inhibited the children were in the presence of strangers, and their mothers' ratings of their shyness. The mothers also provided ratings of the amount of social support (e.g., friends) their children had. The results of the study showed a clear gene x environment interaction:

  • Children with the "short" allele of the 5-HTT gene, coupled with poor or very poor social support, showed much higher levels of behavioral inhibition, compared to children with the "long" allele, or good levels of social support.
  • Similarly, children who combined the short allele with poor social support received much higher ratings of shyness -- by which, of course, we mean pathological shyness, not the ordinary sort of shyness that children (and adults) can display.

Notice the shape of the graphs in the Fox et al. study of pathological shyness, which combines the "crossover" and "fan" effects that are so characteristic of the person-by-situation interaction.
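For the statistically minded, the following sketch shows the kind of model that produces these crossover and fan patterns: a logistic regression with a gene-by-stress interaction term. All coefficients are invented for illustration; they are not estimates from the Caspi, Fox, or Dunedin data.

```python
# A toy logistic model with a gene-by-stress interaction term.  "carrier"
# stands for a hypothetical risk allele (e.g., a short 5-HTT allele), and
# "stress" for a count of adverse events; every number here is made up.

import math

def risk(stress: float, carrier: bool) -> float:
    # logit(p) = b0 + b1*stress + b2*gene + b3*(stress x gene)
    b0, b1, b2, b3 = -3.0, 0.1, 0.2, 0.6
    g = 1.0 if carrier else 0.0
    logit = b0 + b1 * stress + b2 * g + b3 * stress * g
    return 1.0 / (1.0 + math.exp(-logit))

for stress in range(5):   # 0 = no adverse events ... 4 = many
    print(f"stress={stress}  carrier={risk(stress, True):.2f}  "
          f"non-carrier={risk(stress, False):.2f}")
# Carriers' risk climbs steeply as stress accumulates; non-carriers' barely
# moves.  The interaction term (b3), not either factor alone, does the work.
```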

Still another study from the Caspi-Moffitt group focused on marijuana and psychosis.  It's long been known that some people who smoked marijuana as adolescents develop a form of psychosis as adults, but the precise pathway has been unclear.  Certainly, most adolescents who smoke marijuana don't develop psychosis, but it does appear to be a risk factor.  Caspi and Moffitt focused their attention on yet another gene, known as COMT, located on Chromosome 22.  COMT is involved in the metabolism of dopamine, which has been linked to schizophrenia.  The gene comes in two forms, methionine ("Met") and valine ("Val").  Individuals who have two copies of the "Val" allele show the fastest breakdown of dopamine; those with two copies of "Met", the slowest; and those with one of each, somewhere in between.  So, if you're a "ValVal" person, dopamine metabolizes faster, and resides in your system for less time.  Again using subjects from the Dunedin study, Caspi and Moffitt classified their subjects according to the form of the COMT gene, and also by their history of adolescent marijuana use.  And when they looked at the incidence of psychotic symptoms in these subjects when they were young adults, they found a clear gene-by-environment interaction.  The affected subjects didn't always show full-blown schizophrenia or any other psychotic syndrome.  Still, their risk for delusions, hallucinations, and other "schizophreniform" symptoms was greatly increased if they had two copies of the "Val" allele, coupled with frequent marijuana use as adolescents.  If they had only one copy of "Val", or two copies of "Met", their risk was greatly reduced.

Another study showed how the COMT genotype interacted with stress to affect performance on academic tests (Yeh et al., 2009) -- not, admittedly, a major mental illness, but perhaps indicative of GxE interactions in anxiety disorders.  Every year, Taiwanese students who wish to move on from junior to senior high school must take a rigorous test known as the Basic Competence Test, which measures educational achievement in a number of subjects, including Chinese and English language, mathematics, science, social science, and writing.  The BCT is an extremely stressful "high-stakes" test, because its outcome determines whether the student will have any chance of going to college (at least in Taiwan).  Yeh drew a sample of 779 Taiwanese high-school students (i.e., students who had already passed the BCT), and examined their scores on the BCT subtests as a function of their COMT genotype.

  • Students with the Met/Met genotype consistently performed more poorly than those with the Val/Val or Met/Val genotype.
  • There were no differences between students with the Val/Val or Met/Val genotypes.

The conclusion is that having two copies of the Met allele renders the person vulnerable to high levels of stress, to the detriment of performance.

Some of these gene-environment interactions are controversial. 

  • The specific findings of Caspi and Moffitt have proved difficult to replicate, possibly because of the highly unusual population from which they drew their samples (Duncan et al., 2014).
  • The interpretation of the gene-by-environment effects is also subject to controversy. 
    • I have presented them as illustrative of the diathesis-stress approach, with -- for example -- the short allele of the 5-HTT gene functioning as a kind of risk factor, rendering a person more vulnerable to the effects of a stressful environment.
    • An alternative view is that the 5-HTT gene, and presumably others like it, acts as a kind of "sensitivity gene", amplifying the effects of the environment. 
      • Possessing the short allele will magnify the negative effects of exposure to a negative, stress-filled environment.
      • But possessing the same short allele will magnify the positive effects of exposure to a positive, pleasure-filled environment. 

In particular, the gene-by-environment interaction in depression, involving the 5-HTT gene, has stimulated a great deal of interest, but it's also been controversial.  Some researchers have failed to replicate Caspi and Moffitt's findings, while some critics have complained about the assessment of stress.  Katja Karg and her colleagues recently surveyed 56 studies, involving more than 40,000 subjects, and found that, overall, these studies confirmed the G-by-E effect.  A history of stress, especially defined by childhood maltreatment or life-threatening or chronic medical conditions, coupled with the "short" form of the 5-HTT gene, greatly increases one's risk for a major depressive episode.  Still, please note that the findings on 5-HTT or COMT, for example, are not to be taken as firm.  Rather, I cite them here simply as illustrations of a particular approach, based on the diathesis-stress model, that is commonly used to identify the genetic and environmental contributions to mental illness.  So stay tuned!


The Nature of Diathesis and Stress

In standard presentations of the diathesis-stress model, the diathesis is often biological (like the 5-HTT gene) and the stress is often psychosocial (like stressful life events), but this need not necessarily be the case.

Usually, we think of biological diatheses as specific genes, like MAO-A or 5-HTT.  But it's possible that there is a genetic diathesis for a broader group of mental illnesses.  Smoller and his colleagues (2013) conducted the largest study to date of the genetics of mental illness, including more than 60,000 subjects from 19 countries (roughly half were patients carrying a psychiatric diagnosis).  They found that five different disorders -- schizophrenia, bipolar disorder, depression, attention deficit hyperactivity disorder, and autism -- shared a relatively small set of genetic aberrations.  For example, one identical twin might develop schizophrenia, while the other might develop bipolar disorder.

Somewhat similar results were obtained from another study which extracted RNA (not DNA) from the cerebral cortex of deceased patients who had been diagnosed with various forms of major mental illness, such as autism, schizophrenia, bipolar disorder, depression, and alcoholism (Gandal et al., Science, 2018).  They compared these assays to samples taken from individuals who did not carry a psychiatric diagnosis (an obvious control), and from other patients with a physical illness, irritable bowel syndrome (in order to control for illness in general).  As in the Smoller study, they found that there was significant overlap in the patterns of gene expression across the various syndromes -- except alcoholism, which had a pattern that was quite distinct compared to the others.  For example, there was considerable overlap in gene activity between schizophrenia and bipolar disorder -- despite the fact that the symptoms associated with these two syndromes are very different.

These studies suggest that there might be a genetic diathesis for these major forms of mental illness in general, and that whatever specific mental illness an affected individual develops would depend on other factors (perhaps environmental, perhaps genetic as well). 

Biological Stressors

In some instances, stress is better conceptualized as biological rather than psychosocial in nature.

For example, prenatal and perinatal complications are often found in the life histories of people who eventually develop schizophrenia. These are environmental stressors, but the fact that they occur in the prenatal or perinatal environment marks them as more biological than psychosocial in nature. A difficult birth doesn't have the same meaning for a person as a difficult childhood or adolescence.


Psychosocial Diatheses

In other instances, the diathesis is better conceptualized as psychosocial rather than biological in nature.

For example, people may be predisposed to depression by histories of social learning that lead them to acquire certain beliefs.

  • Beck has noted the presence of depressogenic schemata -- what he calls the depressogenic triad: negative views of the self, the world, and the future -- in the belief systems of many depressed patients. In Beck's view, these cognitive structures of knowledge and belief render the person vulnerable to depression and all its symptoms -- sadness, anhedonia, guilt, withdrawal, inactivity, and loss of appetite -- in the face of adverse life events.
    • Beck has recently expanded his cognitive theory of depression (Beck & Bredemeier, 2016) to include yet another diathesis factor: a heightened reactivity to stress, which causes the individual to view any loss of "essential human resources" as particularly "devastating and insurmountable".  As a result, it takes relatively little stress to activate the depressogenic schemata, leading to the development of the symptoms of full-blown depression.
    • Another psychosocial diathesis for depression was noted by Nolen-Hoeksema (1991; Nolen-Hoeksema et al., 2008): rumination, as opposed to self-distraction, perpetuates the symptoms of depression. Especially in women, a tendency toward rumination may serve some of the same maladaptive functions as Beck's depressogenic schemata or Abramson and Alloy's depressogenic attributional style.
    • Similarly, Abramson and Alloy noted that many depressed patients displayed a particular "depressogenic" attributional (or inferential) style in which they tended to explain events in terms of stable, global, internal factors (as opposed to unstable, local, external ones). In their view, this way of thinking renders people vulnerable to depression -- because they tend to think that negative events are uncontrollable even when they aren't.  This idea has led to the hopelessness theory of depression (Abramson, Metalsky, & Alloy, Psych Rev., 1989), which states that, in response to negative life events, individuals with a depressogenic attributional style will become hopeless, and this persistent state of hopelessness is the proximal cause of clinical depression.  Note, however, that A&A do not believe that hopelessness lies at the root of all instances of depression.  Instead, they posit that hopelessness is the cause of a particular subtype of depression, which they call, naturally enough, hopelessness depression.  This is not as circular as it sounds.  A&A argue that there are many different forms of major depressive disorder, each with its own unique etiology, course, prognosis, and preferred treatment.
      • There is, of course, an extensive literature showing that depressed individuals have a history of negative life experiences.
      • And there is also an extensive literature showing that depressed individuals tend to have a depressogenic attributional style.
      • It was not until 2018, however, that A&A, along with their colleagues, put the whole thing together to show that the depressogenic attributional style, as a diathesis factor, interacted with negative life experiences, as a stress factor, to generate hopelessness, and thus depression. A study by Mac Giollabhui and her colleagues, including A&A, tested the theory with a diverse sample of 249 adolescents, ages 12-13 years.  The subjects were first assessed at baseline and at subsequent follow-up sessions over approximately 2.5 years. Employing self-report questionnaires and clinical interviews, the investigators assessed attributional style, negative life events, feelings of hopelessness, depressive symptoms, and depression diagnosis. Subjects who showed signs of depression at initial testing were excluded from further consideration.  Statistical analysis indicated that subjects with a negative attributional style and a high number of negative life events displayed more depressive symptoms, and were more likely to experience their first acute major depressive episode within the follow-up period.  However, this was only the case for subjects who also showed high levels of hopelessness.  Subjects with low levels of hopelessness had fewer symptoms, and were less likely to experience an episode of depression.  The study supports the hopelessness theory of depression and its clinical relevance in predicting depression in adolescence.

Consider how this "reversal" of the standard account of diathesis (psychosocial, not biological) and stress (biological, not psychosocial) might help explain the incidence of postpartum depression, which occurs in some (but not all) women who have recently given birth. Sudden biochemical changes associated with pregnancy and childbirth may alter the person's characteristic mood states and activity levels. These particular alterations may be similar to those that occur in depression, but they do not necessarily result in a depressive episode. However, if these changes are interpreted in terms of Beck's "depressogenic schemata" or Abramson and Alloy's "depressive attributional style", they may well precipitate an episode of depression. In this case, the diathesis factor is psychosocial in nature -- a "depressogenic" way of thinking. But mental illness doesn't occur unless this diathesis interacts with a stressor -- and in this case, the stressful events are biochemical in nature, consisting of certain biochemical changes that occur naturally with pregnancy and parturition.

A similar account could be given of the depression which occurs in some (but not all) women who are going through menopause.

Post-traumatic stress disorder (PTSD) would, at first glance, appear to be an almost "pure" case of stress-related mental illness. But, in fact, only a minority of people exposed to traumatic levels of stress actually go on to develop PTSD (there is, of course, an acute stress disorder affecting large proportions of trauma victims, but even this isn't universal).  This variability was noted even as far back as World War I, when it was attributed to individual differences in soldiers' predisposition to stress.  This diathesis might be biological -- perhaps something related to the functioning of the hypothalamic-pituitary-adrenal (HPA) axis, or perhaps a genetic predisposition.  Or it might be something psychological, analogous to the depressogenic schemata and attributional styles implicated in some forms of depression.  But the basic point is that even in PTSD, the stressor alone is rarely sufficient to cause the disorder all by itself; the stressor has to combine with a predisposition or vulnerability.
Interestingly, the role of diathesis or predisposition in PTSD has contributed to the reluctance to award honors or disability benefits to some soldiers suffering from PTSD.  The argument is that the soldiers' disabilities weren't caused by the stress of war, but rather are related to personality problems that predated the war -- much as insurance companies will sometimes deny medical coverage on grounds of a "pre-existing condition".  Setting this policy issue aside, PTSD may be an example where both the diathesis and the stress are psychosocial in nature.

Diathesis is often biological, and stress is often psychosocial, in nature. But the really important feature of diathesis is that it is something that the person carries with him into a situation, either as a biological or a psychosocial "trait". By the same token, the really important feature of stress is that it is something that happens to the person in a particular situation, either as a biological or psychosocial event (or series of events).


Treatment of Mental Illness

The diathesis-stress model for the origins of mental illness offers a framework for understanding treatment and prevention as well. If mental illness is caused by the interaction of diathesis and stress, then effective interventions should alter diathesis factors, stress factors, or both.

To orient ourselves, let's first examine what happens during an episode of mental illness.  As in earlier lectures in this module, we'll be using language derived from medicine -- patient, symptom, syndrome, and so on -- to identify what we can call, following Steven Hollon (a prominent depression researcher at Vanderbilt University) and others, the 5 Rs of Mental Illness. I'll use depression as my example, but the same general points would apply to any syndrome of mental illness.


First, let's assume that the patient begins at a more-or-less "normal" state, with no identifiable symptoms of mental illness.  At some point, however, given a certain combination of diathesis and stress, he or she begins to show symptoms of some form of mental illness -- depression, or anxiety disorder, or schizophrenia.  Eventually, enough symptoms develop, with enough severity, to compromise normal functioning, at which point we can say that the person has "progressed" -- that's the medical term -- to a full-blown acute episode of mental illness.

  1. Remission.  With time, even without active intervention, the symptoms may disappear on their own, in which case we would talk about spontaneous remission of the illness.  Spontaneous remission is surprisingly common in depression, which tends to come and go in its natural course.  But it also takes time, maybe 6-9 months, so rather than wait the illness out, it's probably better to get some sort of active treatment.
  2. Response.  Now let's suppose that the patient does receive some active treatment, whether it's psychotherapy, or some kind of biological treatment, like medication.  Many patients will respond positively to whatever treatment they receive.  If they don't, the therapist may switch to a different form of treatment -- a different approach to therapy or perhaps a different drug -- until something seems to work.
  3. Recovery.  In depression, subjects may respond positively after a couple of weeks of appropriate treatment, but it may take a while before they really get back to normal -- a complete recovery, in which they are symptom free.  Alternatively, they may show a partial recovery, in which some symptoms remit completely, but not others; or there may be a clinically significant reduction in the severity of symptoms.  Sometimes, unfortunately, recovery doesn't occur at all, in which case the patient proceeds from an acute to a chronic illness, calling for continuation treatment.
  4. Relapse.  Even with a complete recovery, there is some chance that the symptoms may come back, perhaps not as severely as before, but enough to interfere with normal functioning.  In depression, for example, the likelihood of a relapse even after what appeared to be a complete recovery is about nine times the risk of depression occurring in the general population.  Apparently the acute episode is still going on, appearances to the contrary notwithstanding.  For this reason, most therapists don't stop treatment immediately once the patient has recovered, but continue it for a while. 
  5. Recurrence.  Even after a patient has completely recovered, there is still some chance that another acute episode can occur later.  If a person has recovered from an acute episode of depression, there is still a chance that another acute episode will occur -- about three times more risk than in the general population.  For this reason, even patients who have completely recovered may want to continue some degree of maintenance treatment.

The first goal of treatment is to achieve a cure. Genuine cures will do more than suppress superficial symptoms: they will eliminate underlying pathology, or reduce it to a clinically significant extent, returning the person's level of functioning to "within normal limits" even after active treatment has stopped. In the absence of a cure, treatment focuses on the amelioration of symptoms, or rehabilitation regimes that permit the patient to cope with a chronic condition.


Custodial Care

Historically, the scientific treatment of mental illness has come in three basic forms.  Up until the 20th century, there were really no active treatments available for mental illness, so intervention focused largely on custodial care -- chiefly the warehousing of the mentally ill in public hospitals or private asylums.  Pennsylvania Hospital, America's first hospital, was founded by Benjamin Franklin and Thomas Bond in 1751, when Pennsylvania was still a British colony, for the care of the indigent and the mentally ill.  Initially, the asylum was simply a separate ward in the original hospital on Pine Street, and later a separate wing; the asylum was moved to the more rural West Philadelphia in 1841, and in 1859 to its ultimate location (at 49th and Market Streets).

New York Hospital and the Virginia Asylum were also chartered for the care of "Lunaticks" before the Revolutionary War, but did not open their doors until after the War was over.  Still, the patients there were mostly "warehoused", out of sight of the rest of the community.  After the Civil War, Silas Weir Mitchell gave a more positive spin to "warehousing" by prescribing a "rest cure" for "mental exhaustion" -- famously depicted in The Yellow Wallpaper, an early feminist classic by Charlotte Perkins Gilman.

  • For insight into the state mental hospital system, see The Lives They Left Behind: Suitcases from a State Hospital Attic by Darby Penney and Peter Stastny (2007), based on artifacts collected from Willard State Hospital in Romulus, New York. Link to the accompanying website: www.suitcaseexhibit.org.
    • Beginning in December 2013, this exhibition can be viewed at the Exploratorium in San Francisco.
  • For a biography of Thomas Kirkbride, see The Art of Asylum-Keeping: Thomas Story Kirkbride and the Origins of American Psychiatry by Nancy Tomes.
  • Sometimes, the mentally ill were confined at home, in back bedrooms and attics, as in the character of Mrs. Rochester in Charlotte Bronte's Jane Eyre (1847) -- the original "madwoman in the attic".

Beginning in the late 19th century, various forms of psychotherapy were introduced.  These interventions, one way or another, were intended to alter the patients' pathological mental states -- their abnormal beliefs and feelings and desires, which were thought to underlie the various symptoms of mental illness.  By changing these abnormal, maladaptive mental states, mental health professionals sought to change the maladaptive patterns of behavior that brought patients to the attention of the professionals in the first place.  Beginning in the 20th century, a wide variety of biological treatments were introduced for mental illness.  These began with procedures like electroconvulsive therapy, and even psychosurgery, in which physicians operated on various brain centers that were presumed to be implicated in the patient's problems.  But more recently, a wide variety of medications have been introduced for the treatment of various mental illnesses, beginning with schizophrenia.  These medications have largely supplanted psychosurgery and ECT, though both still have a place in selected cases.

  • For histories of the beginnings of mental-health treatment in America, see:
    • American Nervousness: Its Causes and Consequences by George M. Beard (1881 -- note the date!);
    • American Nervousness, 1903 by Tom Lutz (1991).
    • Before Prozac (2008) and How Everyone Became Depressed: The Rise and Fall of the Nervous Breakdown (2013) by Edward Shorter, the pre-eminent historian of psychiatry in America.
  • Beginning in the mid-20th century, a wide variety of psychotropic medications were introduced for the treatment of various mental illnesses, beginning with schizophrenia. Mental health has been a full participant in the "pharmaceutical revolution" that has swept modern medicine.
The use of biological treatments for mental illness flows from the assumption that mental illnesses have biological causes -- that they are, in effect, diseases of the nervous system.  For an excellent history of the search for biological causes of and treatments for mental illness, see Mind Fixers: Psychiatry's Troubled Search for the Biology of Mental Illness by Anne Harrington, a historian of medicine at Harvard (reviewed by R.J. McNally, a clinical psychologist, in the Wall Street Journal, 05/04/2019; by Jerome Groopman, a physician, in the New Yorker, 05/27/2019; by Gary Greenberg, a psychotherapist and author of The Book of Woe: The DSM and the Unmaking of Psychiatry, in The Atlantic, 04/2019; and by Gavin Francis in the New York Review of Books, 01/14/2021).
  • Harrington points out that the very first biological theory of mental illness was for syphilis, a venereal disease whose late stages were marked by dementia and delusions, as well as motor problems -- the "general paresis [paralysis] of the insane".  By the late 19th century, this diagnosis was applied to as many as 20% of new patients entering insane asylums. Richard von Krafft-Ebing (yes, that Krafft-Ebing) proved that syphilis was the specific cause of general paresis -- the first time that a specific biological cause had been identified for any form of mental illness.
  • Identifying the pathogenesis -- the medical term for a biological cause -- of general paresis stimulated similar searches for the pathogenesis of other forms of mental illness, but these were largely unsuccessful.  The reason, Harrington thinks, was that physicians focused mostly on anatomy, looking for lesions and other pathologies in brain tissue.  Advances in medical bacteriology eventually led some biologically oriented psychiatrists to argue that mental illnesses were caused by microbes arising from other parts of the body -- leading to experimental treatments involving the removal of teeth, ovaries and testes, and parts of the digestive system.  None of these treatments were effective, of course.  Theories based on pathological anatomy were revived when Egas Moniz received the 1949 Nobel Prize in Physiology or Medicine for inventing the lobotomy, in which the prefrontal cortex was effectively disconnected from the rest of the brain (most notoriously by Walter Freeman, an American neurologist, using a tool resembling nothing so much as an ice pick, inserted into the brain through the patient's eye socket).
  • At the same time, more psychologically oriented physicians, such as Jean-Martin Charcot, Pierre Janet, and most famously Sigmund Freud, argued that some forms of mental illness, such as hysteria (as it was called then), could arise from psychological as well as biological causes -- especially sexual trauma of various kinds.  Hysteria was labeled a "functional" disorder.  The apparent success of Freudian psychoanalysis caused many psychiatrists to turn away from physiology toward psychology.  Psychoanalysis -- as both a theory of and a treatment for mental illness -- dominated psychiatry for most of the 20th century.  After World War II, during what W.H. Auden famously called "The Age of Anxiety", neo-Freudian psychoanalysts focused on real-life anxiety, and the need for emotional security, rather than the repression of sexual fantasies and experiences, as the cause of neurosis -- while conceding that psychosis probably had biological causes.  Then again, it was around this time that "schizophrenogenic" mothers were blamed for causing schizophrenia in their children by placing them in a "double bind" of conflicting messages; and "refrigerator mothers" were blamed for their children's autism.
  • Some psychiatrists tried to forge a middle way between biology and psychology.  For example, Adolph Meyer advocated a "psychobiological" approach that acknowledged both psychology and biology, and a "common sense" approach to mental illness that eschewed dogmatic positions in either direction.
  • The 1960s saw a number of attacks on institutional psychiatry:
    • Thomas Szasz, in The Myth of Mental Illness (1961) argued that mental illness was just that -- a myth;
    • Erving Goffman, in Asylums (1961), compared mental hospitals to concentration camps and other "total institutions" that deprive their inmates of any personal autonomy;
    • Michel Foucault, in Madness and Civilization (1961; this was a very big year for critiques of psychiatry) portrayed the mentally ill as an oppressed group and psychiatrists as their oppressors;
    • feminists such as Betty Friedan (in The Feminine Mystique, 1963) criticized psychiatry for putting the blame for mental illness on mothers. 
    • In the 1970s, the Insane Liberation Front and other activists argued that "mental illness" was a label applied to nonconformists in order to deprive them of their freedom.  
  • The crisis of psychiatry came to a head in 1973, when the American Psychiatric Association voted to remove homosexuality from its official list of mental illnesses.  Ironically, the decision was the correct one: homosexuality is no more "pathological" than heterosexuality.  But science doesn't put things to a vote: it has objective standards for discerning truth from falsehood.  The fact that this issue had to be put to a vote revealed that there were no objective standards for what constituted a mental illness, and undermined claims that psychiatry was a science-based discipline.
  • The dominance of psychoanalytic psychiatry was broken by two events -- first, the development of new drugs for the treatment of anxiety (e.g., Valium), depression (e.g., Elavil), and schizophrenia (e.g., Thorazine); and second, by the invention of behavior therapy (and later cognitive behavioral therapy).  The first stimulated a new generation of biological theories of mental illness -- if mental illness could be treated with a pill, then its pathogenesis must lie in physiology, if not anatomy.  The second provided an alternative to psychoanalysis -- and one that was more effective than psychoanalysis, to boot. 
  • Psychiatry's scientific status was restored, to some extent, with the publication in 1980 of the 3rd edition of the Diagnostic and Statistical Manual of Mental Disorders, which eliminated all reliance on psychoanalytic theory and focused on a purely descriptive classification of various forms of mental illness in terms of their characteristic symptoms, such as anxiety, depression, or delusions.
  •  The apparent success of anti-anxiety, anti-depressant, and antipsychotic medications stimulated a new round of biological theories of mental illness.  The logic went like this: Elavil increases levels of norepinephrine, and helps alleviate depression; therefore, depression must be caused by diminished levels of norepinephrine.  It's bad logic, of course.  And despite years of searching, there's no good evidence that norepinephrine levels are related to depression.  A similar argument, similarly flawed both logically and empirically, was employed when selective serotonin reuptake inhibitors (SSRIs) were introduced for depression.  Despite widespread adoption of medication as a treatment for mental illness, there is still no definitive evidence that chemical imbalances cause mental illness.  They remain "empirical" treatments -- they work, apparently, but there's no theory that explains why they work.
  • And even when it comes to purely empirical treatments, the pharmaceutical industry appears to be at a dead end.  The latest fads in biological psychiatry involve genes, gut bacteria, and the immune system.  Still, Harrington quotes Steven Hyman, once the director of the National Institute of Mental Health, to the effect that "no new drug targets or therapeutic mechanisms of real significance have been developed for more than four decades".
  • Empirical treatments aren't bad: medicine offers lots of them, and they really work.  But understanding the nature of mental illness -- well, that's still a puzzle.  It's a puzzle that psychology tries to help solve.  At the end of her book, Harrington expresses the hope that psychiatry will "overcome its persistent reductionist habits and commit to an ongoing dialogue with... the social sciences and even the humanities".  That's where psychology -- which, remember, is both a biological science and a social science, all wrapped up in a single package -- can help.


Biological Treatments

Where biological causes are the primary factor in mental illness, strictly biological treatments may effect a complete cure. Of course, such biological cures require an understanding of the biological basis of some form of mental illness.

Biological cures are exemplified by treatments available for two specific forms of intellectual disability.

  • Phenylketonuria is a metabolic disorder that interferes with the myelinization of neural tissue. If a child with phenylketonuria is put on a strict low-protein diet until about age 6, this form of intellectual disability can be prevented entirely. Unfortunately, individuals with PKU must stay on a somewhat restricted diet for the rest of their lives. In 2007, the Food and Drug Administration approved Kuvan, the first drug treatment for the disorder, which allows for a somewhat more relaxed dietary regimen, permitting affected individuals to eat things like cheese and pizza (if only in moderation).
  • Cretinism, another form of intellectual disability, is caused by thyroxine deficiency, a hormone imbalance due to a lack of iodine in a pregnant woman's diet. This illness can be prevented entirely by treating the infant with thyroid extract during the first year of life (it can also be prevented entirely by giving the mother the iodine she needs during pregnancy).

There are many other biological treatments for mental illness, including many different medications but also psychosurgery and electroconvulsive therapy. These are often very effective, but they do not reach the level of a cure, because the illness returns when the treatment is discontinued.

Psychosurgery, especially prefrontal lobotomy (the destruction of the prefrontal cortex of the brain), was once rather popular: Egas Moniz, the Portuguese neurologist who pioneered the technique of "prefrontal leukotomy", even won the Nobel Prize for Physiology or Medicine in 1949. The technique has now been completely repudiated, and Moniz's prize is considered something of an embarrassment.  However, as our understanding of brain function advances, more nuanced surgical approaches to mental illness are sometimes proposed. For a history of psychosurgery, see Great and Desperate Cures: The Rise and Decline of Psychosurgery and Other Radical Treatments for Mental Illness (1986) by Elliot Valenstein, a distinguished neuroscientist. Valenstein's other books are also of interest: Brain Control: A Critical Examination of Brain Stimulation and Psychosurgery (1977) and Blaming the Brain: The Truth About Drugs and Mental Health (1998).

Although nobody does lobotomies anymore, psychosurgeries are still performed (of course, brain surgery is frequently performed for strictly neurological disorders like epilepsy).  Most of these operations occur in cases of obsessive-compulsive disorder or depression, and they are pretty rare -- although, with increasing evidence of efficacy and safety, their use is increasing (though they are not risk-free).  Most of these "surgeries" are performed by means of electrical stimulation through microelectrodes, as opposed to with a scalpel (as was the case, for example, with the earlier lobotomies).     

  • Cingulotomy involves the destruction of a portion of the anterior cingulate gyrus, which disrupts a circuit that connects emotional centers in the limbic system to the frontal cortex.
  • A related procedure, known as capsulotomy, targets the internal capsule, a bundle of white matter that connects midbrain and forebrain structures.
  • One new form of surgery involves deep brain stimulation -- surgically implanting electrodes in the brain, using much lower levels of current to stimulate specific brain areas -- sort of like the pacemakers that are implanted to treat certain forms of heart disease.  Originally developed as a treatment for Parkinson's disease, and later extended to Tourette's syndrome (both more properly regarded as neurological syndromes with behavioral consequences than as mental illnesses per se), deep brain stimulation is now being performed in some cases of depression and obsessive-compulsive disorder as well.
    • For depression, the electrodes are typically implanted in the subcallosal cingulate gyrus (Brodmann's area 25).
      • For a recent discussion of this technique, which as of 2015 had not yet been approved for routine therapeutic use, see "Treating Depression at the Source" by Andres M. Lozano & Helen S. Mayberg, Scientific American, 02/2015.
      • A variant on deep-brain stimulation for depression, which has been approved for routine use, is transcranial magnetic stimulation (TMS), which stimulates the brain with short bursts of magnetic pulses focused on the left prefrontal cortex -- a little like electroconvulsive therapy, though without the convulsive seizures or transient confusion and memory loss.
      • Both DBS and TMS have been proposed for use with depressed patients who are not responsive to conventional psychotherapeutic or pharmacological treatments.
    • For obsessive-compulsive disorder, the electrodes are implanted in the nucleus accumbens.
  • A newer technique, which doesn't employ electrical stimulation at all, is gamma knife surgery, derived from a radiological treatment for cancer, which focuses a large number of beams of radiation on a very small area of brain tissue.  Each beam is, itself, completely benign, but the cumulative effect of all the beams is sufficient to destroy tissue precisely at the point of focus.

These treatments remain largely experimental, and they can have powerful and unpleasant side-effects, so they are typically performed only under a "humanitarian device exemption", when no other treatment has worked.  For this reason, deep brain stimulation has not been subject to the kinds of controlled clinical trials that are required for approval of new medications and other medical treatments.

Electroconvulsive Therapy (ECT) is sometimes used as a treatment for severe depression. ECT employs a brief electrical current applied to the scalp to induce a brief seizure somewhat similar to that seen in epilepsy. ECT has a somewhat unsavory history, having been misused in the past, but the fact is that a short course of ECT, judiciously applied, can often produce rapid remission of depressive symptoms. ECT is often avoided because of its presumed side-effects, which (when misapplied) can include brain damage. However, ECT does not hurt (because the patient is unconscious during the treatment). ECT applied bilaterally (with electrodes placed on both sides of the head) can produce an amnesia similar to that which occurs following a concussive blow to the head; but memory impairment can be reduced substantially if the treatment is applied unilaterally, so the seizure is confined to one (usually the non-dominant) hemisphere. Biologically, ECT increases levels of norepinephrine and serotonin in the brain, which is interesting in light of the "monoamine" hypothesis of depression. 

  • ECT and TMS are sometimes referred to as "jump-starting the brain", but we don't really have a good idea why they work -- when they do.

By far the most popular and effective biological treatments for mental illness involve psychotropic or psychoactive medications: beginning in the 1950s, psychiatry has been a full participant in the "pharmaceutical revolution" in medicine.

A variety of antipsychotic medications are used in the treatment of schizophrenia. Most common, perhaps, are the phenothiazines, which reduce dopamine activity in the brain. Typical brand names are Thorazine, Stelazine, Prolixin, and Mellaril. Haldol (a butyrophenone) and Navane (a thioxanthene) are also widely used. There is also a group of "atypical" antipsychotics like Clozaril, Risperdal, Zyprexa, and Abilify, which are as effective as the earlier antipsychotic agents, but cause fewer side-effects.

To be honest, though, most of the drugs used in the treatment of schizophrenia are really nothing more than major tranquilizers, which calm the patient, reducing the tendency to report and act on hallucinations if not to have them. Setting aside the dopamine hypothesis of schizophrenia, nobody thinks that these drugs act directly on the patient's underlying pathology. However, these drugs do make patients more manageable, and their introduction made everybody's life better, staff and patients alike, inside mental hospitals -- and made it possible for many people with schizophrenia to be released from mental hospitals to live with their families or elsewhere in the community. But, as with many psychiatric drugs, you've got to keep taking them. And once released from direct supervision, this is something that many schizophrenics just aren't inclined to do.

A major revolution in the treatment of depression came with the introduction of antidepressant drugs.

  • The tricyclic antidepressants, drugs like Elavil, Tofranil, and Sinequan, increase levels of norepinephrine and serotonin.
  • Another group of antidepressants, known as the MAO inhibitors -- Nardil and Parnate are examples -- inhibit monoamine oxidase, a substance which, in turn, deactivates norepinephrine and serotonin. Both the tricyclics and the MAO inhibitors act by increasing the amount of norepinephrine and serotonin available in the synapse.
  • Recently, these earlier generations of drugs have begun to be replaced by a newer generation of selective serotonin reuptake inhibitors (SSRIs) such as Prozac, Zoloft, Paxil, Celexa, and Lexapro. As their name implies, these drugs act selectively on serotonin, and have little or no effect on norepinephrine; they work by preventing serotonin's reuptake by the presynaptic neuron -- thus, effectively, increasing the levels of serotonin available at the synapse.
  • There is also a class of drugs called selective serotonin and norepinephrine reuptake inhibitors, or SNRIs, such as Cymbalta, which act on both serotonin and norepinephrine -- thus accomplishing the same effect as the MAO inhibitors, but through a different mechanism.
  • And, of course, there are now selective norepinephrine reuptake inhibitors, also known as NRIs, such as Strattera.

The SSRIs and SNRIs are as effective as the earlier tricyclics and MAO inhibitors -- a fact that has led to the revision of the general monoamine hypothesis of depression into the more specific serotonin hypothesis of depression.

For a discussion of recent advances in the drug treatment of depression, see

  • "Lifting the Black Cloud" by R.M. Henig, Scientific American, 03/2012.

Also within the category of mood disorders, lithium carbonate has proved to be a very effective treatment for bipolar disorder, also known as "manic-depressive illness". Treatment with lithium reduces or eliminates episodes of mania in as many as 70% of cases, and also reduces episodes of depression. However, because lithium is toxic, its use must be carefully monitored. Lithium is another one of those psychiatric drugs that seems to work -- in fact, it works very well, and works only for bipolar disorder -- but we have no idea why it works, or what bearing its effectiveness might have on our understanding of the nature of mania or bipolar disorder.  Certainly, mania isn't caused by a lack of lithium in the body.

Just as Coca-Cola once actually contained cocaine, so 7-Up, another popular soft drink, once contained lithium.  This is just one of the fun facts included in Lithium: A Doctor, a Drug, and a Breakthrough (2019) by Walter Brown, a psychiatrist.  Discovered in 1949 by John Cade, an Australian psychiatrist, lithium was the very first psychiatric drug to approach anything like a cure, as opposed to mere symptom relief (remember, before this time, all psychiatric medications were just tranquilizers).  And the research that first demonstrated its curative effects, carried out by Mogens Schou, a Danish psychiatrist, in 1954, comprised the first randomized, controlled clinical trials of any psychiatric drug.

Similarly, early drug treatment of the anxiety disorders focused on sedatives such as the barbiturates (e.g., Nembutal and Seconal), propanediols (e.g., Miltown, Equanil), and benzodiazepines (e.g., Librium, Valium, and Xanax). The 20th century was known as the "Age of Anxiety", and popular literature and movies concerned with the American middle class in the 1950s and 1960s are full of references to these drugs. These drugs are also, essentially, tranquilizers -- though far less potent than those used in the management of schizophrenia. Specifically, they increase the activity of the neurotransmitter GABA, which in turn suppresses activity in the hypothalamic-pituitary-adrenal (HPA) axis.

As with the antipsychotic medications, there is also a group of atypical anxiolytics, such as buspirone, which are generally as effective as the earlier sedatives but with fewer side-effects.

Whether "typical" or "atypical", all psychotropic drugs in a particular class have the same basic pharmacological mechanism of action:

  • The antipsychotics, like Thorazine, decrease dopamine activity.
  • The antidepressants increase the availability of serotonin, norepinephrine, or dopamine.
  • The anxiolytics increase GABA.

Newer drugs within a class are sometimes called "me-too" drugs, because they all have the same basic effects -- put another way, the newer drugs are little more than copycats of the older ones.  The newer drugs may have fewer side-effects, and be somewhat safer and easier to tolerate.  But the basic underlying pharmacological action is no different.  So why are new drugs introduced?  Mostly because the patents are running out on the old ones.  And, in fact, the pharmaceutical industry isn't devoting that much research to the development of truly new psychotropic drugs, because -- despite their popularity -- there isn't that much to be gained financially from them.  There is much more money to be made from drugs that treat cancer, heart disease, or diabetes.

Because of the co-morbidity between anxiety and depression, anxiety disorder is sometimes treated with SSRIs such as Paxil. The rationale for this strategy isn't completely clear -- maybe it's to help patients feel less depressed about being anxious!

In many instances, specific drug treatments are developed based on a specific somatogenic theory of the illness in question. In theory, at least, psychotropic medications work because they address the biological bases of mental illness. Thus, according to the dopamine hypothesis of schizophrenia, drugs that alter the processing of dopamine should be effective treatments for schizophrenia but not depression; and according to the monoamine hypothesis of depression, drugs that alter the processing of monoamine neurotransmitters should be effective treatments for depression but not for schizophrenia. The SSRIs, which act specifically to enhance the availability of serotonin in depressed patients, are perhaps the clearest expression of this connection between theory and treatment.

But sometimes drug treatments are simply empirical, meaning that they are prescribed simply because they are known to help, and not because their efficacy is predicted by any theory of the illness in question.

A good example of such an empirical treatment is the prescription of stimulant drugs, such as Ritalin, for the treatment of ADHD.

There's no good reason why Ritalin should work in these cases -- and given that these drugs are central nervous system stimulants, there would be every good reason to think that they would make things worse. Such "off-label" use of medications is a fairly common practice in medicine: physicians may "experimentally" prescribe drugs for conditions other than the indications approved by the Food and Drug Administration (following controlled studies of safety and efficacy). But exactly how it occurred to anyone to try Ritalin in the first place isn't clear. Perhaps it was simply an act of desperation after everything else failed. But it does work, at least for many patients with ADHD, and is now the standard of care for both children and adults carrying the diagnosis -- ADHD is now a "labeled" indication for Ritalin and similar drugs, rather than an "off-label" use. But there's still no theory that explains the paradoxical effects of Ritalin on ADHD. One theory is that stimulants activate brain centers for the control of attention which are relatively inactive in patients with ADHD -- but this is just a theory, and a rather post-hoc one at that, and as yet there's no evidence supporting it.

Actually, there is a double paradox here, because the evidence for the efficacy of stimulants in the treatment of ADHD is somewhat ambiguous.  Laboratory studies show that Ritalin (and similar drugs) improves performance on laboratory tests of attention and working memory, and also has positive effects on brain centers involved in these cognitive functions.  But other studies show that there is no corresponding improvement in students' academic performance, as measured by grade-point averages, achievement-test scores, or even the likelihood of repeating a grade of elementary or secondary school.  So, whatever is going on in the laboratory, and in the brain, doesn't seem to translate into the real life of the classroom.  One possibility is that these children, and their families, come to rely too heavily on the drugs.  In the absence of proper study skills, and a home environment conducive to and encouraging of study, academic performance won't improve just by taking a pill.


Notes on Cosmetic Neurology

Advances in neuroscience, pharmacology, and gaming have conspired to usher in a new age of what can only be called cosmetic neurology (Chatterjee, 2004, 2007) -- that is, the use of neuroscientific and pharmaceutical techniques to enhance performance in healthy people -- much as cosmetic surgery is intended to enhance physical appearance in healthy clients who have "normal" physical traits.  Cosmetic neurology raises a host of ethical issues, but the first issues it raises are scientific and medical -- namely:

  1. Do these techniques work?
  2. Do they do any harm?
  3. What are the cost-benefit ratios attached to them?

Let's look at some cases.

1.  "Study Drugs"

Actually, a number of "stimulant" drugs are now prescribed for ADHD, including methylphenidates such as Ritalin, Focalin, and Concerta, and amphetamines such as Adderall and Vyvanse. All have the "paradoxical" effect of focusing the concentration of patients with ADHD. But maybe these "paradoxical" effects are not paradoxical at all; perhaps these and other drugs improve focus and concentration in anyone.

Other drugs sometimes used as cognitive enhancers include:

  • Modafinil (trade name Provigil), used in the treatment of narcolepsy and excessive sleepiness.
  • Donepezil (trade name Aricept), used in the treatment of Alzheimer's disease.

In fact, these alleged "brain boosters" have been increasingly used by high-school and college students who do not have ADHD as "study drugs" -- that is, as pharmaceutical aids for studying and test-taking. Students have been known to fake the symptoms of ADHD in order to get a prescription, and there is apparently a vigorous black market in the drugs.  A 2007 survey by the Centers for Disease Control and Prevention found that 2.7 million children and adolescents were taking prescribed medication for ADHD.  Some of that medication, perhaps as much as 20%, is "diverted" to others for non-prescribed use.  In addition, some students persuade their family physicians to prescribe stimulants to help them get through academic exercises such as the SAT.

It should be noted, first of all, that these drugs are controlled substances (in Schedule II, right up there with cocaine and morphine): possession and use without a prescription is illegal, and even giving them away can lead to felony charges.

It should also be understood that, like all psychoactive drugs, these substances carry a danger of addiction.  Habitual overuse of stimulants can lead to episodes of psychosis, and even suicide.  So stimulants should never be taken unless actually prescribed by a physician, and the medication regimen should be continually monitored.  For a cautionary tale, see "Drowned in a Stream of Prescriptions" by Alan Schwarz (New York Times, 02/03/2013), which recounts the story of Richard Fee, the pre-med president of his college class, who faked ADHD to get a prescription for Adderall, and hanged himself when his prescription ran out.

These are powerful drugs.  And this is your brain, and you only get one of it.

From a scientific point of view, it has to be said that, until recently, claims about the effectiveness of Ritalin and similar drugs as "study drugs" for normal individuals were largely anecdotal. A formal review of this literature by Elizabeth Smith and Martha Farah (Psychological Bulletin, 2011) found little evidence that stimulants improve the cognitive performance of normal, healthy individuals.  There may be some positive effects, but they seem to be small and variable, and do not necessarily translate into improved academic performance.

From a medical standpoint, however, it should be understood that, like all psychoactive drugs, these substances have negative side effects. Chief among these is that they are highly addictive. It is very easy for students to become dependent on these drugs, and for their prolonged use to lead to the addictive cycle of tolerance and withdrawal.

So, the word from here is: as tempting as it may sometimes be, don't do it; don't even start. Just study hard and do your best!

For a discussion of the scientific and ethical issues involved, see:
  • "Turbocharging the Brain" by Gary Stix,Scientific American, October 2009.
  • ADHD Nation: Children, Doctors, Big Pharma, and the Making of an American Epidemic (2017) by Alan Schwarz, based on a series he wrote for the New York Times in 2012.  Schwarz is clear that ADHD is a legitimate diagnosis, and he acknowledges that stimulant medications can be an effective treatment for the disorder.  But he also thinks that the syndrome is vastly over-diagnosed, and that Ritalin and similar drugs are vastly over-prescribed, leading to a serious epidemic of drug abuse.

2. "Brain Training"


"Smart drugs" may not be the only way to enhance cognitive performance (if, in fact, that's what they do).  Lately, some entrepreneurial psychologists have begun to promote "brain training" programs, ostensibly based on cognitive neuroscience.  These are, essentially, adult video games, aimed especially at baby-boomers (like yours truly), and intended to stave off the cognitive decline that comes naturally with age -- not to mention Alzheimer's disease.  Also to promote things like "neuroplasticity", "fluid intelligence", and "working memory"

These products go by a number of brand names, most of which are some variant on "Brain Fitness".  According to a report by Sharp Brains, an industry group, more than $1 billion was spent on brain-fitness programs of various sorts, mostly software, in 2012 alone -- and the industry is expected to reach $6 billion by 2020.  I don't intend to promote or criticize any particular product, but I do want to offer some cautions about the whole enterprise of "brain training".

  • One leading product is BrainHQ, sold by Posit Science, co-founded by Michael Merzenich, a neuroscientist at UC San Francisco.
  • Another is Lumosity, offered as a subscription service, with new "brain games" every month.
  • There are lots of others, some with UC or Stanford connections.

The rationale for these programs is simple enough.  We know from the literature on brain plasticity that mental exercise can stimulate the growth of neural connections, if not neurogenesis itself.  And these exercises do activate brain regions, especially in the prefrontal cortex, that are known to be involved in working memory, attention, and other executive functions.  Therefore, it only makes sense that these games would result in improvements in brain function.

But, at least as of 2013, none of the commercially available programs has received anything like the kind of approval that new drugs must receive from the Food and Drug Administration.  Most of the research demonstrating their effectiveness is proprietary, and has not been subjected to peer review.  And while the published studies may report statistically significant increases in brain activity, or gains in performance on cognitive tasks, they don't necessarily show that these statistically significant improvements lead to clinically significant improvements outside the laboratory, in the ordinary course of everyday living.  Nor, for the most part, have the published studies shown that using the (fairly expensive) games produces improvements in performance over and above any other kind of mental exercise, such as doing a crossword puzzle, watching Jeopardy! and Wheel of Fortune -- or, for that matter, engaging in physical exercise for 30 minutes a day, five days a week.  

The rationale for "brain games" appears to be be based on the idea that the brain is a muscle, which is strengthened by use and weakened by disuse (a principle you'll remember from Thorndike's Law of Exercise, translated into neuroscientific terms).  And it's rue that practice on a task will make you better at that task -- and with enough practice (like the "10 Thousand Hour Rule"), even automatize the underlying processes such that task performance occurs automatically, and consumes few or no cognitive resources -- thus freeing up those resources to be devoted to some other task.  But the analogy is inexact. We know from the Doctrine of Modularity that the brain isn't like a muscle: if anything, it's like a whole collection of muscles, each corresponding to a module.  The human body contains about 650 different skeletal muscles, after all (maybe more, depending on how you count, and we're not counting involuntary muscles, such as the ones in the heart).  There's no reason to think that exercising the triceps brachii of the upper arm has any effect on the triceps surae of the leg.  You've got to exercise them both, if you want to increase your total body strength. 

Another, related point: practice with any game will make you get better at that game.  That's just learning.  But there is little evidence that getting better at any game actually improves cognitive abilities or overall brain function.

So, as with any other treatment -- drug, psychotherapy, or brain-fitness software -- caveat emptor -- which, freely translated from the Latin, means check out the efficacy studies.

And, in fact, as of early 2017, there was precious little evidence that these "brain-training" programs actually accomplish their goals.  Playing a particular game may enhance a person's ability to perform the tasks required by that particular game (no surprise there!), but there is little evidence that the skills acquired in one game generalize to other cognitive skills -- let alone prevent Alzheimer's Disease. 

  • In October 2014, a large group of cognitive psychologists and cognitive neuroscientists, organized by the Stanford Center for Longevity and the Max Planck Institute for Human Development, issued "A Consensus on the Brain-Training Industry from the Scientific Community", arguing that claims that "brain games" can "reduce or reverse cognitive decline" are unsupported by scientific evidence from controlled experiments.
  • In December of that same year, however, another group of some 100 cognitive psychologists and cognitive neuroscientists countered in an open letter that "a substantial and growing body of evidence shows that certain cognitive-training regimens can significantly improve cognitive function, including in ways that generalize to everyday life".
  • On January 5, 2016, the Federal Trade Commission fined the creators and marketers of Lumosity, a popular "brain-training" program, $2 million for deceptive advertising (actually, the initial fine was for $50 million, but after the FTC discovered that Lumosity had no hope of ever paying it, the fine was reduced to something more manageable).  The FTC determined that Lumos Labs, the makers of Lumosity, had systematically deceived its 1,000,000+ subscribers, who paid $14.95/month -- you do the math; it's spelled out in the sketch after this list -- by making "unfounded claims that Lumosity games can help users perform better at work and in school, and reduce or delay cognitive impairment associated with age and other serious health conditions".
  • A comprehensive review by Daniel Simons and his colleagues concluded that there was "extensive evidence that brain-training interventions improve performance on the trained tasks, less evidence that such interventions improve performance on closely related tasks, and little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance."  They also found "that many of the published intervention studies had major shortcomings in design or analysis that preclude definitive conclusions about the efficacy of training, and that none of the cited studies conformed to... the best practices... essential to drawing clear conclusions about the benefits of brain training for everyday activities" ("Do 'Brain-Training' Programs Work?", Psychological Science in the Public Interest, 2016).
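
For what it's worth, here is the "you do the math" arithmetic spelled out, in a minimal Python sketch, using the figures quoted above.  The annual figure is a simple extrapolation, for illustration only, since subscriber numbers surely fluctuated.

    # Rough revenue arithmetic implied by the FTC's figures on Lumosity.

    subscribers     = 1_000_000   # "1,000,000+ subscribers" (from the text)
    price_per_month = 14.95       # monthly subscription price (from the text)

    monthly_revenue = subscribers * price_per_month   # about $15 million per month
    annual_revenue  = monthly_revenue * 12            # about $179 million per year

    print(f"monthly revenue: ${monthly_revenue:,.0f}")
    print(f"annual revenue:  ${annual_revenue:,.0f}")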

So what are the problems with brain-training?  Chiefly, purveyors of brain-training games, like Lumosity, have not conducted proper research, modeled after clinical trials, to demonstrate that their products actually are effective. 

  • True, customers who practiced their games, like one in which they were asked to remember the location of colored squares on a grid, got better at those games, and other games very much like them (what is known as "near transfer"); and they might have felt, subjectively, that their cognitive skills had improved. 
  • But there is no evidence that these gains generalize beyond the games themselves ("far transfer") -- much less that they stave off dementia or even normal age-related cognitive decline.
  • Moreover, most of the available studies have employed inadequate control groups, such as "no treatment" controls, who do nothing while the experimental group is playing the games.  The best study would have a placebo control, like a drug trial.

Cognitive training has potential, so it would be too bad if actions like the FTC's stopped research and development dead in its tracks.  But at the same time, the claims of the brain-training industry far exceed the available evidence.  Worse, it seems like the purveyors of brain-training games haven't even tried very hard to document the effects of their products.  They have, however, been very eager to take consumers' money.

For an interesting first-hand account of some of these programs, see "Mentally Fit: Workouts at the Brain Gym" by Patricia Marx, New Yorker, 07/29/2013.

See also "Can You Train Your Brain" by Simon Makin, Scientific American, July/August 2015.

3. TDCS

Another fad has been Transcranial Direct Current Stimulation (TDCS), in which a low electrical current is applied to certain brain areas -- e.g., the left prefrontal cortex -- by means of electrodes attached to a power source (often, nothing more than a 9-volt battery!) and an interface which controls the amount of current and the duration of stimulation.  It's sort of like EEG in reverse, though not nearly as strong as Transcranial Magnetic Stimulation.  You can buy these things on the open market (e.g., the Foc.us), or make them at home, and a recent review suggests that TDCS might actually improve performance on some cognitive tasks (Chrysikou et al., 2013).  But as of 2013 the device had not been approved by the Food and Drug Administration, and because the technique is so new there are no data on any harmful effects of long-term use -- and there is every reason to think that the long-term effects could be very harmful indeed.

4. "Baby Apps"

While we're on the subject, there has been a proliferation of baby apps for smartphones and tablets, advertised as enhancing infants' and children's cognitive skills. The progenitors of these were the "Baby Einstein" videos marketed in the days before smartphones and tablets.  These days, "baby apps" are promoted as ways to teach infants motor and spatial skills, numbers, and language.

Again, at first blush, the idea seems plausible.  But in fact, there is very little evidence that these "baby apps" do what is claimed for them.  The published research is paltry and ambiguous, and even some of the software developers admit that they don't have the research to back up their claims.  These apps may entertain and distract children, but there is no good evidence that they actually teach them anything.  Mostly, it seems, they make parents feel better about using screens to distract their kids during dinner or cocktail hour.

And there are some reasons to think that they may actually be harmful, by consuming time and effort that might actually be devoted productively to creative play, person-to-person interactions with parents and older siblings, and the like.  The American Academy of Pediatrics recommends that children not be exposed to "screen media" of any kind, including television, for at least the first 30 months (2-1/2 years).

In 2013, the Campaign for a Commercial-Free Childhood, an advocacy organization, filed a complaint with the Federal Trade Commission intended to put the brakes on the "baby genius industry".  Either they provide evidence to back up their claims, or they stop making the claims. The group was successful in an earlier effort against the "Baby Einstein" videos, forcing the Walt Disney Company to offer refunds to consumers who bought them for their educational value.

So, to repeat: as with any other treatment -- drug, psychotherapy, brain-fitness or educational software, or indeed any innovation of this sort -- caveat emptor -- which, freely translated from the Latin, means What's the evidence?

The pharmaceutical revolution in mental health has been a genuine revolution, providing a degree of symptom relief that simply was not available previously, and enabling the de-institutionalization of large numbers of patients from mental hospitals, which offered mostly custodial care, back to their homes and into the community; it also improved the lives of a large number of individuals who were being treated on an outpatient basis by psychiatrists and other mental health professionals.

The effectiveness of these drugs also has theoretical implications for our understanding of the biological mechanisms involved in certain forms of mental illness.

  • The effectiveness of the phenothiazines supports the dopamine hypothesis of schizophrenia.
  • The effectiveness of the tricyclics supports the monoamine hypothesis of depression, just as the effectiveness of the SSRIs supports the revised serotonin hypothesis.
  • The effectiveness of the benzodiazepines supports the GABA hypothesis of anxiety.

In theory, these drugs attack the biological bases of the symptoms of these disorders -- the biological bases of their underlying psychological deficits. But note that the reasoning here is somewhat circular: How do we know that dopamine is implicated in schizophrenia? Because the phenothiazines, which act on dopamine circuits in the brain, are effective in the treatment of schizophrenia. Why do the phenothiazines work so well? Because excess dopamine is the cause of schizophrenia. What we really need is independent evidence that these neurotransmitters are specifically involved in these forms of mental illness.

Getting Past the Blood-Brain Barrier


Psychotropic drugs are typically delivered through pills or injection, but either way they have to get to the brain to be effective.  Technically, that's a problem, because the brain has a built-in defense against foreign substances, known as the blood-brain barrier (BBB).  The blood vessels in the brain are structured somewhat differently from those found elsewhere in the body.  They are lined with a tightly packed sheath of endothelial cells, whose fatty (lipid) membranes prevent pretty much anything except oxygen and glucose from getting through to the neurons.  Various pathogens and ions, as well as some proteins that can harm neural cells, are filtered out.  And so are many potentially useful psychotropic drugs -- which is one reason why psychotropic drugs are so hard to develop.  For a thorough discussion of this problem, see "A Barrier to Progress: Getting Drugs to the Brain" by Rachel Brazil, Pharmaceutical Journal, 05/15/2017.

Fortunately, there are ways around -- or, more accurately, through -- the BBB.

  • Some molecules pass through because they're soluble in water: they can cross the BBB on their own, slipping between cells through a process known as paracellular transport.  This is very rare, because the junctions between the endothelial cells are very tight.
  • Others are small and soluble in lipids.  Since the cell membranes of the BBB are made of lipids, these molecules pass through the endothelial cells themselves, by diffusion.  Such drugs include some antidepressants and many other psychotropic drugs, including many addictive drugs such as alcohol, nicotine, morphine, and heroin.
  • Other substances, such as biologics, which involve larger molecules, can be carried through the endothelial cells by specialized protein transporters, such as amino-acid chains, in what is sometimes known as the "Trojan Horse approach".  There are a number of variants on this "hitchhiker" strategy, but the pharmacochemistry involved is much more than is required at the level of an introductory psychology course.  This approach is especially popular for delivering chemotherapy in the treatment of brain cancer.
  • Finally, there are drugs that disrupt the BBB itself, briefly opening a window that allows the molecule to pass through the endothelial cells.

This last technique offers clues to the pathology of Alzheimer's disease (AD) and other dementing illnesses.  The whole point of the BBB is to prevent toxins and other harmful substances from reaching the brain.  If you disrupt the BBB, then you open up the opportunity for such substances to get in.  There is already evidence that the BBB deteriorates with age.  This can allow proteins such as albumin, which ordinarily cannot cross the BBB, to get through, initiating a sequence of events that can result in the brain damage seen in dementia -- like the plaques that are characteristic of AD.  So, one approach to treating, or perhaps preventing, AD is to find a way to "plug" the leaks in the BBB, stopping the cascade to dementia before it can start or gain momentum.



For more on the role that a "leaky" BBB might play in AD and other forms of dementia, see "Holes in the Shield" by D. Kaufer and A. Friedman, Scientific American, 05/2021.



Setting aside the positive effects of these drugs, it's also the case that pharmacotherapy has some problems:

  • In many instances, the drugs just don't work. A large portion of patients for whom antidepressant drugs are prescribed do not respond to them fully, so their depression never really lifts. In other cases, the patients will relapse into depression even though they remain on a drug that seemed to work at first.
  • It is not easy to predict which of the available drugs will work for a particular patient. The result is that many patients must go through an extended period of "adjusting their medication" until their therapist stumbles on the one that works for them. If depression were simply a matter of making more serotonin (or norepinephrine) available at the synapse, we would expect all of the antidepressant drugs to have pretty much the same effect.
  • Many medications have undesirable side effects, such as the "Parkinsonism" (mimicking the symptoms of Parkinson's disease) and tardive dyskinesia that frequently accompany the use of antipsychotic medications.
  • There is a certain lack of specificity in the actions of drugs -- as when Paxil, an SSRI, is used to treat anxiety as well as depression.
  • Along these lines, there is good evidence that the effects of many psychiatric drugs, including the SSRIs, are heavily loaded with placebo effects. According to one estimate, fully 75% of the effect of antidepressant medication is attributable to placebo, rather than any specific pharmacological action.
    • Link to a segment of 60 Minutes, interviewing Prof. Irving Kirsch and others on placebo effects in the drug treatment of depression, broadcast on CBS 02/19/2012.
  • But that doesn't mean that antidepressants are just placebos.  There is good evidence that they are especially effective for patients with severe or very severe episodes of depression -- less so in patients with mild to moderate depression (Fournier et al., 2010).
    • Patients with mild to moderate depression may be helped just as much, if not more, by psychotherapy as opposed to medication.
  • Psychiatric drugs provide symptom relief, but they do not provide a cure, in the sense of reversing underlying biological and psychological deficits. Patients on medication will often relapse if their medication is discontinued.
  • The prospects of relapse are likely to be diminished if medication is combined with psychotherapy (see below).
  • On the other hand, there is some evidence that medication can actually interfere with psychotherapy (Forand et al., 2013).

 

The Great Antidepressant Debate




Antidepressants are probably the most commonly employed psychiatric medications, but the last 10 years or so has seen a vigorous debate over their actual effectiveness.  The bottom line is that they are effective, but there are also some caveats.

What might be called "the Great Antidepressant Debate" began with a provocative article by Kirsch and Sapirstein (1998) entitled "Listening to Prozac But Hearing Placebo".  K&S conducted a meta-analysis of 19 published studies in which an antidepressant medication (such as Prozac) was compared to placebo.  They found that the median effect size (D) for the drug groups was 1.55 standard deviations (SDs), compared to D = 0.37 for the untreated controls, which sounds pretty good -- until you learn that D for the placebo groups was 1.16 SDs, approaching the value obtained for the patients who received the active drugs.  On the basis of these results, K&S concluded that spontaneous remission (natural history) accounted for about 24% of the improvement seen in the drug groups (.37/1.55), while placebo accounted for approximately 51% ((1.16 - .37)/1.55); their conclusion, then, was that the active ingredients in antidepressant medications accounted for only about 25% of the outcome in the drug treatment of depression (100 - 24 - 51).
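
To make the arithmetic explicit, here is a minimal sketch, in Python, of the K&S decomposition.  The three effect sizes are the values reported above; the variable names and the script itself are purely illustrative.

    # Kirsch & Sapirstein's (1998) decomposition of the response to antidepressants.
    # Effect sizes (in SD units) are those reported in the text above.

    d_drug      = 1.55   # median pre-post effect size in the drug groups
    d_placebo   = 1.16   # median pre-post effect size in the placebo groups
    d_untreated = 0.37   # median pre-post effect size in untreated controls

    natural_history = d_untreated / d_drug                # ~24% of the drug response
    placebo_effect  = (d_placebo - d_untreated) / d_drug  # ~51%
    active_drug     = (d_drug - d_placebo) / d_drug       # ~25%

    print(f"natural history: {natural_history:.0%}")
    print(f"placebo:         {placebo_effect:.0%}")
    print(f"active drug:     {active_drug:.0%}")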

As you might imagine, this claim drove psychiatrists, who generally favor drugs over psychotherapy, and even some psychologists (who sometimes envy psychiatrists' ability to prescribe drugs), quite crazy.  Kirsch's title was, in fact, a deliberate play on Listening to Prozac (1993), a paean to the drug by psychiatrist Peter D. Kramer.

But wait, there's more.  K&S relied on published clinical trials, but it is well known that both scientists and scientific journals favor positive over negative results.  If you've got a new drug, nobody cares if it doesn't work.  Pharmaceutical companies, which finance the vast bulk of research on psychiatric medications, conduct a lot of studies during the drug-development process, but all the Food and Drug Administration cares about is that there are at least two independent studies showing that the new drug is significantly better than placebo.  So there might be lots of unpublished studies in which the drug didn't prove to be better than placebo -- what is known in the trade as the file-drawer problem.

Fortunately, while the drug companies don't have to submit negative results to the FDA, they do have to register all trials in order to secure eventual FDA approval.  Turner et al. (New England Journal of Medicine, 2008) obtained the results of these trials, involving 12 different antidepressant drugs, through the Freedom of Information Act (FOIA).  Of the 74 registered studies:
  • Half of these studies (37/74) were published, and all of them reported positive outcomes; only 1 positive study was unpublished.
  • Of the remaining 36 studies, 22 were not published at all, and another 11 were published in such a way as to obscure the negative findings; only 3 negative studies actually saw publication.  So there is a clear positivity bias in publication.
  • Based on the 51 published studies, the overall effect size for the antidepressant drugs was .37 (Hedges' g), which is interpreted as a small-to-moderate effect.
  • But when all 74 studies were pooled together, the overall effect size for the drugs fell to g = .15, which doesn't even qualify as a small effect.  (The sketch following this list illustrates the basic pooling arithmetic.)
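
Here is a minimal sketch, in Python, of the inverse-variance ("fixed-effect") pooling that underlies a combined effect size such as Hedges' g.  The study-level numbers are invented purely to illustrate the mechanics; only the logic, not the data, reflects the actual Turner et al. meta-analysis.

    # Fixed-effect meta-analysis: weight each study's effect size by the
    # inverse of its variance, then take the weighted average.

    def pool(effects, variances):
        weights = [1.0 / v for v in variances]
        return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

    # Hypothetical published trials (mostly positive results)...
    published_g   = [0.45, 0.40, 0.35, 0.30]
    published_var = [0.02] * 4

    # ...plus hypothetical unpublished trials (near-null results).
    unpublished_g   = [0.05, 0.00, -0.05, 0.10]
    unpublished_var = [0.02] * 4

    print(pool(published_g, published_var))          # published only: looks moderate
    print(pool(published_g + unpublished_g,
               published_var + unpublished_var))     # all trials: shrinks toward zero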


In addition to revealing a tendency toward selective reporting on the part of researchers (and the drug companies who supported them), the Turner study suggests that, all things considered, antidepressant medications might be less effective than would be apparent from the published studies alone.  Now, there are a lot of reasons that studies don't get published, and some of the negative studies might have been poorly designed or executed.  But still, there's a bias in the publication system that potentially exaggerates the effectiveness of these drugs (and probably other drugs as well -- and not just psychiatric ones!).

At the same time, Kirsch et al. (PLoS Medicine, 2008) examined many of these studies, both published and unpublished (thanks again, FOIA!), from a different perspective.  The FDA standards require only that the new drug be significantly better than placebo; but, as we'll discuss again later, there is a big difference between a statistically significant difference and a clinically significant one.  With a large enough N, even very small differences become statistically significant, and these changes may be too small to result in any real change in the patient's status.  In the studies reviewed by Kirsch et al., the average difference between the drug and placebo groups on the chief outcome measure, improvement on the Hamilton Rating Scale of Depression, was less than 2 points.  This is statistically significant, given the more than 5,000 patients involved.  But that figure was below the minimum of 3 points established by the National Institute for Clinical Excellence, a British group, as a clinically significant difference.  The drug-placebo difference was greatest for those patients with more severe levels of depression.
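
To see why a difference this small can still be "statistically significant", here is a minimal sketch in Python.  The ~2-point drug-placebo difference and the 3-point NICE threshold come from the text; the group sizes and the standard deviation of Hamilton scores are assumptions chosen only for illustration.

    import math

    n_per_group = 2500   # assumed: roughly 5,000 patients split across two arms
    sd          = 8.0    # assumed standard deviation of Hamilton scores
    diff        = 2.0    # drug-placebo difference on the Hamilton scale (from the text)

    # Standard error of the difference between two independent group means,
    # and the corresponding z statistic.
    se = sd * math.sqrt(2.0 / n_per_group)
    z  = diff / se

    print(f"z = {z:.1f}  (p far below .05)")
    print(f"but a {diff:.0f}-point change is still below the 3-point clinical threshold")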

Reviewing the 2008 Turner et al. and Kirsch et al. studies, Ioannidis (2008) concluded that "the use of many small randomized trials with clinically non-relevant  outcomes, improper interpretation of statistical significance, manipulated study design, biased selection of study populations, short follow-up, and selective and distorted reporting of results has built and nourished a seemingly evidence-based myth on antidepressant effectiveness", and suggested that "higher evidence standards, with very large long-term trials and careful prospective meta-analyses of individual-level data may reach closer to the truth and clinically useful evidence".

Kirsch presented additional analyses in a book, The Emperor's New Drugs: Exploding the Antidepressant Myth (2011), that was highly critical of the use of antidepressants.  In response, Kramer published a new book, Ordinarily Well: The Case for Antidepressants (2016), which concedes that antidepressant medication works best for the most severely depressed patients, but argues that a combination of medication and psychotherapy is good for everyone.

In the latest salvo in this ongoing dispute, Cipriani et al. (2018) performed a meta-analysis of an expanded set of more than 500 published and unpublished clinical trials of antidepressant medication, involving more than 100,000 patients receiving 21 different first- and second-generation antidepressant medications (or placebo).  All the drugs proved to be more effective than placebo, though head-to-head comparisons showed that some drugs were, on average, more effective than others.  In the drug-placebo comparisons, the average effect size was characterized as "modest".

The point of all this is not to diminish the value of antidepressants: a modest effect is not nothing.   The meta-analyses are convincing, but Kramer's response is probably correct: when people are depressed, the combination of drugs and psychotherapy can be very helpful.  The point is that even powerful psychoactive drugs have substantial placebo components, and this is likely to be true for the major and minor psychedelics as well.  As Kirsch et al. (2002) point out, "Placebo alcohol produces effects that are not observed when alcohol is administered surreptitiously...". 


 

Perhaps the most important consequence of the pharmaceutical revolution in mental health has been to give practitioners, and therapists, a means for managing chronic mental illnesses. In this way, the pharmacological treatment of major mental illness is analogous to the use of insulin to treat diabetes. Patients with schizophrenia will still have schizophrenia, and patients with depression will still have depression, but with medication they can better deal with their illnesses, and lead more productive lives. That's a nontrivial benefit, but genuine cures for mental illness are going to have to wait for further pharmaceutical advances -- or, perhaps, another approach entirely.

Clinical Trials


Pharmaceutical companies can't just market any old drug for any old purpose. In the United States, the Food and Drug Administration (FDA) must approve specific formulations for specific purposes, with specific warnings about contraindications and side-effects, based on research known as a clinical trial.  Clinical trials take place across a sequence of several phases:

  • Preclinical Research, including the discovery of a new pharmaceutical compound, development of large-scale manufacturing capability, and testing on nonhuman animals.
  • Phase I Clinical Trials, small-scale human studies on healthy non-patients, intended to establish the maximum dose of the drug that is safe to administer.
  • Phase II Clinical Trials, larger-scale pilot studies of actual patients (e.g., individuals carrying a diagnosis of major depressive disorder), intended to establish "end points" (dependent variables, e.g., reduction in scores on a depression scale), provide estimates of effective doses and duration of treatment, and determine what kinds of patients (e.g., mild, moderate, or severe depression) should receive the drug.
  • Phase III Clinical Trials, full-fledged efficacy studies involving very large numbers of patients, intended to establish both clinical efficacy and reveal important side-effects of treatment.
  • Phase IV  Clinical Trials constitute continued research, after a new drug has been approved, licensed, and marketed, to refine the standards for its use.

Typically, the FDA requires two separate trials showing a statistically significant difference between treatment with the investigational new drug (IND) and placebo.  Sometimes the IND is compared to the standard of care -- which is often an older drug.  Only about 20% of IND applications result in actual approval by the FDA.

In meeting the FDA standards, it does not matter how many failed trials have been conducted -- that is, trials which fail to yield a significant difference between the IND and placebo.  Given the standard of p < .05 -- meaning that a statistically significant difference between two conditions would be expected to occur less than 5 times out of 100 just by chance -- a pharmaceutical company could, in principle, conduct as many as 50 or 100 studies and report only the ones that happened to yield significant results by chance alone!  In fact, something like this seems to have happened in the case of some antidepressants: some pharmaceutical companies reported only the results of positive trials -- a phenomenon known as the file drawer problem (for more details, see The Emperor's New Drugs by Irving Kirsch).  In other cases, the company will conduct a meta-analysis, combining many small trials, only some of which have yielded a statistically significant result, into a much larger study that shows an overall positive effect.  For this reason, the FDA now requires drug companies to register all their clinical trials in advance, so that it can be determined how many negative studies were left "in the file drawer".
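
Here is a minimal sketch, in Python, of that file-drawer arithmetic: if a drug were actually no better than placebo, each trial would still have a 5% chance of yielding a "significant" result at p < .05, so running enough trials all but guarantees at least one publishable false positive.

    # Probability of at least one spuriously "significant" trial among k
    # independent trials of a drug that is truly no better than placebo.

    alpha = 0.05
    for k in (1, 10, 20, 50, 100):
        p_at_least_one = 1 - (1 - alpha) ** k
        print(f"{k:>3} null trials -> P(at least one p < .05) = {p_at_least_one:.2f}")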

After the Phase III clinical trial has been successfully concluded, and the drug has been approved for use, research still isn't done.  FDA review and approval is then followed by post-marketing surveillance intended to identify additional adverse reactions, other side-effects, and contraindications that did not show up during the formal clinical trials. This information may result in withdrawal of FDA approval, or a requirement to provide additional warnings or other information on the drug's label.

A similar sequence of clinical trials is required for the approval of non-pharmaceutical treatments, such as heart pacemakers, or new surgical techniques. 

No research at all is required for the marketing of herbal remedies, such as St. John's Wort.  This is because these substances occur naturally and are classified as "foods" rather than "drugs".  For this reason, herbal remedies are often marketed with no, or very poor, research to back up the marketer's claims.

Once a pharmaceutical firm has developed a drug, the clinical trials have to be run by physicians with patients who might benefit from them.  Unfortunately, there are often strong financial ties between these physicians and the drug companies -- ties that are so strong that they might bias the physicians' evaluations of the drugs.  To help allay this problem, in 2013 Congress passed the "Physician Payments Sunshine Act", which requires pharmaceutical firms and medical-device manufacturers to disclose most of their financial relationships with the physicians who perform research for them -- most, that is, but not all.

For more details, see "Understanding Clinical Trials" by Justin A. Zivin (Scientific American, April 2000).  See also "Is Drug Research Trustworthy?" by Charles Seife (Scientific American, 12/2012), which discusses in detail the problem of financial ties between pharmaceutical companies and the medical researchers who actually perform clinical trials.


The Life History of a Psychotropic Drug

The process of drug approval can take years and be very expensive.  Consider the example of Cymbalta, a popular SNRI (as described by Sarah Amandolare in "Life of a Drug", Scientific American Mind, September-October 2013).

  • In the early 1950s, researchers discovered that iproniazid, a drug used in the treatment of tuberculosis, also had positive effects on patients' mood.
  • Pharmacologists also discovered that iproniazid had specific effects on serotonin, norepinephrine, and dopamine.
  • In 1974, researchers working at Eli Lilly developed fluoxetine, the first SSRI, which was later marketed as Prozac.
  • In the 1980s, researchers started trying to develop SNRIs as well -- including duloxetine, which inhibited the reuptake of norepinephrine as well as serotonin, later to be marketed as Cymbalta.
  • Eli Lilly applied for a patent for duloxetine (not yet known by its trade name of Cymbalta) in 1986, and received it in 1990.
  • Only at that point could Eli Lilly begin Phase I clinical trials.  These were concluded in 2000.
  • In 2001, Eli Lilly applied for FDA approval of Cymbalta.  This application was unsuccessful, requiring the company to engage in additional Phase I and Phase II trials.
  • In 2003, the FDA again denied approval of the drug, citing (among other problems) liver disease as a potential side effect.
  • Eli Lilly completed additional Phase III clinical trials.
  • Finally, in 2004, the FDA approved Cymbalta for the treatment of depression, enabling the company to market the drug for that purpose.
  • Post-marketing surveillance continues, as well as new clinical trials aimed at other disorders.  But...
  • Eli Lilly's patent on duloxetine, originally received in 1990, will expire in 2013, enabling other manufacturers to offer generic versions of the drug.  At that point, Cymbalta will no longer be particularly profitable, and the company may be forced to lay off employees who have been working on this particular drug -- or assign them to new duties developing and studying new INDs.

 

Psychotropic Medication: A Guide for the Prospective Consumer


Prof. F. Scott Kraly, author of Brain Science and Psychological Disorders: New Perspectives on Psychotherapeutic Treatment (2006), offers the following questions for those who are considering a prescription for psychotropic medication:

  • When You Are the Patient:
    • Given my diagnosis, is psychotropic medication necessary, or would counseling or psychotherapy be as, or more, effective?
    • Is there published scientific evidence that supports the use of this medication for my diagnosis?  If not, what is the justification for going off-label?
    • What percentage of patients using this medication are likely to benefit?
    • If this medication does not improve my symptoms, or if I find the side effects intolerable, what is the alternative plan for my treatment?
    • What are the most likely side effects?
    • When can I expect to stop using the medication?  When that day comes, what will I be advised to do to avoid a relapse?
    • What can I read to better understand my situation?
    • And, ask yourself these questions:
      • Now that I've been advised on exactly how to use the drug, will I be committed to follow those instructions faithfully?  (If not, why am I being a bad patient?)
      • Does the drug produce a side effect that I might find so intolerable that I would quit using it, or ask my doctor to prescribe a different medication?
  • When your child is the patient
    • Given the diagnosis, is it absolutely necessary and in the best interests of our child to expose his/her brain to a drug?  Might behavior therapy or psychotherapy be a reasonable alternative?
    • Can the duration of time our child uses medication be shortened if we support the drug therapy with behavior therapy or psychotherapy?
    • Is there published scientific evidence from clinical trials in children that supports the use of this medication for this diagnosis?  If not, what is the justification for the off-label prescription?
    • What potential drug-induced side effects should we be vigilant about detecting?
    • What questions should we ask our child regarding his or her feelings about the drug's effectiveness or side effects?
  • Keep in mind the factors and principles of pharmacology that can determine the effectiveness of drug therapy:
    • No drug has only one effect; side effects are inevitable.
    • Compromise on benefits and risks is a realistic goal.
    • Psychotropic medication is often best used together with psychotherapy.
    • The main effects and side effects of a drug depend upon the dosage.
    • Age, sex, genetics, drug history, and ethnicity can affect effectiveness.
    • A drug can have enduring effects upon the brain.
    • A drug can alter the development of a young, maturing brain.
    • The FDA cannot ensure that a drug will be effective and safe for every individual.
    • Herbal remedies and dietary supplements may not be effective or safe.
    • Off-label usage of a drug is not based upon scientific evidence.
    • Avoid polypharmacy, if possible, because some drug interactions can be potent, unpredictable, and harmful.

But Just Because He Takes Zoloft...

...doesn't mean that he's depressed.  Zoloft is a powerful SSRI antidepressant; but, like all drugs, it can be prescribed for purposes other than the treatment of depression. For example, some antidepressants are also effective in the treatment of migraine headaches. So you can't diagnose "backwards", inferring from a drug that someone takes what mental illness he might have.  He might not have any.  So, the next time you're rummaging around in someone's medicine cabinet, don't jump to hasty conclusions!

The FDA approves drugs for certain conditions, based on the results of controlled clinical trials, and these indications are listed on the drug's label.  But for a variety of reasons, physicians can also prescribe drugs "off-label".

  • There may be good evidence that a drug is effective for a certain condition, even though the required clinical trials have not yet been completed. Clinical trials can cost a lot of money, and sometimes pharmaceutical companies simply don't want to make the investment.
  • There may be no FDA-approved drug for a particular treatment.
  • No approved drug is effective in a particular instance.
  • A drug may be prescribed for children, even though it has not been approved by the FDA for use in young patients.

For more information, see "The Unadvertised Uses of Drugs", Scientific American Mind, May-June 2013.


Psychotherapy

Pharmacotherapy is a relatively new approach to the treatment of mental illness. Historically, the active treatment of mental illness was limited to various forms of psychotherapy, a "talking cure" in which a therapist engaged in activities that were intended to change the contents of the patient's mind: to alter patients' beliefs, feelings, desires -- and thus their behavior. Although there were important precursors, the birth of psychotherapy is commonly given as 1893, when Sigmund Freud and Joseph Breuer began publishing the articles that were eventually collected as their Studies in Hysteria.

  • Pharmacotherapy attempts to alter mental functions indirectly, by altering the chemistry of the brain.
  • Psychotherapy attempts to alter the mind directly, through various sorts of learning experiences.

There are literally hundreds of different psychotherapies, but they can all be classified under three major headings:

  • Psychodynamic, Insight-Oriented Psychotherapy, such as Freudian psychoanalysis. In this technique, the therapist helps the patient gain insight into unconscious conflicts that presumably lie at the root of his or her symptoms. For Freud, these conflicts involved primitive, unconscious sexual and aggressive urges, which gave rise to anxiety, which the patient reduced by engaging in repression and other psychological defenses, which in turn caused symptoms to occur. Psychoanalysis was intended to help the patient become aware of these conflicts, and acknowledge his primitive urges, so that the defenses would no longer be needed and the symptoms would disappear.
    • "Neo-Freudian" psychoanalysis retained the emphasis on unconscious conflict, but de-emphasized biological drives having to do with sex and aggression, and focused instead on conflicts the patient encountered in the "real world". Classic psychoanalysis was a 5-times-per-week affair; what is known as psychoanalytic psychotherapy is less intense, but follows much the same rationale as the classic form.
  • Behavior Therapy began in the 1950s as a behavioristic reaction to the "mentalism" of psychoanalysis. Rather than resolving the unconscious conflicts that supposedly underlay the patient's symptoms, behavior therapists like Joseph Wolpe sought to modify the symptoms themselves, directly, by means of techniques derived from learning theory. From their point of view, symptoms were not caused by disease; rather, the symptoms were the disease. In some cases, such as phobias and obsessive-compulsive behaviors, the assumption was that the symptoms were learned behaviors that could be unlearned; even if the symptoms were not acquired through learning, however, it was assumed that they could be modified by learning (some forms of behavior therapy were called behavior modification).
      • Cognitive Therapy: Later, in the aftermath of the "cognitive revolution" in psychology, which supplanted behaviorism, behavior therapy was itself supplanted by a cognitive therapy which attempted to alter the patient's behaviors (whether overt or covert) by changing the patient's cognitions; early proponents of cognitive therapy were Aaron (Tim) Beck, known for his cognitive theory of depression, and Albert Ellis, who practiced what he called rational-emotive psychotherapy. In 2006, Beck received the prestigious Lasker Award for clinical research -- the first ever given to a psychiatrist for research on treatment. The chairman of the award jury noted that cognitive therapy "is one of the most important advances -- if not the most important advance -- in the treatment of mental diseases in the last 50 years" (New York Times, 09/17/06).
      • Cognitive-Behavioral Therapy: Even with the new "mentalism" of cognitive psychology, the goal of cognitive therapy was to change the patient's behavior, so the hybrid term cognitive-behavioral therapy (CBT) became popular. Whereas psychodynamic therapy focuses on the patient's past, especially his childhood, CBT focuses on the "here and now" of the patient's life.
    • Humanistic Psychotherapy emerged as a reaction to both psychoanalysis and behavior therapy, both of which were perceived as much too directive: in the one, the therapist kept the patient focused on unconscious conflicts; in the other, the therapist set the agenda for behavior change. Carl Rogers introduced a client-centered therapy in which the patient set the therapeutic agenda, and the therapist helped create an environment of unconditional positive regard in which the patient could achieve self-actualization (a term introduced by another humanistic psychologist, Abraham Maslow). Rogers' language is very revealing here: "patients" are passive recipients of the action of "agents"; but "clients" hire people, like lawyers, to work for them.


Psychodynamic Psychotherapy

Contemporary psychodynamic psychotherapy has its origins in classical Freudian psychoanalysis. But just as "neo-Freudian" theories of personality were de-biologized and de-sexualized, so has contemporary psychodynamic psychotherapy.

Jonathan Shedler (2010) has summarized the main principles of contemporary psychodynamic psychotherapy as follows:

  • Focus on the experience, expression, and discussion of emotion (in contrast to the focus of cognitive-behavioral therapy on thoughts and behaviors).
  • Exploration of the patient's attempts to avoid, resist, and defend against distressing thoughts and feelings.
  • Identification of recurring themes and patterns in the patient's relationships.
  • Discussion of past experiences, especially the ostensibly formative experiences of childhood.
  • Focus on interpersonal relations and attachments.
  • Focus on the therapeutic relationship between patient and therapist, which may reflect repetitive themes in the patient's relationships with others outside of therapy.
  • Exploration of fantasy life, including dreams, with the patient generally allowed to give free expression to whatever is on his or her mind, instead of following an agenda set by the therapist.

Shedler points out that modern psychodynamic therapy, while rooted in Freud, has shed most of the trappings of classical psychoanalysis.  There is little attention paid to the Oedipus conflict and castration anxiety, for example.  Patients don't necessarily lie on a couch and free associate for 50 minutes a session, five sessions per week.  Rather, modern psychodynamic psychotherapy is practiced as an open-ended vehicle for self-examination -- which means that it might be useful even for people who don't suffer from depression, anxiety, or some other form of mental illness.

For a first-person account by a writer who spent most of her adult lifetime (and, for that matter, a considerable amount of her childhood) in psychoanalysis, see "My Life in Therapy" by Daphne Merkin (New York Times Magazine, 08/08/2010) -- or, for a longer treatment, Mockingbird Years: A Life In and Out of Therapy (2000), a book-length memoir by Emily Fox Gordon. For some patients in long-term psychodynamic psychotherapy (and here we think of Woody Allen), the treatment -- especially if it includes hospitalization at an institution like Austen Riggs, or McLean Hospital, or the Menninger Clinic -- takes on a kind of Romantic feeling, not unlike the tuberculosis sanitarium depicted in Thomas Mann's novel, The Magic Mountain.


Cognitive-Behavioral Therapy

Especially where maladaptive social learning lies at the heart of the patient's illness, psychotherapy can achieve a full-fledged cure through what are essentially re-education techniques -- that is, by arranging new learning experiences that undo, or modify, the effects of old ones. Even when the patient's symptoms are not acquired through learning, at least in the usual sense, cognitive-behavioral therapy can help patients to acquire new modes of thought, and new behaviors, that will counteract the effects of their illness.


Exposure Therapies

Cognitive-behavioral therapy was first introduced for the treatment of anxiety disorders, especially phobias and obsessive-compulsive disorder. Among the most popular techniques of CBT are:

  • Systematic Desensitization, involving a progressive, graded exposure to a phobic stimulus; and
  • Flooding, also known as Implosion Therapy (there are technical differences between these), in which the patient is immediately immersed in the most frightening situation -- either direct contact with the phobic object, or rehearsal of an obsessive thought and prevention of compulsive behavior.

In both cases, the patient can experience complete alleviation of anxiety, which will never return -- the very definition of a cure -- through the extinction of fear, aversion, or other response, and the acquisition of more appropriate, adaptive behaviors. In the treatment of phobias, systematic desensitization and flooding are equally effective. Flooding is more efficient, but it is also somewhat more dangerous: if done improperly, the patient will be left worse off than when he started, so don't try this at home.


Relaxation Therapies

For the psychophysiological (psychosomatic) disorders, the best treatment is to eliminate the stressor from the patient's environment. However, this is not always possible, in which case the therapy involves modifying the patient's response, including physiological responses mediated by the autonomic nervous system, to the stressor.

  • Relaxation Training seeks to achieve a general reduction in the patient's emotional reaction to the stressor, essentially modulating the general adaptation syndrome (see the lecture supplements on the Biological Bases of Mind and Behavior).
  • Biofeedback permits the patient to gain some voluntary control over the functioning of some element of the autonomic nervous system, such as heart rate or blood pressure. Because ANS activity is not accessible to conscious awareness, the therapist must use special physiological monitoring devices, such as the EKG or EMG, to provide information to the patient about his or her internal physiological state. In biofeedback, the apparatus signals the patient's level of physiological activity, and the patient learns, through a feedback process similar to instrumental conditioning, to control his own physiological state.
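A toy illustration may make the feedback loop concrete. The Python sketch below is purely hypothetical -- the simulated heart-rate "sensor", the target value, and the number of trials are illustrative stand-ins, not part of any clinical protocol:

    # Toy sketch of a biofeedback loop (illustrative only).
    # The "sensor" is simulated; a real apparatus would read an EKG or similar signal.
    import random

    def read_heart_rate():
        """Stand-in for a physiological monitor; returns simulated beats per minute."""
        return random.gauss(80, 8)

    def biofeedback_session(target_bpm=70, trials=10):
        for trial in range(1, trials + 1):
            bpm = read_heart_rate()
            # The apparatus converts an invisible internal state into a perceptible signal,
            # which the patient can use to learn voluntary control.
            feedback = "above target -- relax" if bpm > target_bpm else "at or below target -- good"
            print(f"Trial {trial}: {bpm:.0f} bpm -> {feedback}")

    biofeedback_session()

The essential point is simply that the apparatus closes the loop between a physiological state the patient cannot sense directly and a signal that he or she can perceive and act on.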


Cognitive Restructuring

Other aspects of mental illness, particularly anxiety, depression, and the delusions of schizophrenia and bipolar disorder, appear to reflect maladaptive knowledge or belief acquired through learning.

  • According to Beck's original cognitive theory, depression is caused by a particular set of beliefs, as well as a cognitive style that maintains those beliefs. Similarly, following the learned helplessness model of Seligman, Abramson, and Alloy, depression may be caused by the patient's belief that important, particularly aversive events, are uncontrollable.
  • Similarly, anxiety may be caused by the patient's belief that such events are unpredictable.
  • Delusions may reflect the patient's inappropriate attempts to explain the anomalous experiences associated with schizophrenia and the manic-depressive mood swings of bipolar disorder.

In these cases, the goal of cognitive therapy is to change the patient's underlying cognitive structures, or schemata:

  • Confront the patient with schema-incongruent information that will stimulate schema change;
  • Provide the patient with a more adaptive way of construing him- or herself, others, various social situations, and the past, present, and future;
  • Change the patient's interpretation of the situation, and construal of experience;
  • Alter the patient's expectations of the future.

Beck's cognitive therapy for depression entails altering the patient's depressogenic schemata -- the "depressogenic triad" of negative beliefs concerning oneself, the world, and the future. Cognitive therapy also seeks to alter the patient's tendencies toward arbitrary inference, selective abstraction, overgeneralization, magnification, and minimization, which maintain these schemata once they are established.

Based on the learned helplessness theory of depression, Seligman, Abramson, Alloy, and their colleagues have suggested that depression may be alleviated by changing the individual's "depressogenic" attributional style that leads to feelings of helplessness and hopelessness, so that the patient will make more realistic, adaptive causal attributions about events.

CBT for Insomnia

A good example of how behavioral and cognitive therapy can be combined is Cognitive-Behavioral Therapy for Insomnia (CBT-I).  I go into detail on this treatment because sleep is such an issue for college students -- and for so many other people, too.

CBT-I has five major components:

  • Stimulus Control: Strengthening the association between the bed and sleeping.
    • Go to bed only when you are actually tired.
    • Do nothing in bed except sleep and have sex.
    • Get out of bed at the same time every morning.
    • If you do not fall asleep within 10 minutes of going to bed, get up, move to another room, and do something relaxing until you feel sleepy.
  • Sleep Hygiene: Altering the environment to make it more conducive to sleep.
    • Within 4 to 6 hours of going to bed, limit the intake of substances such as caffeine, nicotine, and alcohol that interfere with sleep.
      • Take a light snack, such as milk or peanut butter, instead.
    • Avoid stimulating activity prior to sleep.
      • Get rid of all distractions: No TV, no computer games; Turn off the cell phone!
      • Read a book, write an e-mail, take a warm bath.
  • Sleep Restriction: Control the time you spend in bed, to maximize "sleep efficiency" and restore "sleep homeostasis" -- that is, the biological need to sleep.
    • Sleep Efficiency (SE) = Total Sleep Time (TST) / Time in Bed (TIB), with the ratio expressed as a percentage (see the worked sketch after this list).
      • Increase TIB if SE > 90%
      • Decrease TIB if SE < 80%
    • Sleep Restriction involves paradoxical intention, a concept first articulated by Viktor Frankl, founder of existential psychiatry, in 1959.  It also played a prominent role in the techniques of an American maverick psychotherapist, Milton Erickson, as detailed by an early disciple, Jay Haley (Strategies of Psychotherapy, 1963; Uncommon Therapy, 1973).
  • Relaxation Training is a set of techniques, imported from systematic desensitization and stress-reduction therapies, intended to promote physical relaxation.
  • Cognitive Therapy includes educating the patient about sleep, changing any dysfunctional attitudes or beliefs that the patient may have about sleep, and controlling the patient's worries about losing sleep. 
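To make the sleep-restriction arithmetic concrete, here is a minimal Python sketch of the sleep-efficiency calculation and the TIB adjustment rule stated above. The 15-minute adjustment step is a hypothetical illustration, not a value prescribed by CBT-I:

    # Sleep efficiency and a simple time-in-bed adjustment rule.
    # Cutoffs (80% / 90%) follow the rule of thumb above; the 15-minute step is hypothetical.

    def sleep_efficiency(total_sleep_time_min, time_in_bed_min):
        """SE = TST / TIB, expressed as a percentage."""
        return 100.0 * total_sleep_time_min / time_in_bed_min

    def adjust_time_in_bed(time_in_bed_min, se_percent, step_min=15):
        if se_percent > 90:
            return time_in_bed_min + step_min   # sleeping efficiently: allow more time in bed
        if se_percent < 80:
            return time_in_bed_min - step_min   # lying awake too long: restrict time in bed
        return time_in_bed_min                  # within range: leave the schedule alone

    se = sleep_efficiency(total_sleep_time_min=360, time_in_bed_min=480)
    print(f"SE = {se:.0f}%, new TIB = {adjust_time_in_bed(480, se)} minutes")

So a patient who sleeps 6 of the 8 hours spent in bed has SE = 75%, and under the rule above would have time in bed restricted -- which, in turn, increases the homeostatic pressure to sleep.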



Social Skills Training

Many behavioral disorders, and many annoying problems in living, result from the individual's inadequate social skills.
Public-speaking anxiety, while not necessarily rising to the level of a full-fledged social phobia, can be extremely debilitating, but can be relieved simply by giving the person practice in public speaking in a controlled, friendly environment. Many people have a great deal of trouble saying "no" to people, including their parents (or children) or spouses. In this case, assertiveness training can help people make, and respond to, demands more adaptively. People's sexual problems are not limited to impotence or frigidity. Often, the difficulties that people have in maintaining sexual arousal, or achieving orgasm (in oneself or one's partner), reflect a simple lack of knowledge about how to make love: sex is a biological function, and in that sense natural, but the pleasure-giving and -receiving aspects of lovemaking don't come naturally -- as almost anyone who remembers his or her first sexual experiences can attest. Many people learn to make love through practice, but many need instruction in the form of sex therapy -- not necessarily from a sexual surrogate, but sometimes just some friendly advice.


Cognitive-Behavioral Therapy and Social Intelligence

Whereas pharmacotherapy attempts to alter the disordered mind by altering the chemistry of the brain, psychotherapy attempts to alter the disordered mind directly through learning experiences.

Much of cognitive-behavioral therapy seeks to alter the patient's declarative knowledge about what to believe and expect in various situations, while social skills training, like relaxation training and biofeedback, seeks to alter the patient's procedural knowledge about what to do in those situations. Taken together, the cognitive-behavioral therapies work by altering the individual's social intelligence -- his or her fund of knowledge about self and others, and repertoire of interpersonal skills -- that he or she uses to navigate in the social world (see the lecture supplement on Personality and Social Interaction).


Outcomes of Psychotherapy

The notion of psychotherapy is fine in principle, and it's made a healthy living for several generations of psychiatrists, clinical psychologists, clinical social workers, and other mental health professionals. But does it really work? Psychotherapy is plagued by what might be called the Woody Allen Bugaboo -- after the characters played by the actor-director, who go through years, decades, of psychoanalysis but never seem to change.

In fact, psychotherapy came to a crisis in the 1950s, when Hans Eysenck, a British psychologist, reviewed the literature and claimed that psychotherapy was ineffective -- that people who received psychotherapy had no better outcomes than those who received no treatment at all. In the face of Eysenck's claims, the evident success of pharmacotherapy, and lingering doubts of the sort expressed by the Woody Allen Bugaboo, therapists were challenged to demonstrate, scientifically, that psychotherapy really helps people with mental illness.

Since Eysenck's original study, a large body of empirical research shows that, contrary to his conclusions, psychotherapy can be an effective treatment for mental illness. For example, a classic study by Smith, Glass, and Miller (1980) employed a technique called meta-analysis (see the lecture supplement on Statistics and Methods) to combine the results of a large number of studies that compared adult patients who received various forms of psychotherapy to control patients who were untreated. Quantifying psychotherapy outcome isn't easy, but it can be done, and it's necessary if we're going to analyze the effects of psychotherapy statistically. Typically, the control patients were not denied treatment, but were merely put on a "waiting list" until a therapist became available. So the question can be reformulated along these lines: given X months or years since their diagnosis, have patients who received psychotherapy improved more than those who did not? The answer is yes: in the Smith et al. analysis, the median patient receiving psychotherapy did better than about 75% of the control patients.

Another finding of the Smith et al. study was that patients who received psychotherapy did better than those who did not, regardless of what form of therapy they received. This finding led some commentators to conclude that all forms of therapy are equivalent: it doesn't matter what the therapist does, so long as the patient sees one.  Saul Rosenzweig (1937), an early leader in clinical psychology, expressed this point of view as the Dodo Bird Verdict, after an episode in Lewis Carroll's Alice's Adventures in Wonderland (1865). Lester Luborsky (1975), a psychoanalyst and prominent psychotherapy researcher at the University of Pennsylvania, popularized the term.

When Alice discovered that she could not get out of the rabbit hole, she was engulfed in a pool of her own tears, which also drenched a number of animals, such as the Mouse. The Dodo Bird suggested that the animals run a Caucus Race to get dry. The animals ran around until they ran out of breath and stopped. When Alice asked who had won the Caucus Race, the Dodo Bird replied that "everybody has won, and all must have prizes".

The Dodo Bird Verdict has been a source of comfort to some psychotherapists who prefer to use those psychodynamic forms of therapy that have come under attack by modern scientific psychology, as well as to those who believe that the therapeutic relationship between therapist and patient is more important than anything the therapist does (e.g., Frank, 1961; Wampold, 1997, 2001). Put another way, the "tie-score effect" is taken as indicating that the various forms of therapy have more in common than appears on the surface (see, for example, The Great Psychotherapy Debate: Models, Methods, and Findings by Bruce E. Wampold, 2001). But the Dodo Bird Verdict is troubling, too, because it suggests that the effects of psychotherapy are nonspecific -- that is, that there is no particular "active ingredient" that makes therapy effective. In other words, it suggests that psychotherapy is simply a kind of placebo. And since psychiatric medications are approved for use precisely because they have been shown to be better than placebo, that suggests that psychotherapy is inferior to pharmacotherapy.

Fortunately, the Smith et al. study contained an analysis that showed that the Dodo Bird Verdict is not quite right: some psychotherapies work better than others. This is already evident in the data presented earlier, which showed that patients who received cognitive-behavioral forms of therapy did even better, compared to controls, than patients who received psychodynamic or humanistic forms of therapy. To examine this issue further, Smith et al. computed measures of effect size (see the lecture supplement on Methods and Statistics) for each of the studies included in their analysis.

The effect size d is a measure of the difference in mean outcomes between two treatments, expressed in standard deviation units. Thus, an effect size of 1.0 means that the average subject in the experimental group scored 1 standard deviation higher than the average subject in the control group. According to Cohen (1977), effect sizes in behavioral and social research can be classified as follows:

  • d = .20: a small effect;
  • d = .50: a moderate effect;
  • d = .80: a large effect.
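As a rough illustration of how an effect size might be computed for a single treatment-control comparison, the Python sketch below uses Cohen's d with a pooled standard deviation; the outcome scores are hypothetical, and the details of Smith et al.'s own computations differed:

    # Cohen's d for one hypothetical treatment-vs-control comparison.
    from statistics import mean, stdev

    def cohens_d(treated, control):
        n1, n2 = len(treated), len(control)
        s1, s2 = stdev(treated), stdev(control)
        pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
        return (mean(treated) - mean(control)) / pooled_sd

    # Hypothetical adjustment scores (higher = better outcome)
    treated = [13, 15, 14, 16, 12, 14, 15, 13]
    control = [12, 13, 11, 14, 12, 13, 15, 12]
    d = cohens_d(treated, control)
    label = "large" if d >= 0.8 else "moderate" if d >= 0.5 else "small" if d >= 0.2 else "negligible"
    print(f"d = {d:.2f} ({label} by Cohen's conventions)")   # prints d = 0.96 (large)

In a meta-analysis, a d of this sort is computed for every study, and the values are then averaged (or their median taken) across studies.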

When Smith et al. calculated the average effect size for different types of therapies, all forms of therapy were shown to have at least moderate-sized effects, consistent with the Dodo Bird Verdict. However, the effect sizes associated with the cognitive and behavioral therapies were much larger than those associated with the psychodynamic and humanistic forms.

In the Smith et al. meta-analysis, the effect of psychotherapy overall, without regard to the form of therapy or the condition being treated, was quantified as an "effect size" of 0.85 -- which is generally considered a "large" effect in medical, psychological, and social-science research.
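One way to read such a figure: if outcomes are roughly normally distributed, an effect size d corresponds to the percentile of the control distribution exceeded by the average treated patient. The sketch below makes that conversion under the normality assumption; for d = 0.85 it yields about 80%, in the same neighborhood as the "better than about 75% of controls" figure cited above:

    # Convert an effect size d into the percentage of controls exceeded by the
    # average treated patient, assuming normally distributed outcomes.
    from statistics import NormalDist

    def percent_of_controls_exceeded(d):
        return 100 * NormalDist().cdf(d)

    for d in (0.20, 0.50, 0.85):
        print(f"d = {d:.2f}: average treated patient exceeds "
              f"{percent_of_controls_exceeded(d):.0f}% of controls")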

A later review by Lipsey and Wilson (1993) -- actually a mega-analysis, or meta-analysis of published meta-analyses (18 in all) -- obtained a median effect size of .75, which is a substantial effect by any standard.

Similar results were obtained in a meta-analysis of psychotherapy for children and adolescents by Weiss and Weisz (1995).

Other studies have also demonstrated the general superiority of cognitive-behavioral therapies:

  • Chambless and Ollendick (2001) reviewed studies of treatment of the anxiety disorders, and also of depression and behavior problems in children and adolescents.
  • Tolin (2010) reviewed studies of the treatment of anxiety and mood disorders.

Based on these studies, and others like them, it appears that all psychotherapies are not created equal: as a rule, cognitive and behavioral therapies are more effective than psychodynamic and humanistic therapies.

Other considerations also suggest that the Dodo Bird Verdict is wrong:

  • For specific illnesses, some forms of therapy are more effective than others: in general, the cognitive and behavioral therapies are more effective than insight-oriented therapies in the treatment of a wide variety of mental illnesses, especially the anxiety disorders.
  • Moreover, the cognitive-behavioral therapies are more efficient than the insight therapies, achieving their results in less time, and therefore with less expense to the patient or insurance companies.
  • And some forms of therapy may even be harmful.  For example, crisis debriefing does not appear to help patients suffering from post-traumatic stress disorder, and may even make their problems worse. 
    • Therapies can be harmful when they are based on the wrong theory of illness -- as, for example, when a patient's symptoms are attributed to "repressed memories" of childhood sexual abuse.

For that reason, cognitive-behavioral treatments are quickly emerging as the "standard of care" in the psychological treatment of mental illness and problems in living.

This does not mean, however, that psychodynamic psychotherapy is not effective. Although classical psychoanalysis, which is what Eysenck studied in the 1950s, does not seem to be more effective than no treatment, better results have been obtained with contemporary forms of psychodynamic therapy.

For example, Shedler (2010) cited a 2006 meta-analysis of psychodynamic therapy that yielded an overall effect size of 0.97.

The real question, though, is not the comparison of overall effect sizes. Questions about the efficacy of psychotherapy are better framed more specifically:

  • What kind of treatment works best for each particular disorder?
  • And when two different treatments are effective, which one is more efficient, and less expensive?
  • What are the particular mechanisms by which a treatment achieves its successful outcomes?

As a rule, psychodynamic therapy appears to be based on the theory that mental illness is rooted in unconscious conflicts -- not necessarily conflicts over sex and aggression like the Oedipus conflict, but unconscious conflicts nonetheless; and that uncovering these conflicts is the key to successful therapy. However, there is very little scientific evidence that unconscious conflict lies at the root of major forms of mental illness, such as schizophrenia, anxiety disorder, or depressive disorder. And there is no scientific evidence that the primary goal of psychodynamic psychotherapy, which is to bring such conflicts into the daylight of consciousness, is the key to successful treatment. Rather, the evidence seems to indicate that the success of psychodynamic psychotherapy, where it achieves such successes, is produced by the same sorts of techniques employed by cognitive-behavioral psychotherapists -- namely, a focus on the here and now, as opposed to the there and then.

There is also the matter of utility. If psychodynamic psychotherapy achieves its positive outcomes by the same mechanisms as cognitive-behavioral therapy, but takes longer and costs more, then cognitive-behavioral therapy is to be preferred on grounds of cost-effectiveness. CBT is not just based on more scientifically valid conceptualizations of mental illness -- it also delivers more bang for the buck.

Here's an example of the kind of comparison I have in mind: a study of treatment for eating disorder (specifically, bulimia, with its characteristic cycle of binge eating and purging), which compared psychoanalytic psychotherapy and cognitive-behavioral therapy (Poulsen et al., Am. J. Psychiat., 2014).  Patients were randomly assigned to one treatment or the other.  The psychoanalytic treatment encouraged patients to talk about threatening, repressed feelings and desires, and how they might be related to eating disorder.  The CBT treatment challenged the patients' beliefs that their self-esteem was determined by their body weight and size, and promoted healthier eating patterns.  The psychoanalytic treatment continued for up to two years of weekly sessions, while the weekly sessions of CBT lasted only 5 months.  Nevertheless, CBT delivered clearly superior outcomes:  at the end of the 5 months of CBT, 42% of patients had ceased binging and purging; the comparable figure for the psychoanalytic treatment, after the same amount of time, was only 6%.  After the full two-year course of psychoanalytic treatment, only 15% of the patients in that group showed remission.  There was some relapse in the CBT group over the next 19 months, but even so, 44% of patients remained in remission -- a numerical increase over the outcome at 5 months.  There was no untreated control group -- which is unfortunate, as spontaneous remission has been known to occur even in the case of eating disorder (Vandereycken, Eating Disorders, 2012) -- but even so, there was a clear advantage for CBT over psychoanalysis.  Not only was CBT more effective, it was also more efficient: it took less time, and it was delivered by therapists who had less formal training than the psychoanalysts.  The study is remarkable in that the senior authors, Stig Poulsen and Susanne Lunn, were themselves psychoanalysts, and had devised the psychoanalytic treatment regime they tested.  Their treatment came out on the short end of the comparison, but they published it anyway.

On those grounds, it seems that CBT still has an edge over psychodynamic psychotherapy. But again, a precise answer to the question will depend on the particular disorder being treated.


Psychotherapy vs. Medication

How does psychotherapy stand up, in comparison to medication?  This question has often been investigated in the context of depression.  In a provocative article entitled "Listening to Prozac But Hearing Placebo", Kirsch & Sapirstein (Prevention & Treatment, 1998) conducted a meta-analysis of 19 published studies in which an antidepressant medication (such as Prozac) was compared to placebo, and another 19 studies comparing psychotherapy with a wait-list or no-treatment control.  Putting the four conditions together, they provided a comprehensive overview of the outcomes of various treatments for depression.

Some depressed patients get better on their own, without any treatment -- a phenomenon called spontaneous remission.  Apparently, over the natural history of an acute episode, depression sometimes goes away all by itself.  Active treatment improves the prospects of a good outcome considerably, but the outcome with psychotherapy alone is about the same as the outcome with drugs alone.  And so is the outcome with placebo medication -- though, frankly, if you think about it, the placebo group is really another form of psychotherapy.  The patients who received placebo received all the attention, social support, and the like that the drug patients received -- only without the active drug (it's findings like this that give rise to the Dodo Bird Verdict).  On the basis of their results, K&S concluded that the active ingredients in antidepressant medications accounted for about 25% of the outcome in the drug treatment of depression; spontaneous remission (natural history) accounted for another 24%; and placebo accounted for approximately 51%.
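The arithmetic behind that decomposition can be illustrated with a toy example. The improvement scores below are hypothetical, chosen only to reproduce the proportions just cited; Kirsch and Sapirstein's actual analysis worked with standardized effect sizes aggregated across the published studies:

    # Toy decomposition of the drug-group response into its components.
    # All numbers are hypothetical, chosen to match the ~25/24/51 split described above.
    drug_group_improvement = 10.0      # patients receiving the antidepressant
    placebo_group_improvement = 7.5    # pill placebo: attention and support, no active drug
    no_treatment_improvement = 2.4     # wait-list: natural history / spontaneous remission

    natural_history = no_treatment_improvement
    placebo_component = placebo_group_improvement - no_treatment_improvement
    active_ingredient = drug_group_improvement - placebo_group_improvement

    for label, value in [("Natural history", natural_history),
                         ("Placebo component", placebo_component),
                         ("Active ingredient", active_ingredient)]:
        print(f"{label}: {100 * value / drug_group_improvement:.0f}% of the drug-group response")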

The point of all this is not to diminish the value of antidepressants: there's that 25%, which is not nothing.   Kirsch's data are convincing, but Kramer's response is probably correct: when people are depressed, the combination of drugs and psychotherapy can be very helpful.  The point is that even powerful psychoactive drugs have substantial placebo components, and this is likely to be true for the major and minor tranquilizers as well.  As Kirsch et al. (2002) point out, "Placebo alcohol produces effects that are not observed when alcohol is administered surreptitiously...".

If pharmacotherapy is effective, and psychotherapy is effective, what about the combination of the two? Here the data are somewhat in flux, but it appears that the combination of drugs and psychotherapy is rather promising. In a study by Keller et al. (2000), depressed patients who received cognitive-behavioral therapy did about as well as those who received the antidepressant Serzone (nefazodone), but those who received both did especially well.
Here's another example.  Obsessive-compulsive disorder, like many other anxiety disorders, is commonly treated with SSRIs -- even though there are perfectly good cognitive-behavioral therapies for this problem.  But SSRIs aren't always effective, or at least they aren't always as effective as we'd like them to be.  In a randomized clinical trial, Simpson et al. (JAMA 2013) studied a group of patients with OCD who did not respond positively to a course of treatment with SSRIs.  One subgroup got an additional medication, the antipsychotic risperidone (remember, "antipsychotic" drugs are really little more than major tranquilizers); another got a placebo pill; and a third group got a course of cognitive-behavioral therapy emphasizing exposure and response prevention (i.e., a variant on flooding).  The outcome was measured with the Yale-Brown Obsessive-Compulsive Scale, a standardized instrument used in the diagnosis of OCD.  The risperidone didn't help much: only about 23% of the patients got better (13% were essentially "cured"), compared to 15% (5% "cured") for placebo.  But the cognitive-behavioral therapy helped a lot: 80% of the patients in this treatment group got significantly better, and 43% were essentially "cured".  One wonders what would have happened if these patients had simply gotten CBT, without any drugs at all -- or whether patients who responded well to the SSRI would have done just as well, or even better, had they received CBT as well.

The fact of the matter is that, for people with mild or moderate cases of depression, anxiety, and many other mental illnesses, psychotherapy -- especially some form of cognitive-behavior therapy -- works about as well as medication.  Medication really boosts treatment outcome only in severely ill patients.  For people with mild to moderate illnesses, psychotherapy alone reduces side effects (obviously, because patients don't get medication in the first place), as well as the risk of relapse while treatment proceeds and the patient is improving, and of recurrence of a new acute episode after full remission.

There is also evidence that psychotherapy is associated with lower relapse rates than medication. In a study of depression, patients were administered either an antidepressant SSRI or cognitive-behavioral therapy. Patients in both groups responded equally well to treatment. But when the patients were followed up some time after their medication was discontinued, or therapy terminated, the patients in the medication group were much more likely to have relapsed -- that is, to have experienced another episode of depression. In other words, psychotherapy came closer than drugs to providing a cure for depression.

And a meta-analysis by Swift et al. (Psychotherapy, 2017) found that patients who receive psychotherapy are less likely to drop out of treatment, or to refuse treatment in the first place.  These investigators analyzed studies that compared psychotherapy and pharmacotherapy, alone and in combination.  It is standard practice in these sorts of studies to report the number of patients who refused their assigned treatment, as well as the number who dropped out before completing it.  Patients assigned to pharmacotherapy alone were more likely to refuse treatment than those assigned to psychotherapy alone, or to a combination.  Similarly, patients were more likely to prematurely terminate treatment if assigned to pharmacotherapy alone.  People seeking help for mental and behavioral problems appear to prefer the personal contact that comes with psychotherapy; if they're going to get medication, they want the dialog and social support that comes with a live therapist, too.  So, even if medication is the primary vehicle for treatment, supplying adjunctive psychotherapy may help patients stay on course.

Combining psychotherapy with medication often seems like the optimal approach to treatment.  Presently available psychiatric drugs offer a fair measure of symptom relief, but not a cure. Psychotherapy gives the patient the knowledge and skills to overcome his illness, or at least to cope with it more effectively. In the Keller study, patients who got Serzone experienced a temporary boost in their mood, and that's not a trivial outcome. But patients who got active psychotherapy learned to deal with their depression on an ongoing basis, and to adjust to life after the depression went away. More generally, patients can suffer relapses when their drugs are withdrawn; but in a sense, new knowledge and skills, acquired through the experience of cognitive-behavioral therapy, never go away. They remain permanently available to the patient, as part of his repertoire of social intelligence.

It's probably for this reason that patients who receive psychotherapy, regardless of whether or not they also receive drugs, are less likely to relapse before full remission, or to experience a recurrence of another acute episode.  On the other hand, there is some evidence that medication can actually interfere with psychotherapy (Forand et al., 2013).  For mild to moderate cases, psychotherapy does about as well as drugs -- with fewer side effects and less risk of relapse or recurrence.  Drugs are most effective for the most severe cases -- and even then, psychotherapy can help, by giving the patient new knowledge and skills.

And of course, effective psychotherapy avoids the unpleasant and harmful side-effects of medications.  A good example is insomnia, a fairly common sleep disorder, and one of the prominent symptoms of depression.  Patients with insomnia are often treated with prescription medications such as Ambien and Lunesta, as well as sedative drugs like the benzodiazepines, and over-the-counter "sleep aids" such as ZzzQuil.  But all of these drugs carry a risk of dependence, if not addiction; they can make the patient feel groggy even during the daytime; and they can disrupt the REM (dreaming) stage of sleep.  By contrast, an effective psychological treatment, Cognitive-Behavioral Therapy for Insomnia (CBT-I), achieves the same success rate as medications, without the side effects (Morin et al., 1994, 2006).

  • Interestingly, a course of CBT-I is also effective in lifting depression, roughly doubling the effectiveness of conventional treatments.  What's interesting about this is that we usually think of insomnia as a symptom of depression -- that is, as something that is caused by an underlying mental illness. But it's also possible that the relation between depression and sleep is bidirectional -- that is, that insomnia may exacerbate the individual's depression, and treating the insomnia will also cause the depression to lift, at least a little.  At least the patient won't be depressed about not getting enough sleep!
    • Or, it may be that conventional treatments don't do anything for the disordered-sleep component.
    • Or, it may be that gaining control over sleep may improve the patient's sense that he can get control over the other symptoms, as well.

The combination of drugs and psychotherapy is probably no less important in schizophrenia than it is for depression.  We know that a wide range of social stressors can be implicated in the onset of schizophrenia, and it makes sense that eliminating or at least modulating these aspects of the environment would promote successful treatment.  Equally important, people with schizophrenia may have to learn how to live with their disability, and these cognitive and social skills must be learned through active rehabilitation programs.  (See "A Social Salve for Schizophrenia" by Matthew M. Kurtz, Scientific American Mind, March-April 2013).

  • Recent reviews show that these programs can, in fact, be successful (Horan et al., 2011; Kurtz & Richardson, 2012). 
  • Identifying people at risk for schizophrenia, and helping them acquire these skills, may even prevent the occurrence of the illness in the first place.

In the words of the ancient adage (sometimes attributed to Lao Tze in the Tao Te Ching -- though I can't find it in my copy):

Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.

Giving a patient drugs is like giving a man a fish: when they're gone, they're gone. But the learning that comes through active cognitive-behavioral psychotherapy stays forever, as a permanent resource for the patient.


The Social Context of Psychopathology

Traditional forms of psychotherapy, including psychodynamic and cognitive-behavioral forms of therapy, tend to treat the individual patient in isolation -- there's the therapist, and there's the patient, and that's about it. This tradition follows from the medical model, in which the patient has some illness that the doctor treats with an antibiotic, or surgery, or whatever. But psychologically speaking, we've already argued that, as the poet John Donne put it, "no man is an island". Psychology explains the individual's behavior in terms of his or her individual mental states, and that goes for abnormal behavior as well -- and for the psychological deficits and maladaptive social learning that account for it. But individuals live their lives in the context of other people, and it would be foolish to assume that the social context has no influence on individual mental patients and the course of mental illness.

In fact, we've already seen how certain anxiety disorders, such as phobias and obsessive-compulsive disorder, can be acquired through social learning, as well as through direct experience. And we've also seen how expressed emotion -- how other family members view, and behave toward, the patient -- can influence the prospects for recovery and relapse in patients with schizophrenia.

The role of social factors in psychopathology can be seen in the various "epidemics" of mental illness:

  • Multiple Personality Disorder in the 1980s
  • Attention-Deficit/Hyperactivity Disorder in the early 21st century.


Group and Family Therapy

Some therapists have gone so far as to assert that it is not enough to treat the individual patient, precisely because, in some psychologically real sense, it's not just the patient who is mentally ill. And if it's not just the patient who is mentally ill, then it's a mistake to treat the individual patient as if he or she were the only person that mattered. If the real problem is with the patient's relationship with other people, then it's the relationship that has to be treated. At the very least, other people have to be enrolled somehow in the treatment process.

This is obviously the case with many problems in living, such as marital difficulties, that are often treated by psychotherapists. If a marriage is in trouble, you can't hope to fix it by working on only one partner. Both partners have to be in the therapy, together.

Many patients are treated in groups in addition to, or instead of, individual therapy sessions.  Group therapy has obvious economic advantages, and psychological advantages as well.  Patients can learn that other people have problems like theirs, and learn how others deal with them.  Individual patients can find social support and encouragement for their own efforts to get better, and models for improvement.  Some patients' problems are best observed in a group context.  And the group provides a "safe place" where patients can try out new ideas, feelings, and behaviors.

Alcoholics Anonymous is an informal setting, created and maintained by recovering alcoholics themselves, that provides many of the benefits of group therapy.

Among the earliest and most vigorous proponents of this idea was Salvador Minuchin (Minuchin et al., 1974), a psychiatrist who specialized in the treatment of eating disorders in adolescents. Minuchin argued that mental illness should not be construed as "contained within the individual"; nor should the mental patient be viewed as a "passive recipient of noxious environmental [or biological] influences". Rather, Minuchin argued that family (and other social) interactions may be responsible for certain forms of mental illness; and that these family interactions are truly interactional in nature, in that the patient plays a role in shaping the environment to which he or she, in turn, responds.

Minuchin's open systems model of psychopathology and psychotherapy "[broadens] the focus from the sick child to the sick child within the family" and "redefines the nature of pathological disorder and the scope of therapeutic change" (Minuchin et al., 1974). The open systems model postulates that:

  • The way the family is organized triggers the development and maintenance of the child's symptoms.
  • The child's symptoms themselves help maintain that very same family organization.

Minuchin et al. conclude: "Therefore, therapy must be directed toward changing the family processes that trigger and maintain the child's... symptoms and toward changing the use of these symptoms within the family." Obviously, that can't be done with the child alone; and it can't be done by working on one family member at a time. The whole family has to be enrolled in the treatment of the child, and the whole family has to change.

Minuchin et al. go on to describe a pathological family organization in terms of several family transactional characteristics:

  • Enmeshment, or a high degree of responsiveness and involvement with the child, so that any change in one family member will "reverberate throughout the family system".
  • Over-protectiveness, in which family members' concerns for each others' welfare go far beyond the bounds of any individual's illness.
  • Rigidity, such that family members are committed to maintaining the status quo in the family, to such an extent that, quite literally, they prevent the child from getting better -- precisely because any change in the child would disrupt the family's organization.
  • Lack of conflict resolution, which prevents families from acknowledging and negotiating various problems.

Minuchin et al. devised a form of family therapy that was expressly designed to identify, challenge, and break down these four characteristics, and thus create a family environment in which the child is permitted, and encouraged, to get well. For example, it wasn't just the individual child who was hospitalized for treatment; the whole family had to stop what it was doing and mobilize for treatment. Minuchin's system was further developed by a group of therapists at the Maudsley Hospital in London (yes, that's the old "Bedlam", now much reformed and one of the world's leading centers for research on psychopathology and psychotherapy), and is now known as the "Maudsley model" for family treatment of adolescent eating disorders.

In their initial 1974 study, Minuchin et al. reported about 86% success in treating 48 children with "superlabile" diabetes, "intractable" asthma, or anorexia nervosa, but they did not have a comparable group that received traditional individual therapy. The comparison of family vs. individual therapy has been carried out mostly by the Maudsley group, who largely confirmed Minuchin's findings.

  • For example, Russell et al. (1987) reported that, after 1 year of treatment, family-based treatment (FBT) for eating disorder (both anorexia and bulimia) produced significantly better outcomes than individual treatment (IT), especially for patients whose illness had begun in childhood or adolescence.
  • Interestingly, a 5-year followup of these patients by Eisler et al. (1997) found no difference between the two groups, indicating that the patients who had received individual treatment eventually got better as well. But that doesn't mean that FBT and IT were equally effective. Most patients with eating disorder will, eventually, "grow out" of their disorder in the natural course of time. But in the meantime their lives, and their families' lives, are hell, and there are significant risks to the patients' health while the illness runs its course. So anything that gives the natural course of the illness a boost is a good thing -- and FBT is a much better booster than IT.


Chronic Disease Management and Rehabilitation

What about instances of mental illness where a cure is impossible? There are lots of such disorders, including:

  • the organic brain syndromes (brain damage is, for all intents and purposes, irreversible);
  • intellectual disability (most syndromes can't be reversed or prevented);
  • schizophrenia (the tranquilizers are tranquilizers, not cures);
  • autism (no cure yet, though some behavioral treatments can effectively convey important social skills);
  • relapsing mood or anxiety disorders (relapse is likely when the drugs are discontinued).

After the acute phase of mental illness, after efforts at treatment have gone as far as they can, the patient may move into a chronic phase. Such circumstances call for rehabilitation programs to help patients and others cope with their chronic disability, get out of the institution and back to their families and communities, and make an optimal social adjustment despite their illness.


Mental Hospitals

Before the 19th century, there was little by way of active treatment or rehabilitation. Psychology wasn't yet a science, nor was psychiatry a branch of medicine -- and never mind that medicine wasn't all that scientific, either!

For the most part, mental patients, when they became too much trouble, were simply warehoused -- often, kept in prisons along with criminals. Sometimes, they were housed in special "insane asylums", separate from convicts. A good example of an 18th-century insane asylum is the Royal Bethlehem Hospital in London, founded in 1337 as a religious charity, then taken under royal auspices in the 16th century. But even these hospitals could offer little more than custodial care, and conditions in most of them progressively deteriorated. Which is how the Royal Bethlehem Hospital got the nickname "Bedlam". In fact, the middle and upper classes used to pay a fee to visit the hospital and watch the antics of the patients as a form of Sunday-afternoon entertainment.

"Bedlam" and The Rake's Progress

The word bedlam has come to mean "a state of uproar or confusion", but the word has its origins as the popular name of the Royal Hospital of St. Mary of Bethlehem, in London. Bedlam, originally founded as a charity hospital in the 14th century, had by the 18th century become a notorious madhouse, and was depicted in William Hogarth's A Rake's Progress, a series of eight paintings (also published as engravings, with the images reversed) produced in 1733-1735. In the sequence, Tom Rakewell inherits a fortune from his father, abandons his fiancée, Sarah Young, who is pregnant with his child, and moves to London to live the high life. He leads a life of increasing dissolution, falls into debt, and marries a rich but ugly spinster, but he gambles away her fortune as well. He is incarcerated in debtors' prison, but goes mad (perhaps suffering from dementia caused by syphilis) and is consigned to Bedlam.

The Bedlam scene has been described as an authentic representation of the interior of the hospital as it existed in the 18th century -- with individual cells, men's and women's quarters separated by an iron grate, hospital staff, potentially suicidal patients (like Tom) chained to the walls, sightseers who paid a tuppence to view the patients' antics, and Sarah kneeling beside Tom.

See "A Rake's Progress: 'Bedlam'" by James C. Harris,Archives of General Psychiatry, 2003). Hogarth's series of engravings was the inspiration for The Rake's Progress, an opera by Igor Stravinsky, with libretto by W.H. Auden and Chester Kallman (1951).

Even though Esquirol distinguished between the mentally ill (in his terms, the insane), the intellectually disabled (in his terms, the mentally deficient), and mere criminals, they were still all housed together (except for insane members of upper-class families, who were more likely to be consigned to the attic, as in Charlotte Bronte's Jane Eyre). Things began to change when Philippe Pinel (1745-1826) became the superintendent of the Bicetre, an asylum in Paris, and later the Salpetriere, a large mental hospital (which you can still see when you visit the city). Along with his mentor, Jean-Baptiste Pussin (1745-1811), Pinel pioneered the "moral" treatment of those who, although mentally ill, still deserved respect as "citizens". Pinel is also credited with freeing the mentally ill from their chains -- although it was actually Pussin who did this (and he replaced the chains with straitjackets!).

The moral, humane treatment of the mentally ill quickly spread to England and America. Bethlem Hospital was reformed. In 1792, Benjamin Rush (1745-1813), a physician who had been one of the signers of the Declaration of Independence, founded a division of the Pennsylvania Hospital (itself the first hospital in America, after New York's Bellevue) devoted to the moral treatment of the insane.  Rush's treatise, Medical Inquiries and Observations upon Diseases of the Mind (1812), was the first textbook of psychiatry published in America.  Another book, On the Construction, Organization, and General Arrangements of Hospitals for the Insane (1854), by Thomas Kirkbride, the first superintendent and physician-in-chief of the Institute of the Pennsylvania Hospital, remained influential even into the 20th century. (I worked at the Institute as a graduate student at the University of Pennsylvania.)

For more on Benjamin Rush, see "Rush's Remedies" by Susan Frith (Pennsylvania Gazette, 07-08/2012), and "A New Founding Mother", a sketch of Rush's wife, Julia, by Stephen Fried (Smithsonian, 2018).  See also Fried's biography of Rush, Rush: Revolution, Madness, and the Visionary Doctor Who Became a Founding Father (2018).

There was also an extensive system of private mental hospitals, catering mostly to the wealthy. An early example was the Sidis Institute at Maplewood Farms, a large estate in Portsmouth, New Hampshire. Established by Boris Sidis, a friend of William James and leader of the "Boston School" of psychotherapy (and father of William James Sidis, at the time the youngest person ever to enter Harvard College, at age 11), the Institute offered all the latest treatments, including psychotherapy, in an environment of "beautiful grounds, private parks, rare trees, greenhouses, sun parlors, palatial rooms, luxuriously furnished private baths, private farm products". By 1916, there were more than 20 such institutions in Massachusetts alone.

Still, some of these asylums were awful places, little better than bedlam.  In 1845, the British government set up the independent Lunacy Commission to set and enforce standards for private mental hospitals.  For a history of mental-hospital reform in Victorian England, see Inconvenient People by Sarah Wise (2013).

By the late 19th century, publicly supported mental hospitals were a feature of virtually every state health system. These were often glorious structures, architecturally distinctive, following the precepts laid down by Thomas Kirkbride (see Asylum: Inside the Closed World of State Mental Hospitals by Christopher Payne, 2010).

  • Binghamton State Hospital, near where I grew up in New York State, was a neo-Gothic structure, built in 1858, and situated on a hill overlooking the city like some medieval castle. It's on the National Register of Historic Places.
  • Napa State Hospital, erected in Northern California in 1872, also a neo-Gothic structure, included farming operations designed to make the hospital self-sufficient, and also to provide a kind of occupational therapy for the residents.
  • Oregon State Hospital, in Salem, was the setting for Ken Kesey's novel, One Flew Over the Cuckoo's Nest -- and the location for the film made from the book.  Its classic building has now been re-purposed as a Museum of Mental Health, which tells the story of public, state-funded mental hospitals.

Here are some more classic state mental hospitals.

The Trans-Allegheny Lunatic Asylum, built between 1858 and 1881, in Weston, West Virginia.  Now closed, it still hosts guided tours.  See "Getting Into the Spirit" by John Searles, New York Times 10/13/2013.
The South Carolina State Mental Hospital, in Columbia, was designed in Italian Renaissance Revival style by the same architect who designed the SC State Capitol.  Built beginning in 1857, it replaced the earlier South Carolina Lunatic Asylum (1828), which was the first public mental hospital in the South, and only the third in the nation.  Decommissioned in 1990, it remained an important feature in downtown Columbia, and was slated to be converted into luxury apartments before it was destroyed by fire in September 2020.
Greystone Park Mental Hospital, in New Jersey. Like the Trans-Allegheny Asylum, Greystone was designed as a "Kirkbride building", following the principles of "moral treatment" of the insane promoted by Thomas Story Kirkbride, who trained at, and later ran, the Institute of the Pennsylvania Hospital in Philadelphia.  Woody Guthrie, the American folksinger, was a patient here from 1956 to 1961 (when Guthrie told the staff that he had written 8,000 songs, he was diagnosed as having "grandiose ideas" and as lacking in "judgment" and "insight").  As of 2015, Greystone Park was slated to be demolished, although a group of preservationists were trying to save it as an important historical landmark.  See "Preservationists Fight to Save a Former Mental Asylum in New Jersey" by Dan Hurley, New York Times, 04/03/2015.

But as good and humane as these hospitals were intended to be, they still offered little more than custodial care until the beginning of the 20th century, when advances in psychology and psychiatry began to afford the possibility of active treatment of the mentally ill. In the 20th century, reflecting our progressively increased understanding of mental illness, public and private mental hospitals offered active treatments as well as custodial care: biological treatments like psychosurgery, ECT, and later drug treatments of various kinds, as well as psychotherapy by psychologists and social services by social workers. 

A 1946 expose by Life magazine described many American asylums as "little more than concentration camps".

All that came to a screeching halt with the de-institutionalization movement that began in the 1960s, when the mental hospitals began to be emptied and their residents discharged back to their families and communities. Partly this was a result of the early successes of the pharmaceutical revolution, which made many schizophrenics more manageable, and afforded symptom relief to many patients suffering from depression and anxiety disorders. In addition, most public mental hospitals suffered from overcrowding, and budget difficulties led to a lack of properly trained and supervised staff, and corresponding scandals of the sort exposed by Life magazine.  But there were also other contributing factors:

  • There arose an anti-psychiatry movement which questioned not only the existence of mental hospitals, but the whole idea of the mental-health professions.
    • Thomas Szasz (1920-2012), influenced by a radically libertarian political philosophy, argued in his book, The Myth of Mental Illness (1960), that, aside from actual neurological disorders, most of what we call "mental illnesses" were simply "problems in living", and that psychiatry ought to mind its own business.
    • Theodore Sarbin (1911-2005), a social psychologist (who taught at Berkeley before moving to the then-new UC campus at Santa Cruz), also argued that mental illness was a myth, born of a mistaken metaphor with physical illness. For Sarbin, mental illness was a role imposed on individuals whose behavior deviated from prevailing social norms.
    • R.D. Laing (1927-1989), influenced by the 1960s passion for psychedelic drugs, argued that, far from being symptoms of some kind of medical disorder, what we call "mental illness" was really an episode of transformation not unlike what Native Americans experience in their "vision quests".
        Whichever way these and other writers looked at it, the mentally ill didn't belong confined in mental hospitals -- or even treated as if they were really ill.
  • The burgeoning civil rights movement of the 1950s and 1960s expanded into a movement for the rights of disabled people -- including the mentally disabled. Under a doctrine of "least restrictive conditions", the law came to the view that disabled people had a right to live with the least restrictions possible. For people in wheelchairs, that meant that public accommodations had to build ramps, and widen their doorways. For people with mental illnesses, that meant no more confinement in mental institutions.
    • In 1967, the New York Civil Liberties Union established its project on Civil Liberties and Mental Illness.
    • In 1971, Frank Johnson, a federal district court judge in Alabama, decided (in Wyatt v. Stickney) that patients in a state mental hospital had a constitutional right to treatment, not mere confinement.
    • In 1972, a similar case in New York, concerning the residents of the Willowbrook State School for what was then called the "mentally retarded" (New York ARC v. Rockefeller), required the state to facilitate community placement of intellectually disabled individuals.
    • In 1973, the American Civil Liberties Union followed suit, with its Mental Health Law Project.
    • In 1975, the United States Supreme Court unanimously decided (in O'Connor v. Donaldson, also argued by Ennis) that patients involuntarily confined to mental hospitals were constitutionally entitled to effective treatment.  Agreeing with a lower court decision, the Court declared: "a State cannot constitutionally confine... a non-dangerous individual who is capable of surviving safely in freedom by himself or with the help of willing and responsible family members or friends...".
    • In 1990, the Americans with Disabilities Act (ADA) required that individuals with physical and mental disabilities be allowed to live in the least restrictive setting possible; and, wherever possible, to be integrated with non-disabled individuals.
    • In 1999, the requirement that mentally ill people be able to live in the least restrictive settings possible was affirmed by the United States Supreme Court in Olmstead v. L.C.
    • In 2010, the Affordable Care Act (ACA, aka Obamacare) listed mental health and substance abuse services among 10 "essential health benefits", and mandated parity between physical and mental illnesses.  The ACA also included federal subsidies permitting individuals under 65 years of age to receive community mental health services (individuals over 65 were already covered in this respect under Medicare), and expanded mental-health coverage for the poor under Medicaid.
  • Anne Harrington, in The Mind Fixers (2019; described earlier), argues convincingly that de-institutionalization, while aided by the availability of various psychiatric drugs, was also promoted by Freudian psychoanalysts and other psychotherapists.
  • And then, of course, there was the economy. Hospitalization is expensive, and as states looked for ways to tighten their budgets, public mental hospitals were an easy target.

The federal government encouraged de-institutionalization as well, for both economic and legal reasons. But the funds promised to support community and family treatment of de-institutionalized mental patients never were forthcoming -- with the result that homeless people with mental illness and substance-abuse problems are found on the streets of every major city (if you don't believe me, check out Peoples' Park in Berkeley, or Civic Center Plaza in San Francisco).


For excellent journalistic coverage of the de-institutionalization movement, see:
  • No One Cares About Crazy People: My Family and the Heartbreak of Mental Illness in America by Ron Powers (2018), both of whose sons suffered from schizophrenia -- and who, he argues, would have received better treatment in a mental hospital than in the community.
  • Insane: America's Criminal Treatment of Mental Illness by Alisa Roth (2018), who argues that jails and prisons have become dumping grounds for the mentally ill, who receive little or no treatment for their illnesses while they are incarcerated.

De-institutionalization was not a bad idea. There were, admittedly, lots of people confined to mental hospitals who didn't need to be there. At the same time, it's not at all clear that the homeless mentally ill are better off on the streets of Berkeley and San Francisco than they would be within the walls of Binghamton or Napa State Hospital. Moreover, mental hospitals could have created an environment for the active rehabilitation of the mentally ill, preparing them for lives in the community and with their families.  Such environments are going to look increasingly attractive as an aging population of Baby Boomers increases the number of individuals with Alzheimer's Disease and other forms of dementia, and individuals on the autism spectrum outlive their parents and other family members who have cared for them.

And community mental health treatment isn't always all it's cracked up to be, either.  All too often, substandard care in large state institutions has been replaced by substandard care, barely better than custodial care, in small group homes, run on tight budgets by individuals who, however well-intentioned they may be, often lack proper training and supervision.  And then again, many of these group homes are run on a for-profit basis, further limiting the care and support that the non-institutionalized mentally ill receive.  Many of the homeless mentally ill end up in local jails, for want of any other way of housing them -- a step backward, I suppose, toward the moral model of mental illness.

For this reason, some prominent psychiatrists have begun to advocate for a revival of the state mental hospital system for the care of the chronically mentally ill (see "Improving Long-Term Psychiatric Care: Bring Back the Asylum" by D.A. Sisti, A.G. Segal, and E.J. Emanuel, Journal of the American Medical Association, 2015).  The same legitimate concerns that led to de-institutionalization in the first place will probably oppose any such move (see, for example, "Under Lock & Key: How Long?" by Aryeh Neier & David J. Rothman, New York Review of Books, 12/17/2015).  But if re-institutionalization occurs, we can hope that it avoids the problems that cropped up in the 20th century.  That will take money, for proper facilities and proper staffing; and strict enforcement of the legal principle of "least restrictive treatment".  The mental hospital may have its place in public policy, but nobody should be sent there who can't receive safe, effective treatment in the community.

For better and for worse, de-institutionalization is a fact of life in the United States, and in many other developed countries as well.  But in underdeveloped countries, institutionalization remains the norm, and the institutions themselves are often little more than warehouses for the mentally ill.  In 2013, the World Health Organization introduced a "global mental health plan" to move from centralized mental hospitals to community-based care.  But this takes money -- money that is hard to come by even in a rich country like the United States, and even harder to come by for the underdeveloped countries of the Third World.  (For more information, with a special focus on developments in Guatemala, see "Where Mental Asylums Live On" by John Rudolf, New York Times, 11/05/2013.)


Token Economies

Even in the case of the organic brain syndromes, developmental disorders, and functional psychoses, behavioral treatments such as token economies, employing the principles of instrumental or operant conditioning, can facilitate the acquisition of new, more adaptive behaviors in individuals with actual or presumed brain pathology. In token economies, patients receive tokens, such as poker chips, contingent on their performance of certain actions, such as cleaning their living area or dressing themselves; these tokens may then be exchanged for goods at the hospital canteen or other privileges. In the language of instrumental conditioning, then, the tokens are secondary reinforcers. Perhaps as much as medication, these training procedures help chronic mental patients return to their families, and live in the community, by providing them with the repertoire of behaviors they need to live outside the confines of the mental hospital. (Token economies can also serve as laboratory models of national economies, but for some reason behavioral economists haven't taken much interest in them.)
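
To make the contingency concrete, here is a minimal sketch -- in Python, and not drawn from any actual hospital program -- of the bookkeeping behind a token economy: target behaviors earn tokens (the secondary reinforcers), which are later exchanged for backup reinforcers such as canteen goods or privileges. The behaviors, token values, and "prices" below are entirely hypothetical.

    class TokenEconomy:
        """Bookkeeping for a simple token economy (illustrative only)."""

        def __init__(self, earning_schedule, exchange_menu):
            self.earning_schedule = earning_schedule  # target behavior -> tokens earned
            self.exchange_menu = exchange_menu        # backup reinforcer -> token "price"
            self.balance = 0                          # tokens the patient currently holds

        def record_behavior(self, behavior):
            """Deliver tokens contingent on performance of a target behavior."""
            earned = self.earning_schedule.get(behavior, 0)
            self.balance += earned
            return earned

        def exchange(self, reinforcer):
            """Trade tokens for a backup reinforcer, if the balance covers its price."""
            price = self.exchange_menu[reinforcer]
            if self.balance < price:
                return False
            self.balance -= price
            return True

    # Hypothetical contingencies and prices:
    ward = TokenEconomy(
        earning_schedule={"made bed": 2, "dressed self": 3, "cleaned living area": 5},
        exchange_menu={"canteen snack": 4, "extra recreation time": 6},
    )
    ward.record_behavior("dressed self")   # +3 tokens
    ward.record_behavior("made bed")       # +2 tokens
    print(ward.exchange("canteen snack"))  # True: 5 tokens earned, 4 spent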


Social Interventions for Autism

Autism is one of the most challenging chronic mental illnesses.  Psychotropic medication can help manage some of the secondary symptoms of autism, such as the mood swings, temper tantrums, and irritability that autistic children display, but it has essentially no impact on the child's primary deficits in communication and other aspects of social interaction.  For those, the best hope lies in psychosocial interventions (though injections of oxytocin have been recommended!).

One intervention technique, derived from instrumental conditioning, is applied behavior analysis (ABA), first devised by O. Ivar Lovaas for the treatment of children with autism. This treatment program involves an intensive regime, perhaps 40 hours a week (i.e., a full-time job for both the child and his therapists), beginning very early (as young as 3 years of age). ABA involves an extensive system of rewards and punishments intended to shape and reinforce desirable behaviors, such as making eye contact and sitting quietly, and to eliminate or discourage undesirable ones, like yelling or head-banging. Early applications of ABA sometimes used punishments such as slapping or even electrical shocks (delivered by an instrument that looked disconcertingly like a cattle prod), resulting in considerable criticism. Yet Lovaas contended that his methods delivered results when other strategies did not, and he eventually gave up punishment in favor of a regime based entirely on positive reinforcement (in this respect, following Skinner's dictum that positive reinforcement is much better than punishment for controlling behavior). In 1987, Lovaas reported that his treatment had a 50% success rate (a 1993 follow-up showed that most of the patients had maintained their treatment gains). Subsequent studies have reported success rates closer to 30%, which is still better than most other treatments for autism. ABA is phenomenally expensive -- unless you consider the alternatives -- and it is now a standard treatment for autism.

A number of other interventions have combined techniques from ABA with more cognitive approaches derived from the study of social development and the "theory of mind".  For example, the Early Start Denver Model (ESDM) focuses on getting the child to pay attention (and respond appropriately) to social cues such as facial expressions, gestures, and -- last but not least -- speech. Studies have shown that ESDM is effective, but it isn't easy: one study employed more than 2,000 hours of therapy delivered over the course of two years.

Some variant of ABA is the most popular, and most effective, approach to treatment currently available for autism spectrum disorders (ASD).  There is no medication available, other than sedatives.  And although we generally think of autism as a "chronic" mental illness, from which a patient will never fully recover, in fact the prospects for improvement are not trivial.  As with schizophrenia, it is possible for an autistic individual to make a substantial enough recovery that he or she no longer qualifies for the ASD diagnosis.

  • Fein and her colleagues (2013) identified a group of 34 individuals who had been diagnosed with autism in childhood but who, as adolescents or young adults, no longer qualified for the diagnosis.  Not all of these individuals had received ABA, or any formal behavioral treatment, but most of them, when they did carry the diagnosis, had milder social symptoms than a comparison group with "high-functioning" autism.
  • Lord and her colleagues (2014) followed a group of patients who had been diagnosed with autism at age 2 (about as early as the diagnosis can be made).  By age 22, about 10% of this group no longer qualified for the diagnosis.  Interestingly, this subset had higher IQs than those who did not make an "optimal" recovery.

One conclusion from these studies is that, as with schizophrenia, prognosis is related to premorbid personality as well as to active treatment.  Those patients who do better, achieving an optimal outcome of their illness, may be those who have better resources at the start: better social skills, higher intelligence, and the like.


Cognitive Restructuring -- Again

As discussed earlier, a number of theorists have proposed that paranoid delusions reflect schizophrenic patients' inappropriate attempts to explain the anomalous experiences they have as a result of their illness. Accordingly, another aspect of rehabilitation may be to give delusional patients more appropriate and adaptive explanations for what is happening to them.

When I visited China in 1985, as part of a delegation of mental health specialists, the mental-health authorities we met stressed that the incidence of schizophrenia in the People's Republic of China was no different from that in other countries -- which is what we would expect if schizophrenia were due largely to genetic and biochemical factors. However, we were also informed that the incidence of paranoid schizophrenia was lower in China than in other countries. I have no independent knowledge of whether this is so, but the attributional account of delusions offers an explanation for this fact (assuming it is true): in China, social organization is such that the mentally ill are detected very early in the acute phase of their illness, and are brought to local mental hospitals for treatment. At least at the time of my visit, it was routine for these acute patients to be given antipsychotic medication followed by a series of lectures on the nature of their illness. Perhaps, when schizophrenics in the acute stage of their illness are given correct information about what is happening to them, they have no need or opportunity to develop delusional explanations for themselves.

Rehabilitation is an important aspect of mental-health treatment because, at least for the present, most serious mental illnesses, such as schizophrenia, autism, and the more serious forms of mood disorder, are chronic diseases: psychiatric medications provide only symptomatic relief, and psychotherapy can only go so far. For mental health, as for the rest of medicine, in the face of incurable illness we do not simply throw up our hands and walk away from the patient. Nor, for that matter, do we continue fruitless attempts to cure the patient's illness. Rather, in chronic illness the goal of treatment shifts from cure to rehabilitation -- to help the mentally ill become at least partly self-sufficient, to live on their own or with their families, or in halfway houses and other protected living environments. In mental health, as in the rest of medicine, permanent hospitalization is the last resort.


Stigma, Stereotypes, and the Self-Fulfilling Prophecy

Unfortunately, the prospects of successful treatment of mental illness -- whether success comes in the form of a cure or rehabilitation -- is hampered by the social context in which mental illness occurs.

  • People frequently overemphasize the statistical or social criteria for mental illness, labeling as "sick" behavior that is simply unusual, infrequent, or nonconforming. An extreme example of this tendency is the use of psychiatric diagnosis as a means of social control -- as in the former Soviet Union, where political dissidents were frequently diagnosed as mentally ill, and incarcerated in mental hospitals, simply on the basis of their disagreement with government policies.
  • There is also a tendency to embrace the moral rather than the medical model of mental illness, so that the mentally ill are perceived as socially undesirable -- as bad, immoral, or even evil; and as somehow responsible for their own problems (or, at least, their failure to overcome them). As a result, people impose a "criminal" role on mental patients, instead of the "sick" or "impaired" role, emphasizing restraint and punishment as opposed to cure or rehabilitation. A familiar example of this tendency is the stereotypical association of mental illness with violence and criminal behavior, as well as the tendency to NIMBYism (as in Not In My Back Yard) when it comes to the establishment of halfway houses and other facilities for the community care of the mentally ill.
  • And, finally, there is a tendency to identify people with their illnesses -- particularly their mental illnesses -- by referring to people as schizophrenics or depressives, rather than as people with schizophrenia or people with depression.

More subtle, perhaps, is the frequent occurrence of stereotyping when it comes to the mentally ill, as well as the dominance of first impressions -- psychiatric diagnoses, once made, tend to stick so that the person never sheds the label of "schizophrenic", "manic-depressive", etc., or for that matter the sick or impaired role. Generally, there is a popular refusal to admit the possibility of a good outcome in mental illness -- a successful treatment, whether cure or rehabilitation, that would allow the person to leave the sick role and assume his or her proper role(s) in society.

Erving Goffman, a sociologist, analyzed what he called the stigma of mental illness. For Goffman, a stigma is "an attribute that is deeply discrediting" -- which turns a "whole person" into a "tainted, discounted one". Many physical stigmata (that's the plural of stigma) are immediately apparent. But others, like the stigma of mental illness, are not readily apparent. The mentally ill only become stigmatized when their mental illness becomes known to others. Before their conditions become known, the mentally ill, with their secret stigma, are discreditable; after their condition becomes known, the mentally ill are actually discredited.

Following Goffman, Jones and his colleagues analyzed the stigma of mental illness in terms of a number of different dimensions.

  • Concealability: Some stigmata can be concealed, while others cannot. In an earlier time, being black was stigmatized, but light-skinned African-Americans were often able to "pass" for white, and be accepted by white society. The stigmata of mental illness are not immediately apparent, allowing mentally ill people to "pass" for "normal".
  • The course of the mark refers to the extent to which the stigma can be concealed over time. It might be possible for a mentally ill individual to conceal his condition for a period of time, but the more time he spends with other people, the more likely it is that he will inadvertently reveal his stigma.
  • Disruptiveness has to do with the extent to which the stigma can impair the individual's social interactions.
  • Aesthetics has to do with other people's reactions to the stigma.
  • The origins of the stigma may be congenital (present at birth) or acquired; if the latter, it may have been acquired accidentally or deliberately, perhaps due to some misbehavior on the part of the individual.
  • Peril has to do with the danger, or the apparent danger, that the stigma poses to other people.

Link and Phelan (2001) offered a different perspective on the components of the stigma of mental illness.

  • Social selection is the process by which people identify and label the differences that are important to them. Most people don't care whether someone has a physical illness (unless, perhaps, it's contagious). But Link and Phelan argue that people are very much concerned about mental illness. If someone knows that you've been diagnosed with schizophrenia, that makes a difference to them in a way that knowing that you've been diagnosed with asthma does not.
  • Stereotyping is the process by which the person's label, as mentally ill, is linked to a whole list of undesirable characteristics. If you've got heart disease, it doesn't matter. But if you've got a mental illness, or so people think, maybe you shouldn't be around children, or you're the wrong person for a particular job.
  • Social selection inevitably leads to a separation between "Us" (the stereotyping group) and "Them" (the group that gets stereotyped).
  • The distinction between "Us" and "them" inevitably leads to discrimination against "Them", and Their loss of status. This is the discrediting that Goffman talked about. The discrimination can be direct ("the mentally ill need not apply") or structural (built into the structure of society, as when mental hospitals are set apart from other hospitals). If halfway houses for the mentally ill are located in less-desirable areas of towns and cities (as they often are), that carries the implication that the mentally ill are also undesirable. And when stigmatized individuals incorporate their stigma into their self-concept, they may begin to view themselves as undesirable as well.
  • And then there is the exercise of power: medical patients are supposed to follow their doctors' orders, but they can question them, and challenge them, and seek second opinions, and do what they want. Mental patients, because they are presumed to be incompetent mentally, don't have this same kind of countervailing power.

Considerations of stigma and stereotyping lead us back to our earlier discussion of the various construals of deviance.

  • Applying the statistical and social standards for abnormality, we can label unusual or nonconforming behavior as "sick", leading to an inappropriate diagnosis of mental illness. In the former Soviet Union, people who were opposed to communism were often diagnosed as mentally ill and confined to mental hospitals. This still happens in the People's Republic of China, even today.
  • Applying the moral vs. the medical model, it is easy to view the mentally ill not just as socially undesirable, but also as responsible for their own afflictions. It is this emphasis on the "criminal role" that led, in the 18th century, to the confinement of the mentally ill with paupers and criminals in asylums like Bedlam.

Also subtly keeping the mentally ill "sick" is the self-fulfilling prophecy (discussed in the lecture supplements on Personality and Social Interaction):

  • the diagnosis of mental illness creates expectations concerning the patient;
  • these expectations lead to behavior on the part of others that elicits abnormal rather than normal behavior from the patient;
  • they also lead the patient's normal or ambiguous behavior to be interpreted by others as "abnormal";
  • in either case, the patient's behavior is taken as confirming the diagnosis of mental illness.

Such a process can lead patients to define themselves as incurably ill, diminishing their motivation for therapeutic change. It can also lead those who care for mental patients to substitute custodial care and medication for active treatment that might return patients to their normal role(s) in society. If there are few or no attempts at cure or rehabilitation, we can virtually guarantee that mental patients will never get well.

A good example comes from schizophrenia, which is generally thought to have a poor prognosis.  In fact, people with schizophrenia can show a remarkably good recovery, with treatment, so long as they get the right treatment, in the right environment  -- and, perhaps, have a "better" premorbid personality to begin with.

Yet another approach to the stigma of mental illness is simply to deny it.  Some individuals with autism, for example, deny that autism is a mental illness which should be treated and eliminated.  Instead, they argue that autism exemplifies neurodiversity.  In this view, autistic individuals are not mentally ill -- they just have brains that operate differently than most other people's.  And while they may need help and accommodation in some respects, they assert that autism isn't something to be eliminated.  Rather, they incorporate autism into their personal identities.

  • Temple Grandin is a case in point.  Diagnosed with autism as a child, she grew up to get a PhD and have a substantial career as a specialist in animal behavior, on the faculty at Colorado State University.  She argues that her autism gives her a special ability to understand the interior lives of livestock and other domestic animals.

I referred earlier to Anne Harrington's history of biological psychiatry, The Mind Fixers: Psychiatry's Troubled Search for the Biology of Mental Illness.  Helen Thomson, reviewing Harrington's book in the New York Times ("From Schizophrenia to Megalomania, Three New Books on Mental Illness", 07/07/2019), cites an anecdote about Shekhar Saxena, director of the mental health unit of the World Health Organization:
[A]sked where he'd prefer to be if he were diagnosed with schizophrenia, he said a city in Ethiopia or Sri Lanka, rather than New York or London.  In the developing world, he explained, he had the potential to find a niche for himself as a productive, if eccentric, member of a community, whereas in the modern, Western cities he was far more apt to end up stigmatized and on the margins of society.



The "Pseudopatient" Study

The deleterious effects of the social context on the treatment of the mentally ill are illustrated by a controversial study reported by David Rosenhan, a professor of psychology and law at Stanford University, in 1973. In this study, Rosenhan, as well as some colleagues and students, sought and gained admission to a number of different mental hospitals, public and private, by falsely reporting some symptoms commonly associated with mental illness. All the "pseudopatients" were in their mid-30s (except perhaps for Rosenhan himself), gainfully employed, with no prior history of psychopathology, and during the admission interview all the patients told the truth about themselves except for two matters:

  • they did not reveal that they were academic researchers, or that they worked in an academic organization;
  • they claimed to be experiencing auditory hallucinations in which voices spoke such words as "empty", "hollow", and "thud" (a voice said thud? -- never mind).

Given that hallucinations are serious symptoms, it is not surprising that all of the pseudopatients were admitted to the hospital for observation. What is surprising is what happened next.

Immediately upon their admission, the pseudopatients ceased their simulation, and behaved normally in every way -- except that they did not explicitly identify themselves as simulators. Other patients on the ward frequently noticed the change, but by and large the professional staff did not. In fact, Rosenhan reports that the pseudopatients were largely ignored by the staff.

  • Most were given a diagnosis of schizophrenia, and their behavior was interpreted in terms of the diagnosis. For example, all the pseudopatients kept journals of their experiences. In the chart of one pseudopatient, this was described as "patient engages in writing behavior" -- although if the staff had bothered to read what the patient was writing, his deception would have been uncovered immediately.
  • The pseudopatients were given mostly custodial care, including medications of various sorts averaging 14 capsules per day (they "tongued" these pills, and then disposed of them when they could do so unobserved).
  • The pseudopatients stayed in the hospital an average of 19 days, at which time most were discharged with the diagnosis of "schizophrenia in remission" -- notice how the diagnostic label stuck?

A possibly apocryphal story: In one particular episode, Rosenhan spent a sabbatical quarter as a pseudopatient. At the end of the term, when Rosenhan was obliged to return to his teaching duties, he informed the attending psychiatrist that he was Prof. David Rosenhan of Stanford University. The response was "Oh, sure you are!". Rosenhan's wife had to secure a legal writ of habeas corpus to get him discharged.

And another one. In another episode, one of Rosenhan's colleagues, an international authority on depression, while masquerading as a pseudopatient, noticed that a particular patient was being treated for depression with an inappropriate drug. The colleague, while staying in his pseudopatient role, approached one of the ward psychiatrists to discuss the matter -- identifying himself merely as someone who had read a lot about depression. After the discussion, the psychiatrist made a notation on the pseudopatient's chart that he displayed "grandiosity", and increased his medication!

A cautionary note:  In 2019, Susannah Cahalan published a book, The Great Pretender: The Undercover Mission that Changed Our Understanding of Madness, which was a journalistic inquiry into the pseudopatient study.  Cahalan's previous book, Brain on Fire: My Month of Madness (2012), was an account of her own psychotic episode, resulting from a neurological autoimmune disorder -- a form of encephalitis initially misdiagnosed as a mental illness.  Going through Rosenhan's papers (he died in 2012), Cahalan was largely unable to track down the other pseudopatients, or the raw data for the study he published in Science, and she strongly implied that he may have fabricated his data.

That Rosenhan and his collaborators were admitted to mental hospitals is unremarkable. Anyone who reports auditory hallucinations deserves some further investigation. But after their admission, the treatment of the pseudopatients can only be described as gross negligence. There was little or no investigation of the "presenting complaints" that brought them to the hospital in the first place, and the clinical staff failed to notice that their symptoms had "remitted". There was little or no active attempt at cure or rehabilitation, or apparently any consideration that active treatment was possible.

Although the pseudopatient study is often cited as an example of the negative effects of the medical model in psychiatry, it would be more accurate to say that the problems encountered by the pseudopatients occurred precisely because the clinicians failed to adhere to the medical model. If the psychiatrists and others had acted in accordance with the medical model, they would have discovered much sooner that the patients' symptoms had disappeared; they would have been more observant of their behavior; and they would not have been so quick to dispense medication to patients who did not need it.

Nellie Bly, Nellie Bly...

Rosenhan and his colleagues were perhaps inspired by Nellie Bly (the pen-name of Elizabeth Jane Cochrane Seaman, 1864?-1922), a pioneering ("daredevil") woman investigative journalist. Bly began her career with the Pittsburgh Dispatch, but became famous on the staff of the New York World, published by Joseph Pulitzer (he of the prizes). In 1887, she feigned insanity to gain admission to New York's infamous Blackwell's Island insane asylum to gather material for an exposé of patient mistreatment that led to a number of important reforms in the mental-health system. The series was subsequently published as a book, Ten Days in a Mad-House (1887).



But Blackwell's hadn't always been that way.  Blackwell's Island, later renamed Roosevelt Island, hosted a hospital (and later a tuberculosis sanitarium), a prison, a workhouse for the poor, and an asylum for the mentally ill, all in separate institutions.  The whole enterprise was established in 1839, following passage of a state law requiring that "lunatics" be housed separately from criminals.  But the facilities soon became overcrowded, and their staffing was infected with cronyism and nepotism.  Almost inevitably, and in violation of at least the spirit of the law, prisoners were employed to take care of the mentally ill.  Charles Dickens visited Blackwell's Island as early as 1842, long before Nellie Bly's exposé, and was appalled by the conditions he observed there.

For a history of the public "lunatic" asylum in New York City, see Damnation Island: Poor, Sick, Mad, and Criminal in 19th-Century New York by Stacy Horn (2018).

But this was not by any means Bly's only accomplishment. In 1890, she beat Jules Verne's fictional record by traveling around the world in less than 80 days (72 days, 6 hours, 11 minutes, and 14 seconds, to be exact). After retiring from journalism to run her deceased husband's companies, she introduced a number of reforms for the treatment of industrial workers, including the provision of "managed" health care. On vacation in Europe when World War I broke out, Bly returned to journalism as a war correspondent for the New York Evening Journal. (By the way, Bly took her pen name from the popular song by Stephen Foster, not the reverse.)

Following the Rosenhan study, a Dutch psychiatric hospital actually commissioned a consulting firm to plant pseudopatients in its own wards, as a check on staff behavior and other conditions ("The Doctors Were Real, the Patients Undercover" by Douglas Heingartner, New York Times, 12/01/2009).

The issue of how we label people with mental illness has come to the fore with the "disability rights" movement, and the objection of people who have various disabilities to being identified with their disabilities (a similar issue has been raised in racial, ethnic, and sexual minority communities as well).

One important question is how to refer to people with various disabilities.  Put bluntly, should we say that "Jack is a schizophrenic" or "Jack is a person who has schizophrenia"?  Or substitute any other diagnostic label, including neurotic, depressive, or autistic.

Dunn and Andrews (American Psychologist, 2015) have traced the evolution of models for conceptualizing disability -- some of which also apply to other ways of categorizing ourselves and others.  The current debate offers two main choices:

  • A "person-first" approach -- as in, "Jack is a person with a disability".  In this social model (Wright, 1991), disability is presented "as a neutral characteristic or attribute, not a medical problem requiring a cure, and not a representation of moral failing" (p. 258) -- or, it might also be said, as a chronic condition requiring rehabilitation.  Instead, disability itself is seen as a sort of social construction -- or, at least, a matter of  social categorization.
  • An "identity-first" approach -- as in, "Jack is a disabled person".  While this might seem a step backward, this minority model (Olkin & Peldger, 2003) "portrays disability as a neutral, or even positive, as well as natural characteristic of human attribute" (p. 259).  Put another way, disability confers minority -group status: it connotes disabled people, with their own culture, living "in a world designed for nondisabled people".
So it all depends on how you think about minority-group status -- that of other people, if you're the member of the majority; or your own, if you're a member of the minority (any minority). 


Mental-Health Policy

Until recently, the treatment of the mentally ill was left pretty much in the hands of physicians, with minimal regulation. Historically, legislatures and courts have not intruded on issues of medical treatment -- relying implicitly on physicians' Hippocratic Oath to "do no harm", and also out of respect for the sapiential authority of physicians, who are assumed to have more expertise in matters of diagnosis and treatment than laypeople do.

This situation changed sharply in 1971, with Wyatt v. Stickney, a landmark class-action case brought against the Alabama state mental hospitals. Ricky Wyatt (1954-2011) had a record of youthful misbehavior, as a result of which his juvenile probation officer arranged to have him committed to Bryce State Hospital at the age of 14 -- the youngest patient there (the procedures for such institutional commitment were pretty lax at the time). Despite the fact that he never actually received a psychiatric diagnosis, Wyatt was "treated" with large doses of Thorazine and other antipsychotic medications, and suffered many other indignities. The Federal judge in the case, Frank Johnson, ruled in favor of the plaintiffs and placed the entire state hospital system under federal receivership (where it stayed until 2003). He also issued a set of guidelines for the proper treatment of mental patients, now known as the "Wyatt Standards", that are now applied nationwide. Chief among these is the concept of least restrictive treatment -- that if mentally ill or intellectually disabled persons must be institutionalized, they have a right to as much freedom as practicable. They also have a right to humane treatment, sufficient staffing, and individualized treatment plans, plus certain minimal standards for diet and nutrition. It is no accident that the same judge who ruled in Wyatt had also earlier placed Alabama's schools and prisons under federal receivership. The case is a landmark of civil rights law, and the Wyatt Standards are sometimes known as the Mental Patients' Bill of Rights.

Another major change in mental-health policy occurred in 1999, with the White House Conference on Mental Health and the issuance of the Surgeon General's Report on Mental Health. The Surgeon General argued that "mental health is fundamental to health" -- that a sound mind is part and parcel of a sound body. The report also stressed that "mental health disorders are real health conditions", not figments of someone's imagination or excuses for not working. It asserted that "the efficacy of mental health treatments is well documented" -- a real change from the Eysenck study of the 1950s, and the Woody Allen Bugaboo. And it noted that "a range of treatments exists for most mental disorders", including both biological treatments (like drugs) and psychotherapy. This was the first time that federal policy formally recognized the problem of mental illness.


Mental Health Parity

Before this time, mental illness was treated quite differently from physical illness. While many consumers had insurance policies like Blue Cross and Blue Shield to help pay medical bills, they had to pay for psychotherapy out of their own pockets. And even "Cadillac" health-insurance plans imposed annual or lifetime dollar limits on expenditures for treatment for mental illness and substance abuse. For example, my own health policy at the University of California paid for only 28 days of inpatient mental-health treatment, and outpatient psychotherapy had to be authorized in a way that outpatient medical treatment did not.

Now, however, by federal law, such as the Mental Health Parity Act of 1996, there is parity between "medical" and "mental" illness.

  • Health insurance must cover mental health.
  • Mental health benefits must be subject to the same annual and lifetime dollar limits as medical and surgical benefits.
  • There must be the same schedule of deductibles and co-payments.
  • And coverage cannot exempt treatments for behavioral disorders such as alcoholism, substance abuse, and chemical dependency.

Still, there remained important gaps between mental health and other medical services.  In particular, the MHPA did not control issues like cost-sharing, limits on the number of therapy visits or days of inpatient hospitalization, and the like.  As a result, employers and insurance companies were able to circumvent the intention of the Act by increasing patient co-pays for mental-health services, and imposing limits on the number of visits or the days of coverage.

The Mental Health Parity and Addiction Equity Act of 2008 was intended to strengthen parity even further, by closing the loopholes in the MHPA.  It required that all financial requirements, including co-payments and caps on visits or days of treatment, be the same for mental-health services as for medical and surgical services.

The Affordable Care Act of 2010 (aka Obamacare) reinforced parity as well -- in fact, one Administration official was quoted as saying that it was "kind of the final word on parity" ("Rules to Require Equal Coverage for Mental Ills" by Jackie Calmes and Robert Pear, New York Times, 11/08/2013).

  • Its final rules, issued in 2013, concerning "essential health benefits" mandated that all insurance policies cover mental-health and substance-abuse treatments.
  • Co-payments, deductibles, and other limits may not be "more restrictive" or "less generous" than those that apply to medical and surgical treatments.
  • Geographical and facility limitations were also equalized. If a California resident is covered for cancer treatment at the Mayo Clinic, in Minnesota, then a Minnesotan is covered for substance-abuse treatment at the Betty Ford Center in Palm Springs, California.
  • Whereas the MHPA applied only to group health plans, the provisions of the ACA apply to all forms of health insurance.
  • The precise menu of coverage and services will differ from state to state, and also depending on the level of the individual's plan (bronze, silver, gold, or platinum).  The fact that so much variance is permitted under the ACA does undercut mental-health parity to some extent, as states and plans may skimp on mental-health and substance-abuse services. But at least some coverage is mandatory.

Again, any requirements and restrictions (such as pre-authorization for treatment) must be the same for mental health as for other medical care.  Still, talk (legislation) is cheap: the real issue now is enforcement, in which other budgetary priorities may conspire with the stigma associated with mental illness, and skepticism about the value of mental-health treatments, to prevent mental-health services from actually achieving parity. 

In fact, there are serious challenges to parity for mental-health and substance-abuse services.   In 2017, President Trump and the Republican-controlled House and Senate attempted to "repeal and replace" the Affordable Care Act (ACA, aka "Obamacare").  Every attempt to repeal and replace failed, but by the smallest of margins -- just one vote in the Senate would have meant major changes to the funding of mental-health and substance-abuse treatment. 

Similar issues arise at the state level. 

In 1999, California passed the California Mental Health Parity Act to implement and extend the federal MHPA.

  • In a landmark 2011 case (Harlick v. Blue Cross and Blue Shield of California), the 9th Circuit Court of Appeals ruled that health insurers must cover "medically necessary" inpatient treatment -- not just outpatient treatment -- for nine categories of "severe mental illness": autism, bipolar disorder, (major) depression, eating disorders, obsessive-compulsive disorder, panic disorder, schizophrenia, schizoaffective disorder, and "serious emotional disturbances" in children and adolescents. Up until then, many insurers had refused to cover inpatient treatment for these illnesses, favoring less-expensive outpatient treatment instead. In fact, Ms. Harlick's health-insurance policy did cover outpatient treatment for mental illnesses, as well as short-term inpatient treatment. But the 9th Circuit found that, in many cases, long-term inpatient treatment was "medically necessary", and so must be included in coverage to insure that mental illnesses are covered to the same degree as physical illnesses.
  • California is relatively progressive when it comes to state funding of mental-health and substance-abuse services.  But the residents of other states may not be so lucky.  As noted earlier, the Republican efforts to "repeal and replace" the Affordable Care Act included provisions leaving it up to individual states to determine the list of "essential" health-care services that must be covered by any health insurance policy; and among the most frequently mentioned candidates for removal from that list were mental-health and substance-abuse treatment.

Despite the movement toward parity, many psychiatrists claim that they are not paid enough for their services.  As a result, they do not accept insurance, and require that their patients pay out of pocket.  A 2013 study by Bishop et al. (JAMA Psychiatry) found that only 55% of psychiatrists accepted private insurance, compared with 89% of other physicians; only 55% of psychiatrists accepted Medicare patients, compared to 86% of other specialists; and only 43% accepted Medicaid patients, compared to 73% of their colleagues.  Whether these disparities reflect a problem with reimbursement for mental-health services, or the mendacity of some mental-health professionals, isn't clear.  But it makes clear that mental-health parity is only part of the problem of delivering mental-health services.  In addition to balking at (allegedly) low payments, these psychiatrists may also resist the kind of intrusive review that comes with third-party payments.  To be fair, many psychiatrists are solo practitioners, and simply don't have the office staff needed to cope with the paperwork that comes with insurance -- and solo practitioners in all specialties are less likely to take insurance than are physicians in any sort of group practice.  Then again, psychotherapy takes time, and so long as insurers (including Medicare and Medicaid) reimburse on a fee-for-service basis, psychotherapists will never be able to move as many patients through their practice, or deliver as many services per unit time, as other physicians (or, for that matter, psychiatrists who do little more than dispense medication).

Perhaps for this reason, there aren't enough psychiatrists and other mental-health professionals to meet the demand.  A 2012 study by the US Department of Health and Human Services found that fully 55% -- that's more than half -- of the nation's 3100 counties have no practicing psychiatrists, psychologists, or social workers.

And mental-health workers don't earn as much as other medical professionals.  A 2012 survey by Medscape showed that the average income for psychiatrists is only $186,000/year, ranking psychiatry 19th out of 25 medical specialties.  Now, $186,000 is not chump change -- but it's not nearly as much as many physicians make.

So it's a system, and it can't be fixed simply by mandating parity.  Mental-health professionals don't get paid enough, so there aren't enough of them. 

Still, if psychotherapists could show that what they do really works, then insurers would have little choice but to pay for the services they deliver.  Which brings us to the most important feature of the current mental-health environment, which is the move toward evidence-based practices. 


Evidence-Based Practice

But these changes in mental-health policy and practice come with a price, which is that, for the first time, mental-health practitioners have to demonstrate that they know what they're doing -- that their diagnoses are valid, and that their treatments work.

Frankly, up until these changes in policy, people didn't care much what went on in psychotherapy -- largely because they weren't paying for it. People can do what they want with their own money, the thinking went -- they can buy cigarettes, or speedboats, or psychotherapy. But if they're going to spend my money, whether in the form of insurance premiums or taxes, they had better be spending it on something they really need, like a valid diagnosis, and something that really works, like an effective treatment. And also, frankly, "third-party" payers were still suspicious that mental illness was a bogus concept, and psychotherapy a bogus treatment. If people wanted to waste their own money on such self-indulgence, that was fine. But they weren't going to waste mine.

The upshot of all of this is that mental-health parity was a huge boon for mental-health practitioners, because they became eligible for third-party payments from government and insurance companies -- not to mention that parity also helped reduce the stigma of mental illness. But parity came with a price -- which is that mental-health practitioners actually had to prove, for the first time, that their treatments actually worked.

But of course, the very question of EBPs suggests that there are some treatments that don't work, and some practices that aren't empirically valid. Could this possibly be true? The simple, straightforward answer is "Yes" -- but this is also true for medicine in general, not just psychotherapy. The medical profession has long cloaked itself with the mantle of science, but until relatively recently physicians had relatively few effective treatments for disease. Mostly, their treatments were palliative in nature, intended to ameliorate the patient's symptoms, and make the patient comfortable, while nature took its course; or else they simply removed diseased organs and tissues through surgery. Scientific medicine really only began with the microbe-hunting of Louis Pasteur and Robert Koch in the 19th century, and successive phases of the pharmaceutical revolution of the 20th century. It is only relatively recently that medical researchers have begun to test medical practices to determine whether they actually work, which ones work better than others, and which are cost effective. Evidence-based medicine is epitomized by the clinical trials that new drugs must go through, to demonstrate their safety and efficacy, before they are marketed for the treatment of specific diseases.

Something similar is now happening to psychotherapy. For a long time, psychotherapists, including psychiatrists and clinical social workers as well as clinical psychologists, have had to operate "in the dark" about whether their treatments were actually effective. Many psychotherapists were loath to measure the effects of their treatments quantitatively. And so long as patients were paying out of their own pockets, and so long as they believed that they were being helped by their therapists, there was little incentive for psychotherapists to validate their practices scientifically. In the 20th century, as standards for medical practice changed, psychotherapists felt under increasing pressure to demonstrate that their practices, too, "really worked". The pressure increased as responsibility for paying for psychotherapy gradually shifted to "third parties" (patients and their therapists were the first and second parties) such as employers and health-insurance firms. These third-party payers naturally wanted to make sure that they were getting value for their money: and so they demanded that psychotherapists, like other health professionals, show that their practices were both effective and cost-effective. Although there is an increasing literature on the validity of various assessment practices, such as comparing "objective" and "projective" techniques (see, e.g., "Clinical Assessment" by J.M. Wood, H.N. Garb, S.O. Lilienfeld, & M.T. Nezworski, Annual Review of Psychology, 2002), most practice research focuses on treatment practices.

And, for that matter, something similar is happening in other fields of public policy.  Under the Obama Administration, the federal Office of Management and Budget tried to promote experiments structured like clinical trials, with random assignment of subjects (or, for that matter, organizations) to conditions, as a rational basis for policy changes.  This trend is most advanced in the field of education.  Beginning with the passage of the No Child Left Behind Act during the George W. Bush (43) Administration, the US Department of Education has posted the findings of "clinical trials" of educational innovations on its What Works Clearinghouse website.  Included among these innovations are many of the teaching and learning strategies discussed in the Exam Information page.

Another name for evidence-based treatments is "empirically supported treatment", or EST. The term "evidence-based practice" (EBP) extends the scope of the EST movement to procedures employed for diagnosis and assessment.

At present, the standards for evidence-based practice in psychotherapy are roughly modeled on the clinical trials required before drugs are marketed (see, e.g., "Empirically Supported Psychological Interventions: Controversies and Evidence" by D.L. Chambless & T.H. Ollendick, Annual Review of Psychology, 2001). In order to qualify as "empirically supported", a treatment must yield outcomes that are significantly better than those associated with an adequate control (typically, patients who receive no treatment at all) in at least two studies, preferably conducted by independent research groups. An ongoing list of treatments that meet current standards is maintained by Division 12 (Clinical Psychology) of the American Psychological Association.
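
As a purely illustrative sketch -- the data below are made up, not taken from any published trial -- the statistical core of this criterion is a comparison showing that treated patients' outcomes are significantly better than those of an adequate control group, replicated in at least one more study:

    # A minimal sketch, with hypothetical data, of the comparison underlying the
    # "empirically supported" criterion: treated patients must do significantly
    # better than an adequate control group (here, a no-treatment group).
    # Real trials also involve random assignment and pre-specified outcome measures.

    from scipy import stats

    # Hypothetical symptom-improvement scores (higher = more improvement)
    treated = [12, 9, 15, 11, 14, 10, 13, 16, 8, 12]
    control = [5, 7, 4, 9, 6, 8, 5, 7, 6, 4]

    t, p = stats.ttest_ind(treated, control)
    print(f"t = {t:.2f}, p = {p:.4f}")

    # Under the current standard, a result like this would have to be obtained
    # in at least two studies, preferably by independent research groups, before
    # the treatment counts as "empirically supported".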

There is a legitimate debate within and between the science and clinical communities about what constitutes proper standards for ESTs.

  • For example, some practitioners argue against any standards at all, on the grounds that therapists should be free to pick whatever treatment they think will be best for the individual patient. Often they argue that psychotherapy is an "art", not a science.  But physicians don't have this freedom: they have to conform their practices to the available evidence -- and where evidence is lacking, to the prevailing standard of care.  When physicians refer to the "medical arts", they refer to the individual practitioner's skill -- but even so, those "arts" are practiced within the bounds of science.
  • Others, including some clinical scientists, believe that the "efficacy" research that provides the basis for ESTs is inappropriate, because the studies are conducted under somewhat artificial conditions that do not represent the problems that are encountered in actual practice. Instead, they propose that ESTs be based on "effectiveness" research, which they argue is more "ecologically valid". But the distinction between efficacy research and effectiveness research seems strained. Research is research. Clinical drug trials are somewhat artificial too, but their artificiality does not prevent physicians from prescribing effective drugs in actual practice. Moreover, "effectiveness" research often doesn't seem to be very good research. In the highly touted Consumer Reports study, for example, the outcome of psychotherapy was measured by patients' self-reported satisfaction with their treatment, instead of objective evidence of actual improvement (see "The Effectiveness of Psychotherapy: The Consumer Reports Study" by M.E.P. Seligman, American Psychologist, December 1995; commentary on Seligman's article was published in the October 1996 issue of the same journal). There were no controls for sampling bias, nor any untreated control group, for example. If the CR study is an example of effectiveness research, then effectiveness research is a step backward, not a step forward.
  • Still other practitioners hold that empirical evidence of efficacy and effectiveness is only part of the equation -- that the choice of treatment should also be based on the clinician's expert judgment, and also on the patient's values. While it's true that clinical expertise is important, personal expertise should not trump scientific evidence. An awful lot of common wisdom in mental health proves, on examination, to be little more than folklore. That is why physicians and surgeons rely on practice guidelines, so that they can choose which of several empirically valid treatments to recommend to their patients. And while patients' values will help determine which treatment they should receive, patient values don't trump empirical evidence any more than clinical expertise does. A cancer patient might want to be treated with avocado extract instead of radiation and chemotherapy, and that's his choice; but no third-party should be expected to pay for a treatment that doesn't work.

Right now, the standards for EST are pretty minimal. They are a good start, but the standards need to be ratcheted up (the opposite of dumbing down, I guess) over time to improve the quality of psychotherapeutic practice.

  • For example, some might wish to drop the no-treatment control as an appropriate comparison group, in favor of an appropriate placebo. And should a new treatment simply be comparable to the available standard of care, or should it somehow be better than what is available? These are the same kinds of questions that are raised in drug research.
  • As another example, there is the matter of clinical vs. statistical significance. A statistically significant change in patient status may not be clinically significant in terms of the ordinary course of everyday living. The question is, what are the standards for clinical significance?
  • As another example, two studies out of how many? The current EST standard is modeled on current FDA standards, which require only two positive trials, regardless of how many negative or inconclusive trials there are, raising the file-drawer problem and the issue of selective publication of positive results.
  • As another example, and I think this is the really interesting one, there is the question of mechanism. Since the 19th century, medicine has generally rejected mere "empirical" treatments, which are simply known to work, in favor of treatments whose mechanisms of action can be interpreted within the framework of existing knowledge of anatomy and physiology. Not to say that new treatments can't teach us something new about structure and function (this is one of the reasons that people find hypnosis so interesting), but you do expect a broad consistency. And if a proposed treatment is inconsistent with what we already know, that may be a reason to reject it regardless of whether it works.
    • To take an example from the history of hypnosis (see my paper in the International Journal of Clinical & Experimental Hypnosis, October 2002), Mesmer's animal magnetism wasn't rejected by the Franklin Commission because it didn't work. Everyone agreed that it did work. Animal magnetism was rejected because Mesmer's theory was wrong, and nobody had a good theory to replace it (psychology not having been invented yet). Exorcism might work, empirically, but even if it did medicine would reject it as a legitimate treatment because its underlying theory -- that disease is caused by demon possession -- is inconsistent with everything we know about how the body works. In fact, Mesmer (and everyone else) agreed that Fr. Gassner's exorcisms worked. But Mesmer won the debate with Gassner because (at the time) he offered a materialistic, and thus scientifically acceptable, theory to account for his effects (the Franklin Commission came along later).

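Along the same lines, here is an equally minimal sketch of the two-positive-trials problem, again just an illustrative calculation: assume a treatment with no real effect and a conventional 5% false-positive rate per trial. If enough trials are run, two "positive" results are easy to come by through chance alone -- which is exactly why the negative and inconclusive trials sitting in the file drawer matter.

    # A back-of-the-envelope calculation: if a treatment has NO real effect,
    # and each trial has a 5% chance of a false-positive result, how likely
    # is it that at least two of k trials come out "positive" anyway?
    from scipy.stats import binom

    alpha = 0.05                                      # per-trial false-positive rate
    for k in (2, 5, 10, 20):
        p_two_or_more = 1 - binom.cdf(1, k, alpha)    # P(at least 2 false positives)
        print(f"{k:2d} trials run: P(>= 2 chance positives) = {p_two_or_more:.3f}")
    # With 20 trials, that probability is roughly 0.26 -- about one chance in four.
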
It would represent a major "ratcheting up" of the EST requirements if we required not just that the treatment be efficacious, but that the theory on which the treatment is based be scientifically valid as well. But it may be exactly what we need (see below for more on this subject).

These are all points for debate within the field. The actual standards for EST are the result of a political process, and inevitably involve some compromise. But once they are established, the important things are (1) that they are adhered to, and (2) that if they are changed, the change is in the direction of tightening, not loosening. If practice is to be based on science, and science goes forward, there can be no going back for practice.

It took a federal court case, Virginia Academy of Clinical Psychologists and Robert J. Resnick, Ph.D. v. Blue Shield of Virginia (1977), to establish psychologists' legal right to practice, and to receive third-party reimbursement, without being supervised by psychiatrists or other physicians. Clinical psychology owes its autonomy from psychiatry, and its eligibility for third-party payments, to the assumption that its practices rest on a firm scientific foundation. Therefore, clinical psychology, and the rest of the mental-health profession, departs from scientific evidence at its own risk.


Putting Mental Illness in Social Context

Rosenhan's "pseudopatient" study illustrates an important general point about mental illness, and for that matter about normal mental and behavioral functioning as well.

  • We assume (based on Chicago functionalism, described in the lecture supplements on the Biological Bases of Mind and Behavior) that the individual's mental life takes place in a social context.
  • We know from the person-by-situation interaction (described in the lecture supplement on Personality and Social Interaction) that we cannot extricate the individual from his or her surrounding social and cultural context.

What is true for normal mental functioning is true for mental illness as well. We simply cannot treat mental illness successfully by operating on the individual alone. We must also act to change the social environment in which the person lives.

For therapy to succeed, mental-health professionals must do more than address the individual's underlying psychopathology -- psychological deficits, maladaptive social learning, diathesis and stress, biological substrates, and so on.

  • They must also work with the patient's family members, friends, neighbors, employers, and co-workers.
  • They -- and we -- must also work to change social attitudes so that the mentally ill can live in an environment that permits maximum recovery and adaptation.



When Will We Solve Mental Illness?


Note: On 11/19/2018, the New York Times commemorated 40 years of its "Science Times" section by looking at "11 Things We'd Really Like to Know -- And A Few We'd Rather Not Discuss".  One of the essays, by Benedict Carey, looked at progress in understanding the causes and treatment of mental illness.  Note the downplaying of biology, and the underscoring of the role of experience -- even by biological psychiatrists!

Biology was supposed to cure what ails psychiatry. Decades later, millions of people with mental disorders are still waiting.

Nothing humbles history’s great thinkers more quickly than reading their declarations on the causes of madness. Over the centuries, mental illness has been attributed to everything from a “badness of spirit” (Aristotle) and a “humoral imbalance” (Galen) to autoerotic fixation (Freud) and the weakness of the hierarchical state of the ego (Jung).

The arrival of biological psychiatry, in the past few decades, was expected to clarify matters, by detailing how abnormalities in the brain gave rise to all variety of mental distress. But that goal hasn’t been achieved — nor is it likely to be, in this lifetime.

Still, the futility of the effort promises to inspire a change in the culture of behavioral science in the coming decades. The way forward will require a closer collaboration between scientists and the individuals they’re trying to understand, a mutual endeavor based on a shared appreciation of where the science stands, and why it hasn’t progressed further.

“There has to be far more give and take between researchers and the people suffering with these disorders,” said Dr. Steven Hyman, director of the Stanley Center for Psychiatric Research at the Broad Institute of M.I.T. and Harvard. “The research cannot happen without them, and they need to be convinced it’s promising.”

The course of Science Times coincides almost exactly with the tear-down and rebuilding of psychiatry. Over the past 40 years, the field remade itself from the inside out, radically altering how researchers and the public talked about the root causes of persistent mental distress.

The blueprint for reassembly was the revision in 1980 of psychiatry’s field guide, the Diagnostic and Statistical Manual of Mental Disorders, which effectively excluded psychological explanations.

Gone was the rich Freudian language about hidden conflicts, along with the empty theories about incorrect or insufficient “mothering.” Depression became a cluster of symptoms and behaviors; so did obsessive-compulsive disorder, bipolar disorder, schizophrenia, autism and the rest.

This modernized edifice struck many therapists as a behavioral McMansion: an eyesore, crude and grandiose. But there was no denying that the plumbing worked, the lighting was better, and the occupants had a clear, agreed-upon language.

Researchers now had tidier labels to work with; more sophisticated tools, including M.R.I.s, animal models, and genetic analysis, to guide their investigations of the brain; and a better understanding of why the available drugs and forms of psychotherapy relieved symptoms for many patients.

Science journalists, and their readers, also had an easier time understanding the new vocabulary. In time, mental problems became mental disorders, then brain disorders, perhaps caused by faulty wiring, a “chemical imbalance” or genes.

But the actual science didn’t back up those interpretations. Despite billions of dollars in research funding, and thousands of journal articles, biological psychiatry has given doctors and patients little of practical value, never mind a cause or a cure.

Nonetheless, that failure offers two valuable guideposts for the next 40 years of research.

One is that psychiatry’s now-standard diagnostic system — the well-lighted structure, with all its labels — does not map well onto any shared biology. Depression is not one ailment but many, expressing different faces in different people. Likewise for persistent anxiety, post-traumatic stress, and personality issues such as borderline personality disorder.

As a result, the best place for biological scientists to find traction is with individuals who have highly heritable, narrowly defined problems. This research area has run into many blind alleys, but there are promising leads.

In 2016, researchers at the Broad Institute found strong evidence that the development of schizophrenia is tied to genes that regulate synaptic pruning, a natural process of brain reorganization that ramps up during adolescence and young adulthood.

“We are now following up hard on that finding,” said Dr. Hyman. “We owe it to those who are suffering with this diagnosis.”

Scientists also foresee a breakthrough in understanding the genetics of autism. Dr. Matthew State, chief of psychiatry at the University of California, San Francisco, said that in a subset of people on the autism spectrum, “the top 10 associated genes have huge effects, so a clinical trial using gene therapies is in plausible reach.”

The second guidepost concerns the impact of biology.

Although there are several important exceptions, measurable differences in brain biology appear to contribute only a fraction of added risk for developing persistent mental problems. Genetic inheritance surely plays a role, but it falls well short of a stand-alone “cause” in most people who receive a diagnosis.

The remainder of the risk is supplied by experience: the messy combination of trauma, substance use, loss and identity crises that make up an individual’s intimate, personal history. Biology has nothing to say about those factors, but people do. Millions of individuals who develop a disabling mental illness either recover entirely or learn to manage their distress in ways that give them back a full life. Together, they constitute a deep reservoir of scientific data that until recently has not been tapped.

Gail Hornstein, a professor of psychology at Mount Holyoke College, is now running a study of people who attend meetings of the Hearing Voices Network, a grass-roots, Alcoholics Anonymous-like group where people can talk with one another about their mental health struggles.

Many participants are veterans of the psychiatric system, people who have received multiple diagnoses and decided to leave medical care behind. The study will analyze their experiences, their personal techniques to manage distress, and the distinctive characteristics of the Hearing Voices groups that account for their effectiveness.

“When people have an opportunity to engage in ongoing, in-depth conversation with others with similar experiences, their lives are transformed,” said Dr. Hornstein, who has chronicled the network and its growth in the United States. “We start with a person’s own framework of understanding and move from there.”

She added: “We have underestimated the power of social interactions. We see people who’ve been in the system for years, on every med there is. How is it possible that such people have recovered, through the process of talking with others? How has that occurred? That is the question we need to answer.”

To push beyond the futility of the last 40 years, scientists will need to work not only from the bottom up, with genetics, but also from the top down, guided by individuals who have struggled with mental illness and come out the other side.

Their expertise is fraught with the pain of having been misunderstood and, often, mistreated. But it’s also the kind of expertise that researchers will need if they hope to build a science that even remotely describes, much less predicts, the fullness of human mental suffering.




This page last revised 08/10/2024.