
 

Social Categorization

 

Find an overview of concepts and categories in the General Psychology lecture supplements on Thinking, Reasoning, Problem-Solving, and Decision-Making.

 

Social perception is concerned with the ways in which we use stimulus information -- in the form of trait terms or more physical features of the stimulus -- to form mental representations -- impressions -- of people and situations.  As we have already seen, person perception entails more than extracting information from a stimulus: the perceiver must combine information from the stimulus (including the background) with knowledge retrieved from memory.  Much of this pre-existing knowledge comes in the form of implicit personality theory, but more broadly the act of perception is not completed until the new percept is related to the perceiver's pre-existing knowledge.  Paraphrasing Jerome Bruner (1957), we can say that every act of perception is an act of categorization.


What Bruner actually said was: 

"Perception involves an act of categorization....  The use of cues in inferring the categorial [sic] identity of a perceived object... is as much a feature of perception as the sensory stuff from which percepts are made."

Perception connects knowledge of the stimulus with knowledge about the kind of object or event the stimulus is.  This conceptual knowledge exists as part of semantic memory.  In contrast with the autobiographical knowledge of specific events and experiences that comprises episodic memory, semantic memory holds abstract, context-free knowledge:

Concepts and categories are critical to cognition because they enable us to organize the world -- to reduce the "blooming, buzzing confusion" (James' phrase) of experience to something we can understand and manage.  Categorization is critical to perception because it enables us to infer properties of an object that we cannot perceive directly.  Once we have categorized an object on the basis of those properties we can perceive, we can infer that it has other, unseen properties that it shares with other members of its class.

In the social-intelligence view of personality (Cantor & Kihlstrom, 1987), social categorization sorts persons, situations, and behaviors into equivalence classes that are the basis for behavioral consistency.  People behave similarly in situations that they perceive to be similar; and categorization is the basis of perceptual similarity, because instances of a category are broadly similar to each other.



Concepts and Categories

Having now used the terms concept and category interchangeably, we should pause to distinguish between them:




  • A category may be defined as a group of objects, events, or ideas that share attributes or features. Categories partition the world into equivalence classes.  Oak trees and elm trees belong in the category trees, while the Atlantic and the Pacific belong in the category oceans.
    • Some categories are natural, in that their members are part of the natural world.
    • Other categories are artificial, in that they have been contrived by experimenters who want to know more about how categorization works.
  • A concept is the mental representation of a category, usually abstracted from particular instances. Concepts serve important mental functions: they group related entities together into classes, and provide the basis for synonyms, antonyms, and implications.  Concepts summarize our beliefs about how the world is divided up into equivalence classes, and about what entire classes of individual members have in common.
Generally, we think of our mental concepts as being derived from the actual categorical structure of the real world, but there are also points of divergence:
  • Categories may exist in the real world, without being mentally represented as concepts.
  • Concepts may impose a structure on the world that does not exist there.

Technically, categories exist in the real world, while concepts exist in the mind. However, this technical distinction is difficult to uphold, and psychologists commonly use the two terms interchangeably. In fact, objective categories may not exist in the real world, independently of the mind that conceives them (a question related to the philosophical debate between realism and idealism).  Put another way, the question is whether the mind picks up on the categorical structure of the world, or whether the mind imposes this structure on the world.

Some categories may be defined through enumeration: an exhaustive list of all instances of a category. A good example is the letters of the English alphabet, A through Z; these have nothing in common except their status as letters in the English alphabet.

A variant on enumeration is to define a category by a rule which will generate all instances of the category (these instances all have in common that they conform to the rule). An example is the concept of integer in mathematics: 0 is an integer, and any number that can be obtained by adding 1 to, or subtracting 1 from, an integer is also an integer.
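
To make the contrast concrete, here is a minimal sketch in Python; the function names and the bounded search are illustrative assumptions, not part of any formal definition.  An enumerated category is tested by consulting the list, while a rule-defined category is tested by applying the generating rule.

    # Category defined by enumeration: an exhaustive list of all instances.
    LETTERS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")

    def is_letter(x):
        """Membership means nothing more than appearing on the list."""
        return x in LETTERS

    # Category defined by a generative rule: 0 is an integer, and adding 1 to,
    # or subtracting 1 from, an integer yields another integer.
    def is_integer(x, limit=10_000):
        """Test whether x is reachable from 0 by repeatedly adding or
        subtracting 1 (the search bound is purely for illustration)."""
        n = 0
        for _ in range(limit):
            if n == x:
                return True
            n += 1 if x > 0 else -1
        return False

    print(is_letter("Q"), is_letter("7"))      # True False
    print(is_integer(42), is_integer(2.5))     # True False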

The most common definitions of categories are by attributes: properties or features which are shared by all members of a category. Thus, birds are warm-blooded vertebrates with feathers and wings, while fish are cold-blooded vertebrates with scales and fins. There are three broad types of attributes relevant to category definition:

  • perceptual or stimulus features help define natural categories like birds and fish; 
  • functional attributes, including the operations performed with or by objects, or the uses to which they can be put, are used to define categories of artifacts like tools (instruments which are worked by hand) or vehicles (means of transporting things); 
  • relational features, which specify the relationship between an instance and something else, are used to define many social categories like aunt (the sister of a father or a mother) or stepson (the son of one's husband or wife by a former marriage).

Of course, some categories are defined by mixtures of perceptual, functional, and relational features.

Still, most categories are defined by attributes, meaning that concepts are summary descriptions of an entire class of objects, events, and ideas. There are three principal ways in which such categories are organized: as proper sets, as fuzzy sets, and as sets of exemplars.
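
To preview the difference in computational terms, here is a minimal sketch in Python of how membership might be decided under each of the three schemes; the toy feature sets, the prototype, and the similarity threshold are illustrative assumptions, not a published model.

    def similarity(a, b):
        """Crude feature-overlap similarity between two feature sets."""
        return len(a & b) / len(a | b)

    # Proper set: defining features are singly necessary and jointly sufficient.
    BIRD_DEFINING = {"warm-blooded", "vertebrate", "feathers", "wings"}

    def proper_set_member(instance):
        return BIRD_DEFINING <= instance          # must have every defining feature

    # Fuzzy set: membership is graded by resemblance to a summary prototype.
    BIRD_PROTOTYPE = BIRD_DEFINING | {"flies", "sings", "builds nests"}

    def fuzzy_member(instance, threshold=0.5):
        return similarity(instance, BIRD_PROTOTYPE) >= threshold

    # Exemplar view: no summary at all; compare the instance to stored instances.
    BIRD_EXEMPLARS = [
        BIRD_DEFINING | {"flies", "sings"},       # a robin-like exemplar
        BIRD_DEFINING | {"swims", "cannot fly"},  # a penguin-like exemplar
    ]

    def exemplar_member(instance, threshold=0.5):
        return max(similarity(instance, e) for e in BIRD_EXEMPLARS) >= threshold

    penguin = BIRD_DEFINING | {"swims", "cannot fly"}
    print(proper_set_member(penguin), fuzzy_member(penguin), exemplar_member(penguin))
    # True False True -- the three schemes can disagree about the same instance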

Having now defined the difference between the two terms, we are going to use them interchangeably again.  The reason is that it's boring to write concept all the time; moreover, category has a cognate verb, categorize, and a corresponding noun, categorization, while concept does not (unless you count conceptualize and conceptualization, which are mouthfuls that don't mean quite the same thing).  

Still, the semantic difference between concepts and categories raises two particularly interesting issues for social categorization:

  • To what extent does the categorical structure of the social world exist in the real world outside the mind, to be discovered by the social perceiver, and to what extent is this structure imposed on the world by the social perceiver?
  • To what extent are social categories "natural", and to what extent are they "artificial"?

Concepts and categories are just about the most interesting topic in all of psychology and cognitive science, and two very good books have been written on the subject.  They are highly recommended:

  • Categories and Concepts by E.E. Smith and D.L. Medin (Harvard University Press, 1981).
  • The Big Book of Concepts by G.L. Murphy (MIT Press, 2002).

Here in Berkeley's Psychology Department, Prof. Eleanor Rosch -- who made fundamental contributions to the "prototype" view of conceptual structure -- gives a wonderful course on the subject.  Prof. George Lakoff, who has also made fundamental contributions to our understanding of concepts and categories, gives a similar course in the Linguistics Department.  


The study of social categorization encompasses a wide variety of social categories:

Mostly, social categorization has been studied in the domains of persons and social groups.

The Karass and the Granfalloon

In his novel Cat's Cradle (1963), Kurt Vonnegut makes a distinction between two types of social categories:

  • The Granfalloon, a recognized grouping of people who have no real relationship with each other.
  • The Karass, a group of people whose relationships with each other are profound but unknown.

Vonnegut's example of a granfalloon is the term Hoosiers, referring to residents of the state of Indiana.

In the novel, Vonnegut invents a religion, Bokononism, that celebrates people's karasses.

With social categories -- with any categories, really, but especially with social categories -- it's important to consider whether the category in question is a karass -- a category that really means something -- or a granfalloon.


Us and Them

Perhaps the most basic scheme for social categorization divides the world into two groups: Us and Them -- or, to use the technical terms of sociology and social psychology, the ingroup and the outgroup.  As William Graham Sumner put it (1906, p. 12):




The insiders in a we-group are in a relation of peace, order, law, government, and industry, to each other.  Their relation to all outsiders, or others-groups, is one of war and plunder....  Sentiments are produced to correspond.  Loyalty to the group, sacrifice for it, hatred and contempt for outsiders, brotherhood within, warlikeness without -- all grow together, common products of the same situation.

 

The Robbers Cave Experiment

The division of the social world into Us and Them is vividly illustrated by one of the earliest examples of experimental social psychology -- the "Robbers Cave" experiment conducted by Muzafer Sherif and his colleagues.  Through extensive pretesting, Sherif et al. identified a group of 22 5th-grade boys from Oklahoma City who were absolutely "average" in every imaginable way.  These children were then offered a vacation at a camp located at Robbers Cave State Park (hence the name).


In Stage 1 of the experiment, the boys were divided into two groups, unbeknownst to each other, and assigned to physically separate campsites.  For one week, each group engaged in a number of independent activities designed to foster intragroup cohesion and the establishment of a hierarchy of leadership.

In Stage 2, the two groups were brought together for a series of tournaments.  There the researchers observed the development of considerable intergroup competitiveness and hostility; they also observed shifts in leadership within each group.

Aside from ordinary observation, Sherif and his colleagues conducted a number of experimental tests to document the competition and hostility between the two groups.  In one such study, they scattered beans on a playing field, and had the two groups compete to see who could pick up the most (and stuff them in a bag through a very small opening).  Before the actual count, the experimenters showed photographs, ostensibly of the contents of the bags collected by the two groups, and asked the boys to estimate the number of beans in each bag.  In fact, each of the displays contained exactly 35 beans.  Nevertheless, members of the Eagles estimated that they had collected more beans than the Rattlers, and the Rattlers estimated that they had collected more beans than the Eagles.

In Stage 3, Sherif et al. engaged the two groups in noncompetitive, cooperative activity for the good of all -- such as using a rope, previously used in a tug of war, to haul a delivery truck out of a ditch.  In fact, these staged crises were successful in reducing intergroup friction and inducing intergroup cooperation.


The Minimal Group Paradigm

In the Robbers Cave experiment, the two groups achieved a clear group identity before they were brought together, and initially encountered each other in an environment of competition for limited resources -- precisely the circumstances in which Sumner thought that a distinction between Us and Them would emerge.  But it turns out that competition for limited resources is unnecessary for the division into ingroup and outgroup to occur.


A series of classic experiments by Henri Tajfel and his colleagues (1971; Billig & Tajfel, 1973) employing the minimal group paradigm shows how powerful social categorization can be.  




In his experiments, Tajfel assigned subjects to groups on an essentially arbitrary basis -- for example, based on their expressed preferences for various paintings -- or, in the most dramatic instance, based on the results of a coin-toss.  Members of the two groups did not know who else was in either group.  They had no experiential basis for the formation of ingroup and outgroup stereotypes.  And they had no history of group interaction that could lead to the formation of differential attitudes.  Nevertheless, when group members were given the opportunity to distribute rewards to other group members, the subjects consistently favored members of their own ingroup over members of the outgroup.


Based on this line of research, Tajfel and Turner (1979) formulated social identity theory, which argues that there are two sources of self-esteem: one's own personal status and accomplishments, and the status and accomplishments of the groups of which one is a member.  By boosting the status of their own ingroup, compared to outgroups, individuals indirectly increase their own status and self-esteem.  Related to this is the phenomenon known as basking in reflected glory, by which individual group members receive boosts in self-esteem based on the achievements of their ingroups, even though they themselves had nothing to do with those achievements -- and even when their connection to the group is tenuous. 


An interesting phenomenon of group membership is the outgroup homogeneity effect (Allen & Wilder, 1979).  In their experiment, Allen and Wilder took pre-experimental measures of attitudes toward various topics.  Subjects were then arbitrarily assigned to two groups, ostensibly on the basis of their preferences for paintings by Kandinsky or Klee, as in the original experiment by Tajfel et al.  Then they were asked to predict the responses of ingroup and outgroup members to various attitude statements.  Subjects ascribed attitudes to other group members in such a manner as to decrease the perceived attitudinal similarity between themselves and other ingroup members, increase the perceived attitudinal similarity among members of the outgroup, and also to increase the perceived difference between ingroup and outgroup.  This was true even for attitude statements that had nothing to do with abstract art.

The Outgroup Homogeneity Effect in Literature

Kurt Vonnegut must have read a lot of social psychology.  Another of his novels, Slapstick: Or, Lonesome No More (1976), uses the outgroup homogeneity effect as a kind of plot device.  In this novel, a computer assigns every person a new middle name, such as Daffodil-11 or Raspberry-13.  Almost immediately, the Daffodil-11s and the Raspberry-13s organize themselves into interest groups.


So, the mere division of people into two groups, however arbitrary, seems to create two mental categories, Us and Them, with "people like us" deemed more similar to each other than we are to "people like them".

The Us-Them situation becomes even more complicated when you consider how many ingroups we are actually members of, each ingroup entailing a corresponding outgroup.  

As an exercise, try to determine how many ingroups you're a member of, and see how many different outgroups those ingroup memberships entail.

The basic division of the social world into Us and Them, ingroups and outgroups, is the topic of Us and Them: Understanding Your Tribal Mind by David Berreby (Little, Brown, 2005).  In his book, Berreby analyzes what he sees as "a fundamental human urge to classify and identify with 'human kinds'" (from "Tricky, Turbulent, Tribal" by Henry Gee, Scientific American, 12/05).

For an interesting take on the Robbers Cave Experiment, and its implications for hyperpartisanship in the post-Trump era of American politics, see "Poles Apart" by Elizabeth Kolbert, New Yorker, 01/03-10/2022.  Kolbert also discusses the implications of Tajfel's "Minimal Group Paradigm".

Arguably, an even more fundamental distinction is that between Self and Other, about which more later.


Categories of Persons

What are the natural categories in the domain of persons?  Here's a list, inspired by the lexicographical work of Roger Brown (1980):




There Are Two Kinds of People

There's an old joke that there are two kinds of people: those who say that there are two kinds of people and those who don't.  Dwight Garner, a book critic and collector of quotations (see his book, Garner's Quotations: A Modern Miscellany), has collected these quotes along the same lines ("Let's Become More Divided", New York Times, 01/31/2021).

Mankind is divisible into two great classes: hosts and guests.

— Max Beerbohm


There are two kinds of people in this world: those who know where their high school yearbook is and those who do not.

— Sloane Crosley, “I Was Told There’d Be Cake”

The world is divided into two types: the idle and the anti-idle. The anti-idle I hereby christen ‘botherers.’

— Tom Hodgkinson, “How to Be Idle”


There are two kinds of people in the world, those who leave home, and those who don’t.

— Tayari Jones, “An American Marriage”


Either you’re a crunchy person or you’re not.

— Marion Cunningham, “The Breakfast Book”


Instead of this absurd division into sexes they ought to class people as static and dynamic.

— Evelyn Waugh, “Decline and Fall”

 

The world, as we know, divides unequally between those who love aspic (not too many) and those who loathe and fear it (most).

— Laurie Colwin, “More Home Cooking”


The world is divided into two classes — invalids and nurses.

— James McNeill Whistler


For me, all people are divided into two groups — those who laugh, and those who smile.

— Vladimir Nabokov, “Think, Write, Speak”


The world is home to two kinds of folk: those who name their horses and those who don’t.

— Téa Obreht, “Inland”

 

Freddie, there are two kinds of people in this world, and you ain’t one of them.

— Dolly Parton, in “Rhinestone”


Perhaps there are two kinds of people, those for whom nothingness is no problem, and those for whom it is an insuperable problem.

— John Updike, “Self-Consciousness”


There are only two kinds of people, the ones who like sleeping next to the wall, and those who like sleeping next to the people who push them off the bed.

— Etgar Keret, “The Bus Driver Who Wanted to Be God” 

 

“Sheep” and “goats”

— The two classes of people, according to Hugh Trevor-Roper


“Cats” and “monkeys”

— The two human types, according to Henry James


“Cleans” and “Dirties”

— The two kinds of writers, according to Saul Bellow

“Hairy” and “Smooth”

— The two kinds of playwrights, according to Kenneth Tynan

 

There are some who can live without wild things and some who cannot.

— Aldo Leopold, “A Sand County Almanac”

 

What he failed to understand was that there were really only two kinds of people: fat ones and thin ones.

— Margaret Atwood, “Lady Oracle”


There are two kinds of people in the world: the kind who alphabetize their record collections, and the kind who don’t.

— Sarah Vowell, “The Partly Cloudy Patriot”


There are only the pursued, the pursuing, the busy, and the tired.

— F. Scott Fitzgerald, “The Great Gatsby”

I divide the world into people who want to control something and those who want to make something.

— Henri Cole, in The Paris Review


The world is divided into two types of fishermen: those who catch fish and those who do not.

— Jacques Pépin, “The Apprentice”


There truly are two kinds of people: you and everyone else.

— Sarah Manguso


There are two kinds of people, and I don’t care much for either of them.

— Eric Idle, “Always Look on the Bright Side of Life”


There may be said to be two classes of people in the world; those who constantly divide the people of the world into two classes, and those who do not. Both classes are extremely unpleasant to meet socially.

— Robert Benchley, in Vanity Fair


Gender Categories

At first blush, the gender categories look simple enough: people come in two sexes, male and female, depending on their endowment of sex chromosomes, XX or XY.  But it turns out that things are a little more complicated than this, so that gender categorization provides an interesting example of the intersection of natural and artificial, and biological and social, categories.

As it happens, chromosomal sex (XX or XY) is not determinative of phenotypic sex (whether one has male or female reproductive anatomy).  As in everything else, heredity interacts with environment, and in this case the hormonal environment of the fetus is particularly important in gender differentiation.  Sometimes due to accidents of genetics, as in Klinefelter's syndrome (XXY) and Turner's syndrome (XO), but mostly due to accidents of the endocrine system, individuals can be born with ambiguous external genitalia.  It is possible, for example, to be chromosomally male but phenotypically female (e.g., the androgen-insensitivity syndrome), or to be chromosomally female but phenotypically male (e.g., congenital adrenal hyperplasia).  

What to do with these cases of pseudohermaphroditism?  (There are no true hermaphrodites, who would have the complete reproductive anatomies of both males and females -- except in mythology.)  For a long time they were simply ignored.  Then, in an attempt to help people with these conditions lead better lives, they were often surgically "corrected" so that their external genitalia more closely corresponded to the male or (usually) female ideal -- see, for example, the cases described in Man and Woman, Boy and Girl by J. Money & A. Ehrhardt (1972).  

More recently, however, some authorities have argued that such individuals constitute their own gender categories.  For example, Anne Fausto-Sterling (in Myths of Gender, 1985, 1992; and especially in Sexing the Body, 2000) has identified three "intersex" gender categories, whose members deviate from the "Platonic ideal" of male or female:



Rather than force these individuals to conform to the Platonic ideal for males or females, Fausto-Sterling argues that they constitute separate gender categories, and should be acknowledged as such and considered to be normal, not pathological.  According to Fausto-Sterling's account, then, there are really five sexes, not two.  Put another way, the categorization of people into two sexes is a social construction, imposed on the individual by society.  

Fausto-Sterling's argument is provocative, but it is also controversial.  See, for example, "How common is intersex? A response to Anne Fausto-Sterling" by L. Sax, Journal of Sex Research, 2002.

 

Gender Identity

It is one thing to be male or female biologically, and another thing to identify oneself as such.  Most people, even most of those who fall into the "intersex" category, identify themselves as either male or female.  Even transgender individuals will identify themselves as "a man trapped in a woman's body" (meaning that they identify themselves as male), or the reverse (meaning that they identify themselves as female).  Gender identity usually corresponds to phenotypic sex, but this is not necessarily the case.  In any event, with respect to social cognition, we are mostly interested in gender identity -- how people identify and present themselves with respect to gender.  In Fausto-Sterling's world, there would be at least a third category for gender identity, intersex.

Transgendered and Transsexual

Sex researchers, endocrinologists, and feminists can debate whether there are five categories of gender, but there's no question that a third category of transgender individuals has begun to emerge.  The definition of "transgender" is a little ambiguous (no joke intended), but generally appears to refer to people who are for whatever reason uncomfortable with the gender of their birth (or, put in terms compatible with social constructivism, their assigned gender).  A transgender male may simply not identify himself as a male; alternatively, he may identify himself as a female, in which case we may speak of a transsexual individual.  Transsexuals may seek to have their bodies surgically altered to conform to their gender identities.  Transgender individuals may not go that far, because they do not necessarily identify themselves as either male or female.  

For an article on transgender and transsexual students on American college campuses, see "On Campus, Rethinking Biology 101" by Fred A. Bernstein, New York Times, 03/07/04.  

 

Gender Role

Beyond subjective matters of gender identity, there is the matter of gender role -- the individual's public display of characteristics associated with masculinity or femininity.  It turns out that having a male or female gender identity does not necessarily mean that the person will adopt the "corresponding" masculine or feminine gender role.

Although masculinity and femininity would seem to be opposite ends of a single bipolar trait, work by Sandra Bem and Janet Taylor Spence, among others, has shown that masculinity and femininity are in fact independent of each other.  Masculinity does not contradict femininity, and it is possible to be high on both -- or low on both, for that matter.  According to this analysis, we have four categories of gender role:
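
In the approach associated with Bem's Sex Role Inventory, this fourfold scheme is typically operationalized by crossing separate masculinity and femininity scores.  The sketch below (in Python) assumes that procedure; the median-split cutoffs are purely illustrative, not published norms.

    def classify_gender_role(masculinity, femininity,
                             m_median=4.9, f_median=4.9):
        """Cross two independent dimensions to yield four gender-role categories.
        The median values used as cutoffs here are illustrative assumptions."""
        high_m = masculinity > m_median
        high_f = femininity > f_median
        if high_m and high_f:
            return "androgynous"            # high on both dimensions
        if high_m:
            return "masculine sex-typed"    # high masculinity, low femininity
        if high_f:
            return "feminine sex-typed"     # low masculinity, high femininity
        return "undifferentiated"           # low on both dimensions

    print(classify_gender_role(5.6, 5.4))   # androgynous
    print(classify_gender_role(3.8, 3.5))   # undifferentiated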

 

Sexual Orientation

Then there is the matter of erotic and sexual orientation (not "preference", as some would have it -- as if the matter of what turns people on sexually was a matter of choice, as between corn flakes and shredded wheat for breakfast).  Most people are heterosexual, with men falling in love with, and having sex with, women and women falling in love with, and having sex with, men.  But there are other varieties of sexual orientation:

According to the conventional view of gender, everything is given by the genes: XYs become males who identify themselves as such, become masculine, and make love with women; XXs become females who identify themselves as such, become feminine, and make love with men.  In this view, there are only two gender categories, male and female; they are dichotomous, and everything else flows from this.

But it turns out that gender categories are more complicated than this.  If there are really

and they really are to some extent orthogonal to each other, then that leaves 240 gender-related categories -- a long way from 2!  Which we choose depends on how we, individually and as a society, think about gender.  In other words, what looks like a natural biological category has some elements of a social construction.
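
For what it's worth, one way to reproduce the figure of 240 is simply to multiply the counts suggested in the preceding sections; the count of four sexual orientations is an assumption here, since that list is not reproduced above.

    # Assumed counts: 5 sexes (Fausto-Sterling), 3 gender identities,
    # 4 gender roles, and (an assumption) 4 sexual orientations.
    sexes, identities, roles, orientations = 5, 3, 4, 4
    print(sexes * identities * roles * orientations)   # 240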

In 2014, Facebook, the social-media giant, announced that members could choose among some 50 options for identifying their gender.  In addition to "Male," "Female," and  "Gender-Neutral", the site offers a number of options, including "Androgyne," "Pangender," "Bi-gender," "Agender," "Trans Woman," "Transsexual," "Trans* Man," "Cis woman," all under the "Custom" option in their profile's "Basic Information" section.


In 2015, the University of California adopted a new system for tracking students' gender-related identity.  Previously, all applicants had to do was indicate whether they were male or female.  But as of 2015, applicants are asked a series of gender-related questions:


Kinship Categories

People can be classified as male or female (etc.), but they can also be classified by their relationships to each other.

The nuclear family, celebrated by popular television shows of the 1950s such as Father Knows Best and The Adventures of Ozzie and Harriet, consists of four kinship categories:


Of course, there is also an extended family, consisting (depending on how far out it is extended) of additional kinship categories, both paternal (on the father's side) and maternal (on the mother's side):

None of this takes into account the kinship categories created by divorce and remarriage, such as:

Never mind the difficulties created by "foster" families.  We're talking only about blood relations -- relations determined by consanguinity -- here.

Again, given that we are talking about relations that are determined by shared blood, it would seem that kinship categories are natural, and are biologically defined:

As such, it would seem that everyone would share the same set of "natural" kinship categories.  But it turns out that this isn't true.  Nerlove and Romney (1967) found wide variance in the kinship categories employed by various cultures.  For example:

To take a particularly interesting example, Hopi sibling terminology has specific terms for:

This constellation of sibling terms makes sense in the context of Hopi culture (Eggan, 1950; Nerlove & Romney, 1967), and this reinforces the point that social categorization may be quite different from biological categorization, and that social categorization serves specifically social purposes.

 

Marital Status Categories

As a variant on kinship categories, we also classify people by their marital status.  

The big category here is married vs. single.

Within the "single" category, there are a number of subcategories, including:

In 2004, a controversy erupted over whether gays and lesbians should have the right to marry (in 2003 the Episcopal Church considered a proposal to solemnize unions between same-sex partners, and in 2004 Gavin Newsom, the Mayor of San Francisco, ordered the City Registrar to issue marriage licenses to same-sex couples), prompting President George W. Bush to call for an amendment to the US Constitution that would restrict marriage to a "union of a man with a woman" (in the language of one proposed amendment).  Other arrangements would be called civil unions or some such, but they would not necessarily have the same legal status as a marriage.  Setting aside discussion of the wisdom of this proposal, it seems to be an attempt to apply a classical, proper-set view of concepts to the concept of marriage.  That is, the union of "a man with a woman" would be the necessary and sufficient feature defining the concept of marriage.  But as one lesbian participant in the San Francisco same-sex-marriage marathon noted, "I live in the suburbs, I have two kids, and I drive an SUV -- why shouldn't I be able to get married?" (or words to this effect).  Clearly, she is defining marriage in terms of non-necessary features.  Perhaps she thinks of marriage as a fuzzy set, in which the union of "one man with one woman" is the prototype, but other kinds of marriages are possible. 

In 2015, the Supreme Court rendered this debate moot by deciding, by a 5-4 vote, that there is a civil right to same-sex marriage.

 

Age Categories

Age is another "natural", "biological" variable: we're born at age 0, and die some time later.  If we're lucky, we pass through infancy and childhood, puberty, adolescence, adulthood, and old age.  In strictly biological terms, more or less, infancy starts at birth, puberty marks the boundary between childhood and adolescence, and death marks the end of adulthood; where the boundary between adolescence and adulthood lies is uncertain, as even casual observation indicates.  

But where are the boundaries between infancy and childhood, and between adolescence and adulthood?  Although Brown identified age as a "natural category" of persons, it is also clear that, at least to some extent, even age categories are social conventions.

Moreover, even if certain age categories are biologically "natural", societies seem to invent subcategories within them.

 

Freud's Stages of Psychosexual Development

Freud divided childhood into a succession of five stages of psychosexual development:

For Freud, all instincts have their origins in some somatic irritation -- almost literally, some itch that must be scratched. In contrast to hunger, thirst, and fatigue, however, Freud thought that the arousal and gratification of the sexual instincts could focus on different portions of the body at different times. More specifically, he argued that the locus of the sexual instincts changed systematically throughout childhood, and stabilized in adolescence. This systematic change in the locus of the sexual instincts comprised the various stages of psychosexual development. According to this view, the child's progress through these stages was decisive for the development of personality.

Properly speaking, the first stage of psychosexual development is birth, the transition from fetus to neonate. Freud himself did not focus on this aspect of development, but we may fill in the picture by discussing the ideas of one of his colleagues, Otto Rank (1884-1939).

Rank believed that birth trauma was the most important psychological development in the life history of the individual. He argued that the fetus, in utero, gets primary pleasure -- immediate gratification of its needs. Immediately upon leaving the womb, however, the newborn experiences tension for the first time. There is, first, the over-stimulation of the environment. More important, there are the small deprivations that accompany waiting to be fed. In Rank's view, birth trauma created a reservoir of anxiety that was released throughout life. All later gratifications recapitulated those received during the nine months of gestation. By the same token, all later deprivations recapitulated the birth trauma.

Freud disagreed with the specifics of Rank's views, but he agreed that birth was important. At birth, the individual is thrust, unprotected, into a new world. Later psychological development was a function of the new demands placed on the infant and child by that world.

From birth until about one year of age, the child is in the oral stage of psychosexual development. The newborn child starts out as all id, and no ego. He or she experiences only pleasure and pain. With feeding, the infant must begin to respond to the demands of the external world -- what it provides, and the schedule on which it does so. Initially, Freud thought, instinct-gratification was centered on the mouth: the child's chief activity is sucking on breast or bottle. This activity has obvious nutritive value: it is the way the child copes with hunger and thirst. But, Freud held, it also has sexual value because the child takes pleasure in sucking; and it has destructive value because the child can express aggression by biting.

Freud pointed out that the very young child needs his or her mother (or some reasonable substitute) for gratification. Her absence leads to frustration of instinctual needs, and the development of anxiety. Accordingly, the legacy of the oral stage is separation anxiety and feelings of dependency.

After the first year, Freud held, the child moves into the anal stage of development. The central event of the anal stage is toilet training. Here the child has his or her first experience with the external regulation of impulses: the environment teaches him or her to delay urination or defecation until an appropriate time and place. Thus, the child must postpone the pleasure that comes from relieving tension in the bladder and rectum. Freud believed that the child in the anal stage acquired power by virtue of giving and retaining. Through this stage of development, the child also acquires a sense of loss, as well as a sense of self-control.

The years from three to five, in Freud's view, were taken up with the phallic stage. In this case, there is a preoccupation with sexual pleasure derived from the genital areas. It is at about this time that the child begins to develop sexual curiosity, exhibits its genitalia to others, and begins to masturbate. There is also an intensification of interest in the parent of the opposite sex. The phallic stage revolves around the resolution of the Oedipus Complex, named for the legendary Greek king of Thebes who killed his father and married his mother, and brought disaster to his country. In the Oedipus complex, there is a sexual cathexis toward the parent of the opposite sex, and an aggressive cathexis toward the parent of the same sex.

The beginnings of the Oedipus Complex are the same for boys and girls. Both initially love the mother, simply because she is the child's primary caretaker -- the one most frequently responsible for taking care of the child's needs. In the same way, both initially hate the father, because he competes with the child for the mother's attention and love. Thereafter, however, the progress and resolution of the Oedipus complex takes a different form in the two sexes.

The male shows the classic pattern known as the Oedipus Complex. The boy is already jealous of the father, for the reasons noted earlier. However, this emotion is coupled with castration anxiety: the child of this age is frequently engaged in autoerotic activities of various sorts, which are punished when noticed by the parents. A frequent threat on the part of parents is that the penis will be removed -- a threat reinforced, in Freud's view, by the boy's observation that the girls and women around him do not, in fact, have penises. As the boy's love for his mother intensifies into incestuous desire, the risk is correspondingly increased that he will be harmed by his father. However, the father appears overwhelmingly powerful to the child, and thus must be appeased. Accordingly, the child represses his hostility and fear, and through reaction formation turns them into expressions of love. Similarly, the mother must be given up, and the boy's sexual longing for her repressed. The final solution, Freud argued, is identification with the father. By making his father an ally instead of an enemy, the boy can obtain, through his father, vicarious satisfaction of his desire for his mother.

Girls show a rather different pattern, technically known as the Electra Complex, after the Greek heroine who avenged her father's death. The Electra Complex in girls is not, as some might think, the mirror-image of the Oedipus Complex in boys. The young girl has the usual feelings of love toward her mother as caretaker, Freud believed, but harbors no special feelings toward her father. Girls, Freud noted, were not typically punished for autoerotic activity -- perhaps because they did not engage in it as often, perhaps simply because it is less obvious. Eventually, Freud believed, the girl discovers that she lacks the external genitalia of the boy. This leads to feelings of disappointment and castration that are collectively known as penis envy. She blames her mother for her fate, and envies her father because he possesses what she does not have. Thus the sexual cathexis for the mother is weakened, while the one for the father is simultaneously strengthened. The result is that the girl loves her father, but feels hatred and jealousy for her mother. The girl seeks a penis from her father, and sees a baby as a symbolic substitute. In contrast to the situation in boys, girls do not have a clear-cut resolution to the Electra Complex. For them, castration is not a threat but a fact. Eventually, the girl identifies with her mother in order to obtain vicarious satisfaction of her love for her father.

It should now be clear why Freud named this the "phallic" stage, when only one of the sexes has a phallus. In different ways, he argued, children of both sexes were interested in the penis. The first legacy of the phallic stage, for both sexes, is the development of the superego. The child internalizes social prohibitions against certain sexual object-choices, and also internalizes his or her parents' system of rewards and punishments. (Because girls are immune to the threat of castration, Freud thought, women had inherently weaker consciences than men.) The second legacy, of course, is psychosexual identification. The boy identifies with his father, the girl with her mother. In either case, the child takes on the characteristic role and personality of the parent of the same sex.

The phallic stage is followed by the latency period, extending approximately from five to eleven years of age. In this interval, Freud thought that the sexual instincts temporarily subsided. In part, this was simply because there is a slowing of the rate of physical growth. A more important factor in this state of affairs, however, is the set of defenses brought to bear on the sexual instincts during and after the resolution of the Oedipus Complex. During this time, however, the child is not truly inactive. On the contrary, the child is actively learning about the world, society, and his or her peers. 

Finally, with the onset of puberty at about age 12, the child enters the genital stage. This stage continues the focus on socialization begun in the latency period. The coming of sexual maturity reawakens the sexual instincts, which had been dormant throughout the latency period. However, the sexual instincts show a shift away from primary narcissism, in which the child takes pleasure in stimulating his or her own body, to secondary narcissism, in which the child takes pleasure in identifying with his or her ego-ideal. Thus, sexuality itself undergoes a shift from an orientation toward pleasure to one oriented toward reproduction, in which pleasure is secondary. The adolescent's attraction to the opposite sex is, for the first time, coupled with ideas about romance, marriage, and children. When the adolescent (or adult) becomes sexually active, events in the earlier stages will influence the nature of his or her genital sexuality -- for example, in those body parts which are sexually arousing, and in preferences for foreplay.

 

Erik Erikson's Eight Ages of Man

Erik Erikson was the most prominent of Freud's disciples in the period after World War II (in fact, he was psychoanalyzed by Anna Freud) -- and, after Freud himself, perhaps the psychoanalyst who has had the most impact on popular culture. Erikson focused his attention on the issue of ego identity, which he defined as the person's awareness of him- or herself, and of his or her impact on other people. Interestingly, this was an issue for Erikson personally (for a definitive biography of Erikson, see Coles, 1970; for an autobiographical statement, see Erikson, 1970, reprinted 1975).


Erikson has described himself as a "man of the border". He was a Dane living in Germany, the son of a Jewish mother and a Protestant father, both Danes. Later his mother remarried, giving Erikson a German Jewish stepfather. Blond, blue-eyed, and tall, he experienced the pervasive feeling that he did not belong to his family, and entertained the fantasy that his origins were quite different from what his mother and her husband led him to believe. A similar problem afflicted him outside his family: the adults in his parents' synagogue referred to him as a gentile, while his schoolmates called him a Jew. Erikson's adoptive name was Erik Homburger. Later he changed it to Erik Homburger Erikson, and still later just Erik Erikson -- assuming a name that, taken literally, meant that he had created himself.

Erikson agreed with the other neo-Freudians that the primary issues in personality are social rather than biological, and he de-emphasized the role of sexuality. His chief contribution was to expand the notion of psychological development, considering the possibility of further stages beyond the genital stage of adolescence. At the same time, he gave a social reinterpretation to the original Freudian stages, so that his theory is properly considered one of psychosocial rather than of psychosexual development.

Erikson's developmental theory is well captured in the phrase, "the eight ages of man". His is an epigenetic conception of development similar to Freud's, in which the individual must progress through a series of stages in order to achieve a fully developed personality. At each stage, the person must meet and resolve a particular crisis. In so doing, the individual develops particular ego qualities; these are outlined in Erikson's most important book, Childhood and Society (1950), and in Identity: Youth and Crisis (1968). In Insight and Responsibility (1964), he argued that each of these strengths was associated with a corresponding virtue or ego strength. Finally, in Toys and Reasons (1976), Erikson argued that a particular ritualization, or pattern of social interaction, develops alongside the qualities and virtues. Although Erikson's theory emphasizes the development of positive qualities, negative attributes can also be acquired. Thus, each of the eight positive ego qualities has its negative counterpart. Both must be incorporated into personality in order for the person to interact effectively with others -- although, in healthy development, the positive qualities will outweigh the negative ones. Similarly, each positive ritualization that enables us to get along with other people has its negative counterpart in the ritualisms that separate us from them. Development at each stage builds on the others, so that successful progress through the sequence provides a stable base for subsequent development. Personality development continues throughout life, and ends only at death.

Stage 1: Trust, mistrust, and hope. The oral-sensory stage of development covers the first year of life. In this stage the infant hungers for nourishment and stimulation, and develops the ability to recognize objects in the environment. He or she interacts with the world primarily by sucking, biting, and grasping. The developmental crisis is between trust and mistrust. The child must learn to trust that his or her needs will be satisfied frequently enough. Other people, for their part, must learn to trust that the child will cope with his or her impulses, and not make their lives as caregivers too difficult. By the same token, if others do not reliably satisfy the child's needs, or make promises that they do not keep, the child acquires a sense of mistrust. As noted earlier, both trust and mistrust develop in every individual -- though in healthy individuals, the former outweighs the latter.

Out of the strength of trust the child develops the virtue of hope: "the enduring belief in the attainability of fervent wishes, in spite of the dark urges and rages which mark the beginning of existence". The basis for hope lies in the infant's experience of an environment that has, more than not, provided for his or her needs in the past. As a result, the child comes to expect that the environment will continue to provide for these needs in the future. Occasional disappointments will not destroy hope, provided that the child has developed a sense of basic trust.

An important feature of social interaction during this period is the ritualization of greeting, providing, and parting. The child cries: the parents come into the room, call its name, nurse it or change it, make funny noises, say goodbye, and leave -- only to return in the same manner, more or less, the next time the situation warrants. Parent and child engage in a process of mutual recognition and affirmation. Erikson calls this ritualization numinous, meaning that children experience their parents as awesome and hallowed individuals. This can be distorted, however, into idolism in which the child constructs an illusory perception of his or her parents as perfect. In this case, reverence is transformed into adoration.

Stage 2: Autonomy, Shame, Doubt, and Will. The muscular-anal stage covers the second and third years of life. Here the child learns to walk, to talk, to dress and feed him- or herself, and to control the elimination of body wastes. The crisis at this stage is between autonomy and shame or doubt. The child must learn to rely on his or her own abilities, and deal with times when his or her efforts are ineffectual or criticized. There will of course be times, especially early in this period, when the child's attempts at self-control will fail -- he will wet his pants, or fall; she will spill her milk, or put on mismatched socks. If the parents ridicule the child, or take over these functions for him or her, then the child will develop feelings of shame concerning his or her efforts, and doubt that he or she can take care of him- or herself.

If things go well, the child develops the virtue of will: the unbroken determination to exercise free choice as well as self- restraint, in spite of the unavoidable experience of shame and doubt in infancy. As will develops, so does the ability to make choices and decisions. Occasional failures and misjudgments will not destroy will, so long as the child has acquired a basic sense of autonomy.

The ritualization that develops at this time is a sense of the judicious, as the child learns what is acceptable and what is not, and also gets a sense of the rules by which right and wrong are determined. The hazard, of course, is that the child will develop a sense of legalism, in which the letter of the law is celebrated over its spirit, and the law is used to justify the exploitation and manipulation of others.

Stage 3: Initiative, Guilt, and Purpose. The locomotor-genital stage covers the remaining years until about the sixth birthday. During this time the child begins to move about, to find his or her place in groups of peers and adults, and to approach desired objects. The crisis is between initiative and guilt. The child must approach what is desirable, at the same time that he or she must deal with the contradictions between personal desires and environmental restrictions.

The development of initiative leads to the virtue of purpose: the courage to envisage and pursue valued goals uninhibited by the defeat of infantile fantasies, by guilt and by the foiling fear of punishment.

Stage 4: Industry, Inferiority, and Competence. The latency stage begins with schooling and continues until puberty, or roughly 6 to 11 years of age. Here the child makes the transition to school life, and begins to learn about the world outside the home. The crisis is between industry and inferiority. The child must learn and practice adult roles, but in so doing he or she may learn that he or she cannot control the things of the real world. Industry permits the development of competence, the free exercise of manual dexterity and cognitive intelligence.

Stage 5: Identity, Role Confusion, and Fidelity. The stage of puberty-adolescence covers ages 11-18. Biologically, this stage is characterized by another spurt of physiological growth, as well as sexual maturity. Socially, the features of adolescence are involvement with cliques and crowds, and the experience of adolescent love. The crisis is between identity and role confusion. The successful adolescent understands that the past has prepared him or her for the future. If not, he or she will not be able to differentiate him- or herself from others, or find his or her place in the world. Identity, a clear sense of one's self and one's place in the world, forms the basis for fidelity, the ability to sustain loyalty to another person.

Stage 6: Intimacy, Isolation, and Love. Erikson marks the stage of young adulthood as encompassing the years from 18 to 30. During this time, the person leaves school for the outside world of work and marriage. The crisis is between intimacy and isolation. The person must be able to share him- or herself in an intense, long-term, committed relationship; but some individuals avoid this kind of sharing because of the threat of ego loss. Intimacy permits love, or mutuality of devotion.

Stage 7: Generativity, Stagnation, and Care. The next 20 years or so, approximately 30 to 50 years of age, are called the stage of adulthood. Here the individual invests in the future at work and at home. The crisis is between generativity and stagnation. The adult must establish and guide the next generation, whether this is represented in terms of children, students, or apprentices. But this cannot be done if the person is concerned only with his or her personal needs and comfort. Generativity leads to the virtue of care, the individual's widening concern for what has been generated by love, necessity, or accident.

Stage 8: Ego Integrity, Despair, and Wisdom. The final stage, beginning at about 50, is that of maturity. Here, for the first time, death enters the individual's thoughts on a daily basis. The crisis is between ego integrity and despair. Ideally, the person will approach death with a strong sense of self, and of the value of his or her past life. Feelings of dissatisfaction are especially destructive because it is too late to start over again. The resulting virtue is wisdom, a detached concern for life itself.

Stage 9: Despair, Hope, and Transcendence?  As he (actually, his wife and collaborator, Joan Erikson) entered his (her) 9th decade, Erikson (in The Life Cycle Completed, 1998) postulated a ninth stage, in which the developments of the previous eight stages come together at the end of life. In this stage of very old age, beginning in the late 80s, the crisis is despair vs. hope and faith, as the person confronts a failing body and mind.  If the previous stages have been successfully resolved, he will be able to transcend these inevitable infirmities. 

Observational studies have provided some evidence for this ninth stage, but Erikson's original "eight-stage" view remains the classic theory of personality and social development across the life cycle.

Shakespeare's Seven Ages of Man

Erikson's account of the Eight Ages of Man is a play on the Seven Ages of Man, described by Shakespeare in As You Like It:

All the world's a stage,
And all the men and women merely players,
They have their exits and their entrances,
And one man in his time plays many parts,
His acts being seven ages. At first the infant,
Mewling and puking in the nurse's arms.
Then, the whining schoolboy with his satchel
And shining morning face, creeping like snail
Unwillingly to school. And then the lover,
Sighing like furnace, with a woeful ballad
Made to his mistress' eyebrow. Then a soldier,
Full of strange oaths, and bearded like the pard,
Jealous in honour, sudden, and quick in quarrel,
Seeking the bubble reputation
Even in the cannon's mouth. And then the justice
In fair round belly, with good capon lin'd,
With eyes severe, and beard of formal cut,
Full of wise saws, and modern instances,
And so he plays his part. The sixth age shifts
Into the lean and slipper'd pantaloon,
With spectacles on nose, and pouch on side,
His youthful hose well sav'd, a world too wide,
For his shrunk shank, and his big manly voice,
Turning again towards childish treble, pipes
And whistles in his sound. Last scene of all,
That ends this strange eventful history,
Is second childishness and mere oblivion,
Sans teeth, sans eyes, sans taste, sans everything.

 

Life-Span Theory Since Erikson

Erikson's theory was extremely influential. By insisting that development is a continuous, ceaseless process, he fostered the new discipline of life-span developmental psychology, with its emphasis on personality and cognitive development in adulthood and beyond. Much life-span work has been concerned with cognitive changes in the elderly, but personality psychologists have been especially concerned with the years between childhood and old age.

Erikson's stages inspired a number of popular treatments of "life span" personality development, including Roger Gould's (1978) identification of periods of transformation, Gail Sheehy's Passages (1976) and New Passages (1995), and Daniel Levinson's The Seasons of a Man's Life (1978).   





These and other schemes are all, to a greater or lesser extent, social conventions superimposed on the biological reality that we're born, age, and die.  They are social categories that organize a continuum of age.

Piaget's Stages of Cognitive Development

The Swiss developmental psychologist Jean Piaget marked four stages of cognitive development:

  • Sensory-Motor Period
  • Preoperational Period
  • Concrete Operations (corresponding, roughly, to arithmetic)
  • Formal Operations (corresponding, roughly, to algebra)

Some "neo-Piagetian theorists, such as Michael Commons, have argued that there are even higher stages in the Piagetian scheme (presumably corresponding, roughly, to calculus and other higher mathematics)

The late C.N. Alexander even argued that the Science of Creative Intelligence announced by the Maharishi Mahesh Yogi as an offshoot of his Transcendental Meditation program promoted cognitive development beyond the Piagetian stages.

Piaget's theory was very influential among psychologists and educators (though it also proved controversial).  But Piaget's stages never entered popular parlance, the way Freud's and even Erikson's did, so it would not seem appropriate to include them as social categories.

 

"Generations"

Moving from the individual life cycle to social history: In 1951, Time magazine coined the term "Silent Generation" to describe those born from 1923-1933.  The term "generation", as a demographic category referring to people who were born, or lived, during a particular historical epoch, gained currency with the announcement of the Baby Boom (1946-1964) by the Census Bureau. 



Following these examples, a number of different generations have been identified by the authors William Strauss and Neil Howe (1991, 1997).  Other "generations" include Generation X, the Millennials (Generation Y), and, most recently, Generation Z.

As with most social categories, the boundaries between generations are somewhat fuzzy.  For example, the US Census Bureau classified Americans born between 1946 (the year after the end of World War II) and 1964 as constituting the "Baby Boom", while Strauss and Howe argue that the Baby Boom actually began in 1943, and lasted only until 1960.

 

 

As with any other social categorization, generational categories can be a source of group conflict.  For example, the 2008 race for the presidency pitted John McCain, a member of the Silent Generation, against Barack Obama, a member of Generation X (at least by Strauss and Howe's reckoning -- born in 1961, Obama falls within the Census Bureau's Baby Boom years), who had won the Democratic nomination over Hillary Rodham Clinton, a member of the Baby Boom Generation.

 

 

These examples are drawn from American culture, but generational categories can be found in other cultures, too.  Consider the terms used to characterize successive generations of the Japanese diaspora (Nikkei): Issei for the immigrant generation, Nisei for their children, the first generation born in the new country, Sansei for the third generation, Yonsei for the fourth, and so on.



There is a similar classification for American Chinese:

Artist June Yee explores these stereotypes in her piece, Two Chinese Worlds in California, on display in the Gallery of California History at the Oakland Museum of California (2010).

"I was surprised at how much misunderstanding there was.  They called us FOBs, for fresh off the boat, and they were ABCs, American-born Chinese.  Ironically, we did not fit into each other's stereotype, even though we were all Chinese.  We weren't aware of the anti-Chinese sentiment they had endured for years.  And they didn't understand our feelings about Mao, who in the '60s was a hero for many ABCs who joined the student protests.  I remember being appalled by ABCs who embraced Mao's Little Red Book" (OMCA Inside Out, Spring 2010).

In South Africa, young people born since the end of apartheid in 1994 are called the "Born Free" generation (they cast their first votes in a presidential election in 2014).

Although the concept of "generations" may be familiar in popular culture, its scientific status is suspect (see critiques by Bobby Duffy in The Generation Myth and Gen Z, Explained by Roberta Katz, Sarah Olgivie, Jane Shaw, and Linda Woodhead, both reviewed by Louis Menand in "Generation Overload", New Yorker, 10/18/2021).    Menand points out that the pop-culture concept of a "generation" differs radically from the biblical span of 30 years -- which is also the heuristic employed by reproductive biologists.  The current popular concept of "generations" had its origins in 19th-century efforts to understand cultural change: Karl Mannheim, an early sociologist, introduced the term "generation units" to refer to elites to deliberately embraced new ways of thinking and acting.  Charles Reich, a Yale legal scholar, revived the concept in The Greening of America (1970), based on his observations of the young people in San Francisco during the Summer of Love (1967).  new "generation" of .  Social scientists who embrace the idea of "generations" differ in terms of whether generations are cause or effect of socio-historical change.  In the "pulse" hypothesis, each generation introduces new ways of thinking; in the "imprint" hypothesis, each generation is affected by the historical events that they lived through, like World Wars I and II, the Depression, Vietnam, the civil rights movement, the September 11 terror attacks, etc.

These days, though, the concept of "generations" is mostly a marketing ploy.  The boundaries between "generations" are fuzzy: as noted earlier, Barack Obama is technically a Baby Boomer.  And as with the fraught issue of racial differences, it turns out that variability within generations is greater than variability between generations.  Menand points out, for example, that the salient figures of the "generational revolt" of the 1960s and 1970s -- Gloria Steinem (feminism), Tom Hayden (Vietnam War protests), Abbie Hoffman (hippies), Martin Luther King (civil rights), and many, many others -- were all members of the putative "Silent Generation"!  Timothy Leary, Allen Ginsberg, and Pauli Murray (look her up and be amazed, as I was, that you never heard of her) were even older.  A poll of young people taken in 1969 found that most had not smoked marijuana, and most supported the Vietnam War.  The same point about within-generation variability applies to Generation Z. 

So, as with race, "generations" are definitely social constructions.  But as with all social constructions, perception is reality -- perceived reality, that is, and from a cognitive social-psychological perspective that is pretty much all that counts.  If you believe that you're part of a generation, you're likely to behave like part of that generation; and if you believe others are part of a generation, you're likely to treat them as if they really were.


Occupation Categories

Sociologists (especially) have devoted a great deal of energy to measuring socioeconomic status.  In these schemes, information about occupation, education, and income is used to classify individuals or families into categories -- traditionally, upper, middle, and working (or lower) class.

In addition, sociologists and other social scientists make use of other categorical distinctions based on occupation, such as:

All these terms have entered common parlance: they're not just technical terms used in formal social science.

In contrast to earlier classification schemes, there is nothing "biological" about these categories, which wouldn't exist at all except in societies at a certain level of economic development.  In feudal economies, for example, there was a distinction between serf and master that simply doesn't exist in industrial economies.  

As the serf-master distinction indicates, classification by socioeconomic status evolves as societies evolve. In England, for example, the traditional class distinction was the tripartite one described above: upper, middle, and working classes (as viewers of Downton Abbey understand, the English upper class prided itself on the fact that it did not work for a living). In 2013, the British Broadcasting Corporation published the results of "The Great British Class Survey", which it had commissioned, revealing that British society now includes at least seven distinct social classes.

As described in the BBC press release:

The Bureau of Labor Statistics, in the U.S. Department of Labor, maintains its own Standard Occupational Classification.  The version described here took effect in 2010; as of 2013, a new classification scheme was under review.

These are clearly social categories -- but are they any less natural than biological categories, just for being social rather than biological in nature?


Caste in Hindu India...

A unique set of social categories is found in the caste system of Hindu India.  Although the system itself is a product of the Vedic age (roughly 1500 to 600 BCE), the term "caste" (from the Portuguese casta) was first applied to it by 16th-century Portuguese explorers.  


Traditionally, Indian society was divided into four varnas (Sanskrit for "class" or "color"):

  • Brahmins: priests and scholars;
  • Ksatriyas: rulers and warriors;
  • Vaisyas: merchants, traders, and farmers;
  • Sudras: artisans, laborers, servants, and slaves.

Below these four groups are the Panchamas ("fifth class"), popularly known as "untouchables".  Mahatma Gandhi labeled these individuals as Harijans, or "children of God".  Untouchability was outlawed in 1949, though -- as in the American "Jim Crow" South -- prejudice against them remained strong.  As an outgrowth of social protest in the 1970s, the untouchables began to view the Harijan label as patronizing, and to identify themselves as Dalit, or "oppressed ones". 

Membership in a caste is largely hereditary, based on ritual purity (the panchamas are untouchable because they are considered to be polluting), and maintained by endogamy.  So long as one follows the rules and rituals (Dharma) of the caste into which he or she is born, a person will remain in his or her caste.  However, one can lose one's caste identity -- become an outcast, as it were -- by committing various offenses against ritual purity, such as violations of dietary taboos or rules of bodily hygiene; and one can move "up" in the caste system by adopting certain practices, such as vegetarianism -- a process known as "Sanskritization".  One can also regain his or her original caste status by undergoing certain purification rites.  Movement "upwards" from untouchability is not possible, however -- though in recent years the Indian government has created "affirmative action" programs to benefit untouchables.

Caste is not exactly a matter of socioeconomic status: there can be poor Brahmins (especially among the scholars!).  Parallel to varna is a system of social groupings known as Jati, based on ethnicity and occupation. 

Although the caste system has its origins in Hindu culture, Indian Muslims, Sikhs, and Christians also follow caste distinctions.  For example, Indian Muslims distinguish between ashraf (Arab Muslims) and non-ashraf (such as converts from Hinduism).

The caste system has been formally outlawed in India, but remnants of it persist, as for example in the identification of a broad class of largely rural "daily-wages people", which is more a matter of social identity than economics.  

In 1993, the Indian government instituted a sort of affirmative-action program, guaranteeing 27% of jobs in the central and state governments, and of college admissions, to members of an official list of "backward classes" -- of which there are more than 200, taking into account various subcastes and other caste-like groupings -- not just the Dalits.  

Sometimes members of the "backward classes" take matters into their own hands.  An article in the Wall Street Journal about India's affirmative-action program told the story of Mohammad Rafiq Gazi, a Muslim from West Bengal, whose family name was Chowduli -- roughly, the Bengali equivalent of the "N-word".  He legally changed his last name to the higher-caste Gazi in order to escape the stereotypes and social stigma associated with his family name.  But when India initiated its affirmative-action program, individuals with the higher-caste surname of Gazi were not eligible, so he sought to change his name back to the low-caste Chowduli ("For India's Lowest Castes, Path Forward is 'Backward'" by Geeta Anand and Amol Sharma, 12/09/2011).

Actually, India has now extended its affirmative action program beyond the untouchables, leading to the spectacle of some higher castes seeking recognition under the category of "other backward classes".  In August 2015, for example, the Patidar caste of Gujarat, traditionally farmers, who have begun to achieve middle class levels of income and education, petitioned for the "reservations" granted to the "other backward classes".  The same move was made by the Jats, another farming caste in Haryana, in February 2016.

For insights into the Hindu caste system in modern India, see "A 'Life of Contradictions'" by Gyan Prakash (New York Review of Books, 06/20/2024), reviewing books exploring the life and works of B.R. Ambedkar, a Dalit political leader who fought for the rights of Dalits and against the caste system in general, and who was also the principal drafter of the Indian Constitution after India achieved independence in 1947.  A sometime associate of Mahatma Gandhi, Ambedkar broke with him: Gandhi supported the expansion of rights for Dalits but opposed dismantling the caste system itself, on the grounds that it was an essential element of Hindu culture, and that dismantling it would threaten the unity of Hindus against the British (and Indian Muslims).  (The Hindu nationalist BJP party, which (as of 2024) governs India, supports enhanced rights for Dalits, but otherwise seeks to preserve the varna system as an essential element of Hindu culture.)

 

...and in Japan...

Feudal Japan had a class of outcasts, known as the eta or burakumin, who were indistinguishable, in ethnic terms, from any other native Japanese.  Like the "untouchables" of India, the burakumin performed various tasks that were deemed impure by classical Buddhism, such as slaughtering animals and handling corpses.  They wore distinctive clothing and lived in segregated areas.  Although officially liberated in 1871, they lagged behind other Japanese in terms of education and socioeconomic status.  From 1969 to 2002, they were the subjects of affirmative-action programs designed to diminish historic inequalities.  But despite substantial gains, the descendants of the burakumin still live largely in segregated areas, and are objects of continuing, if more subtle, discrimination.  For example, by 2001 Hiromu Nonaka, a burakumin politician, had achieved the #2 position in Japan's ruling Liberal Democratic Party, but was not able to make the leap to the #1 position, and the post of prime minister.  As Taro Aso, a senior LDP politician (and later prime minister), reportedly said at a private meeting, "Are we really going to let those people take over the leadership of Japan?"  [See "Japan's Outcasts Still Wait for Society's Embrace" by Norimitsu Onishi, New York Times, 01/16/2009.] 

 

...and perhaps in America.

Some social theorists argue that the United States, too, has a caste system, otherwise known as white supremacy.  That is, American society effectively relegates Blacks and other persons of color to a permanent "one-down" position in the social hierarchy, in which persons of color are not valued as much as whites are.   As Isabel Wilkerson writes in Caste: The Origins of Our Discontents (2020; reviewed by Sunil Khilnani in "Top Down", New Yorker, 08/17/2020), "Caste is insidious and therefore powerful because it is not hatred; it is not necessarily personal.  It is the worn grooves of comforting routines and unthinking expectations, patterns of a social order that have been in place for so long that it looks like the natural order of things".  In her book, Wilkerson mounts considerable evidence in support of her argument that racism is just a visible manifestation of a deeper, more subtle, system of social domination.  The argument is not new with her, however.  Allison Davis, an early Black anthropologist, made a similar argument in Deep South (1941), his study of racial and class relations in the post-Reconstruction "Jim Crow" South.  Gunnar Myrdal cited Davis's ideas about race and caste favorably in his own study of American race relations, An American Dilemma (1944).  Martin Luther King, whose doctrine of nonviolent civil disobedience owed much to Gandhi, told a story about how, when he was visiting India, he was introduced by a Dalit schoolmaster as "a fellow untouchable".  The story may be apocryphal, as Khilnani suggests, but Wilkerson argues convincingly that race relations in America -- and perhaps in the rest of the industrialized world as well -- closely resemble a caste system.  This remains true, Wilkerson argues, even considering the gains that Blacks have made since the successes of the Civil Rights Movement in the 1950s and 1960s.  In India, Dalits may attain higher education and professional status, by virtue of India's own version of affirmative action; but they're still Dalits.  The caste system in America, to the extent that it actually exists, is what "systemic racism" -- the focus of the Black Lives Matter movement, and distinct from personal racial prejudice -- is all about.

Wilkerson also argues that Nazi Germany practiced a caste system regulating the relations (if you could call them that) between "Aryans" (a false racial category) and Jews (not a racial category either).  It's an interesting argument -- but, as Khilnani points out, the Nazis killed Jews by the millions.  The goal of the Final Solution was elimination, not mere domination.  With the exception of the police killings that inspired the Black Lives Matter movement, he points out, American whites mostly just try to keep Blacks "in their place" -- for example, by failing to recognize the status of upper-class Blacks.

Political Categories

Similarly, political scientists (as well as other social scientists) slot people into categories based on their political affiliations.  In the United States, the most commonly used political categories are Democrat, Republican, and Independent.

The category "Progressive" is still used in certain states in the Upper Midwest, but not anywhere else.  The category "Communist" used to be (somewhat) popular, but pretty much disappeared after the fall of the Soviet Union in 1989 (actually, it died long before that).  The "Green" party label is emerging in some places.

Some Germans used to be Nazis, and from 1933 to 1945 they killed or incarcerated many Germans who used to be Communists; during the Cold War, there were lots of Communists in East Germany (though not so many in West Germany); and in the post-Cold War era, Germans come in three major political categories: Christian Democrats, Social Democrats, and Greens.

In addition, political science employs a number of alternative categorization schemes, which have also entered common parlance.  Some examples include:

Just to underscore how much of a social construction these categories are:


Political categories of very recent vintage include "Soccer Mom" and "NASCAR Dad".  Not to mention "TEA Party".

In the Democratic People's Republic of Korea, otherwise known as North Korea, there exists a peculiar combination of political and socioeconomic categories known as songbun, or class status (note the irony of class distinctions in an avowedly communist country).  There are, in fact, 51 songbun (note the irony again, especially if you missed it the first time), based on official judgments of loyalty to the state and to the ruling Kim family (as of 2017, North Korea was on its third generation of Kims -- another irony, if you want to note it).  These 51 categories are collected into 3 superordinate categories, essentially representing the "core" (about 25% of the population), the "wavering" (about 55%), and the "hostile" (about 20%).  Membership in the class is hereditary, as in the Hindu caste system of India, but you can be demoted from one songbun to a lower one.  Either way, class membership has all sorts of socioeconomic consequences -- for education, employment, and access to consumer goods and even to food.  For example, members of the "core" songbun are allowed to live in the (more or less) First-World conditions of central Pyongyang, the capital; other songbun are permitted to live only in the suburbs, where life is still tolerable; the rest are relegated to extremely impoverished rural areas (which, admittedly, are probably still better than one of the DPRK's notorious prison camps).  For details, see Marked for Life: Songbun, North Korea's Social Classification System by Robert Collins (2012), published by the Committee for Human Rights in North Korea.


Religious Categories

People are commonly classified by religion.  Indeed, religious classifications can be a major source of group conflict, as seen in the disputes among Muslims, Eastern Orthodox, and Catholics in the former Yugoslavia, or the disputes between Hindus and Muslims in India and Pakistan.  

The obvious form of religious classification is by religion itself -- Jewish, Christian, Muslim, Buddhist, Hindu, etc. 

But at an even higher level than that is a classification based on the number of gods worshiped in a religion: monotheistic religions recognize a single god, polytheistic religions recognize many, and nontheistic traditions recognize none at all.

Within many religions, there is a hierarchy of subcategories. 

There is also a new category of "Spiritual but not Religious", preferred by many Americans who do not affiliate with any institutional church.


Nationality Categories

We also classify people by their national origin.

In some sense, national origin is a matter of geography: the English Channel divides the British Isles from the Continent; the Alps divide Northern and Southern Europe; the Danube divides Western and Eastern Europe; the Mediterranean divides Europe from Africa; the Bosporus, the Black Sea, and the Caucasus Mountains divide Europe from Asia; and so on.  South Asia sits on a separate tectonic plate from Asia proper.  But again, we can see social concepts imposed on the map of the world.

Nationality categories also change with historical and political developments.  For example, with the formation and consolidation of the European Union, many citizens of European countries have begun to identify themselves as "European" as well as Dutch, Italian, etc.  Based on the Eurobarometer survey, Lutz et al. (Science, 2006) reported that 58% of Europeans above 18 reported some degree of "multiple identity" (actually, a dual identity), as against 42% who identified themselves only in terms of their nationality.  The percentages were highest in Luxembourg, Italy, and France (despite the French rejection of the proposed European constitution in 2005), and lowest in Sweden, Finland, and the United Kingdom (which maintains its national currency instead of adopting the Euro).   Perhaps not surprisingly, younger respondents were more likely to report a multiple national identity than older respondents.

The Israeli-Palestinian conflict is an interesting case in point (see Side by Side: Parallel Narratives of Israel-Palestine by Sami Adwan, Dan Bar-On, and Eyal Naveh, 2012; see also the review by Geoffrey Wheatcroft, "Can They Ever Make a Deal?", New York Review of Books, 04/05/2012).   Yasser Arafat, president of the Palestinian National Authority, and his successor, Mahmoud Abbas, agitated for a Palestinian state separate from both Israel and Jordan; on the other hand, Golda Meir (1969), the former Israeli prime minister, denied that there was such a thing as a Palestinian people, and Newt Gingrich (2012), the former US presidential candidate, called the Palestinians "an invented people".  Which raises a question: What does it mean to be a Palestinian -- or an Israeli, for that matter?  Let's stick with the Palestinian case for illustration.  It turns out that national consciousness -- one's identity as a citizen of a particular nation -- is a relatively recent cultural invention.  Before the 1920s, Arabs in Palestine -- whether Muslim or Christian -- considered themselves part of the Ottoman Empire, or perhaps part of a greater Arab nation, but apparently not as Palestinians as such.  In fact, it has been argued that the Palestinian identity was created beginning in the 1920s in response to Zionism -- an identity which was itself an invention of the 1890s, before which Jewish tradition did not include either political Zionism or the idea of a Jewish state.  It's one thing to be Jewish (or Palestinian) as a people; it's quite another to be citizens of a Jewish or Palestinian (or greater Arab) nation.  And -- just so I'm not misunderstood here -- Israelis and Palestinians are by no means unique in this regard.

These two aspects of identity -- identity as a people and identity as a nation -- are not the same thing.  But at the Versailles Conference that followed World War I, Woodrow Wilson championed the idea that every people should get its own nation -- this is what is known as self-determination, as opposed to the imperial and colonial systems (including those of Britain, France, and Belgium) which had existed prior to that time.  On the other hand, Walter Lippmann argued that self-determination was not self-evidently a good thing, because it "rejects... the ideal of a state within which diverse peoples find justice and liberty under equal laws".  Lippmann predicted that the idea of self-determination would lead to mutual hatred -- the kind of thing that boiled up in the former Yugoslavia in the late 20th century.

The question of national identity can become very vexed, especially as nation-states arose in the 18th century, and again in the 20th century with the breakup of the Austro-Hungarian and Ottoman empires.  In contrast to non-national states, where the state was identified with some sort of monarch (a king or a queen, an emperor, or a sultan), who ruled over a large and usually multi-ethnic political entity (think of the Austro-Hungarian Empire, or the Ottoman Empire), nation-states are characterized by a loyalty to a particular piece of territory, defined by natural borders or the settlement of a national group, common descent, common language, shared culture promulgated in state-supported public schools -- and, sometimes, the suppression of "non-national" elements.  Think of England, France, and Germany.

But immigration, globalization, and other trends can challenge this national identity, raising the question of exactly what it means to be a citizen of a nation-state.  It turns out that belonging to the group is not precisely a matter of citizenship.

As Henry Gee notes, 

"the abiding horror [of the July 7, 2005 suicide attacks on the London Underground] is that the bombers were not foreign insurgents -- Them -- but were British, born and raised; in Margaret Thatcher's defining phrase, One of Us" (in "Tricky, Turbulent, Tribal", Scientific American 12/05).

Traditionally, the United States has portrayed itself as a great melting pot, in which immigrants from a wide variety of nations, religions, and ethnicities blended in to become a homogeneous group of -- well, Americans.  But after World War II, with the rise of the Black civil rights movement and increasing Hispanic-Latino and Asian immigration, the American self-image has shifted from that of a melting pot to that of a stew (Jesse Jackson's famous image), or a gumbo, in which the various ingredients combine and influence each other, making something delicious while each maintains its original character.

Other societies have not favored the melting-pot image, striving to maintain ethnic homogeneity and resisting immigration.  A case in point is Belgium, a country which includes both the Dutch-speaking Flemish (in Flanders, to the north) and the French-speaking Walloons (in Wallonia, to the south); the conflicts between the two have made for highly unstable governments, and for increasing discussion of the possibility that the country will, in fact, break up -- much as happened in the former Czechoslovakia and the former Yugoslavia.  The irony is that Brussels, seat of the European Union, while nominally bilingual, is for all practical purposes Francophone -- and it's surrounded by Dutch-speaking Flanders.  So breaking up isn't going to be easy.  

Yet other societies have fostered immigration, but have held to the melting-pot image, despite the desire of new immigrants to retain their ethnic identities -- creating the conditions for cultural conflict.  Ironically, the potential for conflict has been exacerbated by those societies' failure to make good on the promise of integrating new immigrants.

A case in point is the rioting that broke out in some Arab immigrant communities in France in 2005, and the more recent dispute over the desire of some observant Muslim Frenchwomen to wear the headscarf, or hijab, as an expression of modesty, of their religious heritage, or, perhaps, simply of their identity.   

 


As part of the heritage of the French Revolution, which abolished the aristocratic system and made all Frenchmen simple "citizens", the French Constitution guarantees equality to all -- so much so that, until recently, anyone born in any territory ever held by France was eligible to become President -- including Bill and Hillary Rodham Clinton, born, respectively, in Arkansas and Illinois (in fact, the law was changed when the French realized that Bill Clinton was eligible).  France identifies itself as a "universalist" republic that draws no official distinctions among its "equal" citizens.  Unlike the United States, where terms like "African-American", "Asian-American", and "Mexican-American" have become familiar, there are no such "hyphenated" categories in France, and the French census has no provision for identifying the race, ethnicity, national origin, or religion of those who respond to it.  So the government has no idea how many of its citizens are immigrants, or from where.  And, officially, it doesn't care.  Everybody's French, and all French are alike.  In theory, anyway, and in law (see "Can You Really Become French?" by Robert O. Paxton, New York Review of Books, 04/09/2009).

Then again:

But it has become painfully clear that (paraphrasing George Orwell in Animal Farm) some French are more equal than others.  Despite a large number of immigrants from Algeria and Morocco, there are few Arabs represented in government, or in the police force.  Many Arab immigrants feel that they have been left out of French society -- effectively denied education, employment, and other opportunities that were available to the "native" French.  As one immigrant put it, "The French don't think I'm French" (quoted in "France Faces a Colonial Legacy: What Makes Someone French?" by Craig S. Smith, New York Times, 11/11/05).  The situation has been worsened by the fact that, while there is full freedom of religious practice in France, the state virtually outlaws any public display of religious piety, such as the headscarf (hijab) worn by many Muslim women (as well as the Jewish yarmulke and oversize Christian crosses). Moreover, as part of a policy of secularization, the state owns and maintains all religious properties.  Just as it has not built any new churches or synagogues, so it hasn't built any mosques.  The problem is that while there are plenty of churches and synagogues to go around, there are lots of Muslims who are forced to worship in gymnasiums and abandoned warehouses.  

In part, the 2005 riots in France reflect a desire on the part of recent Arab immigrants to be classified as fully French, and treated accordingly, without discrimination; but also a desire to be recognized as different, reflecting their African origins and their Muslim religion.  Such are the contradictions of social categorization.

Another example: the French treatment of the Roma people, often called "Gypsies".  The Roma are a nomadic people who migrated into Eastern Europe and the Balkans from Northern India about 1,000 years ago.  Previously, they were mostly confined there, chiefly in Romania and Bulgaria, but under the laws of the European Union, which guarantee "free movement of persons" among member states, they have begun to move into Western Europe, as well, including France -- spurring popular movements to expel them.  This cannot be done, legally, unless individual Roma acquire criminal records -- then they can be deported back to their countries of origin.  But how do you know who's Roma and who isn't?  France is an interesting case because, by virtue of its republican tradition, the French government recognizes no ethnic distinctions among its citizens or residents.  But France also requires everyone to have a fixed place of residence (even if it's a tourist hotel), and the Roma are nomadic, traveling in groups, and don't have a fixed address.  The most France can do is to classify Roma as gens du voyage, or "traveling people", and waive the requirement that they have a fixed residence. 

Despite the mythology of the melting pot, the United States itself is not immune from these issues.  Many of the earliest European settlers, especially in the original 13 colonies, came to the New World to escape ethnic and religious conflict, and quite quickly a view of a new American type, blending various categories, emerged.  In Letters from an American Farmer (1782, Letter III), Hector St. John de Crevecoeur, a French immigrant to America in the 18th century, noted the mix of "English, Scotch, Irish, French, Dutch, Germans, and Swedes" in the New World and characterized "the American" as a "new man", in which "individuals of all nations are melted into a new race of men".  In Democracy in America (1835), Alexis de Tocqueville (another Frenchman) predicted that America, as a country of immigrants, would be exempt from the conflicts between ethnicities, classes, and religions that had so often beset Europe -- initiating a view of American exceptionalism.  The image of America as a "melting pot" was fixed in Israel Zangwill's play of that title, first produced in 1908.

Beginning in the 1960s, this traditional view of what it means to be an American was challenged, first by a new wave of African-American civil rights leaders, and later by Mexican-Americans, Chinese-Americans, and others who wanted to keep their traditions at the same time as they became Americans.  This movement away from assimilationism toward multiculturalism is captured by the image of America as a "gorgeous mosaic", or "salad bowl", of cultures -- an image derived, in turn, from John Murray Gibbon's image of Canada.  It's what Jesse Jackson has in mind with his "Rainbow Coalition" -- a rainbow in which white light can be decomposed into several different colors.  

In 1963, Nathan Glazer and Daniel Patrick Moynihan noted in their book, Beyond the Melting Pot, that "the point about the melting pot... is that it did not happen".  By 1997, Glazer would title his new book on the subject We're All Multiculturalists Now.

It turns out that whether members of the majority culture (meaning whites) hold assimilationist or multiculturalist views has an impact on the quality of life of members of minority cultures (meaning persons of color, broadly defined).  Victoria Plaut and her colleagues conducted a "diversity climate survey" in 17 departments of a large corporation, and found that white employees' embrace of multiculturalism was associated both with greater "psychological engagement" with the company on the part of minority employees and with reduced perceptions of bias.  But where the dominant ideology of the white employees tended toward "colorblindness" (a variant on assimilationism), minority employees were actually less psychologically engaged, and perceived their white co-workers as more biased against them.


Typically, we categorize other people in terms of their national identity, but national identity can also be part of one's own self-concept.  In The Red Prince: The Secret Lives of a Habsburg Archduke (2008), the historian Timothy Snyder tells the story of Archduke Wilhelm (1895-1948), son of Archduke Stefan (1860-1933), of the Austro-Hungarian Empire.

Anne Applebaum, reviewing The Red Prince ("Laughable and Tragic", New York Review of Books, 10/23/2008), writes:

Snyder is more convincing when he places Wilhelm's story not in the politics of contemporary Ukraine, but in the context of more general contemporary arguments about nations and nationalism. For the most striking thing about this story is indeed how flexible, in the end, the national identities of all the main characters turn out to be, and how admirable this flexibility comes to seem. Wilhelm is born Austrian, raised to be a Pole, chooses to be Ukrainian, serves in the Wehrmacht as a German, becomes Ukrainian again out of disgust for the Nazis -- and loses his life for that decision. His brother Albrecht chooses to be a Pole, as does his wife, even when it means they suffer for it too. And it mattered: at that time, the choice of "Polishness" or "Ukrainianness" was not just a whim, but a form of resistance to totalitarianism.

These kinds of choices are almost impossible to imagine today, in a world in which "the state classifies us, as does the market, with tools and precision that were unthinkable in Wilhelm's time," as Snyder puts it. We have become too accustomed to the idea that national identity is innate, almost genetic. But not so very long ago it was possible to choose what one wanted to be, and maybe that wasn't such a bad thing. In sacrificing that flexibility, something has been lost. Surely, writes Snyder,

the ability to make and remake identity is close to the heart of any idea of freedom, whether it be freedom from oppression by others or freedom to become oneself. In their best days, the Habsburgs had a kind of freedom that we do not, that of imaginative and purposeful self-creation.

And that is perhaps the best reason not to make fun of the Habsburgs, or at least not to make fun of them all the time. Their manners were stuffy, their habits were anachronistic, their reign endured too long, they outlived their relevance. But their mildness, their flexibility, their humanity, even their fundamental unseriousness are very appealing, in retrospect -- especially by contrast with those who sought to conquer Central Europe in their wake.


Racial and Ethnic Categories

The complicated relationship between natural categories of people, based on biology or geography, and social categories of people, based on social convention, is nowhere better illustrated than with respect to race and ethnicity.  By virtue of reproductive isolation, the three "races" (Caucasoid, Mongoloid, and Negroid), and the various ethnicities (Arab vs. Persian, Chinese vs. Japanese), do represent somewhat different gene pools.  But members of different races and ethnicities have much more in common, genetically, than not: they hardly constitute different species or subspecies of humans. Moreover, social conventions such as the "one drop rule", widespread in the American South (and, frankly, elsewhere) during the "Jim Crow" era, by which an individual with any Negro heritage, no matter how little, was classified as Negro (see "One Drop of Blood" by Lawrence Wright, New Yorker, 07/24/94), indicate that much more goes into racial and ethnic classifications than genes. Consider, for example:

For an interesting debate concerning the biological reality of racial classifications, compare two recent books:
So here's a question for empirical research: Are national stereotypes like "American" weaker, less informative, than racial or ethnic stereotypes?

This is a continuing issue, and not just for multiracial individuals, but also for governmental bookkeeping. 

The census, and common usage, suggests that African-Americans comprise a single group -- a good example of the outgroup homogeneity effect described earlier.  But things look different if you're in the outgroup, and they look different if you're an outgroup member who's looking closely.  For example, W.E.B. Du Bois famously distinguished between the "Talented Tenth" and other American Negroes (as they were then called).

Eugene Robinson (in Disintegration: The Splintering of Black America, 2010) argues that there is no longer any such thing as a "black community" in America -- that is, a single group with shared identity and experience.  Instead, Robinson argues that American blacks divide into four quite different subgroups: a Mainstream middle-class majority; a large, poor Abandoned minority; a small Transcendent elite; and newly Emergent groups of mixed-race individuals and recent black immigrants.

Despite the fact that all of these groups are composed of black Americans, Robinson argues that they nonetheless have little in common -- that the divisions of economics and culture, interests and demands, overwhelm the commonality of race.  

Another way of cutting up the racial pie in America begins with the insight that not all African-Americans are the descendants of freed slaves.  Before 1965, black Americans of foreign birth were vanishingly few in number.  But the Immigration and Nationality Act of 1965, replacing the Johnson-Reed Act of 1924, which had favored European immigrants -- and northern European immigrants at that -- opened the doors to blacks from the Caribbean and Africa.   Ira Berlin, a historian, traces the effect of this act on African-American culture in The Making of African America: The Four Great Migrations (2010).  These four migrations -- the transatlantic slave trade; the forced movement of slaves from the Atlantic seaboard to the cotton and sugar lands of the Deep South; the Great Migration from the rural South to the cities of the North; and the recent, voluntary immigration of blacks from Africa and the Caribbean -- offer yet another set of subcategories of African-Americans.

The first three migrations arguably created three very distinct groups of African-Americans.  But Berlin notes that the distinction between native-born African-Americans, with a family heritage of slavery, and black immigrants to America, is very real to African-Americans.  He quotes one Ethiopian-born man, speaking to a group in Baltimore: "I am African and I am an American citizen; am I not African American?".  Berlin reports that "To his surprise and dismay, the audience responded no.  Such discord over the meaning of the African-American experience and who is (and isn't) part of it is not new, but of late has grown more intense" ("Migrations Forced and Free" by Ira Berlin, Smithsonian, 02/2010). 


When Tom Met Sally...

As an example of the power of the "one-drop rule" in American history, consider the case of Thomas Jefferson, principal drafter of the Declaration of Independence, third President of the United States, and founder of the University of Virginia, who, we now know, fathered as many as six children by one of his Negro slaves, Sally Hemings.  In a letter written in 1815, Jefferson tried to work out the "mathematical problem" of determining how many "crossings" of black and white would be necessary before a mixed-race offspring could be considered "white" (see "President Tom's Cabin" by Jill Lepore, reviewing The Hemingses of Monticello: An American Family by Annette Gordon-Reed, New Yorker, 09/22/08, from which the following quotation is drawn).

Let us express the pure blood of the white in the capital letters of the printed alphabet... and any given mixture of either, by way of abridgment, in [small] letters.

Let the first crossing be of a, a pure negro [sic], with A, a pure white.  The unit of blood of the issue being composed of the half of that of each parent, will be a/2 + A/2.  Call it, for abbreviation, h (half blood)....

[Jefferson refers to b as the second crossing, and q as the resulting "quarteroon".]

Let the third crossing [denoted c] be of q and C, their offspring will be q/2 + C/2 = a/8 + A/8 + B/4 + C/2, call this e (eighth), who having less than 1/4 of a, or of pure negro blood, to wit 1/8 only, is no longer a mulatto, so that a third cross clears the blood.  
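Purely as an arithmetical gloss on the passage above (the notation here is added, and is not Jefferson's), the scheme he describes is simple geometric halving: after $n$ successive "crossings" with a "pure white" partner, the fraction of "pure negro blood" remaining in the offspring is

$$f(n) = \frac{1}{2^{n}},$$

so that $f(1) = \tfrac{1}{2}$ (his h), $f(2) = \tfrac{1}{4}$ (his q), and $f(3) = \tfrac{1}{8}$ (his e).  Because $\tfrac{1}{8} < \tfrac{1}{4}$, the third crossing falls below the one-quarter threshold Jefferson stipulates -- hence "a third cross clears the blood".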

Given that Sally Hemings herself had both a white father and a white grandfather, Jefferson was apparently satisfied that his own children by her -- who, by the way, were each freed as they reached age 21 -- had "cleared the blood".  In any event, their daughter Harriet, and one of their sons, Beverly, did in fact live as whites in Washington, D.C. -- though another son, Madison, remained part of the community of free Negroes in Virginia.  

In modern society, Blacks have often been subject to racial discrimination, but this was not always the case.   Frank M. Snowden, a historian of blacks in the ancient world, has argued that "color prejudice" was virtually unknown in the ancient world of Egypt, Assyria, Greece and Rome (see his books, Blacks in Antiquity: Ethiopians in the Greco-Roman Experience, 1970, and Before Color Prejudice: The Ancient View of Blacks, 1983).  In his view, blackness was not equated with inferiority and subordination because ancient Whites encountered Blacks as warriors and statesmen, rather than as slaves or colonial subjects.  Color prejudice, then, appears largely to be a social construction, arising relatively recently out of specific historical circumstances.

Nor is racial prejudice necessarily about color, per se.  We can see this is the case when legally enforced racial categories are abandoned. 

 

Who is a Jew?

Social categorization can have important legal (and personal) ramifications.  For example, the question of Jewish identity, which mixes categorization on the basis of ethnicity and religion, took an interesting turn with the establishment of Israel as a Jewish state in 1948, and the enactment in 1950 of the "Law of Return", which gave any Jew the right to aliyah -- to immigrate to Israel and live in the country as a citizen.  Thus, the question, Who is a Jew?, is addressed by the Israeli Chief Rabbinate, whose court, or Beit Din, is dominated by Orthodox and Ultra-Orthodox rabbis who operate under Halakha, or Jewish rabbinical law.

Because Jewish culture is matrilineal (Deuteronomy 7:4), the easiest answer is that anyone born to a Jewish mother is Jewish; if not, then not.  It doesn't matter whether the child is raised Jewish, or whether the mother considers herself to be Jewish.   In this view, "Jew" is a proper set, with all instances sharing a single defining feature. But then things get complicated.

  • There are conversions to Judaism, which also must be sanctioned by a Beit Din, or religious court.  Thus, the set "Jew" is rendered disjunctive, with some Jews in the category by virtue of having a Jewish mother, and other Jews in the category by virtue of having undergone conversion.
  • Under Israeli law, a person can make aliyah even though he or she has only one Jewish grandparent.  But that person is not considered to be a Jew under Halakha.  Thus, a person can become an Israeli by virtue of a Jewish heritage, but still not be considered a Jew.
  • In Britain and America, the Liberal and Reform movements define as a Jew anyone who has at least one Jewish parent and who is raised as a Jew.  This sets up a conflict for some Reform or Liberal Jews who want to immigrate to Israel, as the Beit Din generally does not recognize as Jewish individuals who were not born to Jewish mothers.   To make things even more difficult, the rabbinical court prefers that the individual be the child of an Orthodox mother.
  • Similarly, the Israeli Chief Rabbinate, which is dominated by Orthodox Jews, tends not to recognize non-Orthodox conversions -- which excludes most conversions performed in the US and Britain.
  • Then there is the matter of individuals who were born Jewish but then convert to another religion.  In Orthodox Judaism, individuals born to Jewish mothers are considered to remain Jews even after their conversion.  But this is not the case for Reform and Liberal Judaism.
  • Then there are members of so-called "lost tribes", including the Falasha of Ethiopia, and other "lost tribes" in Africa, the Caucasus, India, Siberia, Burma, and New Mexico.

The situation is made more acute by the fact that the Israeli Chief Rabbinate not only controls aliyah but also controls marriage: Jews are not permitted to marry non-Jews in Israel -- and, as just described, the criteria for "Who is a Jew?" are at best unclear, and at worst incredibly strict.  But the rules of categorization have real-life consequences (see "How do You Prove You're a Jew?" by Gershom Gorenberg, New York Times Magazine, 03/02/2008).

To make things even more interesting, "Jew" is an ethnic category as well as a religious one.  This dual status was brought to light in a lawsuit heard in Britain's Supreme Court in 2009, over the issue of admission to a Jewish high school in London.  Britain has some 7,000 publicly financed religious schools; although these are normally open to all applicants, when there are more applicants than openings these schools are permitted to select students based on their religion.  The plaintiff in the case, known as "M", is Jewish, but his mother converted to Judaism in a non-Orthodox synagogue.  Therefore, she does not meet the Orthodox criteria for being Jewish -- and, so, neither does "M".  "M" appealed, and the British Court of Appeal declared that the classic definition of Judaism, based on whether one's mother is Jewish, is inherently discriminatory.  The appeals court argued that the only test of religious faith should be one of religious belief, and that classification based on parentage turns a religious classification into an ethnic or racial one -- which is quite illegal under British law.  The Orthodox rabbinate, which controls these sorts of things, claims that this ruling violates 5,000 years of Jewish tradition, and represents an unlawful intrusion of the State into religious affairs.   (See "British Case Raises Issue of Identity for Jews" by Sarah Lyall, New York Times, 11/08/2009.)  

Another perspective on "Jewish" as a social category is provided by a rabbinical debate concerning fertility treatments.  In one form of fertility treatment, eggs from a donor are implanted in an infertile woman, to be fertilized by her husband (or whatever).  Recall that, according to Orthodox rule, a child is a Jew if his or her mother is a Jew.  But in the case of egg donation, who's the mother?  The woman who donated the egg, or the woman who gave birth to the child?  This issue was hotly debated at a conference in Jerusalem hosted by the Puah Institute.  Many Orthodox authorities argue that it's the egg donor who matters, and that the birth mother is only an "incubator", whose womb is an "external tool".  Others argue that because Judaism accepts converts, it cannot be considered a "genetic" religion -- that there's no "Jewish blood".  But then again, that principle is compromised by the British school case described earlier -- though maybe not: in the school case, if the mother had undergone an Orthodox conversion, the issue of her son's Jewishness would never have arisen.  Then again, it's conceivable that, at an Orthodox wedding, the officiating rabbi could ask the couple how they were conceived, and require evidence of the egg donor's Jewishness.

This sidebar can provide only the briefest sketch of the question, which turns out to be incredibly complicated -- and also in flux, depending on the precise makeup (in terms of the balance between Modern Orthodox and Ultra-Orthodox members) of the Israeli Chief Rabbinate.  I claim no expertise in Halakha.  The point is that the question "Who is a Jew?" is not self-evident, and it's not just a matter of anti-Semitic stereotyping, but has real consequences, even among Jews, and even in Israel itself.  (See "Fertility Treatment Gets More Complicated" by Gabrielle Birkner, Wall Street Journal, 05/14/2010).

The basic issue seems to be this:

  • Is "Jew" a religious category, defined by certain beliefs and practices?

  • Or is it an ethnic category, defined by birth and blood?

  • Or is it a national category, defined by a certain ethical tradition, history, worldview, and culture?

It's a good example of the issues that surround social categorization.  Social categories may exist in the mind(s) of the beholder(s), but -- as the Thomases would surely agree -- they are real in their consequences.


It should be said, in conclusion, that the racial category "white" -- the usual contrast for social categories such as Black and Asian -- is also problematic.  In the ancient world, ethnic distinctions were based on culture, not physical differences such as skin color -- as when the Greeks and Romans, not to mention the Chinese, distinguished between themselves and "barbarians".  In fact, the Greeks noted that the Scythians and Celts were lighter in skin tone than themselves.  So were the Circassians, from whom we derive the very term Caucasian -- but at this time the Caucasians were hardly a dominant ethnic group, and whiteness had no special cachet. 

Apparently, the notion of "white" as an ethnic category began with German "racial science" of the 18th and 19th centuries, exemplified by Johann Friedrich Blumenbach -- an early anthropologist who classified humans into five races based on skin color: Caucasians (white), Mongolians (yellow), Malays (brown), Negroids (black), and Americans (red).  Blumenbach's system was adopted in America by Thomas Jefferson and others.  However, while Blumenbach took Caucasians as the exemplars of whiteness, Jefferson and others focused on Anglo-Saxons (English and lowland Scots, but not the Irish) and Teutons (Germans).  Later, the boundaries of "white" were extended to Nordics and Aryans, and still later to Alpines (Eastern Europeans) and Mediterraneans (Italians and Greeks).  The Irish, who were considered to be only 30% Nordic, and 70% Mediterranean, were granted "white" status after the Civil War.  Thomas Carlyle considered the French to be an "ape-population" -- but then again, a 1995 episode of The Simpsons referred to the French as "cheese-eating surrender monkeys", so maybe we haven't progressed so far after all.  [For details, see The History of White People (2010) by Nell Irvin Painter, reviewed by Linda Gordon in the New York Times, 03/28/2010.]

Hardly anyone uses the term "Caucasian" anymore -- nor, for that matter, the other Blumenbachian terms, "Mongoloid" and "Negroid". But it has come up in interesting contexts. In Takao Ozawa v. United States (1922), the United States Supreme Court found a Japanese man ineligible for citizenship because he was not "Caucasian" -- even though he was light-skinned. In United States v. Bhagat Singh Thind (1923), the Court denied citizenship to a man of Indian descent because, although he was technically "Caucasian", he was not light-skinned (the case is a notable example of the judicial theory of Original Intent).

Shaila Dewan, an American of East Indian and European descent, writes about "whiteness" in her essay, "Has 'Caucasian' Lost Its Meaning?" (New York Times, 07/07/2013). She notes that in the American South, she was often asked about her ethnic origins. When she answered that her father was from India, but her mother was white, she felt pressed for further clarification: "What kind of white?"  The answer was that her mother was a mix of Norwegian, Scottish, and German ancestry. That experience, in turn, led her to think about sub-classifications within the category of "white". The implication, as explained to her by Matthew Pratt Guterl, who wrote The Color of Race in America, 1900-1940, is that "all whitenesses are not created equal".

All of which seems to illustrate the outgroup homogeneity effect. Whites care about whether someone is English or Irish, Swedish or Norwegian, Polish or Lithuanian; but they don't distinguish between Chicanos and other Hispanics; and they don't ask whether the ancestors of African-Americans were from East or West Africa. However, Latinos may well distinguish between people of Mexican, South American, Cuban, or, for that matter, Iberian heritage; and African-Americans may well distinguish between those with a heritage of slavery (like Michelle Obama) and those without one (like Barack Obama). There's a study here: hint, hint.

Race Blindness -- Literally

We usually identify a person's race by visual cues -- chiefly skin color -- which raises the question of whether race is some sort of visual artifact, and the further question of what "race" means to people who are visually impaired.  A study by Osagie Obasogie, at UC's Hastings College of the Law, indicates that "race" clearly has meaning for blind people (see his Blinded by Sight: Seeing Race Through the Eyes of the Blind, 2013).


  • Sighted people, who typically identify race based on skin color, facial features, and other visible features, tended to believe that blind people wouldn't be affected by race.
  • But Obasogie's intensive interviews indicated that blind people have the same understanding of racial categories that sighted people do, derived from their socialization experiences.  As a result, he argues, blind people not only acknowledge racial boundaries, but also have a kind of "visual sensibility" about race.

The point, says Obasogie, is that "race" isn't a visual characteristic.  We are all, he argues, "trained" through our exposure to various social practices to "see race" the same way -- regardless of whether we can see at all!


Social Categorization in the United States Census

Nowhere is the intertwining of the "natural" and "social" bases for racial and ethnic classification clearer than with the history of the United States census:



To make things even more complex, different government agencies, such as the Department of Education and the National Center for Health Statistics, tally multiracial individuals according to different schemes than the one used by the Census Bureau (see "In a Multiracial Nation, Many Ways to Tally" by Susan Saulny, New York Times, 02/10/2011, and other articles in the Times' ongoing series, "Race Remixed: The Pigeonhole Problem").

Beyond 2010, Kenneth Prewitt, who served as director of the Census Bureau from 1998 to 2000, has written that "the demographic revolution since the immigration overhaul of 1965 has pushed the outdated (and politically constructed) notion of race to the breaking point" ("Fix the Census' Archaic Racial Categories", New York Times, 08/22/2013).  Prewitt has proposed three reforms to the Census: 

For more details, see "Historical Census Statistics on Population Totals by Race, 1790 to 1990, and by Hispanic Origin, 1970 to 1990, for the United States, Regions, Divisions, and States" by Campbell Gibson & Kay Jung (Working Paper Series No. 56, Population Division, U.S. Census Bureau, 09/02), from which much of this material is taken.

See also:

  • "Hispanics Debate Racial Grouping by Census" by Rachel L. Swarns, New York Times, 10/24/04;
  • "Marrying Out" by Luna Shyr, National Geographic, 04/2011;
  • What Is Your Race? The Census and Our Flawed Effort to Classify Americans (2013) by Kenneth Prewitt.
  • "But Who's Counting?  The Coming Census" by Jill Lepore (New Yorker, 03/23/2020).  Lepore, in turn, bases her article mostly on two recent books:
    • The Sum of the People: How the Census Has Shaped Nations, from the Ancient World to the Modern Age (2020) by Andrew Whitby.
    • Counting Americans: How the U.S. Census Classified the Nation (2017) by Paul Schor.

For an excellent analysis of the evolution of the "Hispanic" category in the US Census, see the book by UCB's own Cristina Mora: Making Hispanics: How Activists, Bureaucrats, and Media Constructed a New America (2014).

  • You'll find a summary of her book in an article, "The Institutionalization of Hispanic Panethnicity, 1965 to 1990" (American Sociological Review, 2014).
  • Link to a video about Mora's work.

 

Minorities and Diversity on Campus

The difficulties of social categorization are not confined to the census.

Consider the evolution of ethnic categories offered to undergraduate applicants to the University of California system.

For most of its recent history, the UC has classified its applicants into eight categories:

However, the application for the 2008-2009 academic year contains a more differentiated set of "Asian" ethnicities:
Still, out of a concern that certain Southeast Asian and Pacific Islander groups were disadvantaged in the admissions process, in part because their numbers were submerged in larger ethnic groups like Vietnamese and Filipinos, representatives of Pacific Rim students mounted a "Count Me In" campaign.  In response, the UC system greatly expanded its categories for Southeast Asians and Pacific Islanders.
Note that, aside from Mexican Americans, "Other Spanish Americans" are still lumped together -- never mind Middle Easterners, who are lumped together with Whites (and, for that matter, never mind ethnicities among Whites!).  As more and more ethnic groups mount "Count Me In" campaigns, we can expect official recognition of more and more ethnic categories.

And, as a smaller example, consider the racial and ethnic classifications used in the Research Participation Program (RPP) of the UCB Department of Psychology.  For purposes of prescreening, students in the RPP are asked to classify themselves with respect to gender identity, ethnic identity, and other characteristics.

In 2004, a relatively small number of such categories were employed -- pretty much along the lines of the 2000 census.

But in 2006, RPP employed a much more diverse set of racial and ethnic categories -- with more than a dozen subcategories for Asians and Asian-Americans, for example.  Arguably, the ethnic composition of the Berkeley student body didn't change all that much in just two years!  Rather, the change was motivated by the fact that the Psychology Department has a number of researchers interested in cultural psychology, and especially in differences between people of Asian and European heritage.  In this research, it is important to make rather fine distinctions among Asians, with respect to their ancestral lands.  But note some anomalies:


In fact, the term "African-American", as a proxy for "Black", emphasizes ethnic heritage over race, but creates all sorts of problems of its own:
If you think this last category is a contradiction in terms, think again:

The category of Hispanic has also been contested: should it apply to anyone with Spanish heritage, including immigrants from Spain as well as Latin America -- not to mention Spanish Morocco? 


A Note on "Passing"

Concepts are mental representations of categories, and so classification is not always accurate.  It's one thing to be categorized as a member of some racial or ethnic group, and quite another thing to actually be a member of that group.  Or the reverse.  Discussions of racial and ethnic categorization often raise the question of "passing" -- presenting oneself, and being perceived (categorized), as a member of one group (usually the majority) when one is really a member of another (usually a minority group).

The term has its origins in Passing, a 1929 novel by Nella Larsen about the friendship between two light-skinned Black women who grew up together in Chicago.  One, Clare, goes to live with white relatives when her father dies, marries a bigoted white man who knows nothing of her mixed-race background, and has a light-skinned daughter who likewise knows nothing of her heritage.  The other, Irene, marries a Black physician and raises two dark-skinned boys in New York at the time of the Harlem Renaissance.  One day, while Irene is passing for white in a whites-only restaurant, she encounters Clare.  In 2021, the novel was made into a film directed by Rebecca Hall (see "The Secret Toll of Racial Ambiguity" by Alexandra Kleeman, New York Times Magazine, 10/24/2021; also "Black Skin, White Masks", a review of the film by Manohla Dargis, New York Times, 11/12/2021).

There is an interesting story here.  According to Kleeman, Larsen herself was of mixed race, the daughter of a white mother and a Black father, with light skin.  When her white mother remarried, to a white man, and gave birth to a white daughter, the contrast was obvious.  Larsen's mixed-race lineage made it difficult for her family to move into a white working-class neighborhood.

And Hall, too, may have a mixed-race background.  She is the daughter of Sir Peter Hall, founder of Britain's Royal Shakespeare Company, and Maria Ewing, an opera superstar (she briefly appeared nude at the end of the "Dance of the Seven Veils" in a famous Metropolitan Opera production of Richard Strauss's Salome), whose father, born to a former slave (who once toasted Frederick Douglass at a banquet) and a free woman of color (descended from a Black man who fought in the Revolutionary War), himself passed for white.  (Got that?  For details, see Season 8, Episode 1 (01/04/2022) of Henry Louis Gates's TV program, "Finding Your Roots".)

An excellent scholarly account of passing is A Chosen Exile: A History of Racial Passing in American Life by Allyson Hobbs, a historian at Stanford (2014).  Reviewing the book in the New York Times Book Review (11/23/2014), Danzy Senna wrote:

Hobbs tells the curious story of the upper-class black couple Albert and Thyra Johnston.  Married to Thyra in 1924, Albert graduated from medical school but couldn't get a job as a black doctor, and passed as white in order to gain entry to a reputable hospital.  His ruse worked, and he and his wife became pillars of an all-white New Hampshire community.  For 20 years, he was the town doctor and she was the center of the town's social world.  Their stately home served as the community hub, and there they raised their four children, who believed they were white. Then one day, when their eldest son made an off-the-cuff comment about a black student at his boarding school, Albert blurted out, "Well, you're colored."  It was almost as if Albert had grown weary after 20 years of carefully guarding their secret.  And with that Albert and Thyra began the journey toward blackness again.

Occasionally, we see examples of passing in reverse.  A famous case in point is Black Like Me (1961), by John Howard Griffin, a white journalist who artificially darkened his skin and traveled through the American South as a black man.

Many novels and films have plots based on passing.  Famous examples include The Tragedy of Pudd'nhead Wilson by Mark Twain (1894), in which an infant who is 1/32 black is switched with a white baby; and Gentleman's Agreement (1947), a film starring Gregory Peck as a Gentile reporter who pretends to be Jewish.



What Means "Indigenous"?

In the early 21st century, a new social category emerged, BIPoC, which stands for "Black, Indigenous, and People of Color" (for a discussion, see "You First" by Manvir Singh, New Yorker, 02/27/2023, from which this discussion is drawn).  That certainly appears to cover the whole territory of people who are not whites of European heritage (though, frankly, the term reminds me of the Morlocks, the outcast, subterranean mutants of H.G. Wells's The Time Machine, which is not at all what's intended).  The term "indigenous", coined in 1588, referred to people (and plants and animals) "sprung from the soil", and refers to "native" peoples who inhabited a territory (like North America) before contact with European explorers and colonists (like the Spanish, the English, the French, and the Dutch).  In the late 20th century, the term took on a global meaning, referring to those who were in a particular territory "first" -- roughly 500 million people, who, taken all together, would constitute the third-largest country in the world.  That much is clear, but it turns out that "indigenous" is no less problematic than other racial or ethnic labels.

  • For example, the Americas were unpeopled before roughly 12,000 years ago, when Paleo-Indians crossed a land bridge across what is now the Bering Strait from Siberia to North America (see discussion below).  They were present on the land before 1492, but they themselves were colonists.  They were here "first", in some genuine sense that deserves acknowledgement and respect; but before that, they themselves came from "somewhere else".
  • Iceland was unpeopled before the Vikings settled there, but their descendants do not refer to themselves as "indigenous".
  • The Maasai people in modern Tanzania do refer to themselves as "indigenous", but their own oral history holds that they migrated from what is now (roughly) South Sudan only a few hundred years ago.
With the establishment of the World Council of Indigenous Peoples and the United Nations Permanent Forum on Indigenous Issues, the label "indigenous" has taken on an official status, but, as Singh points out, "being first" is neither necessary nor sufficient to qualify a people as "indigenous".  Instead, the UN relies on indigenous groups to identify themselves as such, although it has denied some groups' claims to indigenous status.  The World Council rejected the adjective "aboriginal" in favor of "indigenous", which it defined as "descendants of the earliest populations living in the area... who do not, as a group, control the national government of the countries within which they live".  You get the picture, but, as Singh points out, that definition seems to leave Africans and Asians outside the category: Africans govern all African countries (though, admittedly, the Maasai don't actually govern Tanzania), the Chinese govern China, and the Japanese govern Japan.  Hindus control India, no matter which political party (in 2023, the Hindu nationalist Bharatiya Janata Party rather than the more secular Indian National Congress) is in power.  And the UN considers Samoans to be indigenous, even though they dominate the society, culture, and politics of Samoa.

So, like other racial and ethnic categories, "indigenous" is a fuzzy set, with no singly necessary or jointly sufficient defining features.  Instead, "indigeneity" is better represented as a prototype, with characteristic features; or as a set of exemplars, like the Ohlone and the Maasai; or maybe even a theory of what makes an indigenous people indigenous.
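
To make the contrast concrete, here is a minimal, purely illustrative sketch (in Python) of the difference between prototype-based and exemplar-based categorization.  The feature names and the "groups" in it are invented for illustration only -- they are not drawn from Singh's article or from any actual classification scheme -- and the point is simply that a new case can resemble a category to a degree, without possessing any single defining feature.  The theory-based view, which asks where such features come from in the first place, is taken up next.

# Toy illustration of a fuzzy social category (hypothetical data, for illustration only).
# A "prototype" is the average of characteristic features across known members;
# an "exemplar" strategy compares a new case to each stored member individually.

FEATURES = ["present_before_colonization", "distinct_language_and_culture",
            "self_identifies_as_indigenous", "lacks_control_of_national_government"]

# Hypothetical exemplars: feature -> 1 (present) or 0 (absent).  Purely invented.
EXEMPLARS = {
    "Group A": {"present_before_colonization": 1, "distinct_language_and_culture": 1,
                "self_identifies_as_indigenous": 1, "lacks_control_of_national_government": 1},
    "Group B": {"present_before_colonization": 1, "distinct_language_and_culture": 1,
                "self_identifies_as_indigenous": 1, "lacks_control_of_national_government": 0},
}

def prototype(exemplars):
    """Average each feature across the exemplars, yielding graded (fuzzy) feature weights."""
    return {f: sum(e[f] for e in exemplars.values()) / len(exemplars) for f in FEATURES}

def prototype_similarity(case, proto):
    """Mean agreement between a case and the prototype (1.0 = perfect match)."""
    return sum(1 - abs(case[f] - proto[f]) for f in FEATURES) / len(FEATURES)

def nearest_exemplar_similarity(case, exemplars):
    """Proportion of features shared with the single most similar stored exemplar."""
    return max(sum(case[f] == e[f] for f in FEATURES) / len(FEATURES)
               for e in exemplars.values())

if __name__ == "__main__":
    # A new case lacking one characteristic feature -- still a graded member of the category.
    new_case = {"present_before_colonization": 1, "distinct_language_and_culture": 0,
                "self_identifies_as_indigenous": 1, "lacks_control_of_national_government": 1}
    print("Similarity to prototype:       ", round(prototype_similarity(new_case, prototype(EXEMPLARS)), 2))
    print("Similarity to nearest exemplar:", round(nearest_exemplar_similarity(new_case, EXEMPLARS), 2))

Either strategy yields graded, rather than all-or-none, category membership -- which is exactly what the fuzziness of labels like "indigenous" (or "white", or "Hispanic") would lead us to expect.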

There's a catch here: Whose theory?  As Singh points out, "Centuries of colonialism have entangled indigeneity with outdated images of simple, timeless people unsullied by history", in what Mark Rifkin, in Settler Time (2017), calls "a simulacrum of pastness", or what Samuel J. Redman, in Prophets and Ghosts: The Story of Salvage Anthropology (2021), calls "an idyllic, heavily romanticized and apparently already bygone era of uncorrupted primitive societies".  The conundrum for indigenous people is to identify themselves as indigenous, celebrate their cultural heritage, and advocate for their status, rights, and concerns, without being trapped in some Western colonialist version of primitiveness.

UC Berkeley has posted the following

"Acknowledgement of Land and Place"

on its website:

The Division of Equity & Inclusion recognizes that Berkeley sits on the territory of xučyun (Huichin (Hoo-Choon)), the ancestral and unceded land of the Chochenyo (Cho-chen-yo) speaking Ohlone people, the successors of the historic and sovereign Verona Band of Alameda County. This land was and continues to be of great importance to the Muwekma (Muh-wek-muh) Ohlone Tribe and other familial descendants of the Verona Band.

We recognize that every member of the Berkeley community has benefitted, and continues to benefit, from the use and occupation of this land since the institution’s founding in 1868. Consistent with our values of community and diversity, we have a responsibility to acknowledge and make visible the university’s relationship to Native peoples. By offering this Land Acknowledgement, we affirm Indigenous sovereignty and will work to hold the University of California, Berkeley more accountable to the needs of American Indian and Indigenous peoples.

This statement was developed in partnership with the Muwekma Ohlone Tribe and is a living document.



Personality Types

Obviously, our language contains a large number of nouns which designate various types of people.  These types are categories of people, and the nouns are category labels.  Many of these classificatory labels have their origins in scientific research on personality, including the terms used to label various forms of mental illness, but they have also filtered into common parlance.  You don't have to be a psychologist or a psychiatrist to label someone an extravert or a psycho.  

The classification of people according to their personality type has a history that goes back almost 2,500 years.

 

Theophrastus and the Characterological Tradition in Literature

The chief preoccupation of Greek science was with classification. Aristotle (384-322 B.C.), in his Historia Animalium, provided a taxonomy, or classificatory scheme, for biological phenomena.  Theophrastus (370-287 B.C.), his successor as head of the Peripatetic School in Athens (so named because the teachers strolled around the courtyard while lecturing), followed his example by developing a two-part classification of plants that heavily influenced the modern "genus-species" taxonomy introduced by Linnaeus.  Then he turned his attention to developing a taxonomy of people. His work is embodied in Characters, a delightful book in which he described the various types of people encountered in Athenian society. Unfortunately, that portion of the book which described socially desirable types has been lost to history: All that remains are his portraits of 30 thoroughly negative characters, most of whom are instantly recognizable even today, more than 2000 years later. All his descriptions follow the same expository format: a brief definition of the dominant feature of the personality under consideration, followed by a list of typical behaviors representative of that feature.
 

The Distrustful Man

It goes without saying that Distrustfulness is a presumption of dishonesty against all mankind; and the Distrustful man is he that will send one servant off to market and then another to learn what price he paid; and will carry his own money and sit down every furlong to count it over. When he is abed he will ask his wife if the coffer be locked and the cupboard sealed and the house-door bolted, and for all she may say Yes, he will himself rise naked and bare-foot from the blankets and light the candle and run round the house to see, and even so will hardly go to sleep. Those that owe him money find him demand the usury before witnesses, so that they shall never by any means deny that he has asked it. His cloak is put out to wash not where it will be fulled best, but where the fuller gives him good security. And when a neighbor comes a-borrowing drinking-cups he will refuse him if he can; should he perchance be a great friend or a kinsman, he will lend them, yet almost weigh them and assay them, if not take security for them, before he does so. When his servant attends him he is bidden go before and not behind, so that he may make sure he do not take himself off by the way. And to any man who has bought of him and says, 'Reckon it up and set it down; I cannot send for the money just yet,' he replies, 'Never mind; I will accompany you home' (Theophrastus, 319 B.C./1929, pp. 85-87).


Theophrastus initiated a literary tradition which became very popular during the 16th and 17th centuries, especially in England and France (for reviews see Aldington, 1925; Roback, 1928). However, these later examples represent significant departures from their forerunner. Theophrastus was interested in the objective description of broad types of people defined by some salient psychological characteristic. In contrast, the later efforts show an increasing interest in types defined by social class or occupational status. In other instances, the author presents word portraits of particular individuals, with little apparent concern with whether the subjects of the sketch are representative of any broader class at all. Early examples of this tendency are to be found in the descriptions of the pilgrims in Chaucer's (c. 1387) Canterbury Tales.  Two examples that lie closer to Theophrastus' intentions are the Microcosmographie of John Earle (1628) and La Bruyere's Les Caracteres (1688). More recent examples of the form may be found in George Eliot's Impressions of Theophrastus Such (1879) and Earwitness: Fifty Characters (1982) by Elias Canetti, winner of the 1981 Nobel Prize for Literature.

The later character sketches also became increasingly opinionated in nature, including the author's personal evaluations of the class or individual, or serving as vehicles for making ethical or moral points. Like Theophrastus, however, all of these authors attempted highly abstract character portraits, in which individuals were lifted out of the social and temporal context in which their lives ran their course. Reading one of these sketches we have little or no idea what forces impinged on these individuals to shape their thoughts and actions; what their motives, goals, and intentions were; or what their lives were like from day to day, year to year. As authors became more and more interested in such matters they began to write "histories" or "biographies" of fictitious characters -- in short, novels. In the 18th century the novel quickly rose to a position as the dominant literary form in Europe, and interest in the character-sketch waned. Character portraits still occur in novels and short stories, but only as a minor part of the whole -- perhaps contributing to the backdrop against which the action of the plot takes place. Again, insofar as they describe particular individuals, character sketches embedded in novels lack the quality of universality which Theophrastus sought to achieve.

A new translation of Characters was published in 2018 by Pamela Mensch, with wonderful, usually anachronistic illustrations by Andre Carrilho (e.g., Marie Antoinette taking a selfie with a cellphone).  The review of her book by A.E. Stallings ("You Know the Types", Wall Street Journal, 12/08/2018) sets the book in context, but also gives it a contemporary spin (e.g., Theophrastus's "Newshound" spreads Fake News).  Stallings's review is worth reading all on its own, not least because she suggests some modern-day characters that Theophrastus would have thought of had he lived long enough: the Mansplainer, the Humblebragger, the Instagram Poet, the Meme-Spreader, the Virtue Signaler, the More-Outraged-Than-Thou, and the Troll.

 

Scientific and Pseudoscientific Typologies in the Ancient World

Characters is a classic of literature because -- despite the radical differences between ancient Athenian culture and our own -- Theophrastus' 30 character types are instantly recognizable by readers of any place and time. As a scientific endeavor, however, it is not so satisfying. In the first place, Theophrastus provides no evidence in support of his typological distinctions: were there really 30 negative types of Greeks, or were there 28 or 32; and if there were indeed 30 such types, were they these 30?  (Theophrastus didn't describe any positive characters, but suggested that he described them in another manuscript that has been lost -- or perhaps Theophrastus was just kidding.)  Moreover, Theophrastus did not offer any scheme to organize these types, showing how they might be related to each other. Perhaps more important -- assuming that Characters attained classic status precisely because Theophrastus' types were deemed to be universal -- is the question of the origin of the types. Theophrastus raised this question at the very beginning of his book, but he did not offer any answer:

I have often marveled, when I have given the matter my attention, and it may be I shall never cease to marvel, why it has come about that, albeit the whole of Greece lies in the same clime and all Greeks have a like upbringing, we have not the same constitution of character (319 B.C./1929, p. 37).

The ancients had solutions to all problems, both scientific and pseudoscientific.

 

Astrology

Some popular approaches to creating typologies of personality have their origins in ancient folklore, and from time to time they have been endowed with the appearance of science. For example, a tradition of physiognomy diagnosed personality on the basis of similarities in physical appearance between individual humans and species of infra-human animals. Thus, a person possessing hawk-like eyes, or an eagle-like nose was presumed to share behavioral characteristics with that species as well.

By far the most prominent of these pseudoscientific approaches to personality was (and still is) astrology, which holds that the sun, moon, planets, and stars somehow influence events on earth. The theory has its origins in the ancient idea that events in the heavens -- eclipses, conjunctions of stars, and the like -- were omens of things to come. This interest in astral omens has been traced back almost 4000 years to the First Dynasty of the kingdom of Babylon. Astrology per se appears to have begun in the 3rd century B.C., when religious authorities began using the planets to predict events in an individual's life. The various planets, and signs of the Zodiac, were thought to be associated with various attributes. The astrologer prepared a horoscope, or map of the heavens at the moment of an individual's birth (or, sometimes, his or her conception), and predicted on the basis of the relative positions of the heavenly bodies what characteristics the person would possess. Of course, because these relative positions varied constantly, somewhat different predictions could be derived for each individual. To the extent that two individuals were born at the same time and in the same place, then, they would be similar in personality.

Later, this complicated system was considerably simplified such that these predictions were based on the zodiacal signs themselves. Each sign was associated with a different portion of the calendar year, and individuals born during that interval were held to acquire corresponding personality characteristics. Thus, modern astrology establishes 12 personality types, one for each sign of the Zodiac. In the passages which follow, taken from the Larousse Encyclopaedia of Astrology, note the stylistic similarity to the character portraits of Theophrastus.

Astrology was immensely powerful in the ancient world, and even in the 20th century various political leaders, such as Adolf Hitler in Germany and Lon Nol in Cambodia, had horoscopes computed to help them in decision-making (Nancy Reagan famously consulted an astrologer about the scheduling of some White House events). However, by the 17th century astrology had lost its theoretical underpinnings. First, the new astronomy of Copernicus (1473-1543), Galileo (1564-1642), and Kepler (1571-1630) showed that the earth was not at the center of the universe, as astrological doctrine required. Then, the new physics of Descartes (1596-1650) and Newton (1642-1727) proved that the stars could have no physical influence on the earth. If that were not enough, the more recent discovery of Uranus, Neptune, and Pluto would have created enormous problems for a system that was predicated on the assumption that there were six, not nine, planets. In any event, there is no credible evidence of any lawful relationship between horoscope and personality.

Never mind that there are actually thirteen signs of the zodiac.  The Babylonians noted that the sun also passes through the constellation Ophiuchus, the serpent-holder (November 29-December 17).  But the sun spends less time in Ophiuchus than it does in the other constellations, and the "pass" is really only a tangential nick in spatial terms.  So the Babylonians, who wanted there to be just twelve zodiacal signs, discarded it, leaving us with the twelve signs we know today.  And never mind that the boundaries between astrological signs are wrong.  Because of the astronomical phenomenon of precession, caused by the wobbling of the Earth on its axis, the actual dates are shifted by about a month from their conventional boundaries.  The true dates for Scorpio, which are usually given as October 24-November 22, are actually November 23-November 29.  If you want to mock either astrologers or horoscope-readers for not being faithful to their system, then you should knock Sir Isaac Newton as well.  After all, a prism really breaks white light up into only six primary colors (look for yourself), and he added indigo because he thought that the number 7 had occult significance (he was also an alchemist, after all).

In 2011, Parke Kunkel, an astronomer and member of the Minnesota Planetarium Society, reminded astrologers of these inconvenient facts, which meant that a large number of people would have to adjust their signs.  According to a news story ("Did Your Horoscope Predict This?" by Jesse McKinley, New York Times, 01/15/2011), one astrology buff Twittered: "My zodiac sign changed.  Does that mean that I'm not anymore who I used to be?!?".  Another wrote, "First we were told that Pluto is not a planet, now there's a new zodiac sign, Ophiuchus.  My childhood was a bloody lie."  On the other hand, an astrologer told of "a woman who told me she'd always felt there were one or two traits about Sagittarius that didn't fit her personality, but that the new sign is spot on".  Other people, I'm sure, responded "I don't care: I'm still a Scorpio" or whatever -- which, I think, is eloquent testimony to the fact that the traditional zodiacal signs really do serve as social categories, and as elements of personal identity -- which is why so many people exchange their astrological signs on first dates.

 

The Humor Theory of Temperament

Greek science had another answer for these questions, in the form of a theory first proposed by Hippocrates (460?-377? B.C.), usually acknowledged as the founder of Western medicine, and Galen (130-200? A.D.), his intellectual heir. Greek physics asserted that the universe was composed of four cosmic elements, air, earth, fire, and water. Human beings, as microcosms of nature, were composed of humors -- biological substances which paralleled the cosmic elements. The predominance of one humor over the others endowed each individual with a particular type of temperament. 

Humor theory was the first scientific theory of personality -- the first to base its descriptions on some basis other than the personal predilections of the observer, and the first to provide a rational explanation of individual differences. The theory was extremely powerful, and dominated both philosophical and medical discussions of personality well into the 19th century. Immanuel Kant, the German philosopher, abandoned Greek humor theory but retained its fourfold classification of personality types in his Anthropology of 1798 (this book was the forerunner of the now-familiar introductory psychology textbook). His descriptions of the four personality types have a flavor strongly reminiscent of Theophrastus' Characters.

In the end, Greek humor theory proved to be no more valid than astrology. Nevertheless, it formed the basis for the study of the psychophysiological correlates of emotion -- the search for patterns of somatic activity uniquely corresponding to emotional experiences. Moreover, the classic fourfold typology of personality laid the basis for a major tradition in the scientific study of personality, which emerged around the turn of the 20th century, which analyzed personality in terms of traits rather than types. We shall examine each of these topics in detail later. First, however, we should examine other typological schemes that are prominent today.

 

The Four Temperaments

The classic fourfold typology, derived from ancient Greek humor theory, is often referred to as The Four Temperaments.  Under that label, it has been the subject of a number of artworks.

In music, a humoresque is a light-hearted musical composition.  But Robert Schumann's "Humoreske in Bb", Op. 20 (1839), is a suite based on the four classical humors.

The German composer Paul Hindemith also wrote a suite for piano and strings -- actually, a theme with four variations -- entitled The Four Temperaments (1940), which George Balanchine choreographed (1946) for the Ballet Society, the forerunner of the New York City Ballet.

 

 

Modern Clinical Typologies

With the emergence of psychology as a scientific discipline separate from philosophy and physiology in the late 19th century, a number of other typological schemes were proposed. Most of these had their origins in astute clinical observation by psychiatrists and clinical psychologists rather than in rigorous empirical research. However, all of these were explicitly scientific in intent, in that their proponents attempted to develop a body of evidence that would confirm the existence of the types.

 

Intellectual Types

Beginning in the late 18th century, and especially in the late 19th century, as psychiatry began to emerge as a distinct branch of medicine, a great deal of attention was devoted to classification by intellectual ability, as measured by IQ (or something like it).




At the lower end of the scale, there were three subcategories of "mental defective" (what we now call mental retardation):

At the upper end of the scale, there was only a single category, genius, for those with extremely high IQs.  More recently, the term "genius" has been replaced with "gifted".  The upper end has also been divided into subcategories:


Freudian Typologies

Sigmund Freud (1908), a Viennese psychiatrist whose theory of personality was enormously influential in the 20th century (despite being invalid in every respect), claimed that adults displayed constellations of attributes whose origins could be traced to early childhood experiences related to weaning, toilet training, and sexuality. Freud himself described only one type -- the anal character, which displays excessive frugality, parsimony, petulance, obstinacy, pedantry, and orderliness. His followers, working along the same lines, elaborated a wide variety of additional types such as the oral, urethral, phallic, and genital (Blum, 1953; Fenichel, 1945; Shapiro, 1965).

The passage through the five stages of development leaves its imprint on adult personality. If all goes well, the person emerges possessing what is known as the genital character. Such a person is capable of achieving full sexual satisfaction through orgasm, a fact which for the first time permits the effective regulation of sexual impulses. The individual no longer has any need to adopt primitive defenses, though the adaptive defenses of displacement, creative elaboration, and sublimation are still operative. The person's emotional life is no longer threatening, and he or she can express feelings openly. No longer ambivalent, the person is capable of loving another.

Unfortunately, according to Freud, things rarely if ever go so well. People do not typically pass through the psychosexual stages unscathed, and thus they generally do not develop the genital character spontaneously. Developmental crises occurring at earlier stages prevent growth, fulfillment, and the final achievement of genital sexuality. These difficulties are resolved through the aid of additional defense mechanisms. For example the child can experience anxiety and frustration while he or she is in the process of moving from one stage to the next. Fixation occurs when the developmental process is halted, such that the person remains at the earlier stage. Alternatively, the child may experience anxiety and frustration after the advance has been completed. In this case, the person may return to an earlier stage, one that is free of these sorts of conflicts. This regression, of course, results in the loss of growth. Because of fixation and regression, psychological development does not necessarily proceed at the same pace as physical development.

Nevertheless, the point at which fixation or regression occurs determines the person's character -- Freud's term for personality -- as an adult. Not all of the resulting character types were described by Freud, but they have become generally accepted by the psychoanalytic community (Blum, 1953).

The Oral Character "... is extremely dependent on others for the maintenance of his self-esteem. External supplies are all-important to him, and he yearns for them passively.... When he feels depressed, he eats to overcome the emotion. Oral preoccupations, in addition to food, frequently revolve around drinking, smoking, and kissing" (Blum, 1953, p. 160). The oral character develops through the resolution of conflict over feeding and weaning. The oral dependent type relies on others to enhance and maintain self-esteem, and to relieve anxiety. Characteristically, the person engages in oral preoccupations such as smoking, eating, and drinking to overcome psychic pain. By contrast, the oral aggressive type expresses hostility towards those perceived to be responsible for his or her frustrations. This anger and hatred is not expressed by physical biting, as it might be in an infant, but rather by "biting" sarcasm in print or speech.

The Urethral Character:  "The outstanding personality features of the urethral character are ambition and competitiveness..." (Blum, 1953, p. 163).

The Anal Character develops through toilet training. The anal expulsive type retaliates against those deemed responsible for his or her suffering by being messy, irresponsible, disorderly, or wasteful. Or, through the mechanism of reaction formation, the person can appear neat, meticulous, frugal, and orderly. If so, however, the anal expulsive character underlying this surface behavior may be documented by the fact that somewhere, something is messy. The anal creative type, by contrast, produces things in order to please others, as well as oneself. As a result, such an individual develops attributes of generosity, charity, and philanthropy. Finally, the anal retentive type develops an interest in collecting and saving things -- as well as personality attributes of parsimony and frugality. On the other hand, through reaction formation he or she may spend and gamble recklessly, or make foolish investments.

The Phallic Character "behaves in a reckless, resolute, and self-assured fashion.... The overvaluation of the penis and its confusion with the whole body... are reflected by intense vanity, exhibitionism, and sensitiveness.... These individuals usually anticipate an expected assault by attacking first. They appear aggressive and provocative, not so much from what they say or do, but rather in their manner of speaking and acting. Wounded pride... often results in either cold reserve, deep depression, or lively aggression" (Blum, 1953, p. 163).  The phallic character, by virtue of his or her development, overvalues the penis. The male must demonstrate that he has not been castrated, and does so by engaging in reckless, vain, and exhibitionistic behaviors -- what is known in some Latin American cultures as machismo. The female resents having been castrated, and is sullen, provocative, and promiscuous -- as if to say, "look what has been done to me".

In the final analysis, Freud held that adult personality was shaped by a perpetual conflict between instinctual demands and environmental constraints. The instincts are primitive and unconscious. The defenses erected against them in order to mediate the conflict are also unconscious. These propositions give Freud's view of human nature its tragic flavor: conflict is inevitable, because it is rooted in our biological nature; and we do not know the ultimate reasons why we do the things that we do.

 

Jungian Typologies

C.G. Jung (1921), an early follower of Freud, developed an eightfold typology constructed from two attitudes and four functions. In Jung's system, the attitudes represented different orientations toward the world: the extravert, concerned with other people and objects; and the introvert, concerned with his or her own feelings and experiences. The functions represented different ways of experiencing the objects of the attitude: thinking, in which the person was engaged in classifying observations and organizing concepts; feeling, in which the person attached values to observations and ideas; sensing, in which the person was overwhelmingly concerned with concrete facts; and intuition, in which the person favored the immediate grasping of an idea as a whole.

In each person, one attitude and one function dominated over the others, resulting in eight distinct personality types.  These attitudes and functions, inferred by Jung on the basis of his clinical observations, may be measured by a specialized psychological test, the Myers-Briggs Type Indicator (Myers, 1962).

 

Sheldonian Typologies

Another historically important typology was developed by Sheldon (1940, 1942), as an extension of the constitutional psychology introduced by the German psychiatrist Ernst Kretschmer (1921). Kretschmer and Sheldon both asserted that there was a link between physique and personality. On the basis of his anthropometric studies of bodily build, in which he took various measurements of the head, neck, chest, trunk, arms, legs, and other parts of the body, Sheldon discerned three types of physique reflecting both the individual's constitutional endowment and his or her current physical appearance:

On the basis of personality data, including questionnaires and clinical interviews, Sheldon likewise discerned three types of temperament:

Sheldon also found a relationship between the physical and psychological typologies, such that:

Sheldon felt that this relationship reflected the common genetic and biochemical determinants of both physique and temperament, though of course the relationship could also reflect common environmental sources. For example, one's physique may place some limits on the kinds of activities in which one engages; or, alternatively, social stereotyping may limit the kinds of activities in which people with certain physiques are involved.

Caveat emptor.  The ostensible correlation between somatotype and personality type is almost entirely spurious, because Sheldon and his research assistants were not "blind" to the subjects' somatotypes when they evaluated their personality types.  Accordingly, Sheldon's research was vulnerable to experimenter bias and other expectancy confirmation effects, especially perceptual confirmation.

The Dreaded "Posture Photograph"

For several decades in the middle part of the 20th century, incoming freshmen at many Ivy League and Seven Sisters colleges, and some other colleges as well (including some state universities), lined up during orientation week to be photographed nude, or in their underwear (front, sides, and back), for what were called "posture photographs". The ostensible purpose of this exercise, especially at the Seven Sisters schools, was to identify students whose posture and other orthopedic characteristics might need attention during physical education classes; but in many cases the real purpose was to collect data for Sheldon's studies of somatotypes and their relation to personality.  Such photographs were taken at Harvard from 1880 into the 1940s, and many served as the illustrations for Sheldon's monograph, An Atlas of Men.  They were also taken from 1931 to 1961 at Radcliffe, then the women's college associated with Harvard.  Note that posture photographs were continued at Radcliffe long after they were discontinued at Harvard.  Somewhere there may exist a photograph of a nearly-naked George W. Bush, Yale Class of 1968 (but not of Bill Clinton, who went to Georgetown).

The practice of taking posture photographs was discontinued in the 1960s, and many sets were destroyed, but some still exist in various archives (see "The Posture Photo Scandal" by Ron Rosenbaum, New York Times Magazine, 02/12/95, and "Nude Photos Are Sealed at Smithsonian", New York Times, 01/21/95).

 

Horney's Psychological Types

Another follower of Freud, Karen Horney (1945), proposed a three-category system based on characteristic responses of the developing child to feelings of helplessness and anxiety.

 

The Types at Chemistry.com

Helen Fisher, an anthropologist who is the "chief scientific advisor" to the internet dating site Chemistry.com, has proposed her own typology, which serves as the basis for that site's assessment of personality and predictions of compatibility.  Each type is, ostensibly, based on the dominance of a particular hormone in the individual's body chemistry.

The four types are:

Fisher assesses these types with a 56-item questionnaire, helpfully reprinted in her popular-press book, Why Him? Why Her? (2010; see also her previous pop-psych book, Why We Love -- The Nature and Chemistry of Romantic Love, 2004).  However, she argues that each of these types, much like the classical humors, has its roots in the excessive activity of one or more hormones.  Why she doesn't just employ a blood panel to screen for relative hormone levels isn't clear.  Nick Paumgarten has noted that Fisher's approach "represent[s] a frontier of relationship science, albeit one that is thinly populated and open to flanking attack" ("Looking for Someone: Sex, love, and loneliness on the Internet", New Yorker, 07/04/2011).

Through fMRI, Fisher has also identified the ventral tegmental area and the caudate nucleus as the brain centers associated with "mad" romantic love -- and has even gone so far as to suggest that couples undergo brain-scanning to find out whether, in fact, they actually love each other.

 


Typologies in Social Science

Other typological systems have resulted from the analysis of whole societies rather than of individual clinic patients. Although sociology is an empirical science, these typologies are not typically determined quantitatively. Rather, like their clinical counterparts, they represent the investigator's intuitions about the kinds of people who inhabit a particular culture.

 

Spranger's Types

The German philosopher Spranger (1928) did not, strictly speaking, postulate a typology. Rather, he was interested in describing various coherent sets of values which a person could use to guide his or her life. However, the argument was presented in a book entitled Types of Men, so the descriptions below (as summarized by Allport & Vernon, 1931), seem to fit our purposes.

 

Riesman's Types

In one of the most influential pieces of social science written since World War II, David Riesman (1950) analyzed the impact of industrial development on personality.

Note that tradition-, inner-, and other-directed people are not different in conformity; rather, they are different in terms of what they conform to. Riesman argued that the other-directed type predominated in postwar American society. But to the extent that economic and cultural conditions vary in a pluralistic society, we might still expect to find a fair representation of the other types as well.

 

Fromm's Types  

Erich Fromm, who was as much influenced by Karl Marx as by Sigmund Freud, and by economics as much as by psychopathology, offered a list of five basic character types, which result from differential socialization rather than from childhood anxieties.

These types of adjustment are labelled "unproductive" by Fromm, because they prevent the individual from realizing his or her full potential.


Political Types

Most countries with a developed political system include a set of categories by which people can be classified according to their political leanings, and which, in turn, help predict what they will think and how they will vote concerning various topics.

In the United States, the most familiar of these are the contrasting categories of Democrat and Republican.  But these two organized political parties only scratch the surface.

Setting aside organized political parties, there are also more informal political categories, such as

In Great Britain, the Democrats and Republicans have their counterparts in the Labour and Conservative (Tory) parties, respectively (there is also a prominent third party, the Liberal Democrats).

On the Continent, their respective counterparts are often known as Liberal Democrats and Christian Democrats.

In Japan, they are the Democratic Party and the Liberal Democratic Party (which is actually pretty conservative).

In the Soviet Union then, and in China now, there is only one party, the Communist Party -- which pretty much gave, and gives, their political scenes a serious ingroup-outgroup, "us versus them" quality.


Biophysical vs. Biosocial Classifications

The typologies of Theophrastus and his successors -- Jung, Sheldon, Riesman, and others -- are satisfying from one standpoint, because they seem to capture the gist of many of the people whom we encounter in our everyday lives.



One problem with personality typologies, however, is the sheer number of different typological schemes that have been proposed. Most of these typologies are eminently plausible: each of us knows some extraverts and some sanguines, thinkers and intuiters, somatotonics and other-directed people. The reader who has stuck with this material so far may also have discovered that some (perhaps many or most) of his or her acquaintances can be classified into several different type categories, depending on which features are the focus of attention. This puts us in the curious position that a person's type can change according to the mental set of the observer, even if his or her behavior has not changed at all. 

Moreover, as Allport (1937) noted, no typology can encompass all the attributes of a person:

Whatever the kind, a typology is always a device for exalting its author's special interest at the expense of the individuality of the life which he ruthlessly dismembers .... This harsh judgment is unavoidable in the face of conflicting typologies. Certainly not one of these typologies, so diverse in conception and scope, can be considered final, for none of them overlaps any other. Each theorist slices nature in any way he chooses, and finds only his own cuttings worthy of admiration .... What has happened to the individual? He is tossed from type to type, landing sometime in one compartment and sometime in another, often in none at all (p. 296).

As this passage makes clear, Allport's objection to typologies was not that there were so many of them, but that they are biosocial rather than biophysical in nature. Allport held that types are cognitive categories rather than attributes of people: they exist in the minds of observers rather than in the personalities of the people who are observed. For Allport, as for many other personologists who wish to base a theory of personality on how individuals differ in essence, types appear to be a step in the wrong direction.  Allport believed that traits would provide the basis for a biophysical approach to personality that would go beyond cognitive categories to get at the core of personality.  But, as we will see, traits can be construed as categories as well.  


Psychiatric Diagnosis

Diagnosis lies at the heart of the medical model of psychopathology: the doctor's first task is to decide whether the person has a disease, and what that disease is. Everything else flows from that. A diagnostic system is, first and foremost, a classification of disease -- a description of the kinds of illnesses one is likely to find in a particular domain. But advanced diagnostic systems go beyond description: they also carry implications for underlying pathology, etiology, course, and prognosis; they tell us how likely a disease is to be cured, and which cures are most likely to work; failing a cure, they tell us how successful rehabilitation is likely to be; they also tell us how we might go about preventing the disease in the first place. Thus, diagnostic systems are not only descriptive: they are also predictive and prescriptive. Diagnosis is also critical for scientific research on psychopathology -- as R.B. Cattell put it, nosology precedes etiology. Uncovering the psychological deficits associated with schizophrenia requires that we be able to identify people who have the illness in the first place.

Psychiatric diagnosis is intended as a classification of mental illness, but it quickly becomes a classification of people with mental illness.  Thus,

Before Emil Kraepelin, the nosology of mental illness was a mess. Isaac Ray (1838/1962) followed Esquirol and Pinel in distinguishing between insanity (including mania and dementia) and mental deficiency (including idiocy and imbecility), but otherwise denied the validity of any more specific groupings.  It fell to Kraepelin to systematically apply the medical model to the diagnosis of psychopathology, attempting a classification of mental illnesses that went beyond presenting symptoms.  But in this respect, Kraepelin's program largely failed. Beginning in the fifth edition (1896) of his Textbook, and culminating in the seventh and penultimate edition (the second edition to be translated into English), Kraepelin acknowledged that classification in terms of pathological anatomy was impossible, given the present state of medical knowledge. His second choice, classification by etiology, also failed: Kraepelin freely admitted that most of the etiologies given in his text were speculative and tentative. In an attempt to avoid classification by symptoms, Kraepelin fell back on classification by course and prognosis: what made the manic-depressive psychoses alike, and different from the dementias, was not so much the difference between affective and cognitive symptoms, but rather that manic-depressive patients tended to improve while demented patients tended to deteriorate.

By focusing on the course of illness, in the absence of definitive knowledge of pathology or etiology, Kraepelin hoped to put the psychiatric nosology on a firmer scientific basis. In the final analysis, however, information about course is not particularly useful in diagnosing a patient who is in the acute stage of mental illness. Put bluntly, it is not much help to be able to say, after the disease has run its course, "Oh, that's what he had!". Kraepelin appears to have anticipated this objection when he noted that:

there is a fair assumption that similar disease processes will produce identical symptom pictures, identical pathological anatomy, and an identical etiology. If, therefore, we possessed a comprehensive knowledge of any one of these three fields, -- pathological anatomy, symptomatology, or etiology, -- we would at once have a uniform and standard classification of mental diseases. A similar comprehensive knowledge of either of the other two fields would give not only just as uniform and standard classifications, but all of these classifications would exactly coincide. Cases of mental disease originating in the same causes must also present the same symptoms, and the same pathological findings (1907, p. 117).

Accordingly, Kraepelin (1904/1907) divided the mental illnesses into 15 categories, most of which remain familiar today, including dementia praecox (later renamed schizophrenia), manic-depressive insanity (bipolar and unipolar affective disorder), paranoia, psychogenic neuroses, psychopathic personality, and syndromes of defective mental development (mental retardation). What Kraepelin did for the psychoses, Pierre Janet later did for the neuroses (Havens, 1966), distinguishing between hysteria (today's dissociative and conversion disorders) and psychasthenia (anxiety disorder, obsessive-compulsive disorder, and hypochondriasis).

Paradoxically, Kraepelin's assertion effectively justified diagnosis based on symptoms -- exactly the practice that he was trying to avoid. And that is just what the mental health professions have continued to do, for more than a century. True, the predecessors of the current Diagnostic and Statistical Manual for Mental Disorders (DSM), such as the Statistical Manual for the Use of Institutions for the Insane or the War Department Technical Bulletin, Medical 203, spent a great deal of time listing mental disorders with presumed or demonstrated biological foundations. But for the most part, actual diagnoses were made on the basis of symptoms, not on the basis of pathological anatomy -- not least because, as Kraepelin himself had understood, evidence about organic pathology was usually impossible to obtain, and evidence about etiology was usually hard to come by. In distinguishing between psychosis and neurosis, and between schizophrenia and manic-depressive disorder, or between phobia and obsessive-compulsive disorder, all the clinician had was symptoms.

Similarly, while the first edition of the Diagnostic and Statistical Manual for Mental Disorders (DSM-I; American Psychiatric Association, 1952) may have been grounded in psychoanalytic and psychosocial concepts, diagnosis was still based on lists of symptoms and signs. So too for the second edition (DSM-II; American Psychiatric Association, 1968). For example, the classical distinctions among simple, hebephrenic, catatonic (excited or withdrawn), and paranoid schizophrenia were based on presenting symptoms, not on pathological anatomy (the disorders were "functional"), etiology (unknown), or even course (all chronic and deteriorating).


In point of fact, the first two editions of DSM gave mental health professionals precious little guidance about how diagnoses were actually to be made -- which is one reason why diagnosis proved to be so unreliable (e.g., Spitzer & Fleiss, 1974; Zubin, 1967). Correcting this omission was one of the genuine contributions of what has come to be known as the neo-Kraepelinian movement in psychiatric diagnosis (Blashfield, 1985; Klerman, 1977), as exemplified by the work of the "St. Louis Group" centered at Washington University School of Medicine (Feighner, Robins, Guze, Woodruff, Winokur, & Munoz, 1972; Woodruff, Goodwin, & Guze, 1974), and the Research Diagnostic Criteria (RDC) promoted by a group at the New York State Psychiatric Institute (Spitzer, Endicott, & Robins, 1975). The third and fourth editions of the Diagnostic and Statistical Manual for Mental Disorders (DSM-III, DSM-III-R, and DSM-IV; American Psychiatric Association, 1980, 1987, 1994) were largely the product of these groups' efforts.

Diagnosis by symptoms was codified in the Schedule for Affective Disorders and Schizophrenia (SADS; Endicott & Spitzer, 1978), geared to the RDC, and in analogous instruments geared to the DSM: the Structured Clinical Interview for DSM-III-R (SCID; Spitzer, Williams, Gibbon, & First, 1990) and Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I; First, Spitzer, Gibbon, & Williams, 1997). The neo-Kraepelinian approach exemplified by DSM-IV and SCID-I has arguably made diagnosis more reliable, if not more valid. For example, clinicians can show a high rate of agreement in diagnosing Multiple Personality Disorder (in DSM-III; in DSM-IV, renamed Dissociative Identity Disorder) but it is difficult to believe that the "epidemic" of this diagnosis observed in the 1980s and 1990s represented a genuine increase in properly classified cases (Kihlstrom, 1999).

From time to time, psychiatrists, psychologists, and other mental-health specialists have considered alternatives to the diagnostic system represented by the DSM.  None of these has yet gained wide acceptance in the mental-health community -- nor, for that matter, among the insurance companies that require patients to be professionally diagnosed before they will pay for therapy.  In a review of several books on psychiatric diagnosis, Manvir Singh puts his finger on a subtle problem with the revisionist task: psychiatric diagnoses are social categories, and like all social categories, they are critical to social cognition and even to self-identity ("Read the Label", New Yorker, 05/13/2024).

In DSM: A History of Psychiatry's Bible (2021), the medical sociologist Allan V. Horwitz presents reasons for the DSM-5's botched revolution, including infighting among members of the working groups and the sidelining of clinicians during the revision process.  But there's a larger difficulty: revamping the DSM requires destroying kinds of people.  As the philosopher Ian Hacking observed [in a book on the history of multiple personality disorder], labelling people is very different from labelling quarks or microbes.  Quarks and microbes are indifferent to their labels; by contrast, human classifications change how "individuals experience themselves -- and may even lead people to evolve their feelings and behavior in part because they are so classified...."  Hacking referred to this process, in which naming creates the thing named -- and in which the meaning of names can be affected, in turn, by the name bearers -- as "dynamic nominalism".

Three new books [on autism, sociopathy, and borderline personality disorder] illustrate how psychiatric diagnosis shapes the people it describes.  It models social identities.  It offers scripts for how to behave and explanations for one's interior life.  By promising to tell people who they really are, diagnosis produces personal stakes in the diagnostic system, fortifying it against upheaval.

Just as personality tests (see, I'm an introvert!), astrological signs (I'm a Libra!) and generational monikers (I'm Gen Z!) are used to aid self-understanding, so are psychiatric diagnoses.  When Paige Layle [author of one of the books under review] was fifteen, a psychiatrist told her that she had autism spectrum disorder.  She describes the rush of clarity she experienced when hearing the DSM-5 criteria: "I'm not crazy.  I'm not making it up.  I'm not manipulative or trying to fake anything....  There's a reason why I'm the way that I am."

The point is that psychiatric diagnoses are not just labels applied to other people.  They are labels that patients can apply to themselves, and that thus form part of their identity and self-concept.


Social Stereotypes

Every culture and subculture provides its members with a set of social stereotypes.  On American college campuses, for example, there are stereotypes of jocks and preppies, townies, wonks, nerds, radicals, and so on.  In addition, students may be stereotyped by their choice of major, or by their living units. 




OKCupid.com, an Internet dating site, promotes a Dating Persona Test which classifies people into various personality types -- 16 for men and another 16 for women -- based on combinations of four bipolar personality characteristics: random vs. deliberate, gentle vs. brutal, love vs. sex, and dreamer vs. master.

Thus, the male "Boy Next Door" (random, gentle, love, dreamer) contrasts with the "Pool Boy" (random, gentle, sex, dreamer), while the female "Maid of Honor" (deliberate, brutal, sex, master) contrasts with the "Sonnet" (deliberate, gentle, love, dreamer).  

As originally defined by the American journalist Walter Lippmann (1922), "a stereotype is an oversimplified picture of the world, one that satisfies a need to see the world as more understandable than it really is".  From a cognitive point of view, stereotypes are concepts -- summary mental representations of an entire class of individuals.  But not all individuals in a society are subject to stereotyping.

Studies of the content of social stereotypes confirm that they are represented by lists of features. 

As social categories, stereotypes have two aspects:



Yuppies (and Swifties): Social Categories in Popular Culture

The term yuppie (for Young Urban Professional) was coined in the early 1980s, and the type was celebrated in Jay McInerney's novel Bright Lights, Big City, Lawrence Kasdan's movie The Big Chill, and Madonna's anthem "Material Girl".  Reviewing Tom McGrath's Triumph of the Yuppies: America, the Eighties, and the Creation of an Unequal Nation, Louis Menand offers comments that apply equally well to other social categories ("What Happened to the Yuppie?", New Yorker, 07/29/2024): 

Hippies and yuppies signified not as political constituencies but as social types. A social type stands for something that people think is important to identify either with or against. As with the Swiftie. There are people who really want to be Swifties, and there are people who can’t believe that there are people who really want to be Swifties. But, no matter how little you care about Swifties, you have to have an opinion. Even professing indifference is an opinion. And your view on Swifties says something about you. You’re the kind of person who says whatever it was you just said about Swifties—or yuppies or hippies. It all hangs together.

Underneath your indifference or disapproval or feeling of superiority about the social type that you are disidentifying with lurks, inevitably, the secret fear that those people are riding the crest of the wave. Right now, all things considered, it’s probably better to be a Swiftie. You’re part of something greater than yourself, and the world has organized itself to make you happy. In 1984, maybe it was better to be a yuppie.

In the nineteen-eighties, the yuppie served this self-definitional function—am I pro-yuppie or anti-yuppie?—exceptionally well. It enabled people to orient themselves to the times. Far more people hated yuppies, and everything the yuppie stood for, than wanted to be yuppies, of course. The term itself is a put-down. It’s close to “puppy,” and no one wants to be a puppy; everyone wants to be the big dog. But more people had contempt for hippies in the nineteen-sixties and beatniks in the nineteen-fifties—or have today, for that matter, for Zoomers—than aspired to be beatniks or hippies. “Beatnik,” too, is a put-down, a mashup of “Beat” and “Sputnik.” (Both are “far out.”) “Hippie” is a dismissive diminutive of “hipster.”

Social types are also useful as personifications. You know a hippie or a yuppie by sight. They wear a certain kind of shoe, eat a certain kind of food, drive a certain kind of car. LSD was the hippie drug, associated with dropping out. The yuppie drug was cocaine, associated with life in the fast lane. Terms like “hippie” and “yuppie” come fully loaded. They provide a completely accoutred objective correlative to a certain package of tastes and attitudes. If they luck out, they come to stand for an era—usually, since we have ten fingers, a decade. When we think of American life in the nineteen-eighties, we think of the yuppie.

 

The Evolution of Theories of Conceptual Structure

Person categories are as diverse as the classical fourfold typology and Freud's four psychosexual characters.  It is one thing (and great fun) to determine what categories of persons there are, but the more important scientific task is to determine how social categories are organized in the mind -- in other words, to determine their structure.  Cognitive psychology has made a great deal of progress in understanding the structure of concepts and categories, and research and theory in social cognition has profited from, and contributed to, these theoretical developments.



Perhaps the earliest philosophical discussion of conceptual structure was provided by Aristotle in his Categories.  Aristotle set out the classical view of categories as proper sets -- a view which dominated thinking about concepts and categories well into the 20th century.  Beginning in the 1950s, however, and especially the 1970s, philosophers, psychologists, and other cognitive scientists began to express considerable doubts about the classical view.  In the time since, a number of different views of concepts and categories have emerged -- each attempting to solve the problems of the classical view, but each raising new problems of its own.  Here's a short overview of the evolution of theories of conceptual structure.


The Classical View: Categories as Proper Sets

According to the classical view, concepts are summary descriptions of the objects in some category.  This summary description is abstracted from instances of a category, and applies equally well to all instances of a category.  

In the classical view, categories are structured as proper sets, meaning that the objects in a category share a set of defining features which are singly necessary and jointly sufficient to demarcate the category.

Examples of classification by proper sets include:

According to the proper set view, categories can be arranged in a hierarchical system which represents the vertical relations between categories, and yields the distinction between superordinate and subordinate categories.

Such hierarchies of proper sets are characterized by perfect nesting, by which we mean that subsets possess all the defining features of supersets (and then some). Examples include:




Note, for example, the perfect nesting in a hierarchy of geometrical figures: all trapezoids possess the defining features of quadrilaterals, and all quadrilaterals possess the defining features of plane figures.

Proper sets are also characterized by an all-or-none arrangement which characterizes the horizontal relations between adjacent categories, or the distinction between a category and its contrast. Because defining features are singly necessary and jointly sufficient, proper sets are homogeneous in the sense that all members of a category are equally good instances of that category (because they all possess the same set of defining features). An entity either possesses a defining feature or it doesn't; thus, there are sharp boundaries between contrasting categories: an object is either in the category or it isn't. You're either a fish, or you're not a fish.  There are no ambiguous cases of category membership.

According to the classical view, object categorization proceeds by a process of feature-matching.  Through perception, the perceiver extracts information about the features of the object; these features are then compared to the defining features of some category.  If there is a complete match between the features of the object and the defining features of the category, then the object is labeled as an instance of that category.
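The feature-matching process described above is easy to express computationally.  The following sketch is only an illustration of the logic (the category definitions are hypothetical, not drawn from any particular study): a category is a set of defining features, and an object belongs to the category if and only if it possesses every one of them.

```python
# A minimal sketch of classification under the classical (proper-set) view.
# Categories are defined by features that are singly necessary and jointly
# sufficient; the feature lists here are hypothetical illustrations.

DEFINING_FEATURES = {
    "square":    {"closed figure", "four sides", "equal sides", "right angles"},
    "rectangle": {"closed figure", "four sides", "right angles"},
    "triangle":  {"closed figure", "three sides"},
}

def classify_classical(object_features):
    """Return every category whose defining features are ALL present.

    Membership is all-or-none: the object is in the category if and only if
    it possesses every defining feature; otherwise it is simply out."""
    return [name for name, defining in DEFINING_FEATURES.items()
            if defining <= object_features]      # subset test = complete feature match

shape = {"closed figure", "four sides", "equal sides", "right angles"}
print(classify_classical(shape))   # ['square', 'rectangle'] -- note the perfect nesting
```

Note that the subset test also captures perfect nesting: anything that satisfies the defining features of the subordinate category (square) automatically satisfies those of its superordinate (rectangle).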


Problems with the Classical View

The proper set view of categorization is sometimes called the classical view because it is the one handed down in logic and philosophy from the time of the ancient Greeks. But there are some problems with it which suggest that however logical it may seem, it's not how the human mind categorizes objects.  Smith & Medin (1981) distinguished between general criticisms of the classical view, which arise from simple reflection, and empirical criticisms, which emerge from experimental data on concept-formation.  

 

General Criticisms

On reflection, for example, it appears that some concepts are disjunctive: they are defined by two or more different sets of defining features.




Disjunctive categories violate the principle of defining features, because there is no defining feature which must be possessed by all members of the category.

Another problem is that many entities have unclear category membership. According to the classical, proper-set view of categories, every object should belong to one category or another. But is a rug an article of furniture? Is a potato a vegetable? Is a platypus a mammal? Is a panda a bear? We use categories like "furniture" without being able to clearly determine whether every object is a member of the category.

Furthermore, some categories are associated with unclear definitions.  That is, it is difficult to specify the defining features of many of the concepts we use in ordinary life. A favorite example (from the philosopher Wittgenstein) is the concept of "game". Games don't necessarily involve competition (solitaire is a game); there isn't necessarily a winner (ring-around-the-rosy), and they're not always played for amusement (professional football). Of course, it may be that the defining features exist, but haven't been discovered yet. But that doesn't prevent us from assigning entities to categories; thus, categorization doesn't seem to depend on defining features.

 

Empirical Criticisms

Yet another problem is imperfect nesting: it follows from the hierarchical arrangement of categories that members of subordinate categories should be judged as more similar to members of immediately superordinate categories than to more distant ones, for the simple reason that the two categories share more features in common. Thus, a sparrow should be judged more similar to a bird than to an animal. This principle is often violated: for example, chickens, which are birds, are judged to be more similar to animals than to birds.  The result is a tangled hierarchy of related concepts.

The chicken-sparrow example reveals the last, and perhaps the biggest, problem with the classical view of categories as proper sets: some entities are better instances of their categories than others. This is the problem of typicality. A sparrow is a better instance of the category bird -- it is a more "birdy" bird -- than is a chicken (or a goose, or an ostrich, or a penguin). Within a culture, there is a high degree of agreement about typicality. The problem is that all the instances in question share the features which define the category bird, and thus must be equivalent from the classical view. But they are clearly not equivalent; variations in typicality among members of a category can be very large.

Variations in typicality can be observed even in the classic example of a proper set -- namely, geometrical figures.  For example, subjects usually identify an equilateral triangle, with equal sides and equal angles, as more typical of the category triangle than isosceles, right, or acute triangles.

There are a large number of ways to observe typicality effects:

Typicality appears to be determined by family resemblance.  Category instances seem to be united by family resemblance rather than any set of defining features shared by all members of a category.  Just as a child may have his mother's nose and his father's ears, so instance A may share one feature with instance B, and an entirely different feature with instance C, while B shares yet a third feature with C that it does not share with A.  Empirically, typical members share lots of features with other category members, while atypical members do not. Thus, sparrows are small, and fly, and sing; chickens are big, and walk, and cluck.

Typicality is important because it is another violation of the homogeneity assumption of the classical view. It appears that categories have a special internal structure which renders instances nonequivalent, even though they all share the same singly necessary and jointly sufficient defining features. Typicality effects indicate that we use non-necessary features when assigning objects to categories. And, in fact, when people are asked to list the features of various categories, they usually list features that are not true for all category members.

The implication of these problems, taken together, is that the classical view of categories is incorrect. Categorization by proper sets may make sense from a logical point of view, but it doesn't capture how the mind actually works.

 

The Prototype View

Beginning in the 1970s, another view of categorization gained status within psychology: this is known as the prototype or probabilistic view. The probabilistic view has its origins in the philosophical work of Ludwig Wittgenstein, but was brought into psychological theory by UCB's Prof. Eleanor Rosch (now Prof. Emerita), who, with a single paper in 1975, overturned 2,500 years of thinking -- ever since Aristotle, actually -- about concepts and categories.



The prototype view retains the idea, from the classical view, that concepts are summary descriptions of the instances of a category.  Unlike the classical view, however, in the prototype view the summary description does not apply equally well to every member of the category, because there are no defining features of category membership.  

According to the prototype view, categories are fuzzy sets, in that there is only a probabilistic relationship between any particular feature and category membership. No feature is singly necessary to define a category, and no set of features is jointly sufficient.

 

Fuzzy Sets and Fuzzy Logic

The notion of categories as fuzzy rather than proper sets, represented by prototypes rather than lists of defining features, is related to the concept of fuzzy logic developed by Lotfi Zadeh, a computer scientist at UC Berkeley.  Whereas the traditional view of truth is that a statement (such as an item of declarative knowledge) is either true or false, Zadeh argued that statements can be partly true, possessing a "truth value" somewhere between 0 (false) and 1 (true).  

Fuzzy logic can help resolve certain logical conundrums -- for example the paradox of Epimenides the Cretan (6th century BC), who famously asserted that "All Cretans are liars".  If all Cretans are liars, and Epimenides himself is a Cretan, then his statement cannot be true.  Put another way: if Epimenides is telling the truth, then he is a liar.  As another example, consider the related Liar paradox: the simple statement that "This sentence is false".  Zadeh has proposed that such paradoxes can be resolved by concluding that the statements in question are only partially true.

Fuzzy logic also applies to categorization.  Under the classical view of categories as proper sets, a similar "all or none" rule applies: an object either possesses a defining feature of a category or it does not; and therefore it either is or is not an instance of the category.  But under fuzzy logic, the statement "object X has feature Y" can be partially true; and if Y is one of the defining features of category Z, it also can be partially true that "Object X is an instance of category Z".
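As a rough computational analogue of this idea, the sketch below expresses category membership as a graded value between 0 and 1 rather than an all-or-none judgment.  The feature "truth values" and weights are invented for illustration; this is not Zadeh's own formalism, just a simple weighted average in his spirit.

```python
# A minimal sketch of fuzzy (graded) category membership, in the spirit of
# Zadeh's fuzzy logic.  Feature truth values and weights are hypothetical.

def fuzzy_membership(feature_truths, feature_weights):
    """Combine partial truths about an object's features into a degree of
    category membership between 0 (clearly out) and 1 (clearly in)."""
    total_weight = sum(feature_weights.values())
    weighted = sum(weight * feature_truths.get(feature, 0.0)
                   for feature, weight in feature_weights.items())
    return weighted / total_weight

# "Is a tomato a vegetable?" -- partial truths instead of a yes/no answer.
tomato = {"eaten in salads": 1.0, "served with dinner": 0.9, "sweet": 0.2, "has seeds": 1.0}
vegetable_weights = {"eaten in salads": 1.0, "served with dinner": 1.0, "sweet": 0.0, "has seeds": 0.2}

print(round(fuzzy_membership(tomato, vegetable_weights), 2))   # 0.95 -- mostly a vegetable
```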


A result of the probabilistic relation between features and categories is that category instances can be quite heterogeneous. That is, members of the same category can vary widely in terms of the attributes they possess. All of these attributes are correlated with category membership, but none are singly necessary and no set is jointly sufficient.

Some instances of a category are more typical than others: these possess relatively more central features.

According to the prototype view, categories are not represented by a list of defining features, but rather by a category prototype, or focal instance, which has many features central to category membership (and thus a family resemblance to other category members) but few features central to membership in contrasting categories.

It also follows from the prototype view that there are no sharp boundaries between adjacent categories (hence the term fuzzy sets). In other words, the horizontal distinction between a category and its contrast may be very unclear. Thus, a tomato is a fruit but is usually considered a vegetable (it has only one perceptual attribute of fruits, having seeds; but many functional features of vegetables, such as the circumstances under which it is eaten). Dolphins and whales are mammals, but are usually (at least informally) considered to be fish: they have few features that are central to mammalhood (they give live birth and nurse their young), but lots of features that are central to fishiness.

Actually, there are two different versions of the prototype view: a featural version, in which the prototype is represented as a list of features that are characteristic of (but not necessary for) category membership, and a dimensional version, in which the prototype is represented as the central tendency, or average, of the category's members arrayed in a multidimensional space.




The two versions of the prototype view have somewhat different implications for categorization. 

Either way, categorization is no longer an "all-or-none" matter.  Category membership can vary by degrees, depending on how closely the object resembles the prototype.
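To make the contrast with the classical view concrete, here is a minimal prototype-style classifier.  The prototypes and feature lists are hypothetical illustrations: instead of requiring a complete match to defining features, the classifier grades membership by overlap with a category prototype and accepts anything above a threshold.

```python
# A minimal sketch of prototype-based categorization.  No single feature is
# necessary and none is sufficient; membership is a matter of degree.
# The prototype feature lists are hypothetical illustrations.

PROTOTYPES = {
    "bird": {"flies", "sings", "small", "lays eggs", "has feathers", "perches in trees"},
    "fish": {"swims", "lives in water", "has fins", "has scales", "lays eggs"},
}

def prototype_match(object_features, prototype):
    """Proportion of the prototype's features that the object shares."""
    return len(object_features & prototype) / len(prototype)

def classify_by_prototype(object_features, threshold=0.5):
    scores = {name: prototype_match(object_features, proto)
              for name, proto in PROTOTYPES.items()}
    best = max(scores, key=scores.get)
    return (best if scores[best] >= threshold else None), scores

sparrow = {"flies", "sings", "small", "lays eggs", "has feathers", "perches in trees"}
penguin = {"swims", "lays eggs", "has feathers", "lives in water"}

print(classify_by_prototype(sparrow))   # a highly typical bird: perfect match to the prototype
print(classify_by_prototype(penguin))   # an atypical bird: it actually scores higher on the
                                        # "fish" prototype, illustrating the fuzzy boundary
```

The threshold is arbitrary, which is part of the point: under the prototype view there is no principled place to draw a sharp line between members and non-members.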

You Say "Tomato"... I Say "It's a Truck"

It turns out that defining categories isn't easy, even for legislators, judges, and policymakers. 

Consider, again, the tomato.  In 1883, Congress enacted a Tariff Act which placed a 10% duty on "vegetables in their natural state", but permitted duty-free import of "green, ripe, or dried" fruits. The Customs Collector in the Port of New York, seeing the prospects of increased revenues, declared that tomatoes were vegetables and therefore taxable. The international tomato cartel sued, and the case (known as Nix v. Hedden) eventually reached the United States Supreme Court, which unanimously declared the tomato to be a vegetable, while knowing full well that it is a fruit. Justice Gray wrote for the bench:

Botanically speaking, tomatoes are the fruit of a vine, just as are cucumbers, squashes, beans, and peas. But in the common language of the people, whether sellers or consumers of provisions, all these are vegetables which are grown in kitchen gardens, and which, whether eaten cooked or raw, are, like potatoes, carrots, parsnips, turnips, beets, cauliflower, celery and lettuce, usually served at dinner in, with, or after the soup, fish, or meats which constitute the principal part of the repast, and not, like fruits, generally, as dessert.

Nearly a century later, the Reagan administration, trying to justify cuts in the budget for federal school-lunch assistance, likewise declared tomato ketchup -- like cucumber relish -- to be a vegetable.

While tomatoes are commonly considered to be vegetables of a sort to be found in salads and in spaghetti sauce, and not fruits of a sort found on breakfast cereals or birthday cakes, Edith Pearlman did find Tomate offered as a dessert in a restaurant in Paris ("Haute Tomato", Smithsonian, July 2003).  Nevertheless, as the British humorist Miles Kington noted, "Knowledge tells us that a tomato is a fruit; wisdom prevents us from putting it into a fruit salad" (quoted by Heinz Hellin, Smithsonian, September 2003).  

As another example: American elementary-school students are commonly taught that there are five "Great Lakes" on the border between the United States and Canada -- Ontario, Erie, Michigan, Huron, and Superior.  But in 1998, at the behest of Senator Patrick Leahy of Vermont, Congress voted to designate Lake Champlain, which lies on the border between Vermont and New York, as a sixth Great Lake.  Leahy's logic is unclear, but seems to have been that the Great Lakes were all big lakes that were on political boundaries, or at least near Canada, and Lake Champlain was also a big lake on a political boundary, or at least it was near Canada too, so Lake Champlain ought to be a Great Lake too (the designation was written into law, but later revoked).

And finally, is an SUV a car or a truck? (see "Big and Bad" by Malcolm Gladwell, New Yorker, 01/12/04).  Generally, cars are intended to move people, while trucks are intended to move cargo.  When the Detroit automakers introduced sport utility vehicles, they were classified as trucks, on the grounds that they were intended for off-road use by farmers, ranchers, and the like for carrying heavy loads and towing heavy trailers.  But then the same vehicles were marketed to urban and suburban customers as a "lifestyle choice" for everyday use.  Legally, the classification of SUVs as trucks means that they do not have to conform to "CAFE" standards for fuel efficiency, or to car standards for safety.  Still, the categories of car and truck appear to be fuzzy sets, as illustrated by the following exchange between Tom and Ray Magliozzi, "the Click and Clack" car guys of National Public Radio ("Reader Defends Headroom in Subaru", West County Times, 02/28/04):

Tom:  We have to admit, though, we're a little ticked off at Subaru these days.

It recently decided to modify the new Outback wagon AWD sedan so they would qualify as "trucks" under the National Highway Traffic Safety Administration's fuel-economy rules.

Ray:  This is a sleazy little move, in our humble opinion, to get around the federal fuel-economy guidelines -- which are different for trucks.

And a company like Subaru, which counts many environmentalists among its customers, should be embarrassed to take advantage of a loophole like that.  I'm sure customers will let Subaru hear about this.

Tom:  But we would be remiss if we didn't point out that other companies do this, too, because the loophole is the size of a Ford Excursion.  It's something the federal regulators really ought to close.

Ray:  Right.  I mean, an exception to the fuel-economy rules is made for trucks, because they've traditionally been considered work vehicles.  And we don't want to limit a person's ability to work for a living.

But is a Chrysler PT Cruiser a work truck?  Is a Volvo station wagon? A minivan that's designed to carry kids?

Tom:  How do you define a truck?  Well, the feds have a variety of definitions.  Some of them make sense -- like, a vehicle with an open bed, such as a pickup, is a truck.

But then they have other definitions that are too easy to meet.  Like, if the back seat folds down and leaves a flat loading floor, they'll call it a truck.

I don't think so.  That's a car with a flat loading floor, isn't it?

Ray:  Or if you have a certain amount of ground clearance, you can call your vehicle a truck (that's the alteration Subaru is making to the Outback).

But lots of all-wheel-drive cars have good ground clearance these days -- for styling as much as anything else.

Tom:  So NHTSA really needs to close these loopholes.  Maybe it should define trucks as having a minimum amount of load capacity.

You can't just add a couple of spacers to the suspension of a Subaru and make it carry 1,500 pounds of gravel.

Ray:  I actually don't know the best way to define a truck.  But, to quote Supreme Court Justice Potter Stewart about pornography, "I know it when I see it".

And the Subaru Outback sedan ain't a truck.  Shape up, Subaru.


The prototype view solves most of the problems that confront the classical view, and (in my view, anyway) is probably the best theory of conceptual structure and categorization that we've got.  But as research proceeded on various aspects of the prototype view, certain problems emerged, leading to the development of other views of concepts and categories.



In the prototype view, as in the classical view, related categories can be arranged in a hierarchy of subordinate and superordinate categories.  Many accounts of the prototype view argue that there is a basic level of categorization, which is defined as the most inclusive level at which:

  • objects in a category have characteristic attributes in common;
  • objects have characteristic movements in common;
  • objects have a characteristic physical appearance; and
  • objects can be identified and categorized from their average appearance.

In the realm of animals, for example, dog and cat are at the basic level, while beagle and Siamese are at subordinate levels.  In the domain of musical instruments, piano and saxophone are at the basic level, while grand piano and baritone saxophone are at subordinate levels.  The basic level is in some important sense psychologically salient, and preferred for object categorization and other cognitive purposes.

 

The Exemplar View

Some theorists now favor a third view of concepts and categories, which abandons the definition of concepts as summary descriptions of category members. According to the exemplar view, concepts consist simply of lists of their members, with no defining or characteristic features to hold the entire set together. In other words, what holds the instances together is their common membership in the category. It's a little like defining a category by enumeration, but not exactly. The members do have some things in common, according to the exemplar view; but those things are not particularly important for categorization.


When we want to know whether an object is a member of a category, the classical view says that we compare the object to a list of defining features; the prototype view says that we compare it to the category prototype; the exemplar view says that we compare it to individual category members. Thus, in forming categories, we don't learn prototypes, but rather we learn salient examples.

Teasing apart the prototype and the exemplar view turns out to be fiendishly difficult. There are a couple of very clever experiments which appear to support the exemplar view.  For example, it turns out that we will classify an object as a member of a category if it resembles another object that is already labeled as a category member, even if neither the object nor that instance particularly resembles the category prototype.

Nevertheless, some investigators are worried about the exemplar view because it seems uneconomical. The compromise position, which has many adherents, is that we categorize in terms of both prototypes and exemplars. For example -- and this is still a hypothesis to be tested -- novices in a particular domain may categorize in terms of prototypes, while experts categorize in terms of exemplars.

Despite these differences, the exemplar view agrees with the prototype view that categorization proceeds by way of similarity judgments.  And they further agree that similarity varies in degrees.  They just differ in what the object must be similar to:

  • In the prototype view, the object must be similar to the category prototype.
  • In the exemplar view, the object must be similar to some category instance (or exemplar). 

Following the work of Amos Tversky, Medin (1989) has outlined a modal model of similarity judgments:

  • similarity increases with the number of shared features;
  • similarity decreases with the number of distinctive features;
  • the features in question are, at least in principle, independent of each other;
  • features all exist at the same level of abstraction.

In either case, similarity is sufficient to describe conceptual structure -- all the instances of a concept are similar, in that they either share some features with the category prototype or they share some features with a category exemplar.
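Tversky's contrast model can be written out explicitly.  In the sketch below, the weights and the feature sets are hypothetical choices for illustration: similarity increases with shared features, decreases with the distinctive features of each object, and the same function serves both prototype and exemplar comparisons.

```python
# A sketch of Tversky-style "contrast model" similarity: shared features add
# to similarity, and each object's distinctive features subtract from it.
# The weights and feature sets below are hypothetical.

def contrast_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    common     = len(a & b)    # features shared by a and b
    a_distinct = len(a - b)    # features of a not shared by b
    b_distinct = len(b - a)    # features of b not shared by a
    return theta * common - alpha * a_distinct - beta * b_distinct

robin   = {"flies", "sings", "small", "lays eggs", "has feathers"}
chicken = {"walks", "clucks", "large", "lays eggs", "has feathers"}
bird_prototype = {"flies", "sings", "small", "lays eggs", "has feathers", "builds nests"}

# Prototype view: compare each object to the category prototype.
print(contrast_similarity(robin, bird_prototype))     #  4.5 -- robin is a typical bird
print(contrast_similarity(chicken, bird_prototype))   # -1.5 -- chicken is an atypical bird

# Exemplar view: compare the new object to a stored instance instead.
print(contrast_similarity(chicken, robin))            # -1.0 -- little resemblance to a known bird
```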


The Theory-Based View

As noted, the prototype and exemplar views of categorization are both based on a principle of similarity. What members of a category have in common is that they share some features or attributes with at least some other member(s) of the same category. The implication is that similarity is an attribute of the objects themselves, one that can either be measured (by counting overlapping features) or judged (by estimating that overlap).  But ingenious researchers have uncovered some troubles with similarity as a basis for categorization -- and, for that matter, with similarity in general.

Context Effects.  Recently, however, it has been recognized that some categories are defined by theories rather than by similarity.

For example, in one experiment, when subjects were presented with pictures of a white cloud, a grey cloud, and a black cloud, they grouped the grey and black clouds together as similar; but when presented with pictures of white hair, grey hair, and black hair, in which the shades of hair were identical to the shades of cloud, subjects grouped the grey hair with the white hair. Because the shades were identical in the two cases, grouping could not have been based on similarity of features. Rather, the categories seemed to be defined by a theory of the domain: grey and black clouds signify stormy weather, while white and grey hair signify old age.


Ad-Hoc Categories.  What do children, money, insurance papers, photo albums, and pets have in common? Nothing, when viewed in terms of feature similarity. But they are all things that you would take out of your house in case of a fire. The objects listed together are similar to each other in this respect only; in other respects, they are quite different.  

This is also true of the context effects on similarity judgment: grey and black are similar with respect to clouds and weather, while grey and white are similar with respect to hair and aging.  

These observations tell us that similarity is not necessarily the operative factor in category definition. In some cases, at least, similarity is determined by a theory of the domain in question: there is something about weather that makes grey and black clouds similar, and there is something about aging that makes white and grey hair similar.

In the theory-based view of categorization (Medin, 1989), concepts are essentially theories of the categorical domain in question.  Conceptual theories perform a number of different functions:




  • they provide a causal explanation for why the members of a category have the features they have --
    •  or, put another way, for why the members of a category are in the category in the first place;
  • they explain the relations among features;
  • they render some features relevant (central), and others irrelevant (peripheral).

From this point of view, similarity-based classification, as described in the prototype and exemplar views, is simply a short-cut heuristic used for purposes of classification.  The real principle of conceptual structure is the theory of the categorical domain in question.


Conceptual Coherence

One way or another, concepts and categories have coherence: there is something that links members together. In classification by similarity, that something is intrinsic to the entities themselves; in classification by theories, that something is imposed by the mind of the thinker.

But what to make of this proliferation of theories?  From my point of view, the questions raised about similarity have a kind of forensic quality -- they sometimes seem to amount to a kind of scholarly nit-picking.  To be sure, similarity varies with context; and there are certainly some categories that are held together only by a theory, where similarity fails utterly to do the job.  But for most purposes, the prototype view, perhaps corrected (or expanded) a little by the exemplar view, works pretty well as an account of how concepts are structured, and how objects are categorized.

As it happens, most work on social categorization has been based on the prototype view.  But there are areas where the exemplar view has been applied very fruitfully, and even a few areas where it makes sense to abandon similarity, and to invoke something like the  theory-based view.

To summarize this history, concepts were first construed as summary descriptions of category members.

  • In the classical view of categories as proper sets, this summary consisted of a list of the features that were singly necessary and jointly sufficient to define the category.
  • In the prototype view of categories as fuzzy sets, this summary consisted of a prototype which possessed many features central to category membership, and few features central to membership in contrasting categories.  In this view, categorization is a matter of judgment, and depends on the amount of similarity between the prototype and the object to be categorized.  

The exemplar view abandons the notion that concepts are summary descriptions, and instead proposes that concepts are collections of instances that exemplify the category.  But it does not abandon the notion that concepts are based on similarity of features.  While in the prototype view category members are similar to the prototype, in the exemplar view category members are similar to other exemplars.  

Between them, the prototype and exemplar views provide a pretty good account of concepts and categories.  Conventional wisdom holds that concepts are represented as a combination of prototypes and exemplars, with novices relying on prototypes and experts relying on exemplars for categorization of new objects.  

The theory view of categories abandons similarity as the basis for categorization.  Instead, concepts are represented as "theories" which guide the grouping of instances into a category.  According to the theory view, similarity is a heuristic that we use as an economical shortcut strategy for categorization; but the closer you look, according to this view, the more it becomes clear that conceptual coherence -- the "glue" that holds a concept together -- is really provided by a theory, not similarity.  

In any event, it turns out that all four views of classification -- all five, if you count the dimensional and featural versions of the prototype view separately -- have been applied to the problem of person categorization.  But by far the most popular framework for the study of person concepts has been the featural version of the prototype view.


The Structure of Person Categories

Having now looked at theories of conceptual structure in the nonsocial domain, let's see how these have worked out in the social domain of persons, behaviors, and situations.


Person Categories as Proper Sets

Initially, person categories were, at least implicitly, construed as proper sets, summarized by a list of singly necessary and jointly sufficient defining features.

Consider, for example, the classical fourfold typology of Hippocrates, Galen, and Kant.  As characterized by Wundt (1903), each of the four types was defined by its own characteristic set of traits.  Melancholics are anxious and worried, Cholerics are quickly roused and egocentric, Phlegmatics are reasonable and high-principled, and Sanguines are playful and easy-going.

 


This classification scheme had such wide appeal that it lasted for more than 2,000 years, but it has a kind of Procrustean quality.  

 

The Myth of Procrustes

Procrustes is a character who appears in the Greek myth of Theseus and the Minotaur, as depicted by Apollodorus and other classical authors.  Procrustes was an innkeeper in Eleusis who would rob his guests, and then torture them on a special bed.  If his victim were too short, he was stretched on a rack until he fit; if a victim were too tall, he was cut to the right length.  Theseus, a prince of Athens and cousin of Hercules who had ambitions to become a hero himself, fought with Procrustes, and other bandits as well, and killed him with his own bed.  

The tale is perhaps the origin of the old expression: "You made your bed: Now lie in it!".


Empirically, it proves difficult to fit everybody into this (or any other) typological scheme, for the simple reason that most human attributes are not generally present in an all-or-none manner, but are continuously distributed over an infinite series of gradations. The most prominent exceptions are gender and blood type -- and, as noted, even gender isn't exactly all-or-none.  The claim of continuity certainly holds true for other strictly physical dimensions such as height, girth, and skin color -- How tall is tall? How fat is fat? How black is black? -- and this is all the more the case for psychological attributes. All sanguines may be sociable, but some people are more sociable than others. The notion that some people may be more sanguine than others, which is what this fact implies, is inconsistent with the classical view of categorization. 

Moreover, it is also apparent that individuals can also display features that define contrasting categories. If sanguines are sociable, and cholerics egocentric, what do we make of a person who is both sociable and egocentric? 

We are reminded of the entomologist who found an insect which he could not classify, and promptly stepped on it (joke courtesy of E.R. Hilgard).

The problem of partial and combined expression of type features, once recognized and taken seriously, was the beginning of the end for type theories.

 

Foreshadowing the Dimensional Prototype View

These problems were recognized early on, and one of the founders of modern scientific psychology, Wilhelm Wundt (1903), offered a solution to both of these dilemmas. Wundt was a structuralist, primarily concerned with analyzing mental life into its basic elements. While his work emphasized the analysis of sensory experience, he also turned his attention to the problems of emotion and personality. Wundt argued that the classic fourfold typology of Hippocrates, Galen, and Kant could be understood in terms of emotional arousal. 



  • Cholerics and melancholics were disposed to strong emotions,
    • while sanguines and phlegmatics were disposed to weak ones; 
  • similarly, the emotions were quickly aroused in cholerics and sanguines, 
    • but only slowly in melancholics and phlegmatics.

Instead of slotting people into four discrete categories, Wundt proposed that people be classified in terms of two continuous dimensions reflecting their characteristic speed and intensity of emotional arousal. In this way, Kant's categorical system was transformed into a dimensional system, in which individuals could be located not in categories but as points in two-dimensional space. The classic fourfold types described those individuals who fell along the diagonals of the system.

Abandonment of categorical types in favor of the dimensional scheme proposed by Wundt seemed to allow the differences between people to be represented more accurately.  Such a solution also appealed to Allport, who celebrated the individual, and who was more interested in studying a person's unique combination of attributes than in studying whole classes of people, or people in general. 

Wundt's solution to the problem of types initiated a tradition in the scientific study of personality known as trait psychology.  In the present context, however, and with the benefit of hindsight (because Wundt, like everyone else of his time, implicitly or explicitly embraced the classical view of categories as proper sets), we can view Wundt's work as an early anticipation of the prototype view of categories -- and, specifically, of the dimensional view of concepts.  In this view, the instances of each type occupy their respective quadrants in a two-dimensional space; and the "prototypical" instance of each type lies at a point representing the average of all members of that category.


Applying the Featural Prototype View

A line of research initiated by Nancy Cantor was the first conscious, deliberate application of the prototype view of categories to the problem of person concepts.  Cantor's hypothesis was simply that person categories, like other natural categories, were structured as fuzzy sets and represented by category prototypes.

In one set of studies Cantor & Mischel (1979; Walter Mischel was Cantor's graduate advisor) devised four three-level hierarchies of person categories.  These categories were not those of the classic fourfold typology, but rather were selected to be more recognizable to nonprofessionals. 

 

 

Because these categories were somewhat artificial, Cantor & Mischel first asked a group of subjects to sort the various types into four large categories, and then to sort each of these into subcategories.  Employing a statistical technique known as hierarchical cluster analysis, they showed that subjects' category judgments largely replicated the structure that they had intended to build into the experiment.
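For readers curious about the technique itself, here is a minimal sketch of a hierarchical cluster analysis of sorting data.  This is not Cantor & Mischel's data or code; the person types and the co-sorting proportions are hypothetical stand-ins.  The idea is simply that types sorted into the same pile by many subjects are treated as "close", and average-linkage clustering recovers the implied hierarchy.

```python
# A minimal sketch of hierarchical cluster analysis on person-sorting data.
# The person types and the co-sorting matrix are hypothetical, not the
# actual Cantor & Mischel (1979) data.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

types = ["phobic", "criminal madman", "religious devotee", "social activist"]

# Proportion of subjects who sorted each pair of types into the SAME pile.
co_sort = np.array([
    [1.00, 0.70, 0.10, 0.15],
    [0.70, 1.00, 0.05, 0.10],
    [0.10, 0.05, 1.00, 0.65],
    [0.15, 0.10, 0.65, 1.00],
])

distance = 1.0 - co_sort                        # frequently co-sorted types are "close"
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)  # condensed form required by linkage()

tree = linkage(condensed, method="average")     # average-linkage hierarchical clustering
labels = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(types, labels)))
# e.g. {'phobic': 1, 'criminal madman': 1, 'religious devotee': 2, 'social activist': 2}
```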

 

Next, Cantor & Mischel asked subjects to list the attributes that were "characteristic of and common to" each type of person.  They then compiled a list of features listed by more than 20% of the subjects, and asked a new group of subjects to rate the percentage of members in each category who would possess that feature.  Finally, Cantor & Mischel assembled "consensual" feature lists for each of the categories, consisting of those features that had been rated by at least 50% of the subjects as "characteristic of and common to" the category members.

Cantor & Mischel found, as would be expected from the prototype view, that examples within a particular category had relatively few shared features, but they were held together by a kind of family-resemblance structure, centered around prototypical examples of each type at each level.

  • There were no defining features of any category, in that there was no single feature listed by all subjects for any category.
  • Taken together the categories formed a kind of tangled hierarchy in which some features listed for subordinate categories were also listed as features for superordinate levels of contrasting categories.

Using the data they had collected, Cantor & Mischel asked whether there was a "basic level" of person categorization, as Rosch and her colleagues (1976) had found for object categories.  Recall that the basic level of categorization is the most inclusive level at which:

  • objects have characteristic attributes in common;
  • objects have characteristic movements in common;
  • objects have a characteristic physical appearance; and
  • objects can be identified and categorized from their average appearance.

Counting the number of consensual attributes at each level of categorization, Cantor & Mischel found, analogous to Rosch et al. (1976), that the middle level of their hierarchy -- the level of phobic and criminal madman, religious devotee and social activist -- seemed to function as a basic level for person categorization. 

For example, moving from the superordinate level to the middle level produced a greater increase in the number of attributes associated with the category, compared to the shift from the middle level to the subordinate level.  In other words, the middle level maximized the information value of the category.
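The logic of that comparison can be captured in a few lines.  The attribute counts below are invented for illustration (they are not Cantor & Mischel's actual figures): the basic level is the level at which moving down the hierarchy buys the largest gain in consensual attributes.

```python
# A sketch of the "information gain" logic for locating a basic level.
# The consensual-attribute counts are hypothetical, not the actual data.

consensual_attributes = {                 # attributes listed at each level of the hierarchy
    "superordinate (madman)": 3,
    "middle (phobic)": 10,
    "subordinate (claustrophobic)": 12,
}

levels = list(consensual_attributes.items())
for (upper, n_upper), (lower, n_lower) in zip(levels, levels[1:]):
    print(f"{upper} -> {lower}: gain of {n_lower - n_upper} attributes")

# The largest gain (3 -> 10) comes from moving to the middle level, so the
# middle level functions as the basic level of person categorization.
```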

 

 

This was also the case for specific attributes, such as physical appearance and possessions, trait dispositions, and behaviors.  Things didn't work out quite so neatly for socioeconomic status, where the subordinate level seemed to provide a greater increment in information (but there wasn't much socioeconomic information to begin with).

 

 

Cantor & Mischel's work on person prototypes and basic levels was a pioneering application of the prototype view to social categorization, but Roger Brown (in his unpublished 1980 Katz-Newcomb Lecture at the University of Michigan) offered a gentle criticism -- which was that the categories they worked with seemed a little artificial.  Accordingly, and following even more closely the method of Rosch et al. (interestingly, Rosch had worked with Brown as an undergraduate and graduate student at Harvard, where she did her pioneering work on color categorization and the Sapir-Whorf hypothesis), Brown began by identifying the most frequent words in English that label categories of persons.  In a somewhat impressionistic analysis, Brown suggested that terms like boy and girl, grandmother and grandfather, and lawyer and poet functioned as the basic-object level for person categorization.

Note to aspiring graduate (and undergraduate) students.  Brown's analysis was inspired by that of Rosch et al., but it was not at all quantitative.  Somebody who wanted to make a huge contribution to the study of person categorization could do a lot worse than to start where Brown did, with the dictionary, and perform the same sorts of quantitative analyses that Rosch did.  Any takers?


Prototypes and Exemplars in Psychiatric Diagnosis

An excellent example of the featural prototype view of social categorization is supplied by psychiatric diagnosis.  Essentially, medical diagnosis of all sorts not only classifies an illness, it also classifies the patient who suffers from that illness.  This is especially the case for psychiatric diagnosis, which has terms like schizophrenic, depressive, and neurotic.  The other medical specialties rarely have terms like canceric, fluic, or heart-attackic.  The various symptoms (hallucinations, anxiety, delusions) are features, and the various syndromes (schizophrenia, bipolar disorder, obsessive-compulsive disorder) are categories.  Categorization proceeds by feature matching, in which the patient's symptoms are matched to the symptoms associated with the syndrome.  The Diagnostic and Statistical Manual (DSM) of the American Psychiatric Association, currently in its 5th edition (2013), constitutes an official list of the diagnostic categories and the features associated with them.  Note, as a matter of historical interest, the tremendous growth of the DSM since the first edition in 1952.  This may represent a real increase in knowledge about mental illness.  Or it may represent a "medicalization", or "pathologization", of ordinary problems in living. 


Early approaches to psychiatric diagnosis, at least tacitly, construed the diagnostic categories as proper sets, defined by a set of symptoms that were singly necessary and jointly sufficient to assign a particular diagnosis to a particular patient.



  • Mental illnesses, and thus mental patients, were classified as organic vs. functional based on the presence of brain insult, injury, or disease.
    • Among the psychoses, the distinction between schizophrenia  and manic-depressive illness  was determined by whether the impairment of reality testing affected the cognitive or emotional domain. 
      • All schizophrenics, for example, were thought to display Bleuler's "4 As": 
        • Association disturbance
        • Anhedonia (this is an emotional symptom -- long story there)
        • Autism
        • Ambivalence
      • And the Bleulerian subtypes of schizophrenia were diagnosed by the presence of additional defining symptoms: 
        • Simple schizophrenia, by the presence of just the "4 As";
        • Hebephrenic schizophrenia, by childlike behavior;
        • Catatonic schizophrenia, by "waxy flexibility" in posture;
        • Paranoid schizophrenia, by delusions.

Similarly, Janet (1907) distinguished between subcategories of neurosis, namely hysteria and psychasthenia, and identified the defining "stigmata" of hysteria. 

The construal of the diagnostic categories as proper sets, to the extent that anyone thought about it at all, almost certainly reflected the classical view of categories handed down from the time of Aristotle. Indeed, much of the dissatisfaction with psychiatric diagnosis, at least among those inclined toward diagnosis in the first place, stemmed from the problems of partial and combined expression (e.g., Eysenck, 1961). Many patients did not fit into the traditional diagnostic categories, either because they did not display all the defining features of a particular syndrome, or because they displayed features characteristic of two or more contrasting syndromes.

  • The term borderline personality disorder was originally coined by Adolf Stern (1938) to cover patients who displayed symptoms of both neurosis and psychosis (the term means something different now).
  • Schizoaffective disorder was used for patients who had symptoms of both schizophrenia and manic-depressive illness; 
    • pseudoneurotic schizophrenia, for patients who had both schizophrenia and high levels of anxiety (Hoch, 1959); and 
    • pseudopsychopathic schizophrenia for patients who had symptoms of schizophrenia combined with the antisocial behavior characteristic of psychopathic personality disorder.
  • Terms such as schizoid, schizotypy, and paranoid personality disorder were coined for patients who displayed some, but not all, of the defining symptoms of schizophrenia. 

This worked for a while.

In the 1970s, however, psychologists and other cognitive scientists began to discuss problems with the classical view of categories as proper sets, and to propose other models, including the probabilistic or prototype model (for a review of these problems, and an explication of the prototype model, see Smith & Medin, 1981). According to the prototype view, categories are fuzzy sets, lacking sharp boundaries between them. The members of categories are united by family resemblance rather than a package of defining features. Just as a child may have her mother's nose and her father's eyes, so the instances of a category share a set of characteristic features that are only probabilistically associated with category membership. No feature is singly necessary, and no set of features is jointly sufficient, to define the category. Categories are represented by prototypes, which possess many features characteristic of the target category, and few features characteristic of contrasting categories.

DSM-III and DSM-IV marked a shift in the structure of the psychiatric nosology, in which the diagnostic categories were re-construed as fuzzy rather than proper sets, represented by category prototypes rather than by lists of defining symptoms.

  • For example, in order to receive a diagnosis of schizophrenia, a patient must show
    • two or more "characteristic symptoms", such as delusions, hallucinations, disorganized speech, or affective flattening;
    • The Bleulerian subtypes have been abandoned in favor of two new ones:
      • Type I schizophrenia is characterized by "positive" symptoms such as delusions and hallucinations;
      • Type II schizophrenia is characterized by "negative" symptoms such as affective flattening.
  • And similarly for depression and anxiety disorder.  In fact, in order to be diagnosed as depressed, a patient doesn't even have to display depressed mood.  Other symptoms, such as insomnia or inability to concentrate, in sufficient numbers, will do the trick.
  • Post-traumatic stress disorder retains one necessary symptom -- exposure to trauma (reasonably enough).  But after that, PTSD patients can display a wide variety of symptoms, in a large number of different patterns.



This "fuzzy-set" structure of the diagnostic categories has continued with DSM-5 (which dropped the Roman numerals).  The precise criteria for certain diagnoses, such as schizophrenia or major depressive disorder , may have changed somewhat, but the emphasis on characteristic rather than defining symptoms, and therefore the allowance for considerable heterogeneity within categories, remains constant.



The prototype view solves the problems of partial and combined expression, and in fact a seminal series of studies by Cantor and her colleagues, based largely on data collected before the publication of DSM-III, showed that mental-health professionals tended to follow it, rather than the classical view, when actually assigning diagnostic labels (Cantor & Genero, 1986; Cantor, Smith, French, & Mezzich, 1980; Genero & Cantor, 1987). In a striking instance of art imitating life, DSM-III tacitly adopted the prototype view in proposing rules for psychiatric diagnosis. For example, DSM-III permits the diagnosis of schizophrenia if the patient presents any one of six symptoms during the acute phase of the illness, and any two of eight symptoms during the chronic phase. Thus, to simplify somewhat (but only somewhat) two patients -- one with bizarre delusions, social isolation, and markedly peculiar behavior, and the other with auditory hallucinations, marked impairment in role functioning, and blunted, flat, or (emphasis added) inappropriate affect -- could both be diagnosed with schizophrenia. No symptom is singly necessary, and no package of symptoms is jointly sufficient, to diagnose schizophrenia as opposed to something else. Although the packaging of symptoms changed somewhat, DSM-IV followed suit.
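That "any m of n" structure is what makes the rule polythetic, and a few lines of code show why two patients with no symptoms in common can receive the same diagnosis.  The sketch below is only a schematic illustration: the symptom lists and thresholds are simplified stand-ins, not the actual DSM criteria.

```python
# A schematic sketch of a polythetic ("any m of n") diagnostic rule of the
# kind DSM-III introduced.  Symptom lists and thresholds are simplified
# stand-ins, not the actual diagnostic criteria for schizophrenia.

ACUTE_SYMPTOMS = {
    "bizarre delusions", "auditory hallucinations", "incoherent speech",
    "catatonic behavior", "flat or inappropriate affect", "delusions of control",
}
CHRONIC_SYMPTOMS = {
    "social isolation", "marked impairment in role functioning",
    "markedly peculiar behavior", "impaired hygiene", "blunted affect",
    "odd speech", "odd beliefs", "unusual perceptual experiences",
}

def meets_criteria(patient_symptoms, acute_needed=1, chronic_needed=2):
    """No single symptom is necessary, and no fixed package is sufficient:
    any 'acute_needed' acute symptoms plus any 'chronic_needed' chronic
    symptoms will do."""
    acute = len(patient_symptoms & ACUTE_SYMPTOMS)
    chronic = len(patient_symptoms & CHRONIC_SYMPTOMS)
    return acute >= acute_needed and chronic >= chronic_needed

patient_a = {"bizarre delusions", "social isolation", "markedly peculiar behavior"}
patient_b = {"auditory hallucinations", "marked impairment in role functioning", "blunted affect"}

print(meets_criteria(patient_a), meets_criteria(patient_b))   # True True -- with no symptoms in common
```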

Of course, other views of categorization have emerged since the prototype view, including an exemplar view and a theory-based view.  These models, too, have been applied to psychiatric diagnosis.  For example, research by Cantor and Genero showed that psychiatric experts tended to diagnose by comparing patients to category exemplars, while psychiatric novices tended to rely on category prototypes.

DSM-5 was published in 2013.  It was also organized along probabilistic, prototypical lines -- suggesting that the diagnostic categories themselves, and not just the categorization process, are organized as fuzzy sets. However, this is not enough for many psychologists, who seek to embrace another basis for diagnosis entirely.

This is more than a debate over whether one diagnosis or another should be included in the new nomenclature. Some colleagues, heirs of the psychodynamically and psychosocially oriented clinicians who dominated American psychiatry before the neo-Kraepelinian revolution, wish to abandon diagnosis entirely. So do contemporary anti-psychiatrists -- though for quite different reasons. Classical behavior therapists also abjure diagnosis, seeking to modify individual symptoms without paying much attention to syndromes and diseases. For these groups, the best DSM is no DSM at all. Beyond these essentially ideological critiques, there appear to be essentially two (not unrelated) points of view: one that seeks only to put diagnosis on a firmer empirical basis, and another which seeks to substitute a dimensional for a categorical structure for the diagnostic nosology. Both seek to abandon the medical model of psychopathology represented by the neo-Kraepelinians who formulated DSM-III and DSM-IV.

The empirical critique is exemplified by Blashfield (1985), who has been critical of the "intuitive" (p. 116) way in which the neo-Kraepelinians did their work, and who wants the diagnostic system to be placed on firmer empirical grounds. For Blashfield and others like him, a more valid set of diagnostic categories will be produced by the application of multivariate techniques, such as factor analysis and cluster analysis, which will really "carve nature at its joints", showing what really goes with what. The result may very well be a nosology organized along fuzzy-set lines, as DSM-III was and DSM-IV is. But at least diagnosis will not depend on the intuitions of a group of professionals imbued with the traditional nomenclature. If schizophrenia or some other traditional syndrome fails to appear in one of the factors or clusters, that's the way the cookie crumbles: schizophrenia will have to be dropped from the nomenclature. Less radically, the analysis may yield a syndrome resembling schizophrenia in important respects, but the empirically observed pattern of correlations or co-occurrences may require revision in specific diagnostic criteria.

While Blashfield (1985) appears to be agnostic about whether a new diagnostic system should be categorical or dimensional in nature, so long as it is adequately grounded in empirical data, other psychologists, viewing diagnosis from the standpoint of personality assessment, want to opt for a dimensional alternative. Exemplifying this perspective are Clark, Watson, and their colleagues (Clark, Watson, & Reynolds, 1995; Watson, Clark, & Harkness, 1994). They argue that categorical models of psychopathology are challenged by such problems as comorbidity (e.g., the possibility that a single person might satisfy criteria for both schizophrenia and affective disorder) and heterogeneity (e.g., the fact that the present system allows two people with the same diagnosis to present entirely different patterns of symptoms). Clark et al. (1995) are also bothered by the frequent provision in DSM-IV of a subcategory of "not otherwise specified", which really does seem to be a mechanism for assigning diagnoses that do not quite fit; and by a forced separation between some Axis I diagnoses (e.g., schizophrenia), and their cognate personality disorders on Axis II (e.g., schizotypal personality disorder).

Clark and Watson's points (some of which are essentially reformulations of the problems of partial and combined expression) are well taken, and it is clear -- and has been clear at least since the time of Eysenck (1961) -- that a shift to a dimensional structure would go a long way toward addressing them. At the same time, such a shift is not the only possible fix. After all, heterogeneity is precisely the problem which probabilistic models of categorization are designed to address (the exemplar and theory-based models address it as well), although it seems possible that such categories as schizophrenia, as defined in DSM-III and DSM-IV, may be a little too heterogeneous. Comorbidity is a problem only if diagnoses label people, rather than diseases. After all, dual diagnosis has been a fixture in work on alcohol and drug abuse, mental retardation, and other disorders at least since the 1980s (e.g., Penick, Nickel, Cantrell, & Powell, 1990; Woody, McClellan, & Bedrick, 1995; Zimberg, 1993). There is no a priori reason why a person cannot suffer from both schizophrenia and affective disorder, just as a person can suffer from both cancer and heart disease.

There is no doubt that the diagnostic nosology should be put on a firmer empirical basis, and it may well be that a shift from a categorical to a dimensional structure will improve the reliability and validity of the enterprise. It should be noted, however, that both proposals essentially represent alternative ways of handling information about symptoms -- subjectively experienced and/or publicly observable manifestations of underlying disease processes. So long as they remain focused on symptoms, proposals for revision of the psychiatric nomenclature, nosology, and diagnosis amount to rearranging the deckchairs on the Titanic. Instead of debating alternative ways of handling information about symptoms, we should be moving beyond symptoms to diagnosis based on underlying pathology. In doing so, we would be honoring Kraepelin rather than repealing his principles, and following in the best tradition of the medical model of psychopathology, rather than abandoning it.

 

Person Concepts as Theories

The theory view of concepts has not been systematically applied to the categorization of persons, but there are hints of the theory view in the literature.  In particular, some person categories seem to be very heterogeneous.  What unites the members of these categories is not so much similarity in their personality traits, but rather similarity in the theory of how they acquired the traits they have.

 

Psychoanalysis

The character types of Freud's psychoanalytic theory of personality and psychotherapy (oral, anal, phallic, and genital) can be very heterogeneous -- so heterogeneous that they seem, at first glance, to have little or nothing in common at all.  Yet, from a Freudian perspective, all "anal types" have in common that they are fixated at, or have regressed to, the anal stage of psychosexual development.  Similarly, the Freudian theory of depression holds that all depressives, despite variability in their superficial symptoms, have in common the fact that they have introjected their aggressive tendencies.

 

"Survivorhood" and "Victimology"

Similarly, a prominent trend among certain mental-health practitioners is to identify people as "survivors" of particular events -- a tendency that has sometimes been called victimology.  For example: survivors of child abuse (including incest and other forms of sexual abuse), adult children of alcoholics, and children (or grandchildren) of Holocaust survivors.  An interesting feature of these groups is that their members can be very different from each other.  For example, it is claimed that one survivor of childhood sexual abuse may wear loose-fitting clothing, while another may dress in a highly sexually provocative manner (Blume; Walker).

Without denying that such individuals can sometimes face very serious personal problems, in the present context what is interesting about such groupings is that the people in them seem to have little in common except for the fact that they are survivors (or victims) of something.  They are similar with respect to that circumstance only.  While in some ways survivor might be a proper set, with survivorhood as the sole singly necessary and jointly sufficient defining feature of the category, the role of "survivorhood" seems to go deeper than this -- if only because there are some victims of these circumstances who have no personal difficulties, and for that matter may not identify themselves as "survivors" of anything.  In the final analysis, the category seems to be defined by a theory -- e.g., that category members, however diverse and heterogeneous they may be, got the way they are by virtue of their victimization.  


Traits as Categories of Behavior

Just as personality types can be viewed as categories of people (which, after all, is precisely what they are), so personality traits can be viewed as categories of behavior.  Of course, traits -- superordinate traits, anyway -- are also categories of subordinate traits (see the lecture supplement on the Cognitive Perspective on Social Psychology).

 

 

Consider, for example, the Big Five personality traits.  As described by Norman (1968), among others, each of these dimensions (e.g., extraversion) is a superordinate dimension subsuming a number of other, subordinate dimensions (e.g., talkative, frank, adventurous, and sociable).  Presumably, each of these subordinate dimensions contributes to the person's overall score on extraversion.



  • In this way, extraversion might be a broad category of behavior, and talkativeness, frankness, adventurousness, and sociability might be considered to be features that are singly necessary and jointly sufficient to define the category.
  • But it is also possible to score as highly extraverted even if one is not very talkative -- provided that the person is frank, adventurous, and sociable enough.  Thus, the subordinate traits are in no sense necessary components of the superordinate one.  In other words, we can view extraversion as a fuzzy set, with talkativeness, frankness, adventurousness, and sociability as features that are probabilistically associated with the category.  A person who is all of these things may be the prototypical extravert, but there are other extraverts who will bear a family resemblance to this prototype (a small numerical sketch appears at the end of this section).
Superordinate traits might take primary traits as their features, but at some point traits are really about behaviors.  In this respect, and following Allport, we can construe traits as categories of behaviors which are in some sense functionally equivalent; or, put another way, the behaviors are similar because they are all expressions of the same trait.  The notion that traits are categories of behaviors goes back at least as far as Allport's description of the biosocial view of traits as cognitive categories for person perception (not that Allport used these exact terms, but that's what he meant).  
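Here is a minimal numerical sketch of the point about extraversion made above; the facet scores and the cutoff are arbitrary illustrations, not any published scoring scheme.

    # Sketch of a superordinate trait scored from subordinate facets.
    # Facet scores (0-10) and the cutoff are arbitrary illustrations.

    def extraversion_score(facets):
        """Average the subordinate facet scores; no single facet is necessary."""
        return sum(facets.values()) / len(facets)

    quiet_adventurer = {"talkative": 2, "frank": 9, "adventurous": 9, "sociable": 8}

    score = extraversion_score(quiet_adventurer)
    print(score, "-> extravert" if score >= 6 else "-> not extravert")    # 7.0 -> extravert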

 

Traits as Prototypes

However, the first investigator to apply the prototype view to traits, as opposed to types, was Sarah Hampson, whose work was explicitly inspired by Cantor's work on personality types.  In one study, Hampson (1982) presented subjects with a list of 60 trait terms, and had them rate the corresponding trait-related behaviors on a scale of imaginability.  Subjects found it easiest to imagine helpful behaviors, and hardest to imagine important behaviors.

 

Then Hampson asked subjects to generate 6 example behaviors for each trait, and had them rate the ease of doing so.  Perhaps not surprisingly, the subjects found it easier to think of behaviors related to traits of high imaginability, compared to traits of low imaginability.

 

 

In the next step, Hampson presented subjects with the behaviors generated in the previous step, and asked them to rate how "prototypical" they were of the trait in question.  Again perhaps not surprisingly, the subjects rated behaviors from highly imaginable traits to be relatively prototypical of those traits (in Hampson's original report, highly prototypical traits were given ratings of "1"; these ratings have been reversed for the slide).

 

Finally, Hampson asked subjects to assign the behaviors to their appropriate categories.  Subjects made fewer categorization errors with highly prototypical behaviors -- even if they were related to traits that were not highly "imaginable".

 

 

From these and other results, Hampson concluded that traits are indeed categories of behaviors, and that trait-related behaviors varied in their prototypicality.

 

The Act-Frequency Approach to Traits

Buss & Craik (1983), working at UCB, also applied the prototype view to personality traits.  Although trait psychology is generally associated with the biophysical view of traits as behavioral dispositions, these investigators, like Hampson, viewed traits simply as labels for action tendencies, with no causal implications.  From this point of view, traits are not explanations of behavior, but rather summarize the regularities in a person's conduct.  Put another way, traits are simply categories of actions, or equivalence classes of behavior. 


In order to study the internal structure of trait categories, Buss & Craik examined 6 categories of acts.  For each category, they asked subjects to list behaviors that exemplified the trait, and then they asked them to rate the prototypicality of the behaviors with respect to each trait.  They found that the various acts did indeed vary in prototypicality.  In the lists that follow,

  • the most prototypical behaviors for each trait are printed in red,
  • moderately prototypical behaviors are in blue,
  • moderately unprototypical behaviors are in green, and
  • highly unprototypical behaviors are in black.

 

Agreeable 105B&CAgree.jpg (71909 bytes)
Aloof 106B&CAloof.jpg (71314 bytes)
Dominant 107B&CDom.jpg (73498 bytes)
Gregarious 108B&CGregar.jpg (75464 bytes)
Quarrelsome 109B&CQuarrel.jpg (75610 bytes)
Submissive 110B&CSubmiss.jpg (81741 bytes)

Equally important, if not more so, they found that many behaviors were associated with multiple traits, although most behaviors were more "prototypical" of one trait than another.  

These findings are consistent with the view of traits as fuzzy sets of trait-related behaviors, each represented by a set of "prototypical" behaviors which exemplify the trait in question.  The act-frequency model of traits implies that people will label prototypical acts more readily than nonprototypical ones, and that trait attributions will vary with the prototypicality of the target's actions.  It also implies that people will remember highly prototypical acts better than less prototypical ones.
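As a rough illustration of the act-frequency logic (the acts and prototypicality weights below are invented, not Buss and Craik's actual items), a trait attribution can be treated as a tally of category-relevant acts, weighted by how prototypical each act is of the trait.

    # Sketch of an act-frequency trait attribution.
    # The acts and prototypicality weights are hypothetical, not Buss & Craik's items.

    DOMINANT_ACTS = {    # act: prototypicality weight (0-1)
        "issued orders that got the group organized": 0.9,
        "took charge after the accident": 0.8,
        "monopolized the conversation": 0.5,
        "walked ahead of everyone else": 0.2,
    }

    def dominance_attribution(observed_acts):
        """Sum the prototypicality weights of the dominant acts actually observed."""
        return sum(DOMINANT_ACTS.get(act, 0.0) for act in observed_acts)

    observed = ["took charge after the accident",
                "walked ahead of everyone else",
                "ate a sandwich"]               # a non-category act contributes nothing
    print(dominance_attribution(observed))      # 1.0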


Situational Concepts

In social cognition we have categories of persons, represented by types, and categories of behaviors, represented by traits.  We also have categories of the situations where persons meet to exchange social behaviors.

 

Prototypes of Situations

Recognizing that a theory of social cognition must take account of situations as well as of persons, Cantor and her colleagues explored the structure of situation categories in much the same manner as they had earlier explored the structure of person categories (Cantor, Mischel, & Schwartz, 1982).  

In their research, they constructed four three-level hierarchies, paralleling those that they had used in the earlier person studies.

  • At the highest level, there were cultural, ideological, stressful, and social situations.
  • At the middle level, there were subcategories such as tour versus performance, and party vs. date.
  • At the lowest level, there were subcategories such as   
    • tour of castle vs. tour of museum, and symphony performance vs. theater performance, and
    • fraternity party vs. birthday party, and first date vs. blind date.

Based on subjects' performance in a variety of feature-listing and classification tasks, Cantor et al. concluded that situational categories were fuzzy sets, with no defining features, represented by prototypes possessing many "prototypical" features of the situations in question. 

 

Scripts as Concepts 

From a social-psychological perspective, social situations are defined less by their physical characteristics than by the expectations and behaviors that occur within them.  Accordingly, a rather different approach to situational concepts is provided by the notion of social scripts, or scripts for social behaviors.

 

Role Theory

To some extent, the notion of scripts stems from the role theory of social behavior, as articulated by Theodore R. Sarbin (1954; Sarbin & Allen, 1968).  Although role theory has chiefly been applied to the analysis of hypnosis (e.g., Sarbin & Coe, 1972), in principle it is applicable to any kind of social interaction.  Role theory is based on a dramaturgical metaphor for social behavior, in which individuals are construed as actors, playing roles, according to scripts, in front of an audience.  In a very real sense, situations are defined by the scripts that are played out within them.  

Among the important elements in Sarbin's role theory are:

  • role enactment, or the degree to which the actor's performance is appropriate or convincing.

Role enactment, in turn, is a function of a number of different variables, including:

  • the number of roles which the individual has available for enactment;
  • the pre-emptiveness of roles, or the amount of time a person spends in one role as opposed to another;
  • role conflict, or the degree to which one role interferes with another;
  • organismic involvement, or the amount of effort devoted to the role;
  • role location, or the accuracy with which people identify both the roles being played by others, and the roles that they themselves are expected to play;
  • self-role congruence, or the extent to which a role is compatible with the individual's self-concept;
  • role expectations, the types of behaviors expected of people in different roles;
  • role learning, through socialization processes;
  • role skills, or the characteristics which enable the person to enact an effective and convincing role;
  • role demands, or the roles specified for each actor in the situation;
  • role perception, or the degree to which people understand the roles demanded of them;
  • the reinforcing properties of the audience, which serves a number of different functions:
    • it establishes a consensual basis for role enactment, confirming or denying the appropriateness of the role;
    • it furnishes cues for role enactment, much as a prompter does for the actors in a play or opera;
    • it provides social reinforcement, in the form of tangible and intangible rewards and punishments; and
    • it maintains role behavior over an extended period of time.

It is unclear to what extent Sarbin (who was a professor here at UCB before he became part of the founding faculty at UC Santa Cruz) thinks that we are "only" playing roles in our social behavior.  He wishes to stress that, in talking about "role-playing", he does not think there is anything insincere about our social interactions.  It's just in the nature of social life that we are always playing some kind of role, and following some kind of script.  But in the present context, roles, and the scripts associated with them, may be thought of as collections of the behaviors associated with various situations.  The situations we are in determine the roles we play, and the roles we play define the situations we're in.

 

Sexual Scripts

Another early expression of the script idea came from John H. Gagnon and William Simon, who applied the script concept to sexual behavior (e.g., Gagnon & Simon, 1973; Gagnon, 1974; Simon, 1974).  Not to go into detail, they noted that sexual activity often seemed to follow a script, beginning with kissing, proceeding to touching, then undressing (conventionally, first her and then him, by Simon's account), and then... -- to be followed by a cigarette and sleep.  There are variations, of course, and the script may be played out over the minutes and hours of a single encounter, or over days, weeks, and months as a couple moves to "first base", "second base", "third base", and beyond.  But Gagnon and Simon's insight is that there is something like a script being followed -- a script learned through the process of sexual socialization.  

Although Gagnon and Simon focused their analysis of scripts on sexual behavior, they made it clear that sexual scripts were simply "a subclass of the general category of scripted social behavior" (Gagnon, 1974, p. 29).  As  Gagnon noted (1974, p. 29):

The concept script shares certain similarities with the concepts of plans or schemes in that it is a unit large enough to comprehend symbolic and non-verbal elements in an organized and time-bound sequence of conduct through which persons both envisage future behavior and check on the quality of ongoing conduct.  Such scripts name the actors, describe their qualities, indicate the motives for the behavior of the participants, and set the sequence of appropriate activities, both verbal and nonverbal, that should take place to conclude behavior successfully and allow transitions into new activities.  The relation of such scripts to concrete behavior is quite complex and indirect; they are neither direct reflections of any concrete situation nor are they surprise-free in their capacity to control any concrete situation.  They are often relatively incomplete, that is, they do not specify every act and the order in which it is to occur; indeed..., the incompleteness of specification is required, since in any concrete situation many of the sub-elements of the script must be carried out without the actor noticing that he or she is performing them.  They have a major advantage over concrete behavior, however, in that they are manipulable in terms of their content, sequence, and symbolic valuations, often without reference to any concrete situation.  We commonly call this process of symbolic reorganization a fantasy when it appears that there is no situation in which a script in its reorganized form may be tested or performed, but in fact, such apparently inapplicable scripts have significant value even in situations which do not contain all or even any of the concrete elements which exist in the symbolic map offered by the script.

In later work, Gagnon, Simon, and their colleagues have distinguished among three different levels of scripting:

  • interpersonal scripts describe what takes place in a particular (typically dyadic) social interaction;
  • cultural scripts consist of instructions conveyed by social institutions concerning how people should behave in particular situations; and
  • intrapsychic scripts consist of individuals' private, subjective plans, memories, and fantasies concerning the scripted situation.

 

Scripts as Knowledge Structures

The notion of scripts in role theory (especially), and even in sexual script theory, is relatively informal.  Just what goes into scripts, and how they are structured, was discussed in detail by Schank & Abelson (1977), who went so far as to write script theory in the form of an operating computer program -- an exercise in artificial intelligence applied to the domain of social cognition.  Schank and Abelson based their scripts on conceptual dependency theory (Schank, 1975), which attempts to represent the meaning of sentences in terms of a relatively small set of primitive elements.  Included in these primitive elements are primitive acts such as:

  • ATRANS, transfer of an abstract relationship, such as possession;
  • MTRANS, transfer of mental information between animals or within an animal;
  • PTRANS, transfer of the physical location of an object;
  • MOVE, movement of the body part of an animal by that animal;
  • INGEST, taking in of an object by an animal to the inside of that animal.


Schank & Abelson illustrate their approach with what they call the Restaurant Script:




  • The script comes in several different tracks, corresponding to the various types of restaurant, such as coffee shop.
  • There are various props, such as tables, menu, food, check, and money.
  • There are various roles, such as customer, waiter, cook, cashier, and owner.
  • There are certain entry conditions, such as Customer is hungry and Customer has money.
  • And there are certain results, such as Customer has less money, Owner has more money, Customer is not hungry, and Customer is pleased (which, of course, is optional).

 

The script unfolds as an ordered sequence of scenes, each of which begins and ends with a particular primitive act:

  • Scene 1, Entering the Restaurant -- begins with Customer PTRANS Customer into restaurant; ends with Customer MOVE Customer to sitting position.
  • Scene 2, Ordering -- begins with Customer MTRANS Signal to Waiter; ends with Waiter PTRANS Food to Customer.
  • Scene 3, Eating -- begins with Cook ATRANS Food to Waiter; ends with Customer INGEST Food.
  • Scene 4, Exiting -- begins with Waiter ATRANS Check to Customer; ends with Customer PTRANS Customer out of restaurant.

Although script theory attempts to specify the major elements of a social interaction in terms of a relatively small list of conceptual primitives, Schank and Abelson also recognized that scripts are incomplete. 

  • For example, there are free behaviors that can take place within the confines of the script. 
  • There are also anticipated variations of the script, such as
    • equifinal actions, or actions that have the same outcome;
    • variables, such as whether the customer orders chicken or beef;
    • paths;
    • scene selection; as well as the
    • tracks described above.
  • And there are unanticipated variations as well, such as:
    • interferences such as obstacles, errors, and corrective prescriptions for them; and
    • distractions from the script.

Scripts are, in some sense, prototypes of social situations, because they list the features of these situations and the social interactions that take place within them.  But they go beyond prototypes to specify the relations -- particularly the temporal, causal, and enabling relations -- among these features.  The customer orders food before the waiter brings it, and the customer can't leave until he pays the check, but he can't pay the check until the waiter brings it.

In any event, scripts enable us to categorize social situations: we can determine what situation we are in by matching its features to the prototypical features of various scripts we know.  And, having categorized the situation in terms of some script, that script will then serve to guide our social interactions within that situation.  By specifying the temporal, causal, and enabling relations among various actions, the script enables us to know how to respond to what occurs in that situation.
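Here is a minimal sketch of a script as a knowledge structure, loosely patterned on the Restaurant Script; the encoding and the feature-matching rule are simplified illustrations of the idea, not Schank and Abelson's actual program.

    # Sketch of a script as a knowledge structure, loosely modeled on the
    # Restaurant Script.  The encoding and the matching rule are simplified
    # illustrations, not Schank & Abelson's program.

    RESTAURANT_SCRIPT = {
        "props": {"table", "menu", "food", "check", "money"},
        "roles": {"customer", "waiter", "cook", "cashier", "owner"},
        "entry_conditions": ["customer is hungry", "customer has money"],
        "scenes": [    # temporal order matters: each scene enables the next
            ("Entering", [("PTRANS", "customer", "into restaurant"),
                          ("MOVE",   "customer", "to sitting position")]),
            ("Ordering", [("MTRANS", "customer", "signal to waiter")]),
            ("Eating",   [("ATRANS", "cook", "food to waiter"),
                          ("INGEST", "customer", "food")]),
            ("Exiting",  [("ATRANS", "waiter", "check to customer"),
                          ("PTRANS", "customer", "out of restaurant")]),
        ],
        "results": ["customer has less money", "owner has more money",
                    "customer is not hungry"],
    }

    def script_match(observed_features, script=RESTAURANT_SCRIPT):
        """Crude categorization: the proportion of the script's props and roles
        that are present among the observed features of the situation."""
        expected = script["props"] | script["roles"]
        return len(observed_features & expected) / len(expected)

    situation = {"table", "menu", "waiter", "customer", "food", "candle"}
    print(round(script_match(situation), 2))    # 0.5 -- enough overlap to invoke the script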

 

Stereotypes as Social Categories

As noted earlier, social stereotypes are categories of persons.  Walter Lippmann (1922), the journalist and political analyst who coined the term, defined a stereotype as "an oversimplified picture of the world, one that satisfies a need to see the world as more understandable than it really is".  That's true, but from a cognitive point of view stereotypes are simply emotionally laden social categories -- they are a conception of the character of an outgroup that is shared by members of an ingroup.  Of course, outgroup members can share stereotypes concerning the ingroup, as well -- but in this case, the roles of ingroup and outgroup are simply reversed.  It's also true that ingroup members can share a stereotype of themselves.


Judd and Park (1993) summarize the general social-psychological understanding of stereotypes as follows:

  • they are generalizations about social groups,
  • illogically derived,
  • rigidly held, 
  • and erroneous in content.

As cognitive concepts, stereotypes have an inductive aspect, in that they attribute to an entire group features of a single group member (or a small subset of the group); and a deductive aspect, in that they attribute to every member of the group the features ascribed to the group as a whole.  

Stereotypes have a number of functions.

  • On the positive side:
    • They reduce effort in impression-formation, as the perceiver can attribute to the target all the characteristics included in the stereotype associated with the target's group.
    • Similarly, they allow the perceiver to make inferences about unobserved features of the target, based on group membership. 
    • They enable the perceiver to predict past and future behaviors of the individual, consistent with the stereotype.
  • But on the negative side:
    • They serve as the basis of both emotional prejudice and behavioral discrimination towards outgroup members.

Over the years, a number of theoretical accounts have been given for stereotyping.

  • According to an economic perspective, such as realistic group conflict theory, ethnic stereotyping reflects ethnocentrism, and both result from the competition between groups for limited resources (such as territory).
  • According to a motivational perspective, such as social identity theory, stereotyping results from the motivated distinction between Us and Them.  The motivational perspective thus explains stereotyping within groups that are not necessarily in economic competition.
  • According to the cognitive perspective, stereotyping is the inevitable outcome of categorization -- in Lippmann's words (echoing those of William James), stereotyping reflects a "simple model of the great blooming, buzzing confusion of reality".

Certainly stereotypes look like other categories.  Social stereotypes consist of a list of features (like traits) that are held to be characteristic of some group.  Alternatively (or in addition), they can consist of a list of exemplars of individuals who are representative of the group.  Sometimes, though not often enough, stereotypes acknowledge variability among individual group members, or exceptions to the rule.  Arguably, there's more to social stereotypes than a simple list of features associated with various groups.  Certainly, features such as various traits play a major role in the content of social stereotypes.  But social stereotypes probably also contain information about the variability that surrounds each of these features (not just their central tendency).  Moreover, if we believe the exemplar view of categories (and we should), stereotypes as categories may be represented not just by a list of features, but also by a list of instances of the category -- including exceptional instances.  When, during the 2008 presidential race, Senate Majority Leader Harry Reid said that the American electorate might be ready to elect a "light skinned" African-American "with no Negro dialect", he was certainly referring to features associated with the African-American stereotype; but he probably also had in mind an older generation of African-American political figures, such as Jesse Jackson or Al Sharpton.


The Content of Social Stereotypes

Most studies of stereotypes have opted for the "featural" view, and have sought to identify sets of features believed to be characteristic of various social groups.

Among the most famous of these is the "Princeton Trilogy" of studies of social stereotypes held by Princeton University undergraduates.  In a classic study, Katz & Braly (1933) presented their subjects with a list of adjectives, and asked them to check off all that applied to the members of particular racial, ethnic, and national groups.  These can be thought of as the features associated with the concepts of American, German, etc.  Essentially the same study was repeated after World War II by Gilbert (1951), and in the late 1960s by Karlins et al. (1969).  There were also follow-up studies, conducted at other universities, by Dovidio & Gaertner (1986) and Devine & Elliot (1995).  The slide at the right shows the traits most frequently associated with the "American" and "German" stereotypes.

Interestingly, when the Katz and Braly (1933) study was repeated, these stereotypes proved remarkably stable (e.g., Gilbert, 1951).  For example, the stereotype of Germans collected in 1967 by Karlins, Coffman, & Walters (1969) showed considerable overlap with the one uncovered by Katz and Braly in the 1930s.  Some traits dropped out, and some others dropped in, but most traits remained constant over 36 years.  Still, there were interesting differences from one study to the next (Devine & Elliot, 1995).



  • In 1933, there was considerable positive bias towards the ingroup, defined as "Americans" -- which, given the composition of the all-male Princeton student body at the time, meant "WASPs" -- White Anglo-Saxon Protestants (Oops -- Pardon my stereotyping!).
    • And there was considerable agreement among the subjects as to the features associated with each of the outgroups -- even though, given the composition of the Princeton student body at the time, probably few of the students had had any substantive contact with members of the groups they were rating!
  • In 1951, following World War II, the Japanese stereotype became extremely negative.
    • However, Gilbert reported that his subjects resisted making generalizations about people.
  • In 1969, Karlins et al. reported that their subjects found the task to be even more objectionable.
    • They also reported that, on average, the stereotypes became more positive in nature.
  • A similar study was conducted by Dovidio and Gaertner (1986, 1991), but because that study did not involve Princeton students, it doesn't really count as a member of the Princeton Trilogy (which, if it did, would become the "Princeton tetralogy").

The conventional interpretation of the Princeton Trilogy was of a general fading of negative stereotypes over time, presumably reflecting societal changes toward greater acceptance of diversity, reductions in overt racism, and trends toward liberalism and cosmopolitanism.  The three generations covered by the Trilogy tended to include different traits in their stereotypes, with decreased consistency, and especially diminished negative valence.  They still had stereotypes, but they were (quoting President George H.W. Bush, '41) "kinder and gentler" than before.


At the same time, Devine and Elliot (1995) identified some methodological problems with the Trilogy -- especially those studies that followed up on the original 1933 study by Katz & Braly.  For one thing, these studies used an adjective set that, having been generated in the early 1930s, might have been outdated, and thus may have failed to capture the stereotypes in play at the time the later studies were done.  More importantly, the instructions given to subjects in the followup studies were ambiguous, because they did not distinguish between subjects' knowledge of the stereotype and their acceptance of it.  It is one thing to know what your ingroup at large thinks about Germans or African-Americans, but another thing entirely to believe it yourself.  D&E argued that cultural stereotypes were not the same as personal beliefs:

  • Stereotypes consist of the association of features with a group label.
    • These associations are acquired through socialization.
    • They are automatically activated by encounters with members of the stereotyped outgroup.
  • Personal beliefs, by contrast, are propositions that are accepted by the individual as true.
    • They are not necessarily congruent with cultural stereotypes, and can exist side-by-side with them in memory.
    • Even if one's personal beliefs are congruent with wider cultural stereotypes, the individual can still control their overt expression in word and deed.
  • The match between stereotype and personal belief yields a continuum of prejudice.
    • Beliefs are congruent with stereotypes in high-prejudiced individuals.
    • Beliefs are incongruent with stereotypes in low-prejudiced individuals.

Devine and Elliot (1995) attempted to correct both these problems, using an updated set of 93 rating scales.




  • They conducted a dual assessment of each item, carefully distinguishing between a cultural stereotype (which subjects might not accept personally) and subjects' personal beliefs (which might or might not be congruent with the stereotype).
    • In contrast to the Princeton Trilogy, which covered a large number of outgroup stereotypes, Devine & Elliot focused only on stereotypes concerning African-Americans.  But there's no reason to think that their results wouldn't generalize to other outgroups.
    • Needless to say, their subjects were all white (both male and female).
      • It would have been nice to have a balanced group of subjects, with equal numbers of whites and blacks completing comparable instruments, following the "full-accuracy" design of Judd & Park (1993), described below, but black subjects were simply not available in sufficient numbers for this purpose.
  • They also assessed their subjects' own levels of prejudice by means of the Modern Racism Scale (MRS).  The MRS doesn't assess people's racial prejudices outright, but rather includes a number of statements of socio-political attitudes that seem likely to be shared by individuals who harbor racist tendencies.
    • For example, "Blacks have more influence upon school desegregation plans than they ought to have".
  • Now before anyone gets upset, let me be perfectly clear that I'm not a big fan of the MRS.  It's perfectly possible to have conservative positions on various social-political issues without being racist (Chief Justice John Roberts, ruling in an affirmative action case, once argued that "The way to end racial discrimination is to end discrimination based on race" -- or words to that effect).  It's just that it's hard to get anyone to own up to the kind of racial prejudice that was common, for example, in the 1930s -- or, for that matter, in the 1950s, especially in the Jim Crow South, before the civil rights movement.  But for researchers who are interested in racial prejudice, the MRS is probably the best available instrument -- though it could certainly be improved upon.

The results were fascinating.  Like Katz and Braly (1933), but unlike the later studies in the Princeton Trilogy (including the follow-up by Dovidio & Gaertner), Devine and Elliot found a high degree of uniformity in whites' stereotype of African-Americans.  They also found comparable levels of negativity.  Apparently, both the low degree of uniformity found by the later studies, and the decreasing levels of negativity, stemmed from their failure to clearly distinguish between stereotypes and personal beliefs.  As predicted, both high- and low-prejudiced subjects acknowledged the same stereotype concerning African-Americans.  But when it came to personal beliefs, the highly prejudiced subjects' beliefs were more congruent, and the low-prejudiced subjects' beliefs more incongruent, with the cultural stereotype.

Devine and Elliot argued that stereotypes are automatically activated by the stimulus of an outgroup member (e.g., physically present or depicted in media).  In highly prejudiced individuals, this stereotype is then translated into prejudicial or discriminatory behavior.  In low-prejudice individuals, the automatic activation still occurs, but its translation into negative behavior is consciously controlled.  However, this conscious control requires time and effort, and consumes cognitive capacity.  So, even unprejudiced individuals may act on stereotypes, depending on the circumstances.

 

The Structure of Social Stereotypes

So stereotypes are social categories, but again, the question is: how are they structured?  Surely, nobody thinks that all Americans are industrious or all Germans scientifically minded, so the features associated with these categories are neither singly necessary nor jointly sufficient to define the category.  What does it mean when people stereotype Germans as "scientifically minded" and "extremely nationalistic"?  Not, surely, that these features are true for all Germans.  Nor even, perhaps, that they are true for most.  Maybe they are typical -- but what, after all, does "typical" mean?

Some sense of how features get associated with stereotypes was provided by McCauley & Stitt (1978), following earlier analyses by Brigham (1969, 1971).  They took certain characteristics of the German stereotype, and asked subjects to rate them in two ways:



  • What is the "base rate" likelihood that the average person has this characteristic?
    • This yielded p(T), or the probability that the trait occurred in a person selected at random.
  • What is the likelihood that the average German has this characteristic?
    • This yielded p(T|G), or the probability that the trait would occur, given that the person is German.

Employing Bayes' Theorem, a statistical principle that compares observed probabilities to base-rate expectations, they calculated the diagnostic ratio between p(T|G) and p(T).  The result was that features such as efficient, nationalistic, industrious, and scientific, which are associated with the German stereotype, are thought to occur more frequently in Germans than in people at large (never mind whether they do -- we're dealing with cognitive beliefs here, not objective reality).  
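Here is a minimal sketch of the diagnostic-ratio computation, with invented probability estimates standing in for subjects' actual ratings: a trait enters the stereotype to the extent that the ratio p(T|G)/p(T) exceeds 1.

    # Sketch of McCauley & Stitt's diagnostic ratio.  The probability estimates
    # below are invented illustrations, not the subjects' actual ratings.

    estimates = {
        # trait: (p(T|G), p(T)) -- probability for the average German vs. the base rate
        "efficient":     (0.65, 0.45),
        "nationalistic": (0.60, 0.40),
        "industrious":   (0.70, 0.55),
        "superstitious": (0.30, 0.40),   # counter-stereotypic: ratio below 1
    }

    for trait, (p_t_given_g, p_t) in estimates.items():
        ratio = p_t_given_g / p_t        # by Bayes' theorem, this also equals p(G|T) / p(G)
        print(trait, round(ratio, 2))    # ratios above 1 mark stereotypic traits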

In other words, features are associated with a stereotype if they occur -- or are believed to occur -- relatively more frequently among members of the stereotyped group than in other groups.  Stereotype traits need not be present in all group members -- remember that Lippmann himself described stereotypes as over-broad generalizations; nor need they be present in a majority of group members.  They may even be less frequent in group members than traits that are not part of the stereotype.  In order to enter into a stereotype, traits need only be relatively more probable in group members compared to another group (e.g., an ingroup), or to the population as a whole.

Note, however, that the probabilities in question are subjective, not objective.  They represent people's beliefs about stereotype traits, not objective reality -- including beliefs about the strength of association between the group and the trait.  Viewed objectively, it may be that these traits really are more common in the stereotyped group than in an ingroup, in which case the stereotype might have a "kernel of truth" to it.  But that is not necessarily the case, and even if it is, the beliefs may amplify objective reality.  In either case, stereotype traits are believed to be more diagnostic than they actually are.

In the present context, the important thing is that the findings of McCauley & Stitt (1978) are exactly what we would expect, if stereotypes are fuzzy sets of features that are only probabilistically associated with category membership, and summarized by a prototype that possesses a large number of central features of the category.

 

Origins of Stereotypes

Where do stereotypes come from?  To some extent, of course, they are a product of social learning and socialization.  In the words of the old Rodgers and Hammerstein song (from South Pacific), "You've Got to Be Carefully Taught".

But stereotypes can be based on direct as well as vicarious experience -- which is probably where the notion of a "kernel of truth" came from.  

Consider, for example, the stereotype that girls and women generally have poorer quantitative and spatial skills, and better verbal skills, compared to males.

It was this notion that, in 2005, led Lawrence Summers, then president of Harvard, to suggest that there were relatively few women on Harvard's math and science faculty because of innate gender differences in math and science ability.  Of course, that doesn't explain why there were more tenured males than females in Harvard's humanities departments (you can look it up to see whether that is still the case; as of 2010, it was).  The most parsimonious explanation is that, whatever "innate" differences there might be, there is systematic gender bias against women getting tenure at Harvard.  The brouhaha over his comments led Summers to resign the presidency shortly thereafter -- but didn't prevent him from being appointed chief economic advisor in the Obama administration.

Along these same lines, somebody, somewhere, sometime, observed some outgroup member display some intellectually or socially undesirable characteristic, or engage in some socially undesirable behavior -- some girl, somewhere, who had trouble with advanced mathematics -- and that got the ball rolling.  On the other hand, people who stereotype often have limited experience with those whom they stereotype.

So, to continue the example of gender stereotypes concerning mathematical, spatial and verbal abilities, what's the evidence?  Is there a "kernel of truth"?

  • In the case of female mathematical ability, what seems to have happened is that a small gender difference got magnified into a gender stereotype.  Eleanor Maccoby and Carol Nagy Jacklin, in their comprehensive review of The Psychology of Sex Differences (1977), confirmed that there were sex differences favoring men in math and spatial ability, and favoring women in verbal ability -- though they argued that these differences were really small, that there was almost as much variation within each sex as there was between sexes, and that there were plenty of girls and women with high levels of math ability.
  • These sex differences are sometimes attributed to "masculinization of the brain" -- the idea that those extra baths of fetal androgen affect brain structure in such a way as to give males good math abilities and allow females to keep good verbal abilities.
    • You can see a contemporary version of this sort of argument in two books by Louann Brizendine, a professor at UCSF: The Female Brain (2006) and The Male Brain (2010).
    • For their part, evolutionary psychologists argue that these sex differences arose because primeval men had to go off hunting and gathering, while primeval women stayed home and talked.
  • The stereotype was ostensibly confirmed by Camilla Benbow and Julian Stanley, who reported that 7th-grade boys outperformed girls on the mathematics portion of the SAT, even though all the children had been identified as intellectually gifted.  Because this difference emerged as early as the 7th grade, that is, before there could be any differential exposure of boys and girls to mathematics instruction, Benbow and Stanley attributed the result to innate sex differences in mathematical ability.  It's this study, presumably, that Summers had in mind.
  • The Maccoby-Jacklin review was definitive for its time, but beginning in the 1980s narrative "box score" reviews of the sort they published came to be supplanted by more quantitative "meta-analyses", which cast a somewhat different light on these sorts of sex differences. In these studies, the magnitude of an experimental effect is often quantified by a statistic known as Cohen's d (a brief worked example follows this list).  By convention,
    •  a value of d between .00 and .10 is considered a "trivial" effect -- hardly any effect at all;
    • d between .11 and .35 represents a "small" effect;
    • d between .36 and .60 represents a "modest" effect;
    • d between .61 and 1.00 represents a "large" effect;
    • d > 1.00 represents a "Faustian" effect, so large that most social scientists would sell their souls to get it.
  • Janet Hyde (1988) performed a meta-analysis of  165 studies of gender differences in verbal ability, and obtained an average d = +.11, indicating that females did, indeed, outperform males -- by an amount that bordered on the "small".  Moreover, the gender difference declined markedly after 1973.
  • Hyde and her colleagues (1990) performed a similar meta-analysis of 100 studies of gender differences in mathematics performance, and obtained an average d = -.05 -- meaning that females actually outperformed males, albeit by a "trivial" amount.  A gender difference favoring men emerged only in high-school (average d = .29) and college (average d = .32) -- that is, precisely when there is differential exposure of males and females to mathematics training.  Once again, she observed that the magnitude of the high school and college gender difference had declined substantially since 1973.
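For readers who have not encountered the statistic, here is a minimal sketch of how Cohen's d is computed and then read against the conventions listed above; the group means, standard deviations, and sample sizes are invented for illustration.

    # Sketch of Cohen's d: the standardized difference between two group means.
    # The sample statistics below are invented for illustration.
    import math

    def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
        """d = (mean1 - mean2) / pooled standard deviation."""
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (mean1 - mean2) / pooled_sd

    # Hypothetical verbal-ability scores: females mean 101.1, males mean 100.0, SD = 10.
    d = cohens_d(101.1, 10, 500, 100.0, 10, 500)
    print(round(d, 2))    # 0.11 -- a "small" effect on the scale above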

So there is a little sex difference in mathematical ability favoring males, and a little sex difference in verbal ability favoring females.  That's the "kernel of truth".  But there's no evidence at all of sex differences of a magnitude high enough to warrant the gender stereotype of the math-challenged female.  Even if Benbow and Stanley are right, that females are underrepresented in the highest echelons of mathematical ability, there are still enough of them to take up half the tenured chairs in mathematics at Harvard.  And even if they're right, that doesn't mean that the under-representation of girls in their group of "mathematically precocious youth" reflects an innate gender difference.  Just because boys and girls are physically located in the same elementary and junior-high classrooms, doesn't mean that their exposure to mathematics is the same.  

And even if they're right, that's no reason to impose the stereotype on each individual girl or woman.  Instead, each individual ought to be evaluated, and treated, on his or her own merits.  To do otherwise is -- well, it's un-American.  But I digress.

But most important, the fundamental fact about stereotypes is that they're essentially untrue.  If it were really true that Germans were industrious, or that the average German is industrious, or that Germans are actually more likely to be industrious than non-Germans, or people at large, then -- well, it wouldn't be a stereotype, would it?  So given that stereotypes involve false (or, at least, exaggerated) beliefs about Them, where do these beliefs come from?

One source of false belief is the illusory correlation, a term coined by Loren Chapman and Jean Chapman (1967, 1969).  The stereotype that Germans are industrious, in this view, represents an illusory correlation between "being German" and "being industrious".  

Actually, the illusory correlation comes in two forms:

  • the creation of a correlation out of whole cloth, where none exists in the real world; and
  • the magnification of a correlation that actually exists in the real world.

Illusory correlations, in turn, have two sources:

  • False beliefs generate illusory correlations through a kind of perceptual confirmation process.  For example, the Chapmans conducted an experiment in which subjects examined protocols from the Rorschach inkblot test given by patients with various psychiatric diagnoses.  The subjects reported that paranoid schizophrenics tended to see "eyes" in the inkblots, even though such an association was not actually present in the data they examined.  Apparently, the (stereotypical) belief that paranoids worry about other people looking at them, led the subjects to perceive a correlation between paranoia and eyes that was not actually there.
  • Illusory correlations can also be generated through the feature-positive effect, in which subjects (including animals as well as humans) attend to the conjunction of unusual features.  For example, African-Americans are a statistical minority in the American population; and criminal behavior is relatively infrequent.  If people pay more attention to the conjunction of these two unusual events -- a black person committing a crime -- that will generate the illusion that black people are more likely to engage in criminal behavior than white people.
Hamilton and Gifford (1976) showed how illusory correlations in social cognition could be generated by the feature-positive effect.  They presented subjects with lists of behaviors engaged in by two target groups.  There were 26 individuals in Group A, and 13 individuals in Group B: thus, Group B was in the minority, and thus distinctive.  There were 27 moderately desirable behaviors (e.g., "Is rarely late for work") and 12 moderately undesirable behaviors (e.g., "Always talks about himself"): Thus, undesirable behaviors were in the minority, and thus distinctive.  

In the experiment, members of each group were depicted as engaging in a mix of desirable and undesirable behaviors, at a ratio of 9:4:

  • Members of Group A displayed 18 desirable and 8 undesirable behaviors.
  • Members of Group B displayed 9 desirable and 4 undesirable behaviors.

Accordingly, there was no actual correlation between group membership and undesirable behavior.  Nevertheless, when the subjects were asked to estimate the frequency with which undesirable actions had been displayed by each group, the subjects underestimated the frequency of undesirable behaviors by the majority group, and overestimated the frequency of undesirable behaviors by the minority group.  Similarly, when asked to rate the traits of group members, they rated the majority higher on positive traits, and the minority higher on negative traits.  In both ways, the subjects perceived a correlation between undesirable behaviors and traits, on the one hand, and minority-group membership, on the other -- a correlation that was entirely illusory.

A second experiment reversed the ratio, presenting 4 desirable behaviors for every 9 undesirable behaviors, and induced an illusory correlation between minority-group status and positive actions.  In both studies, subjects perceived an illusory correlation between minority-group status and infrequent behaviors.
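A minimal sketch of why the design itself contains no correlation: the phi coefficient computed from the actual stimulus frequencies of the first study is exactly zero, whereas a hypothetical "remembered" table, in which the doubly distinctive cell (minority group, undesirable behavior) is over-counted, yields a positive, illusory correlation.  The remembered frequencies are invented for illustration, not Hamilton and Gifford's data.

    # Phi coefficient for a 2 x 2 frequency table.
    import math

    def phi(a, b, c, d):
        """Rows = group (A, B); columns = (desirable, undesirable) behaviors."""
        return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

    # Actual stimulus frequencies in the first experiment: no correlation at all.
    print(phi(18, 8, 9, 4))             # 0.0

    # Hypothetical "remembered" frequencies, with the rare-group/rare-behavior
    # cell over-counted (numbers invented for illustration):
    print(round(phi(18, 8, 7, 6), 2))   # 0.15 -- an illusory positive correlation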

Similar results were obtained by Allison and Messick (1988), who pointed out that both humans and animals have particular difficulty processing nonoccurrences -- to continue the example, instances where members of the minority group did not engage in undesirable behavior, or majority-group members did not engage in desirable behavior.  

 

Accuracy of Stereotypes

The idea that stereotypes contain a "kernel of truth" leads us to wonder just how accurate stereotypes are.  If the probability of being efficient given that you're a German is greater than the baserate for efficiency in people generally, just how efficient are Germans, anyway?  As with social perception in general, of course, the first problem in assessing accuracy is identifying an objective criterion against which stereotype accuracy can be measured.  How do you measure efficiency, and where do you find a representative sample of Germans to apply it to?  But we're going to set that problem aside for the moment, and talk about the problem of stereotype accuracy in the abstract.

The most thorough analysis of stereotype accuracy comes from Judd and Park (1993), who argued that stereotypes should be assessed at the level of the individual perceiver.  Stereotypes may be held by members of one group (e.g., the French) about another group (e.g., Germans), but they "need not be consensually shared" (p. 110).  The important thing is that stereotypes reside in the head of the individual, and shape his or her interactions with members of the stereotyped group.  Viewed from this perspective, stereotype accuracy may take a variety of forms.


  • Departures from neutrality: Assume that the average German, or African-American, or Muslim, or woman, or homosexual -- pick your group -- is no different from anyone else, on average.  Therefore, any departure from neutrality (e.g., the mean or median rating) would be an example of stereotyping, and the stereotype would be inaccurate by definition.  At the most basic level, if Germans really are highly efficient, then the stereotype is accurate. And if they're not highly efficient, then the stereotype is inaccurate. This definition requires only that the stereotype and reality lie on the same side of some "neutral" point on the scale of efficiency.
  • Stereotypic inaccuracy, or simple exaggeration, which can take the form of either overestimation or underestimation: Even if stereotype and reality lie on the same side of the neutral point, the perceiver may believe that Germans are more, or less, efficient than they really are.
  • Valence inaccuracy: The perceiver may view Germans more positively, or negatively, than they really are.  So, for example, if the German stereotype includes both efficiency (a positive attribute) and authoritarianism (a negative attribute), a perceiver may believe that Germans are less efficient and more authoritarian than they really are.  In this case, the perceiver has a stereotype of Germans that is, overall, negative.
  • Dispersion inaccuracy: A perceiver may be accurate in his beliefs about the overall level of German efficiency, but be inaccurate in his beliefs about the distribution of efficiency among Germans. In overgeneralization, the perceiver underestimates the dispersion of German efficiency around the mean, basically minimizing individual differences among Germans.  In undergeneralization, the perceiver overestimates the dispersion, believing that individual differences are more widespread than they really are.

To make this concrete, suppose we collect subjective ratings of some outgroup on a set of traits, some negative and some positive.  The average values on each of these traits would constitute the stereotype of that group.  But now suppose we also gather objective evidence of the actual standing of outgroup members on each of these traits.  These values would constitute the criterion against which the validity of the stereotype can be assessed.  Here are some possibilities for the relationship between stereotype and reality.

One possibility is that the stereotype is accurate.  That is, the average subjective estimates correspond closely to the actual objective standings of the group on each of these traits.  In this case, there would be a high correlation between the stereotype and reality, and only very small discrepancies between the two measures.

Or, the stereotype could be a little more negative in valence.  In this case, the group's negative qualities are believed to be just a little more negative, and its positive qualities just a little less positive, than is really the case.  Note, however, that the correlation between stereotype and reality is still high.  It's just that the mean values are shifted down just a bit.

Or it could be a lot more negative.  Here, the mean values have shifted further in the negative direction, but the correlation is still high. 

Here's a really, really negative stereotype, which doesn't allow for any positive qualities at all.  All the stereotypic values are below zero.  But the correlation between stereotype and reality is still high. 

There are other possibilities, of course, including one where the correlation between stereotype and reality is essentially zero.


Establishing the Criterion

OK, but now let's turn back to the thorny problem of the criterion by which the various aspects of stereotype accuracy can be assessed.  Judd and Park (1993) considered several alternatives.

  • The easiest, cheapest, and most obvious criterion is provided by self-reports from the group targeted by the stereotype.  Easiest and cheapest, perhaps, but also very problematic.
    • First, you need a representative sample of the stereotyped group to provide the self-reports.
    • Self-reports can be biased in a manner intended to maximize social desirability. 
    • In principle, at least, people may not be aware of what they're really like.
  • The obvious alternative is objective behavioral evidence, but even this has problems.
    • Such evidence is unlikely to be available, or extremely difficult to obtain.
    • Whatever behavioral measurements are available may be too indirect to serve as proxies for the stereotypical features at issue.
    • Or, they may have undesirable psychometric properties, such as low reliability.
    • If the subjects know they are being observed, self-presentational issues come to the fore.
    • Still, it can be done.  For example, McCauley & Stitt (1978) asked white subjects to estimate the standing of black Americans on various demographic variables, such as high-school graduation and employment, and then compared these estimates to census data.
  • Yet a third criterion is expert judgments.
    • But these can also be influenced by the judges' stereotypes.
For better or for worse, most studies of stereotype accuracy use self-reports as the criterion.  That is to say, if Americans hold a stereotype of Germans as industrious, we'd take a representative sample of Germans and ask how industrious they are.  And then we'd do the same thing for all the other traits included in the German stereotype.
  • Assuming that the stereotypes and criteria are collected on the same scale (say, the usual 1-7 Likert-type scale), we can compute the discrepancy between stereotype and reality for each individual trait, and then the mean discrepancy across all the traits.  Small discrepancies mean that the stereotypes are accurate, while large discrepancies mean that the stereotypes are inaccurate.
  • Even if the stereotype and criterion are not assessed on the same scale, we can still calculate the correlation between them.  High correlations mean that the stereotypes are accurate (in that respect, at least), while low correlations -- including zero or negative correlations! -- indicate inaccuracy.  (A minimal computational sketch of both indices follows this list.)
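Here is a minimal sketch of the two indices just described, using made-up ratings for a handful of traits; the numbers are purely illustrative.

```python
# Minimal sketch of the two accuracy indices described above: mean absolute
# discrepancy and the stereotype-criterion correlation.  All numbers are
# hypothetical ratings on the same 1-7 scale.
import numpy as np

stereotype = np.array([6.1, 5.4, 3.2, 2.5, 4.8])   # perceivers' mean ratings of the group
criterion  = np.array([5.3, 5.0, 3.9, 3.1, 4.6])   # e.g., the group's own self-ratings

mean_discrepancy = np.mean(np.abs(stereotype - criterion))    # smaller = more accurate
correspondence   = np.corrcoef(stereotype, criterion)[0, 1]   # higher = more accurate

print(f"mean discrepancy = {mean_discrepancy:.2f}, r = {correspondence:.2f}")
```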
Of course, it would be better to have objective criteria.  Germans themselves might share the American stereotype, thinking of themselves as industrious, when in fact they're no more industrious than anyone else.  So, for example, we might assess average hours spent at work, or average productivity, or something like that.

Or, to take another stereotype that is much in the news of late, consider the stereotype of the mentally ill as especially prone to violence -- including mass shootings, which take place with all-too-high frequency in the United States.  Now, the first thing to be said is that this stereotype is definitely wrong.  Only a very small minority of mentally ill individuals engage in violent behavior, never mind mass shootings, and violent behavior is no more common among the mentally ill than it is among psychiatrically "normal" individuals.  As a second piece of evidence, consider that the incidence of mental illness is pretty constant across cultures, but the United States far outstrips other developed countries, such as the United Kingdom, Canada, and Australia, in mass shootings.  So whatever it is that leads to mass shootings, it's not mental illness.  Maybe it's lax gun controls: the other countries named have almost no incidents of this kind, and they have much stricter gun-control laws.  But that's just a hypothesis.

Just as important as selecting the criterion is the matter of sampling the subjects.  Inappropriate sampling can lead to false conclusions about stereotype accuracy.  To see how, let's consider how we might check the accuracy of the stereotypical belief that the mentally ill are prone to violence. 
  • One approach would be to identify individuals who have engaged in violent activity, perhaps from FBI records, and then determine which of them meet some DSM-like criterion for mental illness.  This kind of analysis can actually overstate the relation between mental illness (or any category) and violence (or any feature).
    • To give an example from public health, which I owe to Robyn Dawes (1993, 1994), consider the relation between smoking and lung cancer.  We know from solid epidemiological research that smoking is a huge risk factor for lung cancer: about 1 in 10 smokers contract lung cancer, compared to about 1 in 200 nonsmokers -- an increased risk of about 2000%.  Consider now a study in which an investigator compares smoking history in 400 individuals with lung cancer and 400 cancer-free controls.  The resulting table, conditioning on the consequent (lung cancer), would look something like the following: the vast majority of cancer victims would be found to be smokers, while smoking would be rare among controls.  The diagonal formed by the two critical cells, smoker-with-cancer and nonsmoker-without-cancer, accounts for 82% of the sample, and the resulting correlation is a very high phi = .64.

Conditioning on the Consequent (Diagonal = 82%, phi = .64)

Group          Cancer    No Cancer
Smoker            348           93
Nonsmoker          52          307
  • But this turns out to be a gross overestimate of the actual relation between smoking and lung cancer.  The reason is that there are relatively few smokers in the population, and a study which conditions on the consequent oversamples this group and inflates the correlation between antecedent and consequent.  Assume, for the purposes of illustration, that the base rate of smoking in the population is 25%.  If we drew a random sample of the population for study, the actual relation between smoking and lung cancer proves to be quite a bit weaker: as illustrated in the following table, the diagonal still accounts for 77% of the cases, but taking the baserates into account the correlation coefficient drops to phi = .25.  The correlation remains significant (statistically and clinically), but it is greatly diminished in strength.

Compound Probability (Diagonal = 77%, phi = .25)

Group          Cancer    No Cancer
Smoker             20          180
Nonsmoker           3          597

  • Now what happens if we condition on the antecedent -- that is, take 400 smokers and 400 nonsmokers, and follow them to determine how many contract lung cancer?  When we do this, applying the probabilities known from a study with proper sampling, we find (see the following table) that the diagonal is reduced still further, to 55%, but the correlation is not distorted: phi = .21.

Conditioning on the Antecedent (Diagonal = 55%, phi = .21)

Group          Cancer    No Cancer
Smoker             40          360
Nonsmoker           2          398

The distorting effect of conditioning on the consequent, which is the typical method in these kinds of studies, is inevitable: as Dawes (1993, 1994) shows, the source of the distortion lies in the algebra by which the probabilities are calculated.  The distortion arises whenever the base rate of the antecedent is substantially different from 50%, and it increases the further the base rate departs from that value.  The proper way to determine the strength of the relation between antecedent and consequent is to study a random (or stratified, or otherwise unbiased) sample of the population; failing that, it is far better to condition on the antecedent than to condition on the consequent.  Longitudinal followup studies, then, have two virtues: they obviate many of the problems involved in collecting self-reports of potentially important antecedent variables; and they introduce relatively little distortion into the relations between the variables of interest.
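Dawes's point can be verified directly from the three tables above.  Here is a minimal sketch that computes phi for each table; the cell counts are taken from the tables, and only the sampling scheme differs across them.

```python
# Minimal sketch: phi coefficients for the three 2x2 tables above.  The
# underlying risks are the same (1 in 10 smokers, 1 in 200 nonsmokers
# contract cancer); only the sampling scheme differs.
import math

def phi(a, b, c, d):
    """Phi for the table [[a, b], [c, d]]; rows smoker/nonsmoker, columns cancer/no cancer."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

tables = {
    "conditioning on the consequent (400 cancer cases, 400 controls)": (348, 93, 52, 307),
    "random sample with a 25% smoking base rate":                      (20, 180, 3, 597),
    "conditioning on the antecedent (400 smokers, 400 nonsmokers)":    (40, 360, 2, 398),
}

for label, cells in tables.items():
    print(f"{label}: phi = {phi(*cells):.2f}")
# Only the consequent-conditioned design inflates phi (about .64, versus roughly .21-.25).
```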

The implication of this demonstration is that it is critical to draw a random, or otherwise representative, sample of the population targeted by the stereotype, and of the control population (e.g., the ingroup who holds the stereotype, or alternatively some other outgroup).  And if you're looking at something like the relation between mental illness and violence, don't condition on the feature; rather, condition on the category.


The Full Accuracy Design

Judd and Park (1993) proposed that accuracy be assessed with what they call the full-accuracy design.
  • First, the investigator has to settle on some criterion.  
  • Two different subject groups rate each other: blacks and whites, men and women, Republicans and Democrats, whatever.
    • Note that in this case, each group is both an ingroup (rating itself) and an outgroup (rated by the other).
    • And, also, that the self-ratings for one group constitute the criterion against which the other group's ratings are assessed.
  • They rate each other on attributes, both positive and negative, that are known to be stereotypic and counterstereotypic of each group.  As a result, these attributes will maximally distinguish between the two groups.
    • The rating scales should be precisely balanced between stereotypic and counterstereotypic, and between positive and negative, attributes.  In this way, there is a true neutral point against which departures from neutrality can be measured.

With such a design in hand, Judd and Park (1993) argue that all sorts of accuracy assessments are possible, following the pattern set down by Cronbach (1955), as discussed in the lectures on Social Perception.

  • Elevation: Any departure in ratings from zero indicates over- or underestimation of that attribute.
  • Differential Target Elevation: In terms of the analysis of variance (ANOVA), this reflects the main effect of group membership.  If the discrepancy scores differ between the two groups, then the one group is stereotyping the other, by overestimation or underestimation.
  • Differential Attribute Elevation: In ANOVA terms, this reflects the main effect of rating scale.  Stereotypic attributes may be generally overestimated, while counterstereotypic attributes may be underestimated. 
  • Differential Target x Subject Elevation: In ANOVA terms, this is the two-way interaction between subject and target, indicating whether the discrepancy scores are greater for the outgroup, compared to the ingroup.
  • Differential Subject x Attribute Elevation: The two-way interaction between subject and attribute.  Do perceivers in one group show more stereotypic inaccuracy than the other?  
  • Differential Target x Attribute Elevation: The two-way interaction between target and attribute.  Do targets in one group suffer from more stereotypic inaccuracy than the other?
  • Differential Subject x Target x Attribute Elevation: The residual three-way interaction -- in some ways, the heart of stereotyping.  Do perceivers in one group show more stereotypic inaccuracy in their perceptions of the outgroup?  (A minimal sketch of this ANOVA-style decomposition follows the list.)
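This decomposition maps onto a factorial analysis of variance performed on (rating minus criterion) discrepancy scores.  Here is a minimal sketch with simulated (random) data and hypothetical factor names; it is not Judd and Park's actual analysis, and it ignores the repeated-measures structure that a real application of the full-accuracy design would have to model.

```python
# Minimal sketch of how the elevation components map onto a factorial ANOVA
# on (rating - criterion) discrepancy scores.  The data are random and the
# factor names hypothetical; a real analysis of the full-accuracy design
# would also treat subject as a random factor nested within subject group.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for subject_group in ["A", "B"]:            # the group the perceiver belongs to
    for target_group in ["A", "B"]:         # the group being rated
        for attribute in ["stereotypic", "counterstereotypic"]:
            for _ in range(20):             # 20 hypothetical perceivers per cell
                rows.append({
                    "subject_group": subject_group,
                    "target_group": target_group,
                    "attribute": attribute,
                    "discrepancy": rng.normal(0, 1),   # rating minus criterion (random here)
                })
df = pd.DataFrame(rows)

# Main effects and interactions correspond to the elevation components listed above,
# ending with the three-way Subject x Target x Attribute term.
model = smf.ols("discrepancy ~ C(subject_group) * C(target_group) * C(attribute)", data=df).fit()
print(anova_lm(model, typ=2))
```

In a real application, the discrepancy scores would be computed from actual ratings minus the criterion, and the statistical model would respect the fact that each perceiver rates both targets on all attributes.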


For purposes of demonstration, Judd and Park (1993) performed a study of stereotyping by Democrats and Republicans, using data from the 1976 National Election Study in which a representative sample of American voters stated their party affiliation or preference, and were also asked about their positions on 10 policy issues, such as school busing, aid to minorities, and government health insurance (in 1976!).  They were also asked where Democrats and Republicans "as a whole" stood on these same issues.  The actual positions of the Democrats and Republicans served as the criterion, for comparison with the respondents' impressions of the Democratic and Republican positions.  

  • As a whole, the respondents overestimated the liberalness of the Democrats and underestimated the conservatism of the Republicans.  That is, the respondents as a whole thought both Democrats and Republicans were more liberal than they actually were.
  • Democrats showed less overestimation of the liberalness of Democratic targets than Republicans did.  That is, perceivers stereotyped the outgroup more than they stereotyped their own ingroup.
  • The ingroup-outgroup difference was stronger for subjects who had a relatively strong party affiliation, and this was especially true for the Republicans.

Judd and Park (1993) showed that, given sufficient data, it is actually possible to assess the accuracy of stereotypes, and to determine whether there are individual or group differences in the tendency to stereotype.  But, as they themselves point out, we really do not have a good sense of how accurate stereotypes are concerning social groups other than Democrats and Republicans -- largely because of the lack of objective criteria, but also because of the demands of the "full-accuracy" design.  Nor, for that matter, given the massive changes in both the Democratic and Republican parties since 1976, do we even have a good sense about stereotyping by Democrats and Republicans! 

Now, extending the full-accuracy design to other social groups is a pretty tall order -- but essential, Judd and Park (1993) argue (I think correctly, though Judd is a former colleague and I may be biased), if we're going to get any handle on stereotype accuracy. 


The "Unbearable Accuracy of Stereotypes"?

A related, but slightly different, perspective on accuracy has been presented by Lee Jussim (2012, 2015).  Jussim argues that there are four aspects of stereotyping that need to be distinguished:

  • Discrepancy from perfection -- essentially, the accuracy of stereotyped beliefs about a group, similar to Judd and Park's (1993) notion of stereotype inaccuracy.
  • Correspondence with differences -- that is, the appreciation of variability within a group, similar to J&P's dispersion inaccuracy.
  • Personal stereotypes -- that is, an individual's beliefs about members of an outgroup.
  • Consensual stereotypes, or the average of ingroup members' belief about the outgroup.  Like Devine and Elliot, Jussim points out that an individual's personal stereotype about an outgroup may not be the same as the stereotype held by his or her ingroup as a whole.

Considering these four aspects of stereotyping yields four types of stereotype inaccuracy, depending on whether they refer to personal or consensual stereotypes (a minimal computational sketch follows the list):

  • Discrepancies from perfection:
    • At the personal level, the difference between the individual subject's beliefs and some criterion.
    • At the consensual level, the difference between the group mean of ingroup members and the outgroup criterion.
  • Correspondence with criteria:
    • At the personal level, the correlation of an individual subject's beliefs with the criterion.
    • At the consensual level, the correlation of the ingroup average with the outgroup criterion.

As in the hypothetical examples described above, discrepancies can be substantial even though correspondence remains high; discrepancy and correspondence measures can therefore yield quite different assessments of stereotype accuracy.

In fact, as Jussim points out, there have been very few studies of stereotype accuracy -- perhaps because, as he implies, most social psychologists have shared Lippmann's view that stereotypes are inherently false.  And somewhat surprisingly, he argues -- contra Lippmann and the consensus among social psychologists -- that, when accuracy has been assessed, it turns out that stereotypes appear to be a lot more accurate than we would have thought.

  • Summarizing 4 (four!) studies of ethnic and racial stereotyping, Jussim concludes that consensual stereotypes are mostly accurate.
    • Most estimates were within 10%, or .25 standard deviations (SDs), of the criterion.
    • Most of the others were "near misses", off by more than 10% but less than 20%.
    • Relatively few were "inaccurate", off by more than 20%.
    • There was little exaggeration of real differences between groups.
    • And the correspondence between stereotypes and reality was "very strong" by the standards of social-science research.
      • For personal stereotypes, most correlations (rs) lay between .36-.69.
      • For consensual stereotypes, most rs lay between .53-.93.
  • Summarizing 7 (seven!) studies of gender stereotyping, Jussim comes to much the same conclusion: consensual stereotypes are mostly accurate.
    • Again, most estimates were within 10%, or .25 standard deviations (SDs), of the criterion.
    • And again, most of the others were "near misses", off by more than 10% but less than 20%.
    • And again, relatively few were "inaccurate", off by more than 20%.
    • There was, again, little exaggeration of real differences between groups.
    • And the correspondence between stereotypes and reality was, again, "very strong" by the standards of social-science research.
      • For personal stereotypes, most correlations (rs) lay between .40-.60.
      • For consensual stereotypes, most rs lay between .34-.98.
Now, those findings really are surprising, considering the universal textbook view that stereotypes are, on their face, inaccurate representations of various groups.  Before we jump to this conclusion, however, we should bear some considerations in mind.
  • There is, in fact, only a paltry number of studies of stereotype accuracy.  Perhaps this is due to the consensus that stereotypes are false, and if so it's certainly bad -- meaning unscientific -- not to have put this consensus to the test.  But it might be that, as more studies of this type accumulate, we'll find out that stereotypes are inaccurate after all.
  • There are wide variations in method across the few studies that have been done, making it hazardous to put them together in any sort of meta-analysis.  These studies are usually based on convenience samples of both perceivers and targets, and mostly rely on self-reports of the stereotyped group for the criteria.
    • For example, perhaps the best study of the accuracy of the African-American stereotype used only 50 African-American and 50 white subjects -- and they were college students, to boot.
    • Similarly, the largest study of gender stereotypes employed only a sample of 617 college students -- though, at least, it derived its criteria from a nationally representative sample of men and women, so that's something.
  • And then, just as important, a study may neglect characteristics that are highly relevant to the stereotype.  If, for any reason, the investigator is reluctant to include certain elements of a stereotype in his or her study, perhaps because they are inflammatory or "politically incorrect", the accuracy assessment will be incomplete at best.
    • Consider, for example, the common stereotype of African-Americans as possessing superior athletic abilities (think of the movie, White Men Can't Jump!).  If a study attempts to evaluate the African-American stereotype with rating scales derived from the "Big Five" personality traits, which don't include anything remotely resembling athletic ability,  then perceivers' stereotypes of African-Americans, evaluated in terms of The Big Five, may well be accurate; but the more common stereotype, in terms of athletic ability, will be left out of the assessment entirely.
    • Or, to take another example, consider the common stereotype of women as overly emotional during their menstrual periods (think of Donald Trump and Megyn Kelly!).  This may or may not be true, but if gender stereotypes are assessed in terms of The Big Five, we'll never know how accurate, or inaccurate, the real gender stereotype is.


Elicitation of Stereotypes

A popular view of stereotyping is that stereotypes are automatically elicited by the presence of a member of the stereotyped group.  I'll discuss automaticity in more detail in the lectures on "Social Judgment and Inference", but for now the argument is simply that the mere presence of an outgroup member may be sufficient to activate the stereotype of that group in the mind of the perceiver.  It's like the "evocation" mode of the person-environment interaction, with a twist: automatic processes are unconscious in the strict sense of the term, operating outside of phenomenal awareness and voluntary control.  Thus, the presence of a member of the stereotyped group can evoke the stereotype in a perceiver without the perceiver even realizing what is happening.

The automatic elicitation of stereotypes was nicely demonstrated in a study of race-based priming by Devine (1989).  In a preliminary study, she employed a thought-listing procedure with white college students to elicit their stereotypes concerning blacks.  The procedure generated such terms as poverty, poor education, low intelligence, crime, and athletics (sorry, but there's no way to describe this study without detailing the stereotype; and, let's face it, stereotypes about African Americans have a lot more relevance to contemporary American society than do stereotypes about Germans -- which is precisely why Devine did her experiment this way).

In her formal experiment, Devine asked a new set of subjects to perform a vigilance task in which they had to respond whenever they saw a target appear on a computer screen.  At the same time, the screen flashed words associated with the black stereotype -- employing a "masking" procedure that effectively prevented the words from being consciously perceived.  Some subjects received a high density of stereotype-relevant words, 80%; other subjects received a lower density, only 20%.  

After performing this vigilance task, the subjects were asked to read the "Donald Story" (Srull & Wyer, 1979), which consists of a number of episodes in which the main character, named Donald, engages in ambiguous behaviors that could be described as hostile or could be given a more benign interpretation.

 

 

After reading the story, the subjects were asked to evaluate Donald on a number of trait dimensions.  The general finding was that, compared to a control group that received all "race-neutral" primes, subjects who were primed with words relating to the African-American stereotype rated Donald as more socially hostile -- and the more so, the greater the density of the race-based primes.  The results are especially interesting because the primes themselves were presented "subliminally", outside of conscious awareness.  Thus, the subjects could not consciously connect the primes to Donald.  Apparently, presentation of the negative racial primes activated corresponding representations in memory, and this activation spread to the mental representation of Donald formed when the subjects read the Donald Story. 

This unconscious race-based priming occurred even in subjects who scored low in racial prejudice, as measured by the Modern Racism Scale (caveat: the MRS is actually a pretty bad instrument for assessing racial prejudice; Devine knows this, but it was the only instrument of its kind available at the time).  This raises the possibility that people can be consciously egalitarian, but nonetheless harbor unconscious racial (and other) stereotypes and prejudices.  Unconscious prejudice is particularly difficult to deal with, because the stereotype operates automatically -- you just can't help thinking in terms of the stereotype; and the stereotype itself may not even be consciously accessible.

Based on studies like this, Anthony Greenwald, Mahzarin Banaji, and their colleagues have developed the Implicit Association Test (IAT), a procedure which, they claim, assesses unconscious attitudes, including unconscious prejudice toward various social groups.  For the record, I'm skeptical that the IAT actually does this -- but that's a discussion for another time.  Nonetheless, the IAT has become extremely popular in the study of stereotyping and prejudice.


Unconscious Stereotypes

Much, perhaps most, of the evidence bearing on the concept of implicit emotion comes from recent social-psychological work on attitudes, stereotypes, and prejudice. In social psychology, attitudes have a central affective component: they are dispositions to favor or oppose certain objects, such as individuals, groups of people, or social policies, and the dimensions of favorable-unfavorable, support-oppose, pro-anti naturally map onto affective dimensions of pleasure-pain or approach-avoidance. As Thurstone put it, "attitude is the affect for or against a psychological object" (1931, p. 261). Like emotions, attitudes are generally thought of as conscious mental dispositions: people are assumed to be aware that they are opposed to nuclear power plants, or favor a woman's right to choose. Similarly, people are generally believed to be aware of the stereotyped beliefs that they hold about social outgroups, and of the prejudiced behavior that they display towards members of such groups. And for that reason, attitudes and stereotypes are generally measured by asking subjects to reflect and report on their beliefs or behavior. However, Greenwald and Banaji (1995) proposed an extension of the explicit-implicit distinction into the domain of attitudes. Briefly, they suggest that people possess positive and negative implicit attitudes about themselves and other people, which affect ongoing social behavior outside of conscious awareness.

Put another way, we have a blindspot which prevents us from being aware of our own prejudices.

Following the general form of the explicit-implicit distinction applied to memory, perception, learning, and thought in the cognitive domain, we may distinguish between conscious and unconscious expressions of an attitude:

  • Explicit attitude refers to conscious awareness of one's favorable or unfavorable opinion concerning some object or issue.

  • By contrast, an implicit attitude refers to any effect on a person's ongoing experience, thought, and action that is attributable to an attitude, regardless of whether that opinion can be consciously reported. From a methodological point of view, explicit attitudes would be assessed by tasks requiring conscious reflection on one's opinions; implicit attitudes would be assessed by tasks which do not require such reflection.

An early demonstration of implicit attitudes was provided by a study of the "false fame effect" by Banaji and Greenwald (1995).  In the typical false fame procedure (Jacoby, Kelley, Brown, & Jasechko, 1989), subjects are asked to study a list consisting of the names of famous and nonfamous people. Later, they are presented with another list of names, including the names studied earlier and an equal number of new names, and asked to identify the names of famous people. The general finding of their research is that subjects are more likely to identify new rather than old nonfamous names as famous. In their adaptation, Banaji and Greenwald included both male and female names in their lists, and found that subjects were more likely to identify male names as famous. This result suggests that the average subject is more likely to associate achievement with males than with females -- a common gender stereotype.

Similarly, Blair and Banaji (1996) conducted a series of experiments in which subjects were asked to classify first names as male or female. Prior to the presentation of each target, the subjects were primed with a word representing a gender-stereotypical or gender-neutral activity, object, or profession. In general, Blair and Banaji (1996) found a gender-specific priming effect: judgments were faster when the gender connotations of the prime were congruent with the gender category of the name. This means that gender stereotypes influenced their subjects' classification behavior.

In the area of racial stereotypes, Gaertner and McLaughlin (1983) employed a conventional lexical-decision task with positive and negative words related to stereotypes of blacks and whites, and the words "black" or "white" serving as the primes. There was a priming effect when positive targets were primed by "white" rather than "black", but no priming was found for the negative targets, and this was so regardless of the subjects' scores on a self-report measure of racial prejudice. Thus, the effect of attitudes on lexical decision was independent of conscious prejudice.

Similarly, Dovidio, Evans, and Tyler (1986) employed a task in which subjects were presented with positive and negative trait labels, and asked whether the characteristic could ever be true of black or white individuals. While the judgments themselves did not differ according to race (even the most rabid racist will admit that there are some lazy whites and smart blacks), subjects were faster to endorse positive traits for whites, and to endorse negative traits for blacks. Thus, even though conscious attitudes did not discriminate between racial groups, response latencies did.

These studies, and others like them (e.g., Devine, 1989), seem to reveal the implicit influence of sexist or racist attitudes on behavior. However, at present, interpretation of these results is somewhat unclear. In the first place, the logic of the research is that stereotype-specific priming indicates that subjects actually hold the stereotype in question -- that, for example, the subjects in Blair and Banaji's (1996) experiment really (if unconsciously) believe that males are athletic and arrogant while females are caring and dependent. However, it is also possible that these priming effects reflect the subjects' abstract knowledge of stereotypical beliefs held by members of society at large, though they themselves personally reject them -- both consciously and unconsciously. Thus, a subject may know that people in general believe that ballet is for females and the gym is for males, without him- or herself sharing that belief. Even so, this knowledge may affect his or her performance on various experimental tasks, leading to the incorrect attribution of the stereotypical beliefs to the subject.

Moreover, most studies of implicit attitudes lack a comparative assessment of explicit attitudes.  Implicit measures of attitudes may be useful additions to the methodological armamentarium of the social psychologist, but in the present context their interest value rests on demonstrations of dissociations between explicit and implicit expressions of emotion. Accordingly, it is important for research to show that implicit measures reveal different attitudes than those revealed explicitly. Just as the amnesic patient shows priming while failing to remember, and the repressive subject shows autonomic arousal while denying distress, we want to see subjects displaying attitudes or prejudices which they deny having, and acting on stereotypes which they deny holding.

Wittenbrink, Judd, and Park (1997) performed a formal comparison of explicit and implicit racial attitudes. Their subjects, all of whom were white, completed a variety of traditional questionnaire measures of self-reported racial attitudes. They also performed a lexical-decision task in which trait terms drawn from racial stereotypes of whites and blacks were primed with the words black, white, or table. Analysis of response latencies found, as would be anticipated from the studies described above, a race-specific priming effect: white speeded lexical judgments of positive traits, while black speeded judgments of negative traits. However, the magnitude of race-specific priming was correlated with scores on the questionnaire measures of racial prejudice. In this study, then, implicit attitudes about race were not dissociated from explicit ones. Such a finding does not undermine the use of implicit measures in research on attitudes and prejudice (Dovidio & Fazio, 1992), but a clear demonstration of a dissociation is critical if we are to accept implicit attitudes as evidence of an emotional unconscious whose contents are different from those which are accessible to phenomenal awareness.


The Implicit Association Test

Beginning in 1998, Greenwald, Banaji, and their colleagues introduced the Implicit Association Test (IAT), which is expressly designed to measure implicit attitudes.  The IAT consists of a series of dichotomous judgments, which we can illustrate with a contrived "Swedish-Finnish IAT" that might be used to detect prejudice of Swedes against Finns (or vice-versa). 



  • Phase 1: Is X a Swedish or a Finnish name?  The subject is asked to classify a series of surnames --  e.g., Aaltonen, Eriksson, Haapakoski, Lind, Numinnen, and Sundqvist -- as either Swedish or Finnish.  One response key (e.g., G on a keypad) is used to respond "Swedish", while another response key (e.g., J) is used to respond "Finnish".
  • Phase 2: Is Y a good or a bad thing?  The subject is asked to classify a series of words -- e.g., admiration, aggression, caress, abuse, freedom, crash -- as positive or negative in connotative meaning.  One response key (e.g., V on a keypad) is used to respond "good", while another response key (e.g., B) is used to respond "bad".
Then, the two tasks are superimposed on each other, such that the Swedish-Finnish judgments are interspersed with the Good-Bad judgments.
  • In Phase 3, the "Swedish" response shares the same key (e.g., G) with the "Good" response, and the "Finnish" response shares the same key (e.g., J) with the "Bad" response.
  • Phase 4 is a control condition that need not detain us here.
  • In Phase 5, "Swedish" shares a response key with "Bad", while "Finnish" shares a response key with "Good".

The logic of the IAT is based on a principle of stimulus-response compatibility discovered in early research on human factors by Small (1951) and Fitts and Posner (1953).  The general principle here is that subjects can respond to a stimulus faster when the stimulus and response are compatible.  So, for example, subjects will respond faster, and more accurately, with their left hand to a stimulus that appears on the left side of a screen, and with their right hand to stimuli on the right side.  By analogy, subjects who like Swedes will respond faster when the "Swedish" category shares the same response with the "Good" category, and slower when the "Swedish" category shares the same response with the "Bad" category.

Just to make it perfectly clear:

  • If subjects are required to make the same response (i.e., push the same key) to Swedish names and positive words, faster responses imply an association between "Swedish" and "Good".

  • If subjects are required to make the same response to Finnish names and negative words, faster responses imply an association between "Finnish" and "Bad".

In this way, by comparing response latencies across the different conditions, Greenwald and Banaji proposed to measure unconscious prejudices and other attitudes, independent of self-report.
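Here is a minimal sketch of that comparison, with purely hypothetical latencies; the scoring algorithm used for the published IAT is more elaborate, but the basic logic is just a difference between block means.

```python
# Minimal sketch of the basic IAT comparison, with purely hypothetical latencies (ms).
import statistics

compatible   = [612, 645, 598, 630, 655, 601, 622]   # Swedish+Good / Finnish+Bad block
incompatible = [701, 688, 730, 695, 742, 668, 710]   # Swedish+Bad / Finnish+Good block

iat_effect = statistics.mean(incompatible) - statistics.mean(compatible)
print(f"IAT effect = {iat_effect:.0f} ms")
# A positive difference is taken to indicate a stronger association of
# "Swedish" with "Good" than with "Bad".  (The published scoring algorithm
# additionally scales this difference by the variability of the latencies.)
```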

Link to demonstrations of the IAT on the "Project Implicit" website.

The IAT is usually administered by computer, and most often online.  But there is a paper-and-pencil version of it, developed for purposes of classroom demonstrations.  A number of these are available, in various formats.  But before you take one, in an attempt to find out if you're an unconscious racist, read on.


 

Explicit-Implicit Dissociations (?)

For example, in one early study, Greenwald, Banaji, and their colleagues looked at white subjects' implicit attitudes toward blacks.  Subjects responded faster when stereotypically "White" names shared a response key with "Positive" than when stereotypically "Black" names shared a response key with "Positive", thus implying that they associated White with good, and Black with bad.



Greenwald et al. also measured their subjects' explicit racial attitudes with a standard technique known as the attitude thermometer, which is basically a numerical rating scale with one pole labeled "positive" and the other pole labeled "negative".  The correlation (r) between IAT and the attitude thermometer varied from .07 to .30, depending on the sample.

In another study, they looked at attitudes among certain Asian ethnic groups.  Korean subjects responded faster when "Korean" names shared a key with "Positive" and "Japanese" names shared a key with "Negative".  Japanese subjects did precisely the opposite, responding faster when "Japanese" and "Positive" shared a key than when "Korean" and "Positive" did.  The implication is that Koreans associated Japanese with bad, while Japanese made the same association with Koreans.



Again, Greenwald et al. measured their subjects' explicit ethnic attitudes with the attitude thermometer.  This time, the correlation (r) between the IAT and the attitude thermometer varied from -.04 to .64, depending on the sample.

By now a huge literature has developed in which the IAT has been used to measure almost every attitude under the sun.  Nosek (2007) summarized this literature with a graph showing the average explicit-implicit correlation, across a wide variety of attitude objects.  These correlations varied widely, but the median explicit-implicit correlation was r = .48.


Another review, by Greenwald et al. (2009), of 122 studies, showed that the IAT correlated with external criteria of attitudes about as well as did explicit assessments such as the attitude thermometer.



Critique of the IAT

Greenwald, Banaji, and their colleagues have claimed that the explicit-implicit correlations obtained between the IAT and the attitude thermometer and other self-report measures are relatively low, and this suggests that unconscious attitudes can, indeed, be dissociated from conscious ones.  But there are some problems with this argument.


  • In the first place, there are a number of potentially confounding factors, the most important of which may be target familiarity.  Swedes may or may not like Finns, but by any standard a Finnish name like Aaltonen is going to be less familiar to a Swede than a Swedish name like Eriksson, and this difference in familiarity, more than any difference in attitude, may account for differences in response latency.

  • Similarly, it may be in some sense easier for a Swede to identify a Swedish name as Swedish; he may wonder whether a Finnish name is really Finnish, as opposed to Hungarian (the two languages are related) -- or whether it belongs to a member of the community of Swedish-speaking Finns!  So the issue may be task difficulty rather than attitude.

  • There is also a confound with task order.  In the Swedish-Finnish example, Swedish shares a response key with Good in Phase 3, while Finnish shares a response key with Good in Phase 5.  So Swedish is paired with Good before Finnish is paired with Good.  This problem can be solved by counterbalancing the order of the tasks and then averaging over many subjects: some subjects do Swedish-Good before Finnish-Good, others do the reverse.  But you can't counterbalance within a single subject.  The upshot is that, because counterbalancing can eliminate the order confound across subjects, it might be possible to say that, for example, Swedes in general are prejudiced against Finns.  But because counterbalancing can't eliminate the order confound within a single subject, it's just not possible to say that a particular Swede shares this prejudice -- or, perhaps, is a self-hating Swede who really likes Finns better.

There is also the problem of determining exactly what the person's attitude is.  It is one thing for a Swede to actively dislike Finns, but it is another thing entirely for a Swede to like Finns well enough, but like Swedes better.  The IAT cannot distinguish between these two quite different attitudinal positions.  All it does is make an inference of relative attitude from relative reaction times.

There is also the issue of what the psychometricians count as construct validity.  How well does the IAT predict some construct-relevant external criterion, such as a Swede's willingness to hire a Finn, or let him marry his daughter?  

Another aspect of construct validity has to do with group differences.  Koreans appear prejudiced against Japanese, and Japanese against Koreans, and that's what we'd expect if the IAT really measured prejudicial attitudes.  But, for example, there is little evidence concerning other ingroup-outgroup differences in IAT performance.  The problem is encapsulated in the title of a famous critique of the IAT, entitled "Would Jesse Jackson fail the IAT?".  If, for example, African-American subjects also "favor" whites when they take the IAT, it would be hard to characterize a Black-White IAT as a measure of prejudice against African-Americans.

The biggest problem, however, is the correlation between explicit and implicit prejudice, which Nosek reports at a median r of .48.  That's not a perfect correlation of 1.00, but it's also not a zero correlation.  In fact, it's a big correlation by the standards of social-science research -- and it's about as big as it can get, given the test-retest reliability of the IAT.  If explicit and implicit attitudes were truly dissociable, we'd expect the explicit-implicit correlation to be a lot lower than it is.  The fact that explicit-implicit correlations are typically positive, and in many cases quite substantial, suggests, to the contrary, that people's implicit attitudes are pretty much the same as their explicit attitudes.  They're not dissociated, they're associated.  
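The point about test-retest reliability reflects a standard psychometric fact: the correlation observable between two measures is capped by the square root of the product of their reliabilities.  Here is a minimal sketch; the reliability values below are placeholders for illustration, not published estimates.

```python
# Minimal sketch of the attenuation ceiling: the largest correlation observable
# between two measures is bounded by the square root of the product of their
# reliabilities.  The reliability values are placeholders, not published estimates.
import math

reliability_iat      = 0.50   # hypothetical test-retest reliability of the implicit measure
reliability_explicit = 0.80   # hypothetical reliability of the self-report measure

ceiling = math.sqrt(reliability_iat * reliability_explicit)
print(f"maximum observable explicit-implicit r = about {ceiling:.2f}")
# With these placeholder values the ceiling is about .63, so an observed
# median r of .48 is a large fraction of what the measures could possibly show.
```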

This last problem is compounded by the fact that improvements in the scoring procedure for the IAT have actually led to increases in the correlation between explicit and implicit attitudes.  But if the IAT were really measuring unconscious attitudes, we'd expect psychometric improvements to decrease the correlation -- that is, to strengthen the evidence that explicit and implicit attitudes are truly dissociable.

The fact that explicit and implicit attitudes are significantly correlated, and that the correlation increases with psychometric improvements, suggests instead that the IAT may be an unobtrusive measure of attitudes that are consciously accessible, but which subjects are simply reluctant to disclose -- something on the order of a lie-detector.  But an unobtrusive measure of a conscious attitude shouldn't be confused with a measure of an unconscious attitude.

But even that isn't entirely clear, because of the technical problems with the IAT described earlier -- issues of target familiarity, task difficulty, distinguishing between unfavorable attitudes and those that are simply less favorable.  For this reason, I think it's premature for its promoters to promote the IAT as a measure of any kind of attitude, conscious or unconscious.


The IAT Controversy Continues

From the beginning, the IAT has been an object of controversy -- and, partly because the IAT itself has become a feature of popular culture, informing millions of users that they all are, to one degree or another, unconsciously racist, sexist, etc., this controversy has made its way out of the scientific conferences and journals and into the public domain.  Two media articles are especially noteworthy (their titles say all that needs to be said about them):



Part of the reason for the public nature of the IAT debate is that the test has been aggressively promoted by its developers as a measure of implicit, unconscious attitudes.  Quoting Singal: "In a talk she gave at the American Psychological Society's convention, Banaji described the invention of the IAT as similar to the invention of the telescope -- it had ushered in a revolution for how we see the world".  That's a very strong claim -- right up there with the famous claim that "projective" tests like the Rorschach and the TAT were "X-ray" machines that could detect hidden aspects of personality.  The X-ray claim was exaggerated, and probably more wrong than it was right.  And it looks like the "telescope" claim is suffering the same fate. 

To be blunt, Greenwald and Banaji haven't been all that responsive to criticism.  Their popular-press book on the IAT, Blindspot, barely discusses these criticisms, and sometimes G&B's responses to critics can be downright uncollegial -- implying, for example, that critics of the IAT are simply not with the program of modern social psychology, or -- worse -- are themselves racists or sexists who see the IAT as a threat to the hegemony of white males.  See, for example, the exchange between G&B and Tetlock & Arkes in Psychological Inquiry (2004), and Banaji's comments to Jesse Singal about criticisms of the IAT.

For fuller, updated treatment of the IAT, see the page on "Implicit Emotion and Motivation" in the lecture supplements for my course on "Scientific Approaches to Consciousness".

 

The Psychologist's Fallacy

Frankly, the IAT brings us full circle, back into Freudian territory -- though without the lurid claims about primitive sexual and aggressive motives.  Freud was quite content to tell people what their problems were -- that, for example, they loved their mothers and hated and feared their fathers.  And when people would say it wasn't true, he would explain to them the concept of repression.  And when they continued to resist, he'd tell them that their resistance only indicated that he was right.  In much the same way, it's a little disturbing to find the promoters of the IAT using it to tell people that they're prejudiced, only they don't know it.  Because it isn't necessarily so.

This is the problem of what William James and John Dewey called the psychologist's fallacy -- the idea that, first, every event has a psychological explanation; and, second, that the psychologist's explanation is the right one.  Freud thought that he knew better than his patients what their feelings and desires were.  The "IAT Corporation" (yes, there really is one, offering the IAT to government and corporate personnel and human-relations departments concerned about workforce diversity) claims to know better than you do whether you're prejudiced against African-Americans, or Hispanics, or Japanese, or Koreans.  

At this point it's important to be reminded of what William James wrote about the unconscious mind.  It's critical that assessments of unconscious motivation and emotion, no less than unconscious cognition, be based on the very best evidence.  Otherwise, unconscious mental life will become the "tumbling-ground for whimsies" that James warned it could be.

 


Effects of Stereotypes

Whether consciously or unconsciously, whether accurate or inaccurate, stereotypes exist in the mind of the perceiver, and clearly affect the judgments that the perceiver makes about target members of stereotyped groups.  Obviously, stereotypes have effects on the targets, as well.




  • Stereotypes can lead to behavior toward members of stereotyped groups that is outright prejudicial and discriminatory.  The deductive aspect of stereotypes -- the attribution to an individual member of a stereotyped group of the attributes believed to be characteristic of that group -- means that the target will be treated as a group member, and not as an individual.  As Martin Luther King, Jr., put it, it's treating people according to the color of their skin, rather than the content of their character, and it's distinctly un-American.  But it happens anyway.
  • More subtly, stereotypes can evoke the self-fulfilling prophecy. 
    • Behavioral Confirmation: Stereotypes can lead the person who holds the stereotype to treat the target in such a manner as to evoke, from the target, responses that are consistent with the stereotype. 
    • Perceptual Confirmation: Stereotypes can lead the perceiver to interpret the target's behavior as consistent with the stereotype, when it is ambiguous or even stereotype-inconsistent.

And, of course, if Devine and others are right, all of this can happen automatically and unconsciously, without the perceiver, or the target, realizing what is going on.

But it doesn't stop there.  Stereotyping can have a host of other effects on the stereotyped individual, not all of which can be considered either outright prejudice and discrimination, or the self-fulfilling prophecy. 

  • Attributional Ambiguity: A person subject to stereotyping doesn't ever quite know how to react to other people's behavior.  If a black person perceives a white person as unfriendly, is that hostility directed at him personally, for something he's said or done, or is it simply an act of prejudice?  Even positive behaviors can be ambiguous in this way.  Given the stereotype that girls and women have relatively poor abilities in math and science, if a woman receives a compliment on her math skills, is she to take that as a genuine expression, or an act of condescension?
  • Stereotype Avoidance:  A person who is subject to stereotyping may avoid behaviors that would tend to confirm the stereotype, thus blunting expectancy confirmation processes.  Given the stereotype of black men as highly athletic, a black male might stay away from the basketball court -- and thus miss out on an opportunity for an athletic scholarship.
  • Stereotype Vulnerability:  A person subject to stereotyping may feel anxious and frustrated in situations that contain stereotype-related cues.  Thus, simply knowing that he or she is a target of prejudice may make a black person feel anxious in the presence of whites.
  • Stereotype Threat: Finally, a person subject to stereotyping may fear that, by virtue of his or her behavior, he or she might actually confirm a negative stereotype.  This fear, in turn, might actually lead the person to engage in stereotype-confirming behavior.  Stereotype threat will be exacerbated when the situation contains cues that the stereotype is somehow at stake. 
    • Thus, women might paradoxically perform more poorly on a test of math skills when their attention is drawn to the stereotype of female math deficiency.
    • Or, members of minority groups might be reluctant to ask questions in class, for fear that a "stupid" question might reflect badly not just on themselves, but on their whole group.
  • Stereotype Lift: Returning to the perceiver for a moment, an ingroup member who endorses an outgroup stereotype may actually show an increase in performance when his or her attention is drawn to a stereotype.  Combined with stereotype threat, which depresses performance of outgroup members, stereotype lift, which increases performance of ingroup members, may actually inflate any ingroup-outgroup difference in test performance.


I've already discussed stereotype threat in the context of the self-fulfilling prophecy, in the lectures on The Cognitive Basis of Social Interaction.  Here's a quick review.

An early demonstration of stereotype threat was reported by Steele and Aronson (1995).  They recruited black and white Stanford undergraduates for a study of reading and verbal reasoning.  The subjects completed a version of the GRE verbal reasoning test under one of three conditions:

  • In the Diagnostic condition, the subjects were informed that the test results would provide a personal diagnosis of their own level of verbal ability.
  • In the Nondiagnostic condition, there was no reference to the assessment of individual subjects' verbal ability.
  • In the Nondiagnostic Challenge condition, there was also no reference to individual assessment, but the subjects were told that the test items were intentionally very difficult.

The intention was that black subjects in the Diagnostic condition would experience stereotype threat, by virtue of the stereotype that black college students have, on average, lower intellectual abilities than white students.   And that's pretty much what happened.  Even though the black and white groups had been carefully equated for verbal ability, based on their SAT scores, the black students underperformed, compared to the white students, in the Diagnostic condition but not in the Nondiagnostic condition.  Although it initially appeared that blacks underperformed in the Nondiagnostic-Challenge condition as well, this difference disappeared after some statistical controls were added.  A second study confirmed the essential findings of the first one, while a third showed that stereotype threat actually activated the stereotype in the minds of the black subjects, increased their levels of self-doubt, and increased stereotype avoidance.

Stereotype threat has now been documented in a large number of different situations -- blacks and intelligence, women and math skills, even Asians stereotyped as the "model minority".  It's a variant on the self-fulfilling prophecy, self-verification gone wildly wrong -- except that the "self" being verified is not the individual's true self, but rather an imagined self that conforms to the stereotype.

But stereotype threat doesn't have to occur.  A study by Marx et al. (2009) compared black and white performance on tests of verbal ability administered around the time that Barack Obama was nominated for and elected to the presidency.

  • When subjects were tested before Obama's acceptance speech, there was a significant difference in test performance between groups.
  • But when subjects were tested immediately after Obama's acceptance speech, the black-white difference in test performance was significantly reduced.
    • This was the case, though, only for black subjects who had actually watched the convention speech. 
    • Black subjects who did not watch the speech actually showed a further reduction in test performance.
  • When subjects were tested during the interval between the convention and the election, the black-white difference returned.
  • But when subjects were tested immediately after the election, the black-white difference was reduced again.

Marx et al. suggest that Obama provided a salient example, to the black subjects, of a black person overcoming racial stereotypes, which in turn reduced the stereotype threat that would usually impair the performance of black test-takers.  Marx et al. dubbed this "the Obama effect".

Stereotypes have both a cognitive and an affective component. That is, they consist of beliefs that people have about certain groups; but these beliefs come attached to a (generally negative) emotional valence.  This raises the question as to which has the stronger effect on the perception of (and thus behavior toward) individuals.  Jussim et al. (1995) employed structural equation modeling, a variant on multiple regression, to evaluate a number of possibilities.



  • A pure cognitive model in which the effects of stereotypes on judgment are mediated primarily by cognitive beliefs.
  • A pure affective model in which the effects of stereotypes on judgment are mediated primarily by emotional valence.  
  • A mixed model in which both affect and cognition are necessary to bias perception and judgment.

A series of studies, in which they statistically controlled for the effects of both beliefs and affect, generally showed that affect was more important than any specific beliefs (the logic of this kind of statistical control is sketched below).

  • Controlling for affect eliminated the effect of beliefs on perception.
  • Controlling for beliefs did not eliminate the effect of affect on perception.
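
To make the logic of "controlling for" one variable while testing another concrete, here is a minimal regression sketch in Python.  This is a hypothetical illustration only: the variable names, the simulated data, and the use of ordinary least-squares regression in place of Jussim et al.'s full structural equation models are all assumptions, not their actual analysis.

```python
# Toy illustration of statistical control -- NOT Jussim et al.'s data or models.
# Simulated scenario: affect drives judgments; beliefs are correlated with
# affect but have no independent effect of their own.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

affect = rng.normal(size=n)                   # emotional valence toward the group
beliefs = 0.7 * affect + rng.normal(size=n)   # beliefs correlated with affect
judgment = 0.8 * affect + rng.normal(size=n)  # judgments driven by affect only

df = pd.DataFrame({"affect": affect, "beliefs": beliefs, "judgment": judgment})

m_beliefs_only = smf.ols("judgment ~ beliefs", data=df).fit()
m_both         = smf.ols("judgment ~ beliefs + affect", data=df).fit()

print(m_beliefs_only.params)  # beliefs look predictive on their own (via affect)
print(m_both.params)          # with affect controlled, the beliefs coefficient
                              # shrinks toward zero; the affect coefficient survives
```

In this simulated scenario, beliefs appear predictive only because they are correlated with affect; once affect is entered into the equation, the belief coefficient collapses while the affect coefficient survives -- the same asymmetry Jussim et al. reported.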

 

Changing Stereotypes

Social stereotypes can have such pernicious effects on social interaction, including both dyadic and intergroup relations, that psychologists have long sought ways to overcome or abolish them. 

  • According to Rothbart's (1981) bookkeeping model, stereotypes change gradually, as the perceiver becomes exposed to more and more stereotype-inconsistent information.
  • According to the conversion model, also proposed by Rothbart, a single instance of stereotype disconfirmation, if dramatic enough, can lead to sudden change in the stereotyped belief (the contrast with the bookkeeping model is sketched after this list).
  • Brewer (1981) and Taylor (1981) have proposed that stereotype change occurs by means of category differentiation.  According to this model, a single, monolithic view of the stereotyped group becomes fragmented into a number of subtypes.  This subtyping process may have been reflected in the Harry Reid-Barack Obama episode: what Reid meant, perhaps, was that there is not just one type of African-American politician (exemplified by Jesse Jackson and others like him), but another type as well (exemplified by Barack Obama and others like him).
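
One way to make the contrast between the bookkeeping and conversion models concrete is with a toy simulation.  This is purely illustrative: Rothbart's models are verbal, not computational, and the update rules, parameters, and "evidence" values below are assumptions introduced for the sketch.

```python
# Toy formalization of Rothbart's bookkeeping vs. conversion models of
# stereotype change.  Beliefs and evidence are coded on a 0-1 scale, where
# 1.0 = fully stereotype-consistent and 0.0 = fully stereotype-inconsistent.

def bookkeeping_update(belief, evidence, rate=0.05):
    """Gradual change: each piece of evidence nudges the belief slightly."""
    return belief + rate * (evidence - belief)

def conversion_update(belief, evidence, threshold=0.8):
    """Sudden change: the belief holds until a single disconfirmation is
    dramatic enough, then it shifts all at once."""
    return evidence if abs(evidence - belief) > threshold else belief

belief_bk = belief_cv = 1.0                     # strongly stereotyped starting point
evidence_stream = [0.9, 0.8, 0.6, 0.05, 0.3]    # 0.05 = one dramatic disconfirmation

for e in evidence_stream:
    belief_bk = bookkeeping_update(belief_bk, e)
    belief_cv = conversion_update(belief_cv, e)
    print(f"evidence={e:.2f}  bookkeeping={belief_bk:.2f}  conversion={belief_cv:.2f}")
```

Under the bookkeeping rule the belief drifts downward a little with every inconsistent instance; under the conversion rule it stays put until the single dramatic disconfirmation, and then jumps.  Subtyping, by contrast, would be modeled not as a change in the belief's value but as a split of one category into several.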

Arguably, conscious awareness of the stereotype is critical to stereotype change.  In a sense, a social stereotype is a hypothesis, concerning the qualities associated with some group, that is continually being tested as the perceiver encounters members of that group.  We may not be able to avoid stereotyping entirely.  But because there is plenty of variability within any stereotyped group, a perceiver who is aware of his or her stereotypes, and attends to stereotype-disconfirming evidence, will eventually weaken their hold on social cognition. 



"Person-First" or "Identity-First"?

The issue of how we categorize other people has come to the fore with the "disability rights" movement, and the objection of people who have various disabilities to being identified with their disabilities (a similar issue has been raised in racial, ethnic, and sexual-minority communities as well).

One important question is how to refer to people with various disabilities.  Put bluntly, should we say that "Jack is a blind person" or "Jack is a person who is blind"?  Or substitute any other label, including black, Irish, gay, or schizophrenic.

Dunn and Andrews (American Psychologist, 2015) have traced the evolution of models for conceptualizing disability -- some of which also apply to other ways of categorizing ourselves and others.  The current debate offers two main choices:

  • A "person-first" approach -- as in, "Jack is a person with a disability".  In this social model (Wright, 1991), disability is presented "as a neutral characteristic or attribute, not a medical problem requiring a cure, and not a representation of moral failing" (p. 258) -- or, it might also be said, as a chronic condition requiring rehabilitation.  Instead, disability itself is seen as a sort of social construction -- or, at least, a matter of  social categorization.
  • An "identity-first" approach -- as in, "Jack is a disabled person".  While this might seem a step backward, this minority model (Olkin & Peldger, 2003) "portrays disability as a neutral, or even positive, as well as natural characteristic of human attribute" (p. 259).  Put another way, disability confers minority -group status: it connotes disabled people, with their own culture, living "in a world designed for nondisabled people".

So it all depends on how you think about minority-group status -- that of other people, if you're a member of the majority; or your own, if you're a member of the minority (any minority).


Intersectionality

Intuitively, we think of people as belonging in one category or another.  But of course, individuals actually belong to several categories simultaneously.  President Biden is both a man and a white person; he is both a husband and a father; as a lawyer, he is a member of the "white-collar" professional class; but he also has working-class roots.  That is the basic idea behind intersectionality, a concept first articulated by the Black feminist legal scholar Kimberlé Crenshaw (1989).  Crenshaw's insight is that, for example, Black women have some issues that they share with white women, but they also have a set of issues that are uniquely theirs by virtue of being Black women.  A business, for example, might discriminate against Black women but not white women.  She points out that first- and second-wave feminists were mostly white and middle-class, and tended to ignore the special concerns of poor Black women.  Whether we're Black or white, woman or man, native or immigrant, gay or straight, we have a particular set of overlapping social identities that we share with some people, but not others.

 

Social Perception, Social Categorization, and Social Interaction

Categorization, as the final act in the perceptual cycle, allows us to infer "invisible" properties of the object, and also how we should behave towards it.  Just as Bruner has argued that Every act of perception is an act of categorization, so he has also noted that: 

The purpose of perception is action  (actually, this is also a paraphrase).

Some of these actions take the form of overt behavioral activity.  Others take the form of covert mental activity -- reasoning, problem-solving, judgment, and decision-making.

Natural categories exist in the real world, independent of the mind of the perceiver, and are reflected in the perceiver's mind in the form of concepts -- mental representations of categories.

Some social categories are, perhaps, natural categories in this sense.  But other social categories are social constructs.  They exist in the mind of the perceiver.  But through the self-fulfilling prophecy and other expectancy confirmation processes, these social constructs also become part of the real world through our thought and our action.

 

This page was last revised 09/03/2024.