Psychological Development
So far in this course, we
have focused our attention on mental processes in mature,
adult humans. We have taken mind and personality as givens,
and asked two basic questions:
- How does the mind work?
- How does the mind mediate the individual's interactions
with the social environment?
Now we take up a new question: Where do mind and personality come from?
Views of Development
In psychology, there are three broad approaches to the question of the development of mind:
The ontogenetic
point of view is concerned with the development of mind in
individual species members, particularly humans. This is
developmental psychology as it is usually construed.
Reflecting the idea that mental development, like physical
development, ends with puberty, developmental psychologists
have mostly focused on infancy and childhood. More recently, the field has acquired an additional focus on development across the entire "life span", from birth to death, resulting in new specialties in adolescence, middle age, and especially old age.
The phylogenetic
point of view is concerned with the development of mind across
evolutionary time, and the question of mind in nonhuman
animals. It includes comparative psychology, which (as
its name implies) is concerned with studying learning and
other cognitive abilities in different species (comparative
psychology is sometimes known as cognitive ethology);
and evolutionary psychology (an offshoot of sociobiology), which traces how human mental and behavioral functions evolved through natural selection and similar processes. Go to a discussion of the phylogenetic view.
The
cultural point of view is concerned with the effects
of cultural diversity within the human species. This approach
has its origins in 19th-century European colonialism, which
asked questions, deemed essentially racist today, about
whether the "primitive" mind (meaning, of course, the minds of
the colonized peoples of Africa, Asia, and elsewhere) was
lacking the characteristics of the "advanced" mind (meaning,
of course, the minds of the European colonizers). Edward
Burnet Tylor, generally considered to be the founding father
of anthropology (he offered the first definition of "culture"
in its modern sense of a body of knowledge, customs and
values acquired by individuals from their native
environments), distinguished between those cultures
which more or less "civilized". Stripped of the racism,
the sub-field of anthropology known as anthropological
psychology is the intellectual heir of this work. More
recently, anthropological psychology has been replaced by cultural
psychology, which studies the impact of cultural
differences on the individual's mental life, without the
implication that one culture is more "developed" than another. Go to a discussion of the cultural view.
The Ontogenesis of Personhood
Phylogenesis has to do with the
development of the species as a whole. Ontogenesis has to do
with the development of the individual species member. In
contrast to comparative psychology, which makes its
comparisons across species, developmental psychology makes its
comparisons across different epochs of the life span.
Mostly, developmental
psychology focuses its interests on infancy and childhood -- a
natural choice, given the idea that mental development is
correlated with physical maturation. At its core,
developmental psychology is dominated by the opposition of
nature vs. nurture, or nativism vs. empiricism:
- Is the newborn child a "blank slate", all of whose
knowledge and skills are acquired through learning
experiences?
- Or are some aspects of mental functioning innate, part
of the child's genetic endowment, acquired through the
course of evolution?
Nature and Nurture
The dichotomy between "nature" and "nurture"
was first proposed in those terms by Sir Francis Galton
(1822-1911), a cousin of Charles Darwin's who -- among other
things -- expanded Darwin's theory of evolution by natural
selection into a political program for eugenics --
the idea of strengthening the human species by artificial
selection for, and against, certain traits.
Galton
took the terms nature and nurture from
Shakespeare's play The Tempest, in which Prospero
described Caliban as:
A devil, a born devil, on whose nature
Nurture can never stick.
(Here, from an early 20th-century English
theatrical advertisement, is Sir Herbert Beerbohm Tree as
Caliban, as rendered by Charles A. Bushel.)
Galton wrote
that the formula of nature versus nurture was "a
convenient jingle of words, for it separates under two
distinct heads the innumerable elements of which personality
is composed".
Galton's focus on nature, and biological determinism, was
countered by Franz Boas (1858-1942), a pioneering
anthropologist who sought to demonstrate, in his work and
that of his students (who included Edward Sapir, Margaret
Mead, Claude Levi-Strauss, Ruth Benedict, and Zora Neale
Hurston),
the power of culture in shaping lives. It
was nature versus nurture with the scales reset: against
our sealed-off genes, there was our accumulation of
collective knowledge; in place of inherited learning,
there was the social transmission of that knowledge from
generation to generation. "Culture" was experience raised
to scientific status. And it combined with biology to
create mankind ("The Measure of America" by Claudia Roth
Pierpont, New Yorker, 03/08/04).
It is to Boas that we owe the maxim that
variations within cultural groups are larger than variations
between them.
Interestingly, Boas initially trained as a
physicist and geographer, and did his dissertation in
psychophysics, on the perception of color in water (don't
ask). But during a period of military service, he
published a number of other papers on psychophysics, some
of which anticipated the insights of signal detection
theory, discussed in the lectures on Sensation.
From his dissertation, he concluded that even something as
simple as the threshold for sensation depended on the
expectations and prior experiences of the perceiver: the
underlying sensory processes were not innate, and they
were not universal. They are in some sense acquired
through learning.
For a virtual library of Galton's works, see www.galton.org.
For a sketch of Boas' life and work, see "The
Measure of America" by Claudia Roth Pierpont,New Yorker,
03/08/04 (from which the quotes above are taken).
For more detail on Boas and his circle, see Gods
of the Upper Air (2019) by Charles King.
According to King, Boas and other early cultural
anthropologists "moved the explanation for human differences
from biology to culture, from nature to nurture". In
so doing, they were "on the front lines of the greatest
moral battle of our time: the struggle to prove that --
despite differences of skin color, gender, ability, or
custom -- humanity is one undivided thing". As Ella
Deloria, another of Boas's students (and sister of Vine
Deloria, Jr., the Native American activist and author of Custer
Died for Your Sins: An Indian Manifesto,
published in 1969), recorded in her notes on his lectures,
"Cultures are many; man is one". King says that his
book "is about women and men who found themselves on the
front lines of the greatest moral battle of our time: the
struggle to prove that -- despite differences of skin color,
gender, ability or custom -- humanity is one undivided
thing."
- Louis Menand, reviewing King's book in the New Yorker
("The Looking Glass, 08/26/201), notes that Boas and his
group shifted the meaning of culture from
"intellectual achievement" to "way of life". This is
something of a paradox, because cultural anthropologists
are often portrayed by critics such as Allan Bloom (in The
Closing of the American Mind, 1987) as cultural
relativists. It's OK, in this stereotype, for some
aboriginal people to eat each other -- after all, it's
their culture; and maybe, since they do it, we should try
it too. But as King and Menand point out, Boas and
his students weren't cultural relativists. They were
interested in describing different cultures from within --
in recovering cultural diversity that was, even in the
1930s and 40s, being lost to the homogenizing effects of
Westernization. They were also interested in holding other cultures up as a mirror through which they could understand their own. As Menand points out, "The
idea... is that we can't see our way of life from the
inside, just as we can't see our own faces. The
culture of the "other" serves as a looking glass....
These books about pre-modern peoples are really books
about life in the modern West.... Other species are
programmed to "know" how to cope with the world, but our
biological endowment evolved to allow us to choose how to
respond to our environment. We can't rely on our
instincts; we need an instruction manual. And
culture is the manual."
- Jennifer Wilson, reviewing Gods of the Upper Air
in The Nation (05/18/2020), notes that "At a time
when the country's foremost social scientists... were
insisting that different cultures fell along a continuum
of evolution, cultural anthropologists [like Boas and his
circle] asserted that such a continuum did not
exist. Instead of evolving in a linear fashion from
savagery to civilization, they argued, cultures were in a
constant process of borrowing and interpolation.
Boas called this process "cultural diffusion", and it
would come to be the bedrock of cultural
anthropology...".
The opposition between "nature" and "nurture"
seems to imply that the "nurture" part refers to parental
influence: how children are brought up. As we'll see,
parental influence appears to be a lot less important than
we might think, compared to influences outside the
home. But more important, "nurture" is probably better
construed as experience, as opposed to whatever is
built into us by our genes. Those "experiences" begin
in the womb, and continue through old age to death; some
experiences happen to us, some we arrange for ourselves, but
whatever their origins they make us the people we are at
each and every point in the life cycle. This is the
point of Unique: The New Science of Human Individuality (2020) by David J. Linden, a neuroscientist at Johns Hopkins University. Being a neuroscientist, Linden focuses on
how nature and nurture, meaning genes and internal,
physiological changes instigated by events in the external
environment, interact to produce individual
differences. But his essential point goes well beyond the genetic and molecular soup of neurogenetics and epigenetics.
Reviewing the book in the New York Times ("Beyond
Nature and Nurture, What Makes Us Ourselves?", 11/01/2020),
Robin Marantz Henig quotes Linden:
In ordinary English, "nurture" means how
your parents raise you, he writes. "But, of course,
that's only one small part of the nonhereditary
determination of traits." He much prefers the word
"experience", which encompasses a broad range of factors,
beginning in the womb and carrying through every memory, every meal, every scent, every romantic encounter, every
illness from before birth to the moment of death. He
admits that the phrase he prefers to "nature versus
nurture" doesn't roll as "trippingly off the tongue", but
he offers it as a better summary of how our individuality
really emerges: through "heredity interacting with
experience, filtered through the inherent randomness of
development".
Still,
a formula such as "genes and experience" doesn't come nearly
as "trippingly off the tongue" as "nature and nurture", so
stick with the latter. Just understand what "nurture"
really means. And be sure to include the "and":
it's not "nature versus nurture", or "nature or
nurture". As we'll see, nature and nurture work
together in the development of the individual.
(Cartoon by Barbara Smaller, New Yorker,
07/11-18/2022.)
Of course, neither physical nor mental
development stops at puberty. More recently, developmental
psychology has acquired an additional focus on development
across the life span, including adolescence and
adulthood, with a special interest in the elderly.
The Human Genome
The human
genetic endowment consists of 23 pairs of chromosomes.
Each chromosome contains a large number of genes.
Genes, in turn, are composed of deoxyribonucleic acid (DNA),
itself composed of a sequence of four chemical bases: adenine,
guanine, thymine, and cytosine (the letters A, G, T, and C
which you see in graphical descriptions of various genes).
Every gene is located at a particular place on a specific
chromosome. And since chromosomes come in pairs, so do genes.
For a nice historical
survey of genetics, see The Gene: An Intimate History
by Siddhartha Mukherjee (2016).
Corresponding pairs of genes contain
information about some characteristic, such as eye color, skin
pigmentation, etc. While some traits are determined by single
pairs of genes, others, such as height, are determined by
several genes acting together. In either case, genes come in two basic categories. Dominant genes (indicated by
upper-case letters) exert an effect on some trait regardless
of the other member. For example, in general the genes for
brown eyes, dark hair, thick lips, and dimples are dominant
genes. Recessive genes (indicated by lower-case letters)
affect a trait only if the other member is identical. For
example, in general the genes for blue eyes, baldness, red
hair, and straight noses are recessive. The entire set of
genes comprises the organism's genotype, or genetic
blueprint.
One of the most important technical
successes in biological research was the decoding of the human
genome -- determining the precise sequence of As, Gs, Ts, and
Cs that make us, as a species, different from all other
species. And along with advances in gene mapping, it has been
possible to determine specific genes -- or, more accurately,
specific alleles, or mutations of specific genes,
known as single-nucleotide polymorphisms, or SNPs
-- that put us at risk for various diseases, and which dispose
us to various personality traits. The most popular method for
this purpose is a genome-wide association study
(GWAS), introduced earlier in our discussion of sexual
orientation. In what is essentially a
multiple-regression analysis, employing very large samples,
GWAS examines the relationship between every allele and some
characteristic of interest -- say, heart disease or
intelligence. Of course, a lot of these correlations
will occur just by chance, but there are statistical
corrections for that.
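To make the logic concrete, here is a minimal sketch of the GWAS approach, using simulated data (the sample sizes, the effect size, and the simple Bonferroni correction are illustrative assumptions, not the procedure of any particular study): one simple regression per SNP, followed by a correction for the enormous number of tests.

```python
# A toy illustration of the GWAS logic described above, with simulated data:
# regress the trait on each SNP separately, then correct for the huge number
# of tests.  Not any published study's pipeline; all numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_snps = 5_000, 1_000          # real GWAS use ~1 million people and SNPs
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))   # 0/1/2 copies of an allele
trait = rng.normal(size=n_people)        # e.g., years of schooling (standardized)
trait += 0.15 * genotypes[:, 0]          # give SNP 0 a small true effect

p_values = np.empty(n_snps)
for j in range(n_snps):                  # one simple regression per SNP
    result = stats.linregress(genotypes[:, j], trait)
    p_values[j] = result.pvalue

alpha = 0.05 / n_snps                    # Bonferroni correction for multiple tests
hits = np.flatnonzero(p_values < alpha)  # SNPs that survive the correction
print(f"{len(hits)} SNP(s) significant after correction:", hits)
```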
Some sense of this situation is given by
the "genetic" autobiography (A Life Decoded, 2007)
published by Craig Venter, the "loser" (in 2001, to a
consortium of government-financed academic medical centers) in
the race to decode the human genome -- though, it must be
said, the description of the genome produced by Venter's group
is arguably superior to that produced by the
government-financed researchers. Anyway, Venter has organized
his autobiography around his own genome (which is what was
sequenced by his group in the race), which revealed a number
of such genes, including:
- on Chromosome 1, the gene TNFSF4, linked to heart
attacks;
- on Chromosome 4, the CLOCK gene, related to an evening
preference (i.e., a "night person" as opposed to a "day
person");
- on Chromosome 8, the gene CHRNA8, linked to tobacco
addiction;
- on Chromosome 9, DRD4, linked to novelty-seeking (Venter
is an inveterate surfer);
- on Chromosome 18, the gene APOE, linked to Alzheimer's
disease; and
- on the X Chromosome, the MAOA gene (about which more
later, in the lectures on Personality and Psychopathology),
which is linked to antisocial behavior and conduct
disorder (among genetics researchers, Venter has a
reputation as something of a "bad boy").
None of these genes means that Venter
is predestined for a heart attack or Alzheimer's disease, any
more than he was genetically predestined to be a surfer. It's
just that these genes are more common in people who have these
problems than in those who do not. They're risk factors, but
not an irrevocable sentence to heart disease or dementia.
Steven
Pinker, a distinguished cognitive psychologist and vigorous
proponent of evolutionary psychology (see below) went
through much the same exercise in "My Genome, My Self", an article in the New York Times Magazine (01/11/2009). While allowing that the DNA analyses of personal genomics will tell us a great deal about our health, and perhaps about our personalities, Pinker argues that they won't tell us much about our personal identities. He refers to personal genomics, of the
sort sold on the market by firms like 23andMe, as mostly
"recreational genomics", which might be fun (he declined to
learn his genetic risk for Alzheimer's Disease), but doesn't
yet, and maybe can't, tell us much about ourselves that we
don't already know. He writes:
Even when the effect of some gene is indubitable,
the sheer complexity of the self will mean that it will not
serve as an oracle on what the person will do.... Even
if personal genomics someday delivers a detailed printout of
psychological traits, it will probably not change everything,
or even most things. It will give us deeper insight about the
biological causes of individuality, and it may narrow the
guesswork in assessing individual cases. But the issues about
self and society that it brings into focus have always been
with us. We have always known that people are liable, to
varying degrees, to antisocial temptations and weakness of the
will. We have always known that people should be encouraged to
develop the parts of themselves that they can (“a man’s reach
should exceed his grasp”) but that it’s foolish to expect that
anyone can accomplish anything (“a man has got to know his
limitations”). And we know that holding people responsible for
their behavior will make it more likely that they behave
responsibly. “My genes made me do it” is no better an excuse
than “We’re depraved on account of we’re deprived.”
A prominent GWAS by Daniel Benjamin, a
behavioral economist, and his colleagues studied 1.1 million
people and identified 1,271 SNPs which were, taken
individually, significantly associated with educational
attainment measured in years of schooling (Lee et al., Nature
Genetics, 2018). Taken together, these variants
accounted for about 11-13% of the variance in educational
attainment. However, in order to maximize the likelihood
of revealing genetic correlates (essentially, by reducing
environmental "noise"), the study was confined to white
Americans of European descent. When the same analysis
was repeated in a sample of African-Americans, the same
genetic variants accounted for less than 2% of the variance in
educational attainment. None of these SNPs should be
thought of as dedicated to educational achievement, especially
in populations other than those in which the GWAS was conducted in the first place. For one thing, formal schooling arose too recently to be subject to genetic
selection. In fact, the actual function of these SNPs is
unknown, and it is likely that many of them control traits
whose association with educational attainment is rather
indirect. So (to take an example from a story in The
Economist, 03/31/2018), children who inherit weak
bladders from their parents may do poorly on timed
examinations. In any event, the fact that there are SNPs
associated with educational attainment, or any other
psychosocial characteristic, doesn't mean that these traits
are directly heritable -- or that focusing on genes is the
best way to understand them, much less improve them.
Remember the lessons of the search for the "gay gene",
recounted above.
Notice the historical progression
here. To begin with, the "intelligence gene" is the Holy
Grail of behavior genetics. Psychologists have been
interested in finding the genes that cause schizophrenia, or
depression, or some other form of mental illness, but the
search for the intelligence gene goes back to the 19th century
-- that is, when Sir Francis Galton made the observation that
educational achievement and financial success, both presumed
markers of intelligence, tended to run in families (in Hereditary
Genius, first published in 1869). You might think,
maybe, that in 19th century England rich families were more
likely to send their children to Oxford and Cambridge and to
get elected to Parliament or elevated to the House of Lords,
than poor ones. You might think that. But Galton
concluded that intelligence was hereditary, and was passed
from one generation to another through bloodlines.
Galton knew nothing of genes: Gregor Mendel's papers setting
out the principles of heredity had been published in an
obscure journal in 1865 and 1866, but only garnered serious
attention when they were discovered and republished in 1900;
and Wilhelm Johannsen didn't coin the word "gene" until
1905.
The molecular structure of DNA wasn't
decoded until 1953 by Watson and Crick (with not-so-little but
very much uncredited help from Rosalind Franklin). But
even the discovery of the "double helix" didn't advance our
understanding of the genetic basis of intelligence.
Because, for the most part, we didn't know how many genes we
had, or where particular genes were located on the 23 pairs of
chromosomes that make up the human genome. All we had
was evidence from twin studies, of the sort reviewed in the lectures
on Thinking, which clearly established the genetic
contribution to individual differences in intelligence --
though the same studies also established the importance of the
environment, particularly the nonshared environment.
Even twin studies can be
misleading. For example, one group of behavior
geneticists used variants on the twin-study method
(technically, the adoption-study method) to show that there is
a genetic contribution to the amount of time that young
children spend viewing television (Plomin et al., Psych.
Science, 1990). But let's be clear: there is no
"television-viewing gene". TV wasn't invented
until the 1920s, so there hasn't been enough time to evolve
such a gene. But there may be genetic contributions to
traits that influence television viewing (for example, the
tendency of some children to just sit around the house; or the
tendency of some parents to ignore their children). But
any such influence is going to be indirect. Moreover,
even in this study the heritability of television-viewing
dropped from roughly 50% at age 3 to less than 20% at age 5,
so there are clearly other things going on.
At first, the hypothesis was that there
was one, or only a few, genes that coded for intelligence, and
that these genetic determinants could be identified by
correlating specific genes with IQ scores (or some proxy for
IQ, like years of schooling). That didn't work
out. For example, Robert Plomin's group at the
University of Texas (Chorney et al., Psych. Sci.,
1998) reported that a specific gene known as IGF2R (for
insulin-like growth factor-2 receptor), located on Chromosome
6, was associated with high IQ scores. Plomin is a
highly regarded behavior geneticist, and justly so, and he
knows how to do these studies right, so before he submitted
his research for publication he replicated his finding in a
second sample. As it happens, I was the editor of Psychological
Science, the journal to which Plomin's research group
submitted their paper. I was skeptical of the finding,
and told Plomin so: although I recognize that there is a
genetic component to intelligence (the twin studies reviewed
in the lectures on Thinking
establish that definitively), these kinds of findings, which identify a specific gene with a specific psychological trait, rarely hold up when replicated in a new sample. Still, there was nothing wrong with the study, and I agreed to publish it -- though I told Plomin that I remained sceptical of these kinds of studies: I was a graduate student when the first "depression gene", or at least the chromosome on which it was located, was identified, and that finding did not survive attempts at replication. Neither, in the end, did the association between IGF2R and IQ.
It should also be remembered that there
is a great deal of "genetic" material on our chromosomes
besides genes, and the GWAS technique doesn't discriminate
between gene variants -- sequences of nucleotides that code
for some trait (like intelligence) or disease (like
schizophrenia) -- and SNPs, which also come in variants (hence
the term polymorphism), but which aren't actually
genes. Anyway, with new powerful computers, we can
correlate individual SNPs with traits or diseases, just as
earlier investigators did with genes themselves. And the
result is that, indeed, some SNPs are correlated with
some traits. But -- and this is a big but -- the
correlations are very rarely statistically significant.
For example, the presence of one SNP, known as "rs11584700"
(don't ask), adds two days to educational
attainment. However, if you aggregate across a couple
dozen (or hundred) nonsignificant correlations, the overall
correlation between the entire package of SNPs -- known as a polygenic
risk score -- can rise to statistical
significance. Even so, the correlations, while they may
be statistically significant because of the large numbers of
subjects involved, may be practically trivial. More
important, however, remember the lesson of IGF2R. An
array of SNPs may correlate with intelligence (or years of
schooling) in one sample, but this relationship may not
replicate in another sample.
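To illustrate how many individually trivial SNP effects can be aggregated into a polygenic score, here is a minimal sketch with simulated data (the sample sizes, number of SNPs, and effect sizes are invented for illustration; real analyses are far more elaborate): estimate a weight for each SNP in one sample, then sum the weighted allele counts in a second, independent sample and see how much variance the score explains there.

```python
# A toy sketch of a polygenic score, with simulated data (all numbers are
# illustrative assumptions, not values from any real study): estimate each
# SNP's tiny effect in a "discovery" sample, then sum the weighted allele
# counts into a single score and test it in an independent "target" sample.
import numpy as np

rng = np.random.default_rng(1)
n_snps = 500
true_effects = rng.normal(0, 0.02, n_snps)           # many minuscule true effects

def simulate(n_people):
    g = rng.binomial(2, 0.3, size=(n_people, n_snps))    # allele counts (0/1/2)
    y = g @ true_effects + rng.normal(0, 1, n_people)    # trait = genetics + noise
    return g, y

# Discovery sample: one simple-regression slope per SNP (most not significant).
g_disc, y_disc = simulate(20_000)
gc, yc = g_disc - g_disc.mean(axis=0), y_disc - y_disc.mean()
weights = (gc * yc[:, None]).sum(axis=0) / (gc ** 2).sum(axis=0)

# Target sample: the aggregate score can predict the trait even though the
# individual SNP effects are trivial -- but only to the extent it replicates.
g_targ, y_targ = simulate(5_000)
prs = g_targ @ weights                                # polygenic score = weighted sum
r = np.corrcoef(prs, y_targ)[0, 1]
print(f"Variance explained by the polygenic score: {r**2:.1%}")
```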
This is not to say that there are no
genetic correlates of intelligence (or even educational
attainment) or other socially significant traits. I'm
prepared to learn that there are. But the really important
questions for me, when looking at genetic correlations, are:
- How strong is the association? Is it statistically
significant?
- If significant, is it replicable?
- If replicable, is the association strong enough to be of
any practical significance?
- If so, what do you plan to do with this knowledge?
For an enthusiastic summary of the
prospects for using GWAS to identify the genetic
underpinnings of intelligence, and a defense of the search
for the genetic causes of psychosocial phenotypes, see The
Genetic Lottery: Why DNA Matters for Social Equality
by Kathryn Paige Harden (2022), who directs the Twin Project
at the University of Texas. Her book was given a
scathing review by M.W. Feldman and Jessica Riskin, two
Stanford biologists, in the New York Review of Books
("Why Biology is Not Destiny", 04/21/2022). Feldman
and Riskin are appropriately skeptical about GWAS, and
behavior genetics in general (although if they had taken
this course, they wouldn't be quite so skeptical about
psychologists' ability to measure personality traits, and
they'd know where openness to experience comes
from). Like good interactionists, F&R argue that
it's impossible to separate genetic and environmental
influences, "because the environment is in the genome and
the genome is in the environment". Harden replied in a
subsequent issue (06/09/2022), followed by a rejoinder by
Feldman and Riskin. The whole exchange is very
valuable for setting out the terms of the debate.
Genetic Sequencing and Genetic
Selection
Since the human genome was
sequenced, and as the costs of gene sequencing
have gone down, quite an industry has developed
around identifying "genes for" particular traits,
and then offering to sequence the genomes of
ordinary people -- not just the Craig Venters of
the world -- to determine whether they possess any
of these genes.
The applications in the case of
genetic predispositions to disease are obvious --
though, it's not at all clear that people want to
know whether they have a genetic predisposition to
a disease that may not be curable or preventable.
- Research has identified a genetic mutation
for Huntington's disease, a presently
incurable brain disorder, on Chromosome 4, but
many people with a family history of
Huntington's disease do not want to be told
whether they have it (the folk singer Arlo
Guthrie is a famous example).
- Two genes, known as BRCA1 and BRCA2,
substantially increase a woman's risk for
breast cancer. Some women with these
genes, who also have a family history of
breast cancer, have opted for radical
preventive treatments such as double
mastectomy.
On the other hand, there are some uses of
genetic testing that are not necessarily so
beneficial.
- In certain cultures which value male
children more highly than female children,
genetic testing for biological sex may lead
some pregnant women to abort female fetuses,
leading to a gender imbalance. For
example, even without genetic testing, both
China and India have a clear problem with
"missing women" commonly attributed to
sex-selective abortion.
And there are other kinds of genetic testing
which also may not be to society's -- or the
individual's -- advantage.
- A variant of the ACTN3 gene on chromosome 11
is present in a high proportion of athletes
who compete at elite levels in "speed" and
"power" spots (as opposed to "endurance"
sports. Almost as soon as this finding
was announced, a commercial enterprise offered
a genetic test which, for $149, would indicate
whether a child had such a gene ("Born to Run?
Little Ones Get Test for Sports Gene" by
Juliet Macur, New York Times,
11/30/2010). The danger is that children
found to have the gene will be tracked into
athletics, when they'd rather be librarians (or, perhaps, librarians who run 10K races recreationally); and other children will be
discouraged from sports, even though they'd be
able to have careers as elite athletes.
The point is that genes aren't destiny, except
maybe through the self-fulfilling
prophecy. So maybe we shouldn't behave as
though they were.
From Genotype to Phenotype
Of course, genes don't act in isolation
to determine various heritable traits. One's genetic endowment
interacts with environmental factors to produce a phenotype,
or what the organism actually looks like. The genotype
represents the individual's biological potential, which is
actualized within a particular environmental context. The
environment can be further classified as prenatal (the
environment in the womb during gestation), perinatal (the
environment around the time of birth), and postnatal (the
environment present after birth, and throughout the life
course until death).
Because of the role of the environment,
phenotypes are not necessarily equivalent to genotypes. For
example, two individuals may have the same phenotype but
different genotypes. Thus, of two brown-eyed individuals, one
might have two dominant genes for brown eyes (BB), while
another might have one dominant gene for brown eyes and one
recessive gene for blue eyes (Bb). Similarly, two individuals
may have the same genotype, but different phenotypes. For
example, two individuals may have the same dominant genes for
dimples, (DD), but one has his or her dimples removed by
plastic surgery.
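As a minimal sketch of the dominant/recessive logic just described (using the textbook simplification that eye color depends on a single gene pair, with B for the dominant brown allele and b for the recessive blue allele; real eye color involves several genes):

```python
# A minimal sketch of the dominant/recessive mapping from genotype to phenotype,
# using the single-gene eye-color simplification from the text (B = dominant
# brown allele, b = recessive blue allele).
def eye_color_phenotype(genotype: str) -> str:
    """Return the phenotype for a two-allele genotype such as 'BB', 'Bb', or 'bb'."""
    # A single dominant allele is enough to express the dominant trait;
    # the recessive trait appears only when both alleles are recessive.
    return "brown" if "B" in genotype else "blue"

# 'BB' and 'Bb' are different genotypes but yield the same phenotype.
for g in ("BB", "Bb", "bb"):
    print(g, "->", eye_color_phenotype(g))
```

The genotypes BB and Bb map onto the same brown-eyed phenotype, which is exactly why the phenotype alone cannot tell you the underlying genotype.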
The chromosomes are found in the
nucleus of each cell in the human body, except the sperm cells
of the male and the egg cells of the female, which contain
only one element of each pair. At fertilization, each element
contributed by the male pairs up with the corresponding
element contributed by the female to form a single cell, or zygote.
At this point, cell division begins. The first few divisions
form a larger structure, the blastocyst. After six
days, the zygote is implanted in the uterus, at which point we
speak of an embryo.
Personality, Behavior, and the Human Genome Project
Evidence of a genetic contribution to
individual differences (see below), coupled with the
announcement of the decoding of the human genome in 2001,
has led some behavior geneticists to suggest that we will
soon be able to identify the genetic sources of how we
think, feel, and behave. However, there are reasons for
thinking that behavior genetics will not solve the problem
of the origins of mind and behavior.
- In the first place, there's no inheritance of acquired
characteristics. So while there might be a genetic
contribution to how we think, there can't be a
genetic contribution to what we think.
- In any event, there aren't enough genes. Before 2001, it
was commonly estimated that there were approximately
100,000 genes in the human genome. The reasoning was that
humans are so complex, at the very apex of animal
evolution, that we ought to have correspondingly many
genes. However, when genetic scientists announced the
provisional decoding of the human genome in 2001, they
were surprised to find that the human genome contains only
about 30,000 genes -- perhaps as few as 15,000 genes,
perhaps as many as 45,000, but still far too few to
account for much important variance in human experience,
thought, and action. As of 2024, the count stood at about
20,000 protein-coding genes, of which about 6,000 had been
assigned a function: these are the genes for some
trait (e.g., Gregor Mendel's round or wrinkled pea-seeds;
blue or brown eyes; breast cancer; or, closer to home, IQ
or schizophrenia or Neuroticism). Another 20,000
genes do not code for proteins. However, there are
still a couple of gaps in our knowledge of the human
genetic code, and by the time these are closed, the number
of human genes may have risen; or it may well have fallen
again!
- The situation got worse in 2004, when a revised analysis
(reported in Nature) lowered the number of human
genes to 20-25,000.
- By comparison, the spotted green pufferfish also has
about 20-25,000 genes.
- On the lower end of the spectrum, the worm C. elegans has about 20,000 genes, while the fruit
fly has 14,000.
- On the higher end of the scale, it was reported in
2002 that the genome for rice may have more than
50,000 genes (specifically, the japonica strain
may have 50,000 genes, while the indica variety
may have 55,600 genes). It's been estimated that
humans have about 24% of our genes in common with the sativa
variety of rice. It's also been reported that 26
different varieties of maize (corn) average more than
103,000 genes.
If it takes 50,000 genes to make a crummy grain of
rice, and humans have only about 20,000 genes, then there's
something else going on. For example, through alternative
splicing, a single gene can produce several different
kinds of proteins -- and it seems to be the case that
alternative splicing is found more frequently in the human
genome than in that of other animals.
Alternatively, the human genome seems to contain more
sophisticated regulatory genes, which control the
workings of other genes.
- Before anyone makes large claims about the genetic
underpinnings of human thought and action, we're going
to need a much better account of how genes actually
work.
- Of course, it's not just the sheer number of genes, but the genetic code -- the sequence of bases A, G, T, and C -- that's really important. Interestingly, in 2002 the mouse genome was decoded, revealing about 30,000 genes. Moreover, it turned out that about 99% of these genes were shared with the human genome (the corresponding figure is about 99.5% in common for humans and our closest primate relative, the chimpanzee). This extremely high degree of genetic similarity makes sense from an evolutionary point of view: after all, mice, chimpanzees, and humans are descended from a common mammalian ancestor who lived about 75 million years ago. And while it means that the mouse is an excellent model for biological research geared to understanding, treating, and preventing human disease, it is problematic for those who argue that genes are important determinants of human behavior. That leaves only about 300 genes to make the difference between mouse and human, and only about 150 genes to make the difference between chimpanzees and humans. Not enough genes.
- Perhaps the action is not in the genes, but rather in their constituent base pairs: indica rice has about
466 million base pairs, and japonica about 420
million, compared to 3 billion for humans (and about 2.5
billion for mice).
Why are there 3 billion base pairs, but only
about 20,000 genes? It turns out that most DNA is what
is known as junk DNA -- that is, DNA that doesn't
create proteins. When the human genome was finally
sequenced in 2001, researchers came to the conclusion that
fully 98% of the human genome was "junk" in this
sense. But it's not actually "junk": "junk" DNA
performs some functions.
- Some of it forms telomeres, structures that "tie
off" the ends of genes, like the aglets or caps that
prevent ropes (or shoelaces) from fraying at their
ends. Interestingly, telomeres get shorter as a
person ages. And even more interestingly, from a
psychological point of view, telomeres are also shortened
by exposure to stress (see the work of UC San Francisco's
Elissa Epel).
So, stress does really seem to cause premature aging!
- Junk DNA also codes for ribonucleic acid,
or RNA.
- And junk DNA also appears to influence gene expression,
which may explain some of the differences (for example, in
susceptibility to genetically inherited disease) between
genetically identical twins.
The paucity of human genes, and the vast
amount of "junk DNA" in the genome, has led some researchers
to conclude that the source of genetic action may not lie
so much in the genes themselves, but in this other material
instead. Maybe. Or maybe DNA just isn't that important
for behavior in complex, sophisticated organisms like
humans. Maybe culture is more important.
Something to think about.
For an extended discussion of junk DNA and
its functions, see Junk DNA: A Journey Through
the Dark Matter of the Genome (2015) by Nessa
Carey, or the excerpts printed in Natural History magazine (March, April, and May 2015). Carey also
wrote The Epigenetics Revolution: How Modern Biology
is Rewriting Our Understanding of Genetics, Disease, and
Inheritance (2012), also excerpted in Natural History (April and May 2012).
In any event, for all the talk about the
genetic underpinnings of individual differences, as things
stand now, behavior geneticists don't have the foggiest
idea what they are.
"In the god-drenched eras of the past there
was a tendency to attribute a variety of everyday
phenomena to divine intervention, and each deity in a vast
pantheon was charged with responsibility for a specific
activity -- war, drunkenness, lust, and so on. 'How silly
and primitive that all was,' the writer Louis Menand has
observed. In our own period what Menand discerns as a
secular 'new polytheism' is based on genes -- the
alcoholism gene, the laziness gene, the schizophrenia
gene.
Now we explain things by reference to an
abbreviated SLC6A4 gene on chromosome 17q12, and feel much
superior for it. But there is not, if you think about it,
that much difference between saying 'The gods are angry'
and saying 'He has the gene for anger.' Both are ways of
attributing a matter of personal agency to some fateful
and mysterious impersonal power."
---Cullen Murphy in "The Path of
Brighteousness" (Atlantic Monthly, 11/03)
Embryological Development
What are the mechanisms of
embryological development? At one point, it was thought that
the individual possesses adult form from the very beginning --
that is, that the embryo is a kind of homunculus
(little man), and that the embryo simply grew in size. This
view of development is obviously incorrect. But that didn't
stop people from seeing adult forms in embryos!
In the 19th century, with the adoption
of the theory of evolution, the homunculus view was gradually
replaced by the recapitulation view, based on
Haeckel's biogenetic law that
"Ontogeny recapitulates
phylogeny".
What Haeckel meant was that the
development of the individual replicates the stages of
evolution: that the juvenile stage of human development
repeats the adult stages of our evolutionary ancestors. Thus,
it was thought, the human embryo first looks like an adult
fish; later, it looks like adult amphibians, reptiles, birds,
mammals, and nonhuman primates. This view of development is
also incorrect, but it took a long time for people to figure
this out.
The current view of development is
based on von Baer's principle of differentiation.
According to this rule, development proceeds from the general
to the specific. In the early stages of its development, every
organism is homogeneous and coarsely structured. But it
carries the potential for later structure. In later stages of
development, the organism is heterogeneous and finely built --
it more closely represents actualized potential. Thus, the
human embryo doesn't look like an adult fish. But at some
point, human and fish embryos look very much alike. These
common structures later differentiate into fish and humans. A
good example of the differentiation principle is the
development of the human reproductive anatomy, which we'll
discuss later in the course.
Ontogeny and Phylogeny
For an engaging history of the debate between
recapitulation and differentiation views of development, see
Ontogeny and Phylogeny (1977) by S.J. Gould.
Neural Development in the Fetus, Infant, and
Child
At the end of the second week of
gestation, the embryo is characterized by a primitive
streak which will develop into the spinal cord. By the
end of the fourth week, somites develop, which will
become the vertebrae surrounding the spinal cord -- the
characteristic that differentiates vertebrates from
invertebrates.
Bodily asymmetries seem to have their origins
in events occurring at an early stage of embryological
development. Studies of mouse embryos by Shigenori Nonaka
and his colleagues, published in 2002, implicate a structure
known as the node, which contains a number of cilia, or
hairlike structures. The motion of the cilia induces fluids
to move over the embryo from right to left. These fluids
contain hormones and other chemicals that control
development, and thus cause the heart to grow on the left,
and the liver and appendix on the right -- at least for
99.99% of people. On rare occasions, in a condition known as situs inversus, this flow is reversed, so that the fluids move from left to right, resulting in a reversal of
the relative positions of the internal organs -- heart on
the right, liver and appendix on the left. It is possible
that this process is responsible for inducing the
hemispheric asymmetries associated with cerebral
lateralization -- although something else must also be
involved, given that the incidence of right-handedness is
far greater than 0.01%.
- In the second month, the eye buds move to the
front of the head, and the limbs, fingers, and toes become
defined. The internal organs also begin to develop,
including the four-chambered heart -- the first
characteristic that differentiates mammals from
non-mammals among the vertebrates. It's at this point that
the embryo changes status, and is called a fetus.
- The development of the nervous system begins in the
primitive streak of the embryo, which gradually forms an
open neural tube. The neural tube closes after 22
days, and brain development begins.
- In the 11th week of gestation the cerebral cortex
becomes clearly visible. The cortex continues to grow,
forming the folds and fissures that permit a very large
brain mass to fit into a relatively small brain case.
- In the 21st week of gestation synapses begin to
form. Synaptic transmission is the mechanism by which the
nervous system operates: there is no electrical activity
without synapses. So, before this time the fetal brain has
not really been functioning.
- In the 24th week myelinization begins. The myelin
sheath provides a kind of insulation on the axons of
neurons, and regulates the speed at which the neural
impulse travels down the axon from the cell body to the
terminal fibers.
All three processes -- cortical
development, synaptic development, and myelinization --
continue for the rest of fetal development, and even after
birth. In fact, myelinization is not complete until late in
childhood.
- As far as the cerebral cortex is concerned, we're
probably born with all the neurons we're going to get. The
major change postnatally is in the number, or the density,
of interconnections among neurons -- a process
called synaptogenesis (as opposed to neurogenesis),
by which the axons of presynaptic neurons increasingly
link to the dendrites of postsynaptic neurons. Viewed at
the neuronal level, the big effect of normal development
is the proliferation of neural interconnections, including
an increase in dendritic arborization (like a tree
sprouting branches) and the extension of axons (so as to
make contact with more dendrites).
- At the same time, but at a different rate, there is also
some pruning, or elimination, of synapses -- a
process by which neural connectivity is fine-tuned. So,
for example, early in development there is considerable
overlap in the projections of neurons from the two eyes
into the primary visual cortex; but after pruning, the two
eyes project largely to two quite different segments of
cortex, known as "ocular dominance columns". This pruning
can continue well into childhood and adolescence.
- Neurons die. Fortunately, we are born with a lot of
neurons, and unless neuronal death is accelerated by brain
damage or something like Alzheimer's disease, neurons die
relatively slowly. When neurons die, their connections
obviously disappear with them.
- The connections between neurons can also be strengthened
(or weakened) by learning. Think of long-term
potentiation. But LTP doesn't change the number of
neural interconnections. What changes is the likelihood of
synaptic transmission across a synapse that has already
been established.
- LTP is an example of a broader phenomenon called functional
plasticity.
- Violinists, who finger the strings with their left
hands, show much larger cortical area in that portion of
(right) parietal cortex that controls finger movements
of the left hand, compared to non-musicians.
- If one finger of a hand is amputated, that portion of
somatosensory cortex which would ordinarily receive
input from that finger obviously doesn't do so any
longer. But what can happen is that the somatosensory
cortex can reorganize itself, so that this portion of
the brain can now receive stimulation from fingers that
are adjacent to the amputated one.
- And if two fingers are sewn together, so that when one
moves the other one does also, the areas of
somatosensory cortex that would be devoted to each
finger will now overlap.
- In each case, though, the physical connections between
neurons -- the number of terminal fibers synapsing on
dendrites -- don't appear to change. Much as with LTP,
what changes is the likelihood of synaptic transmission
across synapses that have already been established.
- Finally, there's the problem of
neurogenesis.
Traditional neuroscientific doctrine has held that
new neurons can regenerate in the peripheral nervous
system (as, for example, when a severed limb has
been reattached), but not in the central nervous
system (as, for example, paraplegia following
spinal cord injury). However, increasing
evidence has been obtained for neurogenesis in the
central nervous system as well. This
research is highly controversial (though I,
personally, am prepared to believe it is true),
but if confirmed would provide the basis for
experimental "stem cell" therapies for
spinal-cord injuries (think of Christopher
Reeve). If naturally occurring
neurogenesis occurs at a rate greater than the
rate of natural neuronal death, and if these new
neurons could actually be integrated into
pre-existing neural networks, that would supply
yet an additional mechanism for a net increase
in new physical interconnections between neurons
-- but
so far the evidence in both respects is
ambiguous.
Studies of premature infants indicate
that an EEG signal can be recorded at about 25 weeks of
gestation. At this point, there is evidence of the first
organized electrical activity in the brain. Thus, somewhere
between the 6th and 8th month of gestation the fetal brain
becomes recognizably human. There are lots of the folds and
fissures that characterize the human cerebral cortex. And
there is some evidence for hemispheric specialization:
premature infants respond more to speech presented to the left
hemisphere, and more to music presented to the right. At this
point, in the 3rd trimester of gestation, the brain clearly
differentiates humans from non-humans.
Interestingly, survivability takes a
big jump at this point as well. If born before about 24 weeks
of gestation, the infant has little chance of survival, and
then only with artificial life supports; if born after 26
weeks, the chances of survival are very good. If born at this
point, the human neonate clearly has human physical
characteristics, and human mental capacities. In other words,
by some accounts, by this point the fetus arguably has personhood,
because it has actualized its potential to become human. At
this point, it makes sense to begin to talk about personality
-- how the person actualizes his or her potential for
individuality.
What are the implications of fetal
neural development for the mind and behavior of the
fetus? Here are some things we know.
First, the fetus begins to move in the uterus as early as
seven weeks of gestation. While some of this is random,
other movements seem to be coordinated. For example, the
fetus will stick its hands and feet in its mouth, but it will
also open its mouth before bringing the limb toward it.
- Taste buds form on the fetal tongue by the 15th week,
and olfactory cells by the 24th week. Newborns
prefer flavors and odors that they were exposed to in the
womb, which suggests that some sensory-perceptual learning
is possible by then.
- By about the 24th-27th week, fetuses can pick up on
auditory stimulation. Again, newborns respond to
sounds and rhythms, including individual syllables and
words, that they were exposed to in utero --
another example of fetal learning.
- The fetus's eyes open about the 28th week of gestation,
but there is no evidence that it "sees" anything -- it's
pretty dark in there, although some light does filter
through the mother's abdominal wall, influencing the
development of neurons and blood vessels in the
eye.
The importance of early experience in
neural development cannot be overemphasized.
Socioeconomic status, nurturance at age 4 (as opposed to
neglect, even if benign), the number of words spoken to the
infant, and other environmental factors all are correlated
with various measures of brain development.
The Facts of Life
Much of this information has been drawn from
The Facts of Life: Science and the Abortion Controversy
(1992) by H.J. Morowitz and J.S. Trefil. Advances in medical
technology may make it possible for fetuses to live outside
the womb even at a very early stage of gestation, but no
advance in medical technology will change the basic course
of fetal development, as outlined here and presented in
greater detail in the book. See also Ourselves Unborn: A
History of the Fetus in Modern America by Sara Dubow
(2010).
Nature and Nurture in Personality Development
So where does personality come from? We
have already seen part of the answer: Personality is not a
given, fixed once and for all time, whether by the genes or
through early experience. Rather, personality emerges out of
the interaction between the person and the environment, and is
continuously constructed and reconstructed through social
interaction. A major theme of this interactionist view of
personality is that the person is a part of his or her own
environment, shaping the environment through evocation,
selection, behavioral manipulation, and cognitive
transformation. In the same way, development is not just
something that happens to the individual. Instead, the
individual is an active force in his or her own development.
The Developmental Corollary to the Doctrine of
Interactionism
Just as the person is a part of his or her
own environment, the child is an agent of his or her own
development.
The
development of the individual begins with his or her genetic
endowment, but genes do not act in isolation. The organism's
genotype, or biological potential (sometimes referred to as
the individual's "genetic blueprint", interacts with the
environment to produce the organism's phenotype, or what the
organism actually "looks like" -- psychologically as well as
morphologically.
The individual's phenotype
is his or her genotype actualized within a particular
environmental context:
- Two individuals can have different genotypes but the
same phenotypes. For example, given two brown-eyed
individuals, one person might have two dominant genes for
brown eyes (blue eyes are recessive), while the other
might have one dominant gene for brown eyes, and one
recessive gene for blue eyes.
- Two individuals can have the same genotype but different
phenotypes. For example, of two individuals who both possess two dominant genes for dimples, one individual
might have cosmetic surgery to remove them, but the other
might not.
More broadly, it is now known that
genes are turned "on" and "off" by environmental events.
- Sometimes, the "environment" is body tissue immediately
surrounding the gene. Except for sperm and egg cells,
every cell in the body contains the same genes. But the
gene that controls the production of insulin only does so
when its surrounding cell is located in the pancreas. The
same gene, in a cell located in the heart, doesn't produce
insulin (though it may well do something else). This fact
is the basis of gene therapy: a gene artificially inserted
into one part of the body will produce a specific set of
proteins that may well repair some deficiency, while the
same gene inserted into another part of the body may not
have any effect at all.
- Sometimes, the "environment" is the world outside the
organism. A gene known as BDNF (brain-derived neurotrophic
factor), which plays an important role in the development of the
visual cortex, is "turned on" by neural signals resulting
from exposure to light. Infant mice exposed to light
develop normal visual function, but genetically identical
mice raised in darkness are blind.
Nature via Nurture?
For some genetic biologists, facts like these
resolve the nature-nurture debate: because genes respond to
experience, "nature" exerts its effects via "nurture". This
is the argument of Nature via Nurture: Genes,
Experience, and What Makes Us Human (2003) by Matt
Ridley. As Ridley writes:
Genes are not puppet masters or blueprints.
Nor are they just the carriers of heredity. They are
active during life; they switch on and off; they respond
to the environment. They may direct the construction of
the body and brain in the womb, but then they set about
dismantling and rebuilding what they have made almost at
once -- in response to experience (quoted by H. Allen Orr,
"What's Not in Your Genes",New York Review of Books,
08/14/03).
The point is correct so far as it goes: genes
don't act in isolation, but rather in interaction with the
environment -- whether that is the environment of the
pancreas or the environment of a lighted room.
But this doesn't solve the nature-nurture
debate from a psychological point of view, because
psychologists are not particularly interested in the
physical environment. Or, put more precisely, we are
interested in the effects of the physical environment, but
we are even more interested in the organism's mental
representation of the environment -- the meaning that
we give to environmental events by virtue of such cognitive
processes as perception, thought, and language.
- When Ridley talks about experience, he generally means
the physical features of an environmental event.
- When psychologists talk about experience, they generally
mean the semantic features of an environmental
event -- how that event fits into a pattern of beliefs,
and arouses feelings and goals.
As Orr notes, Ridley's resolution of the nature-nurture
argument entails a thoroughgoing reductionism, in which the
"environment" is reduced to physical events (such as the
presence of light) interacting with physical entities (such as
the BDNF gene). But psychology cannot remain psychology and
participate in such a reductionist enterprise, because its
preferred mode of explanation is at the level of the
individual's mental state.
For a psychologist, "nurture" means the meaning
of an organism's experiences. And for a psychologist, the
nature-nurture argument has to be resolved in a manner that
preserves meaning intact.
The point is that genes
act jointly with environments to produce phenotypes. These
environments fall into three broad categories:
- prenatal, meaning the intrauterine environment of
the fetus during gestation;
- perinatal, referring to environmental conditions
surrounding the time of birth, including events occurring
during labor and immediately after parturition; and
- postnatal, including everything that occurs after
birth, throughout the course of the individual's life.
In light of the relation between
genotype and phenotype, the question about "nature or nurture"
is not whether some physical, mental, or behavioral trait is
inherited or acquired. Better questions are:
- What is the relative importance of nature and nurture?
- Or even better, How do nature and nurture interact?
The Twin-Study Method
Many basic questions of nature and
nurture in personality can be addressed with the techniques of
behavior genetics, which analyze the origins of psychological
characteristics. Perhaps the most interesting outcome of this
research is that, while initially intended to shed light on the
role of genetic factors in personality development, these
behavior-genetic analyses not only show a clear role for genetic
determinants of personality, but also reveal a clear role for the
environment.
The most popular method in
behavior genetics is the twin study -- which, as its name
implies, compares two kinds of twins in terms of similarity in
personality:
- Monozygotic (MZ or identical) twins are
the product of a single egg that has been fertilized by a single
sperm, but which subsequently splits into two embryos --
thus yielding two individuals who are genetically
identical.
- Actually, MZ twins aren't precisely identical.
Research by Dumanski and Bruder (Am. J. Hum Gen
2008) indicates that even MZ twins might differ in the
number of genes, or in the number of copies of genes.
Moreover, failures to repair breaks in genes can occur,
resulting in the emergence of further genetic
differences over the individuals' lifetimes. This
discovery may have consequences for the determination of
environmental contributions to variance, detailed below.
But for most practical purposes, the formulas discussed
below provide a reasonable first approximation.
- Dizygotic (DZ or fraternal) twins occur
when two different eggs are fertilized by two different
sperm -- thus yielding individuals who have only about 50%
of their genes in common.
A qualification is in order. A vast
proportion of the human genome, about 90%, is the same for
all human individuals -- it's what makes us human, as
opposed to chimpanzees or some other kind of organism. Only
about 10% of the human genome actually varies from one
individual to another. So, when we speak of DZ twins having
"50%" of their genes in common, we really mean that they
share 50% of that 10%. And when we speak of unrelated
individuals having "no" genes in common, we really mean that
they don't share any more of that 10% than we'd expect by
chance.
Of course, you could compare triplets,
quadruplets, and the like as well, but twins are much more
convenient because they occur much more frequently in the
population. Regardless of twins or triplets or whatever, the
logic of the twin study is simple: To the extent that a trait
is inherited, we would expect MZ twins to be more alike on
that trait than DZ twins.
Born in Ontario
in 1934, the Dionne Quintuplets (all girls, all genetically
identical) were the first quintuplets to live past
infancy. Their parents already had four children, and
the birth of five more was completely unexpected. The
family's financial straits led their parents to sign over
custody of the girls to the Canadian Red Cross, which built
a special hospital and "observatory" for them across the
street from their family home; their parents also agreed to
display them at the Chicago World's Fair. "Quintland"
actually became a tourist attraction, complete with bumper
stickers. All of this was in an attempt to protect the
children -- and of course it misfired badly, creating, among
other things, a sharp division between the quints on the one
hand, and their parents and four older siblings on the
other. Their story has been told in many books: the
most recent, The Miracle and Tragedy of the Dionne
Quintuplets by Sarah Miller (2019), traces their
lives. Spoiler alert: the tragedy is that they were
exploited anyway; the miracle is that each girl grew up with
her own individual personality and interests.
In 2009, the
Crouch quadruplets, Ray, Kenny, Carol, and Martina, all
received offers of early admission to Yale ("Boola Boola,
Boola Boola: Yale Says Yes, 4 Times" by Jacques Steinberg, New
York Times, 12/19/2009). Now you don't have to
be twins to share such outcomes (and the Crouch quadruplets
obviously aren't identical quadruplets
anyway!). Consider The 5 Browns, sibling pianists from
Utah, all of whom, successively, were admitted to study at
Juilliard. How much of these outcomes is due to shared
genetic potential? How much to shared
environment? How much to luck and chance? That's
why we do twin and family studies.
The
April 2018 issue of National Geographic magazine,
devoted to race, featured Marcia and Millie Biggs,
11-year-old fraternal twin sisters living in England (shown
here with their father, Michael; you'll see their photo from
the magazine cover later in these lectures). Their
mother, Amanda Wanklin, calls them her "one in a million
miracle". The girls' differences in physical
appearance are due entirely to the vicissitudes of genetic
chance. But while Millie is a "girlie" girl, Marcia is
more of a "tomboy" (those are Marcia's words, not
mine). Where those psychological differences come from
is a much more complex, and much more interesting, story --
as we'll see in what follows.
Twins in
space! Einstein's theory of special relativity
predicts that a twin traveling through space at high speed
will age less rapidly than his or her earth-bound
counterpart. That hypothesis hasn't been tested yet,
because we don't have the ability to put a twin in a
spacecraft that travels at close to light-speed. But
still, there are reasons to think that space travel,
including exposure to microgravity and ionizing radiation,
not to mention the stress and absence of a circadian clock,
might have substantial physiological and psychological
effects. Taking advantage of the presence of two
identical twins, Scott and Mark Kelly, on the roster of
American astronauts, the National Aeronautics and Space
Administration (NASA) subjected the pair to a battery of
physiological and psychological tests before, during, and
after one of them (Scott) completed a 1-year mission aboard
the International Space Station (ISS). The results,
reported in 2019 (Garrett-Bakelman et al., Science
04/12/2019) did indeed uncover a number of physiological
changes induced by a year in space. Interestingly,
Scott's telomeres -- portions of the chromosome that shorten
with age -- actually did lengthen during spaceflight: score
one, maybe, for Einstein. Psychologically, frankly,
the study was sort of impoverished, but revealed no
significant decrements in cognitive speed, accuracy, or
efficiency (speed-accuracy trade-off) inflight, compared to
pre-flight; post-flight, however, there was some diminution
in performance.
As far as personality goes, the usual technique
in twin studies is to administer some personality inventory,
like the MMPI or CPI or NEO-PI to a large sample of MZ and DZ
twins, thus obtaining scores representing each individual's
standing on each personality trait measured by the inventory.
Then we measure the similarity of the twins on each trait. The
most common measure of similarity is the correlation
coefficient, which summarizes the direction and strength
of the relationship between two variables -- for example,
between extraversion in one twin and extraversion in the
other.
- The correlation is positive if one individual has a high
score and his twin does too.
- The correlation is negative if one individual has a high
score and his twin has a low score.
- Correlations close to +1.0 or -1.0 indicate a strong
relationship, positive or negative.
- Correlations close to 0.0 indicate little or no
similarity between the twins.
If we assume that a
personality trait (or a physical trait like eye color, for
that matter) is solely determined by the genes, and the
environment has no effect, we would expect the following
pattern of correlations:
- For MZ twins, r = +1.0: the twins are genetically
identical, and thus identical in personality.
- For DZ twins, r = +0.50: because there is some
degree of genetic similarity between the twins, we would
expect some degree of similarity in personality as well.
- For genetically unrelated individuals, r = 0.0:
with no genetic overlap, there should be no similarity in
personality.
An alternative measure of similarity is
the concordance rate. Assuming that a person either
has a trait or does not, on the hypothesis of exclusively
genetic determination we would expect a concordance rate of
100% for MZ twins, and a concordance rate of about 50% for DZ
twins (we will meet up with concordance rates again in the
lectures on Personality and Psychopathology, when we discuss
the origins of mental illness).
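To make the idea concrete, here is a minimal Python sketch of one common way to compute a concordance rate (pairwise concordance): among twin pairs in which at least one member has the trait, the proportion in which both members have it. The function name and the data are made up for illustration, not taken from any study.

```python
def pairwise_concordance(pairs):
    """Pairwise concordance: among twin pairs where at least one member has the
    trait, the proportion in which both members have it.  Each pair is a tuple
    of booleans (has_trait_twin1, has_trait_twin2)."""
    affected = [p for p in pairs if p[0] or p[1]]
    concordant = [p for p in affected if p[0] and p[1]]
    return len(concordant) / len(affected)

# Hypothetical example: 10 MZ pairs, 8 concordant for some trait, 2 discordant.
mz_pairs = [(True, True)] * 8 + [(True, False)] * 2
print(pairwise_concordance(mz_pairs))   # -> 0.8, i.e., 80% concordance
```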
More generally, to the extent that a
trait is inherited, we expect that MZ twins will be more
similar to each other than DZ twins -- regardless of whether
similarity is measured by the correlation coefficient or the
concordance rate.
Once in a while, you'll read some
extraordinary story about identical twins, separated at birth,
who turn out remarkably similar as adults. For examples,
see Born Together -- Reared Apart: The Landmark
Minnesota Twin Study by Nancy L. Segal (2012). For
example, one of the pairs of subjects in the Minnesota study
was separated at birth, but both were named "Jim" by their
adoptive parents, both married women named "Linda", both
subsequently divorced, and both later married women named
"Betty". Both were chain-smokers, both drove Chevrolet
cars, both were employed as deputy sheriffs, and both
preferred to vacation at the same beach in Florida.
These coincidences are fun to read about, but let's be clear:
there's no gene for driving Chevrolets, or for marrying women
named "Linda". Genes operate at an entirely different
level.
For more on twins, especially
identical twins, see How To Be Multiple: The Philosophy
of Twins (2024) by the philosopher Helena de Bres,
herself a twin, reviewed by Parul Sehgal in "Double Vision",
New Yorker, 01/29/2024, from which the following
quotations are taken. Also Twinkind: The Singular
Significance of Twins (2024) by William Viney (also a
twin, and also mentioned by Sehgal). Sehgal writes:
"De Bres invokes twins from life and legend... to examine
how multiples complicate our notions of personhood,
attachment, and agency. Twins have been critical to our
understanding of ourselves,[de Bres] argues.... And
they continue to unsettle our notions about where bodies end
and begin, about whether personalities, even fates, are
forged or found.... For de Bres, to be a twin was to
be seen. It was, indeed, the social currency she
possessed...; how, in school, being a twin meant that
everyone knew who they were ('though not necessarily who we
each were'), giving them 'a lifetime backstage pass
to semicoolness'. De Bres herself writes: "We
identical twins, then, are tricky, disruptive, even
seditious creatures. We are the perfect
crime. Most people run into us only occasionally, but
the experience of doing so, or the simple idea of twins, can
enflame broader anxieties about the fragility of everyone's
capacity to identify anyone."
Genes, Environments, and the "Big Five"
In fact, a twin
study of the "Big Five" personality traits by Loehlin and his
colleagues (1992) showed that for each dimension of the Big
Five, MZ twins were more alike than DZ twins. Studies using
other personality inventories, such as the MMPI or the CPI,
have yielded similar sorts of findings. Taken together, this
body of research provides prima facie evidence for a
genetic contribution to individual differences in
personality. But genes aren't the only forces determining
individual differences in personality. If they were, then:
- the MZ correlations would be a perfect 1.0 and
- the DZ correlations would be 0.50.
Physical characteristics like eye color
and hair texture may come close to these values. Height and
weight show high MZ correlations, but even these aren't
perfectly correlated. The reason is that genotype alone is
never sufficient to determine phenotype. Phenotypes always
result from the interaction of genotypes with environmental
factors. The typical MZ correlations for physical traits range
upward from 0.50, suggesting high if not perfect
heritabilities. By contrast, the typical MZ correlations for
psychological traits range between 0.25 and 0.50 -- suggesting
that genetic influences on personality are relatively weak,
and environmental influences are correspondingly strong.
Note, however, that heritability can
be misleading. I owe the following example to Eric
Turkheimer, a prominent behavior geneticist at the
University of Virginia who has studied the heritability of
both IQ and major mental illnesses like schizophrenia.
Consider the human trait of having two arms and two
legs. This is without any doubt completely determined
by our genetic heritage (as Plato pointed out, we are
featherless bipeds). But if you look at the
concordance between MZ and DZ twins for "armedness", you'll
find almost perfect concordance: virtually 100% of the
siblings of people with two arms also have two arms.
If you then just compare MZ and DZ correlations, you get a
heritability of zero (0), or very close to
it. But we know that, barring rare incidents of
accident or surgery, the number of arms is completely
determined by the genes.
Of course, even perfect MZ correlations
of 1.00 wouldn't be enough to clinch the case of exclusive
heritability. Twins, especially MZ twins, share more than
genes. They also share environments, and it is possible that
MZ twins live in environments that are more alike than DZ
twins (perhaps because MZ twins are of the same sex, or
perhaps simply because they look more alike). This raises the
question: How do we tease apart the genetic and environmental
contributions to personality?
One way to address this question is to
study identical twins who have been separated at birth and
reared apart -- meaning that they share genes but not
environments. Such cases do exist, and they are interesting,
but the fact is that there are not enough of them to make a
satisfactory sample. Moreover, many twins ostensibly "reared
apart" really aren't. For example, twins might be "separated"
for economic reasons, because their parents can't afford to
raise both of them, and one of them reared by an aunt and
uncle down the road. Even when twins are actually adopted out,
adoption agencies often try to place adoptees with foster
parents who resemble their biological parents in terms of age,
educational levels, occupational status, and the like. Such
twins probably share more of their environment than not.
Separated at Birth!
Identical twins separated at birth, and
reared independently of each other, have often been taken as
providing interesting evidence regarding the role of
heredity and environment in personality and behavior.
Naturally, a fair amount of interest lies in the
similarities among the twins.
The study of identical twins reared apart has
a history going back to Sir Cyril Burt's twin studies of
intelligence (and the idea goes back to Sir Francis Galton),
but got a serious boost from the "The Jim Twins",
identical twin boys, born in 1939, and separated shortly
after birth. When they were reunited at age 39, they were
exactly the same height and weight. No surprise there: such
physical properties should be under a high degree of genetic
control. But it turned out that, as boys, they both had dogs
named Toy and had taken vacations at the same Florida resort;
they had both married and divorced women named Linda, and
remarried women named Betty; and they had sons named James
Alan and James Allan. Both smoked the same brand of
cigarette and drank the same brand of beer. They both
suffered from headaches, and they both bit their
fingernails. Formal testing, by a group at the University of
Minnesota, led by Thomas Bouchard (a prominent behavior
geneticist) revealed that the two men were highly similar in
terms of intelligence and other personality traits.
The Jim Twins started Bouchard and his
colleagues on a search for other identical twins who had
been raised apart, who were recruited for psychological
testing, and confirmed a high degree of similarity. And,
frankly, it's no surprise that IQ and other basic
personality traits are also under some degree of genetic
control. But let's get something straight: there's no gene
for marrying women named Linda or Betty. It's important not
to exaggerate what are, in fact, mere coincidences.
A more recent case is that of Tamara and
Adriana. From the time Tamara Rabi began her
undergraduate studies at Hofstra University, in New York,
people started telling her that they knew someone else who
looked just like her. It turned out that the other woman,
Adriana Scott, was Tamara's identical twin sister. The two
women had been born in Guadalajara, Mexico, and through a
series of bureaucratic snafus, separated at birth and
adopted by different American families. Adriana was raised
as a Roman Catholic, Tamara as Jewish. Neither knew she had
a twin. But both own a pair of large hoop earrings, both like
to dance, and they had similar nightmares when they were
children. Both their adoptive fathers died of cancer (see
"Separated at Birth in Mexico, United at Campuses on Long
Island" by Elissa Gootman, New York Times, 03/03/03).
Perhaps
the most amazing case of this sort is "The Mixed-Up Brothers
of Bogota", two pairs of identical twins, Jorge and William
and Carlos and Wilber, born on the same day in the same
Colombian hospital. They were accidentally switched so
that Jorge and Carlos were raised together as fraternal
twins, as were William and Wilber. Their story is told
in "The Mixed-Up Brothers of Bogota" by Susan Dominus, New
York Times Magazine, 07/12/2015). In her
article, Dominus briefly describes Bouchard's study, and its
continuation by Nancy Segal, a psychologist at Cal State
Fullerton. Dominus's article has a strong biological
cast to it: Similarities are explained by genetic
similarities, differences are explained by "epigenetic"
influences, about which there is more later in this
supplement. But there are thousands of genes, and
thousands of potential epigenetic influences, and, frankly,
the most parsimonious explanation of their differences is
that they were raised in different environments!
In
2019, Three
Identical Strangers, a fascinating documentary film
(directed by Tim Wardle), presented on CNN, drew
attention to the dramatic case of three identical triplet
brothers (actually quadruplets, although the fourth brother
died at birth) who were raised separately. The
men, Edward Galland, David Kellman, and Robert Shafran, were
born in 1961 to a teen-aged unwed mother and separated at 6
months by the adoption agency under whose care they had been
placed. Although standard practice would attempt to
keep adoptive siblings together, the brothers were
deliberately placed into different homes as part of a study
of the effects of different parenting styles. They
learned of their relationship only by accident, when one of
the brothers happened to enroll at a college which had been
attended by another; the third brother learned of the
situation by reading a newspaper account of the reunion of
the other two. The three brothers subsequently opened a
restaurant together. One of them eventually committed
suicide.
Despite having been designed by a prominent
academic psychiatrist (actually a psychoanalyst, which may
have been part of the problem), the study itself appears
to have been a classic case of Bad Science (not to mention
bad policy: after the triplets' situation came to light,
one of their adoptive families said that they would have
been happy to take in all three). In the first
place, it was too small to generate any meaningful
conclusions. Aside from the triplets, there were
apparently only five sets of identical twins also
deliberately placed with different families. And
although the families themselves were blind to the fact
that identical siblings were being reared separately, the
researchers were not. For example, at one point, at
least, the same research assistant was assigned to do
followup testing of all three of the triplets -- allowing
plenty of opportunity for bias to creep into the
collection of data. The records of the study have
been placed under seal at Yale University until 2065, so
it will be a while until we find out just how bad a study
it was.
Identical twins raised apart also provide the
plot of a Walt Disney-produced movie, The Parent Trap,
starring Hayley Mills (1961; remade 1998), a television
sitcom (Sister, Sister), and an episode of the
X-Files ("Eve"). Other movies involving twins
separated at birth include:
- The Iron Mask (silent, 1929), remade as The
Corsican Brothers (1971) and The Man in the
Iron Mask (1997), all from the Dumas novel
- Start the Revolution Without Me (1970)
- Echo (1988)
- Twin Dragons (1990)
- Big Business (1988)
- A Merry Mix-Up (1957), starring the Three Stooges
- Twice Blessed
- Double Impact (1991)
- Equinox (1992)
See also:
- Twin Stories, a documentary film by Fredric
Golding (1997)
- Twin Stories, a book by Susan Kohl (2001), based
largely on interviews conducted at the annual Twin Days
festival in Twinsburg, Ohio.
- "A Thing or Two About Twins" by Peter Miller,National
Geographic, January 2012.
For a review of scientific studies of
identical twins reared apart, including Burt's and
Bouchard's studies, see:
- Identical Twins Reared Apart: A Reanalysis by
Susan L. Farber (Basic Books, 1981)
Link to
www.twinstuff.com
for a complete listing of films about twins -- and lots of
other information for and about twins and other
"multiples".
It's important to recognize that
heritability estimates are accurate only for the environment
from which a population was drawn. For example, as discussed
in the lectures on Thinking,
the heritability of IQ is higher in high-SES populations than
it is in low-SES populations. Apparently, high socioeconomic
status gives freer rein to genetic influences, allowing people
"to be all they can be", while low SES constrains them,
confronting individuals with artificial ceilings on
attainment.
Separating Genes and Environment(s)
The
genetics-versus-environment question can also be approached in
the context of standard twin studies. But the issue gets a
little complicated, because it turns out that the
"environment" comes in two basic forms:
- The shared environment, also known as
between-family variance, includes all the factors
that children raised in the same family share, which
differentiate them from children in other families. As a
rule, children in the same family are raised by the same
parents, share a single racial, ethnic, and cultural
heritage, live in the same neighborhood, go to the same
schools, and attend the same church, synagogue, or mosque.
The shared environment includes all the things that
siblings have in common.
- The nonshared environment, also known as
within-family variance, includes all the factors
that differentiate among children raised in the same
family. Even within a family, children differ in terms of
such factors as gender (boy or girl) or birth order
(first-born vs. latter-born). Children may have different
interactions with their parents (parents treat children
differently depending on age and sex), and develop
different networks of friends and acquaintances outside
the family. Different children within a family are also
distinguished by non-systematic factors, which
include all the things that happen randomly to one child
but not to his or her brothers and sisters --chance
encounters that can really make a difference in the
individual's life. The nonshared environment is an
umbrella term that refers to all the unique experiences
that siblings have.
As it happens, the relative strength of
both environmental components of personality, as well as the
genetic component, can be estimated from the observed pattern
of MZ and DZ correlations (Falconer & Mackay, 1996).
Consider, first
the entire distribution of a trait within a population, from
those individuals with the lowest scores on Extraversion or
Neuroticism to those with the highest scores on these traits.
This distribution is typically represented by a more-or-less
"normal" distribution -- the famous "bell curve". Each
person's score on a trait measure -- Neuroticism,
Extraversion, whatever -- is a measure of the person's
phenotype -- how he or she "turned out" with respect to that
dimension of personality. The entire distribution of
individual scores within a population is the total
variance on the trait(s) in question.
This total
variance in the trait (100%, or a proportion equal to
1.0) is the sum of genetic variance (i.e., variance in
the trait that is accounted for by genetic variability, or
individual differences in genotypes) and environmental
variance (i.e., variance in the trait that is accounted
for by environmental variability, or individual differences in
environments):
- Total Variance on a Trait (T) = Genetic Variance (G) +
Environmental Variance (E).
The environmental
variance, in turn, is the sum of variance due to the shared
environment and variance due to the nonshared
environment:
- E = Variance due to Shared Environment (ES)
+ Variance due to Nonshared Environment (ENS).
First, consider the comparison between
MZ and DZ twins. By definition, MZ and DZ twins are identical
with respect to the shared environment -- they are raised by
the same parents in the same household. But MZ and DZ twins
differ genetically: MZ twins are identical genetically, while
DZ twins are no more alike, genetically speaking, than any two
non-twin siblings. Thus, any difference in similarity between
MZ and DZ twins must be due to genetic differences.
Genetic variance is a function
of the difference between MZ and DZ correlations: the
greater the MZ correlation compared to that of DZ, the more we
can attribute similarity to shared genes than to shared
environments (don't worry about where the "2" comes from: this
is a technical detail):
G = 2 * (MZ - DZ).
Next, consider MZ twins raised
together. By definition, MZ twins are identical with respect
to both genes and the shared environment. They are the product
of a single fertilized egg, and they are raised by the same
parents in the same household. If the only contributions to
variance were from the genes and the shared environment, they
ought to be identical in personality. Therefore, any departure
from a perfect correlation of 1.00 must reflect the
contribution of the nonshared environment.
Variance due to the nonshared
environment is a function of the MZ correlation: MZ
twins share both genes and (shared) environment, so any MZ
correlation less than a perfect +1.0 must reflect the
contribution of the nonshared environment:
ENS = 1 - MZ.
Once we've estimated the contributions
of the genes and the nonshared environment, variance due to
the shared environment is all that's left, so it can
be estimated simply by subtracting G and ENS from
1:
ES = 1 - G - ENS.
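To make the arithmetic concrete, here is a minimal sketch, in Python, of the decomposition just described. The function name and the example correlations are purely illustrative (chosen so that the MZ correlation is about twice the DZ correlation, roughly the pattern in Loehlin's data discussed below); they are not taken from any particular study.

```python
def decompose_variance(r_mz, r_dz):
    """Rough decomposition of trait variance from twin correlations,
    using the formulas given above:
        G   = 2 * (r_MZ - r_DZ)   (genetic variance)
        ENS = 1 - r_MZ            (nonshared-environment variance)
        ES  = 1 - G - ENS         (shared-environment variance, the remainder)
    """
    g = 2 * (r_mz - r_dz)
    e_ns = 1 - r_mz
    e_s = 1 - g - e_ns
    return g, e_ns, e_s

# Illustrative correlations only, with MZ about twice DZ:
g, e_ns, e_s = decompose_variance(r_mz=0.50, r_dz=0.25)
print(f"G = {g:.2f}, ENS = {e_ns:.2f}, ES = {e_s:.2f}")
# -> G = 0.50, ENS = 0.50, ES = 0.00
```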
Here are some illustrative examples:
If the correlation for MZ
twins is 1.00, and the correlation for DZ twins is .50, we
have the situation described earlier: all the variance on the
trait is attributable to genetic variance, and no variance is
attributable to either sort of environmental factor, shared or
nonshared.
If we reduce the MZ
correlation substantially, but keep the DZ correlation pretty
much the same, most of the variance is now attributed to the
environment. There is some genetic effect, and some effect of
each sort of environment.
If we increase the MZ
correlation, but also increase the DZ correlation, most of the
variance is still attributable to the environment, but the
strength of the genetic contribution diminishes markedly.
If we use MZ and DZ
correlations that roughly approximate those found in Loehlin's
study of the Big Five, we find evidence of a substantial
genetic component of variance, but also a substantial
environmental component. Most important, we find that the
contribution of the nonshared environment is much
greater than that of the shared environment. In fact,
the effect of the shared environment is minimal.
In fact, if we
use Loehlin's data for Extraversion, that's exactly the
pattern we find. The MZ correlation is about twice as high as
the DZ correlation. The estimate for G is 48%, for ENS
it's 52%, with nothing left over for ES.
The same pattern holds for Neuroticism.
A variant on the twin study is the adoption
study, which compares the similarity between adopted children
and their biological parents and siblings (with whom they
share 50% of their genes, on average) and adopted children and
their adoptive parents and siblings (with whom they share no
particular genes). To the extent that genes contribute to
individual differences on some variable, biologically related
individuals should be more alike than biologically unrelated
ones.
The two methods can be combined, in a
way, by comparing identical twins reared together (who
share genes and environment) and identical twins reared
apart (who, ostensibly, share genes but not
environment). I say "ostensibly", because some twins "raised
apart" simply live in different households (like with
grandparents or aunts and uncles) for economic reasons, but
still have a great deal of family, school, and social life in
common. So it's not necessarily the case that identical twins
reared apart don't share a family environment.
The Big Five
Considering
all five traits examined in Loehlin's (1992) study of The Big
Five, the results of actual twin studies of personality reveal
that genetic factors account for approximately 40% of the
variance on the Big Five traits; the nonshared environment
accounts for approximately 50% of variance; and the shared
environment accounts for less than 10% of variance. Apparently
the family environment is not decisive for adult personality,
and the nonshared environment is far more important.
To summarize, for each of
The Big Five dimensions of personality:
- there is a substantial genetic component to variance;
- the contribution of the nonshared environment is even
greater than that of the genes; and
- the contribution of the shared environment is relatively
trivial.
And now, perhaps, we have some
idea of where some of those genes are. In 2024, Daniel
Levey, a psychologist at Yale, and his colleagues published an
extensive genetic study of the Big Five personality traits
(Gupta et al., Nature Human Behavior, 2024).
"Extensive" is actually an understatement. They drew on
the Veterans Administration's "Million Veteran Program"
(MVP), which has collected health data, including DNA
information, from a large number of US military veterans
receiving healthcare through the Veterans Administration (VA)
system (hence its name). A subsample of these
individuals, numbering about 270,000, completed an
inventory of the Big 5 personality traits; Levey and his
colleagues then used GWAS methodology to identify specific
genetic loci that are associated with each of the Big 5
dimensions. These are not quite the same as genes, but
rather designate specific regions on chromosomes where genes
or genetic markers are located. Close enough for our
purposes. The MVP sample included about 240,000 subjects
of European ancestry (EUR), and another 30,000 subjects of
African ancestry (AFR). The EUR portion of the MVP
sample was also combined with other large samples collected in
England and elsewhere to create a huge database of almost
700,000 subjects of European ancestry.
Gupta, Levey, et al.'s initial analysis of the
MVP data revealed 34 genomic loci that were significantly
associated with one or another of the Big5 traits in the EUR
subsample -- 11 each for Neuroticism and Extraversion, 3 for
Agreeableness, 2 for Conscientiousness, and 7 for
Openness. Analysis of the AFR subsample yielded only 2
significant loci, both for Agreeableness. Interestingly,
none of the EUR loci were significant in the AFR
sample, and neither of the AFR loci were significant in the
EUR sample -- a point to which I'll return later.
Gupta et al. then combined the MVP-EUR sample
with three other large samples of European ancestry,
increasing the sample size to roughly 682K for Neuroticism and
roughly 250-300K for the other Big5 traits -- thus increasing
the power of the analysis, and their ability to detect
statistically significant correlations between the Big 5
traits and various loci. For Neuroticism, the enhanced
power yielded an enormous increase in the number of
significant loci, from 11 in the MVP sample to 208 in the
combined sample. The combined sample also yielded 3 new
loci for Extraversion (plus the 11 from MVP-EUR, for a total
of 14); 2 loci for Agreeableness (where they had found 3
Agreeableness loci in the MVP-EUR sample alone); 2 for
Conscientiousness (where MVP-EUR had also yielded 2); and 7
for Openness (where MVP-EUR had also found 7).
Why the gains for Neuroticism were so much
greater than for the other four Big 5 traits isn't
clear. It might have been due to the fact that
the sample for Neuroticism almost trebled in size, while the
samples for the other Big 5 traits increased by only about
7-25% (long story here, too technical to get into in this
context). Still, it also isn't clear why the increased
power afforded by the larger sample resulted in a reduction
in the number of significant loci for Agreeableness.
It would also be important to know the extent
to which the loci identified in the MVP-EUR sample were also
identified in the other samples, and vice-versa. Recall,
from the GWAS study of IQ discussed in the lectures on Thinking
and Reasoning, Judgment and Decision-Making, that, for
all the care and statistical power that went into that
experiment, the significant association between IQ and the
IGF2R gene was not replicated in a subsequent study.
By aggregating the correlations between individual loci and
their corresponding traits, Gupta et al. estimated
heritabilities between 4-8% for each of the Big5 traits.
But recall the finding from Loehlin's twin study, just discussed,
that genes account for about 40% of the variance in Big5 traits,
the nonshared environment about 50%, and the shared environment
less than 10%. In 2005, Loehlin and his colleagues --
including UCB's own Prof. Oliver John (J. Res. Pers., 2005) --
obtained somewhat higher estimates for G and ENS, and even
lower estimates for ES. Now,
heritability estimates can differ depending on precisely how
they are obtained. In the domain of IQ, for example,
studies of identical twins reared apart tend to produce somewhat
higher estimates of heritability, compared to the more common
twin-study method (Loehlin & Plomin, Beh. Gen., 1989); but
45% vs. 5% -- now, that's a big difference. Possibly, the
heritabilities estimated from twin studies are just wrong, and the
heritability of the Big 5 traits is a lot lower than previously
thought. Alternatively, the remaining 40% may be accounted for by
the cumulative effects of other loci (there are, after all, about
8,000 of them, and roughly 20,000 genes). Stay tuned, as the story
of the genetics of personality continues to unfold.
Temperament
A somewhat different pattern occurs for
individual differences in temperament, which some theorists
consider to be the most "innate" of all personality
characteristics. Allport (1961, p.34) defined temperament as
follows:
Temperament refers to the characteristic
phenomena of an individual's nature, including his
susceptibility to emotional stimulation, his customary
strength and speed of response, the quality of his
prevailing mood, and all the peculiarities of fluctuation
and intensity of mood, these being phenomena regarded as
dependent on constitutional make-up, and therefore largely
hereditary in origin.
Based on this definition,
Buss, Plomin, and Willerman (1973) devised the EASI scale to
measure individual differences in four aspects of temperament
in MZ and DZ twins aged 4 months to 16 years:
- Emotionality: level of emotional arousal, or intensity
of emotional reaction (distress, anger, and fear) in
objectively upsetting situations.
- Activity Level: overall energy output, as indicated by
the vigor and tempo of behavior.
- Sociability: the tendency to approach other people,
share activities, get others' attention.
- Impulsivity: essentially, speed of response to stimulus.
The general finding was
that all four dimensions were indeed, as predicted by Allport,
highly heritable, with heritability coefficients
averaging .58 -- though there were some differences by age and
gender.
- For example, Activity yielded a heritability coefficient
of .83 for boys under 55 months of age, but only .24 for
girls in that group.
- Similarly, Impulsiveness showed a heritability
coefficient of .87 for young boys, but no
heritability for young girls (the DZ correlation was
actually higher than the MZ correlation!).
Attachment
Temperament" is often considered to be an innate
characteristic of personality, so it is not particularly
surprising that it has a relatively large genetic
component. Other characteristics of the child, however,
show evidence of a clear environmental contribution. A
case in point is attachment style, a term coined by
John Bowlby, a British psychiatrist who was influenced by both
Darwin's theory of evolution and Freud's psychoanalytic theory
of personality. From Darwin, Bowlby got the idea that
even very young children have to learn how to survive
in their environment -- an environment composed principally of
their parents, and especially their mothers as their primary
caretakers. From Freud, he got the idea that the
parent-child relationship determined the character of the
child's later relationships with other adults, particularly
their spouses (and their own children). Bowlby's attachment
theory argues that attachment security is an
important feature of personality. At first glance, it
might seem reasonable to assess attachment security on a
single dimension, from insecure to secure; in fact, attachment
theory describes four main types of attachment: "secure" and
three different types of "insecure" attachment. These
are typically measured by a behavioral procedure known as the
Strange Situation developed by Mary Ainsworth, an
American-Canadian psychologist who worked closely with
Bowlby.
The Strange Situation assessment is conducted over a series
of phases:
- The child (typically an infant, between 1 and 2 years old)
and parent (usually the mother) are brought into a
room.
- The infant is allowed to explore the room while the parent
sits by passively.
- A stranger enters the room and converses briefly with
the parent.
- First Separation Episode: The parent leaves the
room, and the stranger begins interacting with the infant.
- First Reunion Episode: Infants are typically
distressed at the absence of the parent, and in any event
the parent soon returns to the room.
- Second Separation Episode: The parent and stranger
both leave the infant alone in the room.
- The stranger returns to the room and interacts with the
infant.
- Second Reunion Episode: Finally, the parent
returns, picks up the infant, and the stranger leaves.
During the Separation and Reunion episodes, various aspects
of the infant's behavior are observed and coded:
- The child's exploratory activity.
- The child's reactions to the departure of the caregiver.
- The child's anxiety level when alone with the stranger.
- The child's behavior during reunions with the caregiver.
On the basis of these behavioral observations, the child is
classified into one of four categories:
- Securely Attached: The child approaches the
caregiver when s/he returns, and responds positively to the
caregiver's comforting behavior.
- Insecure Anxious: The child may approach the
caregiver, but when s/he does approach, doesn't appear to
have been comforted.
- Insecure Avoidant: The child does not become
distressed when the caregiver leaves, and does not
respond when the caregiver returns.
- Insecure Disorganized: The child shows anxious and
avoidant responses in a haphazard fashion.
Obviously, attachment is a two-way street: between the child
and the caregiver and between the caregiver and the
child. Therefore, we'd expect the environment to play an
important role in individual differences in attachment
style. And this is just what is revealed by twin studies
of attachment style. The question is a little more
complicated to answer than for extraversion, IQ, or
temperament, because attachment style is a categorical
variable, not a continuous variable, but the logic of the
analysis is the same: are MZ twins more likely than DZ twins to
share the same attachment style? There have been a
number of studies addressing this question (reviewed by
Gervai, Child & Adolescent Psychiatry & Mental
Health, 2009).
- Finkel & Matheny (2000) found that genetic factors (G)
accounted for 25% of population variance, the shared
environment (ES) for 0%, and the nonshared
environment (EN) for 75%.
- O'Connor & Croft (2001): G = 14%, ES = 32%,
EN = 53%.
- Roisman & Fraley (2008): G = 17%. ES = 53%,
EN = 30%.
Roissman & Fraley concluded that the most important
determinant of attachment style was the quality of parenting
received by the child. Interestingly, given the relatively
large contribution of the
nonshared environment, the
quality of parenting differs even between identical twins!
Most recently, Dugan et al. (
JPSP:PPID, 2024) confirmed
the importance of the nonshared environment in a large-scale
study of elderly adults (
N = 678 twin pairs) who had been
enrolled in the Minnesota Twin Registry as children, and who
completed a questionnaire measure of attachment styles (known as
"Experiences in Close Relationships -- Relationship Structures")
as elderly adults. The ECR-RS measures two different
aspects of adult attachment: attachment avoidance (e.g., "I do
not feel comfortable opening up to people") and attachment
anxiety (e.g., "I worry that people may abandon me"). In
addition to examining these attachment styles in general, these
investigators also assessed the subjects' specific attachments
to their mothers, fathers, romantic partners, and best
friends. No matter which of the 10 aspects of attachment
was being measured (2 styles x 5 targets), the results were
strikingly consistent: genetic factors accounted for
approximately 36% of the variance in attachment style, while the
nonshared environment accounted for the remaining 64%.
Although the specific formula that Dugan et al. used did not
provide an estimate of the
shared environmental
variance, it's pretty clear (do the arithmetic) that it was
negligible. Similar findings were obtained from the
Relationship Scales Questionnaire, another measure of attachment
styles in adulthood: genetic effects accounted for 28% of the
variance in attachment anxiety, and 36% of the variance in
attachment avoidance.
All of the remaining
variance on both scales was accounted for by the nonshared
environment.
IQ and Education
There's a different picture for
individual differences in intelligence (as measured by
standard IQ tests) and education.
Family studies show that the correlation
between family members' IQ is, in turn, correlated with
genetic resemblance.
- MZ twins > DZ twins, even when the MZ twins are
raised apart.
- Biological siblings > adopted siblings.
- Parents and their biological offspring > parents and
their adopted offspring.
But there's also a clear
family influence:
- MZ twins raised in the same household are more alike
than MZ twins raised apart.
- Parents and their biological offspring raised in the
same household > parents and their biological offspring
raised apart.
Aggregating over
the best available studies, a reasonable estimate for the MZ
correlation for IQ is .86, and the corresponding estimate for
DZ is .60. Plugging these figures into the equations, that
means that about 52% of population variance in IQ is
attributable to genetic variance. About 34% is attributable to
the shared environment, and about 14% to the nonshared
environment.
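As a quick check on the arithmetic, plugging those aggregate IQ correlations (.86 and .60) into the same rough formulas sketched earlier reproduces the estimates just quoted (a Python sketch; the correlations are the ones given above, not new data):

```python
r_mz, r_dz = 0.86, 0.60
g = 2 * (r_mz - r_dz)    # genetic variance: about 0.52
e_ns = 1 - r_mz          # nonshared environment: about 0.14
e_s = 1 - g - e_ns       # shared environment: about 0.34
print(round(g, 2), round(e_s, 2), round(e_ns, 2))
```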
When
you look at educational attainment, you get somewhat similar
correlations, and somewhat similar estimates.
Sex and Suicide
Here are two other examples of how
behavior genetics can shed light on the role of nature and
nurture in development.
A study
by Harden et al. (2007) looked at age of "sexual debut" --
that is, the age at which people had their first sexual
intercourse. Here, genes account for about 31% of population
variance, but the nonshared environment accounts for a
whopping 59% of variance.
And another
study, by Fu et al. (2002), examined genetic and environmental
contributions to suicidal behavior, defined broadly to include
both suicidal ideation and actual attempts at suicide. Genes
were more influential on suicidal ideation than suicide
attempt (probably due to genetic influences on depression),
but in both cases the contribution of the nonshared
environment was much stronger than that of the genes, or the
shared environment.
Political Attitudes
There also appears to be a genetic
contribution to political attitudes. Evidence for this comes
from the Virginia 30K Twin Study, which followed 29,080
subjects residing in Virginia, and their first-degree
relatives, including 2,648 MZ twins and 1,748 DZ twins. Among
other questionnaires, these subjects were administered the
Wilson-Patterson Attitude Inventory of opinions on various
socio-political issues, such as school prayer, property taxes,
busing, and abortion. Half the questions were posed so that an
endorsement indicated liberal attitudes, and half were worded
in the conservative direction. 28 of the items were expressly
political in nature. The subjects were also asked to reveal
their political party affiliation, if any.
The
investigators devised two scales of liberalism-conservatism,
plus a scale of "opinionation" indicating how strong their
opinions were. On each scale, MZ twins were more alike than DZ
twins.
Calculating the components of
variance, the investigators identified a substantial genetic
contribution to both liberal-conservative attitudes and
opinionation, though not so much to party affiliation as such
(it's possible to be a relatively liberal Republican, or a
relatively conservative Democrat). The shared environment was
a relatively strong determinant of party affiliation:
apparently, children tend to join their parents' political
party. But in three out of four cases, the contribution of the
nonshared environment was stronger than that of either the
genes or the shared environment. Which just goes to show you
how important individual experiences are to things like this.
Hatemi
et al. (Behavior Genetics, 2014) obtained similar
results in an even larger study. Again employing the
twin-study method, these investigators examined "left-right"
or "liberal-conservative" political attitudes based on surveys
of more than 12,000 twin pairs living in five democracies
(Australia, Denmark, Hungary, Sweden, and the United States),
surveyed over four decades (1980-2012). They found that
genetic factors accounted for about 40% of the variance in
political attitudes; the shared environment, 12%; and the
nonshared environment, 42%. The only departure from this
pattern is political-party affiliation, which is
overwhelmingly determined by the shared environment.
Democrats tend to beget Democrats, and Republicans
Republicans; but people's attitudes toward specific issues,
such as abortion or same-sex marriage, tend not to be passed
from one generation to the next.
Hatemi et al. went even further,
conducting a genome-wide association study in three of the
samples in an attempt to identify particular gene variants
that might mediate the genetic contribution. They found
no plausible candidates, and concluded that individual
differences in political attitudes, like individual
differences in intelligence and other aspects of personality,
were the product of a large number of genes, each of which
played a small role. But from our point of view the most
important finding is that environmental factors outweighed
genetic factors, and the nonshared environment outweighed the
shared environment.
As Irving Kristol once joked, "A
neoconservative is a liberal who has been mugged by
reality". If his identical twin hasn't been similarly
mugged, he'll probably stay a liberal.
To sum up: Twin studies reveal genetic
influences on personality and attitudes, and these are
interesting, but by far the most surprising finding of
behavior-genetics research is the evident power of the nonshared
environment. We've been taught, at least since Freud (though
that should have been our first clue that something might be
wrong!) that the way children are treated in the family
determines how they'll grow up. In fact, this widespread
belief appears to be incorrect. There is an extensive
literature on child-rearing practices, considering such things
as age of weaning and toilet training (the sorts of things
that interested Freud a great deal), and these aspects of
childhood seem to have little influence on adult personality
(Sears, Maccoby, & Levin, 1957).
Happiness (Subjective Well-Being)
One of the enduring puzzles in
personality psychology is what accounts for individual
differences in happiness or life satisfaction. Current theory
favors the view that each of us has a sort of baseline level
of happiness -- think of it as a sort of "happiness set point"
around which we fluctuate, depending on what is going on in
our lives at the moment (e.g., Kahneman et al., 1999). But
where does this baseline level of happiness come from? The
obvious answers -- like "the richer you are, the happier you
are" -- aren't sufficient. Research shows a surprisingly low
correlation between socioeconomic measures like income or
wealth and self-rated happiness. A pioneering study by Lykken
and Tellegen (1996) suggested that baseline happiness levels
were substantially heritable -- meaning that we are born with
a tendency to be happy or unhappy. But Lykken and Tellegen
never claimed that happiness was all in the genes.
De Neve (2011; De Neve et
al., 2011) examined the genetic and environmental
contributions to happiness -- that is, one's sense of
well-being and satisfaction with life -- in the National
Longitudinal Study of Adolescent Health (known as the Add
Health Study), a survey of almost 27,000 American students
enrolled in 80 high schools that began in 1994-1995, and
continues with regular follow-up interviews. At one point, the
subjects were asked to indicate "How satisfied are you with
your life as a whole?" -- the standard formulation used in
such studies. Most respondents said they were at least fairly
satisfied with their lives, but the important point has to do
with the determinants of these ratings.
- For a sub-sample of over 400 pairs of identical twins,
the MZ correlation was .35.
- For a sub-sample of over 400 pairs of same-sex fraternal
twins, the DZ correlation was .13.
Using a somewhat more
complicated formula than our "double the difference"
heuristic, De Neve et al. calculated the following components
of variance:
- G = .33, meaning that approximately 33% of population
variance was due to genetic variance.
- ENS = .67, meaning that approximately 67&
of population variance was due to the nonshared
environment.
- ES = 0, meaning that there was no
contribution from the shared environment. Children from
the same family are no more similar in happiness or life
satisfaction than children drawn from different families.
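For comparison, here is what our rough "double the difference" heuristic -- not De Neve et al.'s actual model -- yields for these correlations (a Python sketch). The numbers come out a bit different, and the shared-environment remainder is slightly negative, which the heuristic simply treats as zero; this is one reason behavior geneticists prefer more sophisticated models.

```python
# Rough heuristic only, for comparison with De Neve et al.'s model-based
# estimates reported above (G = .33, ENS = .67, ES = 0).
r_mz, r_dz = 0.35, 0.13
g = 2 * (r_mz - r_dz)         # about 0.44 by the rough heuristic
e_ns = 1 - r_mz               # about 0.65
e_s = max(0.0, 1 - g - e_ns)  # the remainder is slightly negative, so treat it as 0
```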
So, these findings are consistent with
the general pattern found with the Big Five: a significant
genetic component, but also a significant contribution from
the nonshared environment.
Rumination and Depression in Adolescence
Depression is a serious problem in
adolescent (and adult) mental health, and rumination --
perseverative thinking about one's problems and feelings -- is
a known risk factor for depression. Moore and her
colleagues studied genetic and environmental contributions to
both rumination and depression. Adolescents enrolled in
the Wisconsin Twin Project completed a variety of
questionnaires, yielding the following correlations and
estimates of components of variance.
- Depression: MZ, r = .53; DZ, r = .26.
- Brooding: MZ, r = .22; DZ, r = .10.
- Reflection: MZ, r = .40; DZ, r = .14.
- Distraction: MZ, r = .40; DZ, r = .02.
Moore et al. used a slightly different
procedure to estimate the components of variance. In
each case, it's obvious that there was a significant genetic
contribution to variance. The contribution of the
nonshared environment was equally substantial, and the
contribution of the shared environment was virtually nil.
Economic Behavior
The financial crisis of 2008 led some
social scientists to try to understand how some bankers and
investors, working in such a "rational" environment as the
economy, could possibly have been so reckless. And, naturally,
the question got framed in terms of nature and nurture, genes
and environment. A study by Cronqvist and Siegel, two American
business professors, analyzed data on savings habits in a
sample of almost 15,000 Swedish identical and fraternal twins
-- the largest twin registry in existence (again, you can get
this kind of data in a country like Sweden, which has a
comprehensive and efficient national healthcare system). Of
course, Swedes are human, and so there was wide variation in
savings behavior -- that is, how much of an individual's
disposable income was saved for the future, instead of being
spent in the present. The primary measure chosen was change in
the individual's net worth between 2002 and 2006, adjusted for
individual differences in gross income.
Comparing identical and
fraternal twins, they got the following correlations:
- MZ twins, r = .33
- DZ twins, r = .16
Plugging these values into
our rough-and-ready formula, we get the following estimates:
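Here is a sketch of that rough-and-ready computation in Python (Cronqvist and Siegel's own models are more elaborate, as noted next, but they point to the same conclusion):

```python
r_mz, r_dz = 0.33, 0.16
g = 2 * (r_mz - r_dz)   # roughly 0.34: genetic component of savings behavior
e_ns = 1 - r_mz         # roughly 0.67: nonshared environment
e_s = 1 - g - e_ns      # roughly -0.01, i.e., essentially no shared-environment effect
```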
Actually, Cronqvist and
Siegel applied a number of more sophisticated mathematical
models to their data, but they all led to the same conclusion:
- There was a significant genetic component to savings
behavior -- perhaps related to the Big Five factor of
conscientiousness.
- There was also a substantial environmental component,
but the nonshared environment was far more powerful than
the shared environment.
Musical Ability
The old joke goes: "How do you get to
Carnegie Hall?". and the punchline is: "Practice,
practice, practice!". And it's true: Anders Ericsson and
his colleagues (1993), surveying elite musicians, estimated
that it took about 10,000 hours of disciplined practice in
order to become an expert instrumentalist: this study gave
rise to the "10,000 hour rule" popularized by Malcolm
Gladwell in his book, Outliers (2008).
But it's not all practice. A group
led by Miriam Mosing, a behavior geneticist at the Karolinska
Institute in Sweden, finds that musical ability is a product
of both genes and the environment -- and the critical
environment is not the shared environment (e.g.,
whether the subject's parents were themselves musicians); it's
the nonshared environment.
- In a 2013 study, Fredrik Ullen, Mosing, and their
colleagues reported a study which tested subjects' ability
to make auditory discriminations of pitch, melody, and
rhythm.
- Taking advantage of the large databases available
through the Swedish health system, Ullen et al. tested
6,881 twins.
- They first established the validity of their test by
showing that scores were positively correlated with such
criteria as taking music lessons, playing an instrument,
and years of formal musical training.
- Collapsing across males and females, MZ twins had more
similar scores on all three subscales than did DZ twins.
- Rhythm: MZ, r = .51; DZ, r = .28
- Melody: MZ = .57; DZ = .32
- Pitch: MZ = .48; DZ = .29
- Applying a slightly different mathematical model than
our "double the difference" rule of thumb, Ullen et al.
obtained the following estimates of the components of
variance:
- Rhythm: G = .50; ENS = .48; ES
= .02.
- Melody: G = .59; ENS = .40; ES
= .00.
- Pitch: G = .30; ENS = .52; ES
= .19.
- In a further analysis, Mosing and her colleagues (2014)
also looked at the role of practice in a subsample of
twins who reported that they played a musical instrument
or actively sang (i.e., not just in church, or the
shower, or Friday nights at a karaoke bar). It turns
out that there's a genetic component to practice,
too.
- MZ twins' self-reported practice times were more
similar (r = .63) than those of DZ twins (r
= .40).
- Applying their formula yielded the following
estimates: G, .41; ES, .21; ENS,
.38.
It makes sense that the shared
environment is a stronger determinant of practice than of
musical ability per se. It was probably the subjects'
parents who encouraged them to take up music in the first
place, and then insisted that they practice -- at least so
long as they were living at home! But still, the nonshared
environment trumps the shared environment.
Putting it All Together
The most
comprehensive study of heritability was published by Tinca
Polderman, Danielle Posthuma, and their associates in Nature
Genetics for 2015. Indeed, it is the most
comprehensive genetic study of anything conceivable,
because it's a meta-analysis of virtually all twin studies
published from 1958 to 2012 -- covering 2,748 publications,
17,804 traits, and 14,558,903 twins (many of whom appeared in
more than one study). These investigators used a slightly
different method of estimating heritability, and of the
contribution of the shared and nonshared environment, than the one
presented in this course, but the results they obtained are
entirely compatible with those discussed here. Overall,
they determined that, across all domains, the average
heritability was 49%, and the contribution of the shared
environment 17%, with the non-shared environment accounting
for most of the rest of variation. The largest
heritabilities were associated with biological traits, with
smaller heritabilities associated with psychosocial traits
such as those we're concerned with in this course. and
the shared environment also had the largest influence in the
biological domain. However, all of the traits
showed significant heritabilities -- not a single trait,
biological or psychosocial, had an average heritability whose
confidence interval included zero (0). Most of the
findings were consistent with an "additive" genetic model in
which each trait is influenced by a number of different
genes. The table at the right shows the twin
correlations, and estimates of heritability (h2)
and shared environmental variance (c2) for the "Top
20" most-investigated traits in the literature.
There's a pattern here. For most
psychologically interesting behaviors, genes may account
for a significant proportion of individual differences,
but by far the most important determinant is the nonshared
environment.
Genes "for" Personality?
We'll turn to the nonshared environment
in a moment, but first, let's explore the genetic component a
little more. Is there really a "happiness gene"? Not exactly,
but De Neves et al. did identify a particular genetic
polymorphism that does seem to be involved in the genetic
component.
And it's certainly possible that there
are genes "for" certain basic individual differences in
temperament, such as speed and strength of emotional response.
There may even be genes for individual differences in some of
the "Big Five" personality traits, such as extraversion and
neuroticism. But there are unlikely to be genes for all
the important individual differences in personality, for the
simple reason that there aren't enough genes. The
human genome contains about 22,500 genes -- and remember that
about 90% of these don't vary from one individual to another,
which leaves about 2,250 genes to work with. And when you
consider that, in order to get the smooth, bell-shaped normal
distributions of personality traits, you need a model of
polygenic inheritance -- lots of genes each making a tiny
contribution to each individual-difference variable -- you
can see that the number of genes required mounts up quickly, and
you soon run out of genes. So there's got to be
something else going on, and that "something else" is going to
be powerful, not trivial.
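The "running out of genes" arithmetic is easy to make concrete. The
gene counts below come from the figures given above; the number of loci
assumed per trait is arbitrary, chosen purely for illustration:

```python
# Back-of-the-envelope count of how many independent polygenic traits
# the variable portion of the genome could support.
total_genes = 22_500
variable_fraction = 0.10                 # ~90% don't vary between individuals
variable_genes = int(total_genes * variable_fraction)    # ~2,250

loci_per_trait = 100                     # assumed, purely for illustration
max_independent_traits = variable_genes // loci_per_trait
print(f"variable genes: {variable_genes}")
print(f"traits supportable at {loci_per_trait} loci each: {max_independent_traits}")
# ~22 traits -- far fewer than the individual differences psychologists
# study, if each trait needed its own dedicated set of genes.
```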
Moreover, consider the
nature of some of the individual differences studied by
behavior geneticists.
- Based on an adoption study, Plomin and his colleagues
(1990) reported a genetic contribution to the amount of
time subjects spend watching television. But there can't
be a gene for TV-watching, for the simple reason that
genetic differences are a product of evolution, and there
hasn't been enough time since the invention of television,
by Philo Farnsworth and others in the late 1920s, for a
TV-watching gene to evolve. So if there is a genetic
contribution to TV-watching, it's rather indirect.
- There is a clear genetic contribution to individual
differences in openness to experience, one of the Big Five
dimensions of personality, but this "trait" was only
identified in the 1960s, when "openness" became a salient
feature of culture. Before that, this dimension was
characterized as "intellectance" (i.e., looking
intelligent) or culturedness. Personality characteristics
that are so rapidly changeable don't allow evolution time
to develop a relevant gene.
- Similarly, other investigators have reported a genetic
contribution to individual differences in political
attitudes, but the liberal-conservative dimension on which
these individual differences are measured had its origins
in 18th- and 19th-century political philosophy. Again,
there simply hasn't been enough time for a "political
gene" to have evolved.
Epigenetics
Identical twins, whether raised together
or apart, are strikingly similar on a host of psychological
variables. But they're also strikingly different, even when
they were raised together. Why should this be the case?
One answer lies in the complexities of
genetic inheritance: it turns out that individuals with the
same DNA might not be genetically identical after all.
According to epigenetic theory, genes carry epigenetic
tags, such as histones and methyl groups,
that do not change an individual's DNA, but rather enhance or
suppress the activity of particular genes -- turning them on
or off, if you will. It's by virtue of epigenesis that
embryological development happens -- how the single cell of
the newly conceived zygote begins to differentiate into the
specific cells that make up the various body parts of the
developing fetus. And it's the basic mechanism for many forms
of gene therapy -- inserting a gene into one (biochemical)
environment will have one effect, while inserting the same
gene into a different biochemical environment can have a quite
different effect.
But the scope of epigenesis goes beyond
embryological development. The result of epigenesis is that
while two individuals can have the same genotype, they can
have different phenotypes even at the cellular level of
analysis. The result can be -- here I'm taking an extreme
example for purposes of illustration -- that of two MZ twins who
share the genotype for blue eyes, one might end up with
brown eyes. Further, according to the theory, environmental
factors like stress (war, child abuse, even social prejudice),
poor nutrition, or other forms of deprivation (such as the
economic deprivation of poverty), which can alter the chemical
environment of the gene, might affect the activity of these
epigenetic tags.
The interesting thing is that some of
these environment-induced changes in gene expression --
essentially, turning some genes on or off -- can themselves be
heritable. So, for example, the effect of stress on one
identical twin can alter that individual's genome, and perhaps
the genes that he or she passes on to his or her offspring --
potentially providing a mechanism for the intergenerational
transmission of acquired characteristics.
For an
example of how epigenetic influences are studied (in mice,
in this case; obviously you can't do this kind of study in
humans), see the elegant experiment by Darlene Francis, a
neuroscientist in UCB's School of Public Health, and her
colleagues (Nature Neuroscience, 2003).
Epigenetics is sometimes thought to
prove Lamarck's idea of the inheritance of acquired
characteristics -- that "acquired" modifications to the body
(or, for that matter, behavior) can be passed on from parents
to children. It doesn't, and they can't. The
prenatal environment can affect whether certain genes
are turned on or off, and in this way what happens to a parent
can affect the consequences of a child's genetic
endowment. But that's not the same thing as the
Lamarckian vision -- in which giraffes grow long necks to eat
the tender leaves from the high branches of trees.
Besides, it's not at all clear that the altered genome can be
transmitted genetically. The altered genome will be
preserved during mitosis, or ordinary cell division,
but it is not at all clear that it will also be preserved
during meiosis, in which cell division produces sperm
and eggs. Whatever intergenerational transmission occurs
may be by virtue of pure environmental mechanisms --
biochemical, maybe, but environmental nonetheless.
In its broadest sense, epigenetics
refers to everything that determines an individual's
phenotype, other than his genotype. As such, it describes the
effects of the environment on gene expression, and might be
counted as an example of the person-by-situation
interaction discussed at length in the lectures on Personality and Social
Interaction. That is, some environmental factor,
such as stress levels, interacts with some aspect of the
individual's genetic endowment, to alter some phenotypical
aspect of behavior.
In general, the synergistic interplay of
genes and environments appears in two broad forms (for
examples in the context of attachment, see Dugan et al., JPSP:PPID,
2024):
- Gene-Environment Correlations (known as rGE
in the trade) occur when some gene -- or, more precisely,
individual differences in some heritable trait -- directs
the individual toward exposure to particular
environments. For example, a person carrying a gene
for aggressiveness tends to find him- or herself in
environments where aggression is likely to occur.
- In passive rGE, the genetic and
environmental contributions to an individual's behavior
have the same source -- e.g., parents who have a gene for
hostility may also create a hostile home environment.
- In active rGE, individuals
actively select environments that are compatible with
their genetic dispositions -- e.g., an introverted
individual may prefer quiet, solitary environments like
libraries.
- In evocative rGE, individuals evoke
behavior from others that creates an environment that is
compatible with their genetic tendencies -- e.g., an
extraverted person may lead others to engage in boisterous
behavior at parties.
- Gene-Environment Interactions (GxE) occur
when the presence or absence of a particular gene alters or
moderates the effect of the environment on the individual,
or vice-versa -- when a particular environment affects
(favors or inhibits) the expression of a particular
gene (a toy simulation of this pattern appears after this list).
- A person who is genetically predisposed to alcohol abuse
may be more likely to develop alcoholism when exposed to
an environment in which there is a lot of heavy drinking,
compared to someone who assiduously avoids such settings.
- Alternatively, an environment characterized by a great
deal of hostile, aggressive behavior may be more likely to
bring about hostile and aggressive behavior in an
individual who is genetically disposed to hostility and
aggressiveness, compared to an environment which
encourages passivity and quietude.
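To make the statistical meaning of GxE concrete, here is the toy
simulation promised above, loosely modeled on the alcohol example.
Every number in it is arbitrary and illustrative; the point is simply
that the hypothetical "risk" allele matters mainly in the "risky"
environment:

```python
# Toy gene-by-environment (GxE) interaction: problem-drinking risk
# rises with a heavy-drinking environment, but far more steeply for
# carriers of a hypothetical risk allele.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
gene = rng.binomial(1, 0.3, n)   # 1 = carries the hypothetical risk allele
env = rng.binomial(1, 0.5, n)    # 1 = heavy-drinking environment

risk = 0.05 + 0.05 * env + 0.02 * gene + 0.20 * gene * env   # interaction term
outcome = rng.binomial(1, risk)

for g in (0, 1):
    for e in (0, 1):
        rate = outcome[(gene == g) & (env == e)].mean()
        print(f"gene={g}, env={e}: rate of problem drinking = {rate:.3f}")
# The allele makes little difference in a "dry" environment; the big
# effect appears only when both risk factors are present together.
```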
We'll see more examples of the gene-by-environment
interaction in the lectures on Psychopathology and Psychotherapy.
But epigenetics is a biological construct, and it has only the
vocabulary of biochemistry -- histones, methyl groups, stress
hormones -- to describe that environment. As
psychologists, however, we are primarily interested in a different
level of analysis -- the environment construed in
psychological terms, as the person's mental representation
of the environment. In psychological terms, what is important
about stress are the levels of unpredictability and
uncontrollability that give rise to it, and whether the
person subjectively experiences an environment as
stressful -- regardless of whether it is "objectively"
stressful. And what goes for stress also applies to other
aspects of the environment: psychologists are always centrally
interested in the individual's mental representation of the
environment. How that mental representation is represented
neurally, and what effects it will have biochemically, reflect
a different level of analysis.
For a good account of epigenesis, see The
Epigenetics Revolution: How Modern Biology is Rewriting
Our Understanding of Genetics, Disease, and Inheritance
by Nessa Carey (2012). See also Carey's articles on
"Epigenetics in Action" in Natural History magazine:
Part 1, "Deciphering the Link Between Nature and Nurture",
appeared in the April 2012 issue; Part 2 appeared in May
2012.
See also "Epigenetics: The Evolution
Revolution" by Israel Rosenfield and Edward Ziff, New
York Review of Books, 06/07/2018 -- a sequel of sorts
to their earlier essay, "Evolving Evolution", NYRB,
05/11/2006.
For a more technical overview, see "Effects of
the Social Environment and Stress on Glucocorticoid Receptor
Gene Methylation: A Systematic Review" by Gustavo Turecki
and Michael Meaney, Biological Psychiatry, 2016.
In a somewhat similar manner, so-called
jumping genes are segments of DNA that can replicate
and insert themselves in new places in the genome, altering
the activity of their neighboring genes. Jumping genes appear to
be particularly active in the brain, leading some
neurogeneticists to suggest that they are the key to human
uniqueness: even identical twins are not, precisely,
genetically identical, and these small differences in the
genome are responsible for the differences in personality
observed in MZ twins -- including differences in the
concordance rates for various forms of mental illness, such as
depression.
Well, maybe. Epigenetic theory is
plausible, and it might even be true, in some cases.
Epigenetic factors have been invoked to account for the
relatively low concordance rates between MZ twins for certain
mental illnesses, such as schizophrenia and autism (as
discussed in the lectures on Psychopathology and
Psychotherapy). But note that epigenetic theory defines "the
environment" in purely biological, even biochemical terms --
not to put too fine a point on it, as the microenvironmental
soup that the gene sits in. As psychologists, we're primarily
interested in the social and cultural macroenvironment
in which the individual person lives.
Frankly, by focusing on epigenetic
factors, behavior geneticists sometimes seem to want to have it
all -- to be able to explain even differences between
genetically identical individuals in genetic, biochemical
terms. I noted a similar trend with respect to junk DNA.
Put another way, there is a certain class of biologically
oriented individuals who want to discount the effects of the
social and cultural environment at all costs. That's fine, if
you're a biologist, because it's the very nature of the
biological level of analysis to construe the environment in
physical, chemical, and biological terms. But psychology is a
social as well as a biological science, and so psychologists
take a broader view of the environment -- one which considers
social and cultural influences on their own terms, without
reducing them to biochemistry.
Or, as
E.B. White and Carl Rose (and, later, Irving Berlin) put it,
"I say it's spinach (and the hell with it)". Let's look at the
macroenvironment, or the social and cultural world in which
the individual develops, from birth across the entire
lifespan.
For an accessible discussion of epigenetic
theory, see:
- "Hidden Switches in the Mind" by Eric J. Nestler,Scientific
American, 12/2011.
- The Epigenetics Revolution: How Modern Biology is
Rewriting Our Understanding of Genetics, Disease, and
Inheritance (2012) by Nessa Carey. Excerpted
in Natural History magazine, April-June 2012.
For a discussion of jumping genes, see:
- "What Makes Each Brain Unique" by Fred H. Gage (a
distant relative of the famous Phineas Gage) and Alysson
R. Muotri, Scientific American, March 2012.
For excellent coverage of the modern science of heredity
and genetics, see:
- She Has Her Mother's Laugh: The Powers, Perversions,
and Potential of Heredity by Carl Zimmer, a
prominent science journalist.
Genetic Nurture -- or, "Unto the Next
Generation"
One of the paradoxes of behavior genetics is
that the same study that provides evidence of genetic
contributions to behavior also can give evidence of
environmental contributions. So, for example, the
twin-study method allows us to estimate the effects of
heredity, the shared environment, and the nonshared
environment. Similar insights can come from the
genome-wide association study (GWAS) method described earlier,
which allows us to identify particular genes -- or, more
precisely, particular alleles -- associated with particular
characteristics. For example, a recent study by
Augustine Kong and his colleagues (Science, 2018)
employed a huge sample of Icelandic probands (as subjects are
commonly called in genetics research) to identify a number of
different alleles that were significantly associated with educational
achievement (EA, measured by years of education
completed). Recall that children receive half their
genes from their fathers and half from their mothers.
Therefore, it follows that some of these alleles will be
passed down from parents to their children, but others will
not. For each subject, Kong et al. identified which EA
genes (as we'll call them) were transmitted by each parent to
each of his or her children, and then examined the
contribution of both transmitted and non-transmitted EA genes
to EA in their children. They found that direct
genetic transmission from parent to child accounted for a
significant proportion of the variance in the children's EA,
as would be expected from the twin studies summarized
earlier. But there was also a significant effect of the
parents' non-transmitted EA genes on their children's
EA -- an effect about 1/3 as strong as the direct genetic
effect. Kong et al. call this effect genetic nurture
-- that is, by virtue of the EA genes that they possess, the
parents create an environment that affects EA in their
children over and above the direct effect of the EA genes the
parents have passed down to them. This genetic nurture
effect is one component in the shared environment of the
children.
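The logic of the Kong et al. analysis can be sketched with a few lines
of simulated data. To be clear, this is not their code or their
statistical model -- the variable names and effect sizes are
hypothetical -- but it shows how regressing children's EA on both
transmitted and non-transmitted polygenic scores separates direct
genetic transmission from genetic nurture:

```python
# Simulated "genetic nurture": parents' non-transmitted alleles affect
# the child only through the environment the parents create.
import numpy as np

rng = np.random.default_rng(1)
n_families, n_snps = 5_000, 200
beta = rng.normal(0, 1, n_snps)                  # hypothetical per-allele EA effects

# Two alleles per parent per locus; one is transmitted, one is not.
mother = rng.binomial(1, 0.5, (n_families, n_snps, 2))
father = rng.binomial(1, 0.5, (n_families, n_snps, 2))
pick_m = rng.integers(0, 2, (n_families, n_snps))
pick_f = rng.integers(0, 2, (n_families, n_snps))
transmitted = (np.take_along_axis(mother, pick_m[..., None], 2).squeeze(-1) +
               np.take_along_axis(father, pick_f[..., None], 2).squeeze(-1))
non_transmitted = mother.sum(-1) + father.sum(-1) - transmitted

pgs_trans = transmitted @ beta                   # child's own polygenic score
pgs_nontrans = non_transmitted @ beta            # parents' leftover alleles

# The rearing environment depends on ALL parental alleles, transmitted
# or not -- that environmental path is "genetic nurture".
environment = 0.3 * (pgs_trans + pgs_nontrans)
child_ea = pgs_trans + environment + rng.normal(0, 5, n_families)

X = np.column_stack([np.ones(n_families), pgs_trans, pgs_nontrans])
coef, *_ = np.linalg.lstsq(X, child_ea, rcond=None)
print("direct (transmitted) effect:      ", round(coef[1], 2))
print("genetic-nurture (non-transmitted):", round(coef[2], 2))
```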
If you think about it, genetic nurture effects
involve more than parents and children. Children get
roughly 1/4 of their genes from each of their grandparents,
too. So some EA alleles are transmitted from
grandparents to grandchildren directly, through the parents,
but the non-transmitted genes also affect the environment in
which the parents were raised, and also -- to the extent that
the children are in contact with their grandparents -- the
environment for the children. And remember that children
in the same family share about 50% of their genes in
common. So some parental EA genes will get passed to
proband John, but not to his siblings Jean and Don, and
vice-versa. But by virtue of the EA genes they have
received, Jean and Don will create an environment that affects
John's EA. And vice-versa.
Remember that each child gets a random half of each
parent's genes. Now imagine that
John didn't get his fair share of EA genes, but that Jean and
Don got more than their fair share. John might be
genetically disadvantaged in this sense, but Jean and Don will
use their genetic advantage to create an environment that
promotes John's EA. And if John got more than his fair
share, and Jean and Don got less, they might create an
environment that holds John's EA back. This is one of the ways
in which the family environment ostensibly shared by siblings
is not the same for each of them. Now let's
explore some other ways.
The Nonshared Environment
The fact is that, with respect to the
major dimensions of personality, variability within a
family -- i.e., the variance among children raised in the same
family environment -- is almost as great as the variability between
families -- i.e., the variance between children raised in
different families. This result is sometimes misinterpreted as
meaning that parents have no influence on their children. But
it doesn't mean that at all.
What it means, first, is that there are
other forces at work besides the parents, and these become
increasingly important as the child begins to move beyond the
family (e.g., by going to school or joining the soccer league).
In addition, it means that parents
don't have the same effects on each of their children.
These differences in the nonshared
environment are critical for our individual uniqueness.
Put another way, the most important environmental determinants
of personality are the unique experiences that we have in our
lives.
The Nonshared Environment
One source of the nonshared environment,
of course, is the extra-familial environment. Even if all
children within a family are treated precisely alike by their
parents, once they begin to interact with the world outside
the family they will have experiences that tend to make them
different.
As examples of
extra-familial influence, consider a series of studies by
David Rowe and his colleagues (e.g., 1992), using
behavior-genetic methods, of the sources of various aspects of
adolescent behavior.
- In one analysis, Rowe found that parents who smoke tend
to have children who smoke, and the behavior-genetic
analysis indicated that this influence was mediated by
heredity (G), not the shared environment within the family
(Es).
- But it is also true that adolescents who smoke tend to
have peers who smoke -- an effect of the nonshared
environment outside the family (Ens).
Rowe and his colleagues found similar
causal patterns for alcohol consumption, delinquency, sexual
behavior, and pregnancy. In each case, the shared environment
had relatively little impact on behavior, but peer groups had
a powerful effect on whether adolescents experimented with
tobacco, alcohol, sex, and misbehavior.
Similarly, a study by Kindermann (1993)
of academic motivation among elementary-school children found
that students tended to group themselves into cliques such as
"brains" and "slackers" -- but also that membership in these
cliques tended to be somewhat unstable, with individual
children moving back and forth from one group to another.
Interestingly, the children's attitudes toward school changed
as they changed cliques. "Brains" lost interest in school if
they moved to a "slacker" clique, and "slackers" gained
interest if they moved to a "brains" clique -- despite the
fact that their IQs remained constant, and parental influences
presumably did so as well. It was the peer group that caused
the changes to occur.
Based
on results such as these, Harris (1995, 1998, 2006) has
proposed a theory of group socialization which argues
that peer groups and peer cultures, not parents, are the most
powerful socialization forces impinging on a child. In fact,
Harris argues that socialization is context-specific, and that
children may behave quite differently depending on whether
they are at home or away, and depending on which particular
extra-familial context they're in.
As an example, she points
to "code switching" among bilingual and bicultural children
(as discussed in the lectures
on "Language"):
- Children born to Spanish-speaking parents, for example,
may continue to speak Spanish inside the home, even though
they will speak English with their peers.
- Similarly, minority children from middle-class families
may succumb to pressure from their peer groups to downplay
academic achievement and other forms of "acting White"
(Fryer, 2006).
- As another example, consider food preferences. If
parents are such powerful socialization agents, why do
they find it so hard to get their children to eat what
they want them to eat? Children want to eat what their
friends like, not what their parents like.
Even
within the family context, however, there are important
differences among siblings. Harris (1995, 1999, 2006) has
classified these within-family differences into four
categories:
- child-driven,
- parent-driven,
- relationship-driven, and
- family context.
In addition to these within-family
differences, there are also extra-familial influences that
shape the development of personality and social behavior.
Harris on Parental Influence
In The Nurture Assumption: Why Children Turn Out
the Way They Do (1998), Harris reproduced a
famous bit of verse by Philip Larkin (1922-1985), an
English poet. Now, Larkin had issues: he once
quipped that "Sex is too much fun to share it with
others". In "This Be the Verse" (1971/1974)
Larkin reflected the dominant view of personality
development in his time, influenced as it was by
Freudian psychoanalysis -- that parental influence
dominates the development of personality.
They fuck you up, your mum and dad.
They may not mean to, but they do.
They fill you with the faults they had
And add some extra, just for you.
But they were fucked up in their turn
By fools in old-style hats and coats,
Who half the time were soppy-stern
And half at one another’s throats.
Man hands on misery to man.
It deepens like a coastal shelf.
Get out as early as you can,
And don’t have any kids yourself.
To which, Harris replied with a bit of verse of her
own:
Poor old Mum and Dad: publicly accused by their
son, the poet, and never given a chance to reply to
his charges. They shall have one now, if I may take
the liberty of speaking for them:
How sharper than a serpent’s tooth
To hear your child make such a fuss.
It isn’t fair—it’s not the truth—
He’s fucked up, yes, but not by us.
Then by whom? Peers and other elements of the
nonshared environment.
Child-Driven Effects
Child-driven effects, also known as
reactive effects, relate to the fact that each child brings
certain physical and behavioral characteristics into the
family, which in turn affect how he or she is treated by
parents and others.
Some reactive effects
reflect the environment's response to the physical
appearance of the child. These reflect the evocation
mode of the person-by-situation interaction.
- The clearest example, discussed in the lectures on
Personality and Social Interaction, concerns how the
child's biological sex -- male or female -- affects
gender-role socialization. Here the physical appearance of
the child structures the environment, by evoking
differential treatment by others, according to cultural
prescriptions for the proper socialization of boys and
girls.
- There may be other examples, having to do with the
child's physical appearance:
- whether the child is conventionally "pretty", or
perhaps has some blemish or disfigurement;
- whether the child physically resembles the parents or
others in the immediate family.
Other reactive effects are
instigated by the behavior of the child, not just his
or her physical appearance. These reflect the manipulation
mode of the person-situation interaction: there is something
the child does, however unwittingly or involuntarily,
to alter the environment in which he or she is raised.
- The clearest reactive effects of the child's behavior
have to do with individual differences in temperament --
by which we mean the person's speed and strength of
emotional arousal. Temperament is usually thought of as a
product of genetic endowment and physiology, which combine
to give the child a generally "quiet" or "fussy"
disposition. The child's temperament-related behaviors
then interact with the parents to alter the environment in
which the child is raised.
- In a positive feedback loop, a child with a
pleasant temperament might elicit positive treatment
from the parents, while one with an unpleasant
temperament might elicit negative treatment. Of course,
the parents' response to the child will elicit
subsequent behavior from the child himself. In this way,
a vicious cycle can develop that strengthens the child's
initial behavioral tendencies -- making a quiet child
quieter, and a fussy child fussier.
- In a negative feedback loop, a child with a quiet
temperament might elicit "lower-limit control behaviors"
from the parents, intended to increase activity levels,
while one with an active temperament might elicit
"upper-limit control behaviors" intended to decrease
them.
Positive and Negative Feedback
Remember how positive and negative feedback
are defined: They don't refer to pleasant or unpleasant
consequences, like reward and punishment. Instead:
- Positive feedback refers to any response that
strengthens the stimulus that produced it.
- Negative feedback refers to any response that weakens
the stimulus that produced it.
Of course, such
child-driven effects may extend beyond the child's
interactions with his or her parents, as children begin to
venture outside the home to school, playgroups, sports
programs, and the like. Thus:
- aggressive children may elicit aggressive behavior from
other children;
- introverted children may be ignored by their teachers.
In general, child-driven effects are
unpredictable, because they greatly depend on the response to
the child by the people who make up the child's environment --
their own personalities, beliefs, attitudes, goals, and the
like.
Parent-Driven Effects
Parents don't just
react to their children's appearance, behavior, or
temperament. To some extent, parental behavior is independent
of the physical, mental, and behavioral characteristics of the
child. For example:
- Sad as it may be, some parents will reject a child who
is the product of an unplanned pregnancy (this doesn't
always happen, of course: unplanned pregnancies can be
joyful surprises for parents; but to think that they're
all blessed events is asking too much). Sometimes the
appearance of a child conflicts with the parent's other
plans, or makes life difficult for the parent in some way
(this is why it is so important that every child be
actively wanted by its parents).
- Some parents try to treat identical twins very
similarly, with respect to dress, activities, and the
like; other parents of identical twins will go out of
their way to treat them differently. To the extent that
identical twins are deliberately treated differently by
their parents, of course, they are raised in different
environments despite having identical genetic endowments;
they have little by way of shared environment, and a great
deal by way of nonshared environment.
- Parents who have more than one child sometimes
experience contrast effects on their perception of
their offspring.
- If the first child was "difficult", the second may be
perceived as "easier" to raise.
- If the first child was "easy", the second may be
perceived as "difficult" to raise -- even if there are no
objective differences between the two children's
behavior.
- Fathers feel emotionally closer to children whom they
believe resemble them, compared to children who resemble
them less closely, or not at all (Heijkoop et al.,
2009). Perhaps this is because they suspect that the
children who don't look like them are not actually
theirs (mothers don't have this problem, obviously):
this would be the explanation given by evolutionary
psychologists. But perhaps the effect is simply a
consequence of the mere exposure effect: children who
resemble their parents look more familiar, and thus tend
to get higher preference ratings.
- These different perceptions will naturally translate
into different parenting behaviors, which will exaggerate
the differences in home environments between the children.
Parental Styles
MacArthur & Wilson (1967; that's E.O. Wilson,
the great evolutionary theorist) identified two major
patterns of population dynamics, which are associated with
the abundance of resources in the environment --
what evolutionary biologists call its "carrying
capacity".
- In r-selection, parents produce many
offspring but "invest" relatively few resources in
them, resulting in high levels of infant mortality
(the "r" stands for rate of reproduction).
- In K-selection, parents produce
relatively few offspring but devote lots of
resources to their nurturance -- theoretically
increasing their reproductive advantage (the
capital "K" stands for "capacity limit" in
German, where nouns are capitalized).
The "r/K rule" seems to capture a lot of what
happens in nonhuman species; and at first glance,
seems to apply to humans as well. People
living in impoverished, underdeveloped societies
tend to have large families, with relatively high
levels of infant mortality, while middle- and
upper-class families have relatively few children
who tend to succeed in school and work (Mormons are
a salient exception: in general, Mormons believe
that parents should have as many children as they
can afford). However, a longitudinal study by
Goodman et al. of the Uppsala Birth Cohort, a sample
of about 14,000 people born in Uppsala, Sweden,
between 1915 and 1929, and their descendants, showed
that as SES improves, families grow smaller, and the
children do indeed do better at work and school; but
these children in turn produced relatively few grandchildren
and great-grandchildren (Goodman et al.,
2012). This is contrary to evolutionary
theory, which (to make a long story short) defines
"reproductive success" in terms of the number of
grandchildren. It is, however,
completely compatible with a cultural
explanation. In developed societies, with
reduced infant mortality, there's little reason to
have lots of children, in the hope that a few of
them will survive.
At the psychological level of analysis, UCB's Diana
Baumrind (1971) has described four basic parenting
styles:
- Authoritative: Parents are warm and
responsive to their children, but set high
standards and set limits on their behavior.
- Authoritarian: Parents are emotionally
distant from their children, and set rules without
explanation or negotiation.
- Permissive: Parents are warm and loving,
but undemanding and fail to set limits.
- Indifferent: Parents interact very seldom
with their children, doing little more than
providing food and shelter.
According to Baumrind, these parenting styles are
linked to behavioral differences in the children
raised according to them. But the direction of
cause and effect is not entirely clear. It
might be, for example, that authoritative parents
produce well-behaved children, but it is also
possible that well-behaved children allow their
parents to behave warmly toward them, and set higher
standards for them.
A particular variant of the authoritarian parenting
style has captured much popular attention lately: the
"Tiger Mother" epitomized by Amy Chua, a Yale law
professor (and graduate of El Cerrito High School), in
her book, Battle Hymn of the Tiger Mother
(2011; see also her essay, "Why Chinese Mothers Are
Superior", Wall Street Journal,
01/08/2011). Chua, who never allowed her two
daughters to watch television or play computer games,
and never accepted any school grade less than an "A",
argued that "Chinese parents are better at raising
kids than Western ones". In her view, for
example:
- Western parents are primarily concerned about
their children's self-esteem, while "Chinese"
parents assume that their children are strong
rather than fragile -- for example, demanding good
grades because they believe that their children
are capable of getting them.
- Western parents believe they owe their children
everything, while "Chinese" parents believe that
their children owe them everything.
- Western parents are too permissive, while
"Chinese" parents believe that they know what is
best for their children and freely veto their
preferences and desires.
I Put "Chinese" in scare quotes because Chua
acknowledges that the Chinese "Tiger Mother" is
something of a stereotype, and that Tiger Mothers --
and, for that matter, Tiger Fathers -- are to be
found in all cultures. But authoritarian parenting
is more common in Chinese families than in American
families.
Tiger Mothering may be good for children's academic
and professional outcomes, but it also has
drawbacks, as Kim Wong Keltner, sister-in-law of
UCB's Prof. Dacher Keltner, has written in her
rejoinder to Chua, Tiger Babies Strike Back
(2014). Stephen Chen, working in
the laboratory of UCB's Prof. Qing Zhou, has found
that Chinese-American children raised by
authoritarian parents show, on average, higher
levels of anxiety and depression, and poorer social
skills, than those raised by American-style
authoritative parents (Chen, et al., 2014). Of
more importance, however, is the interaction between
parents' and children's cultural orientation --
another example of parent-child "match", and a
relationship-driven effect. What is
particularly deleterious is the mismatch that occurs
when parents retain the child-rearing assumptions of
their "heritage" culture while their children adopt
the assumptions of their "host" culture. Zhou
and her colleagues are now running workshops for
Chinese-American parents in San Francisco, seeking
to persuade parents of the benefits of a blend of
"Chinese" authoritarianism (which is good for
academic success) and "American" authoritativeness
and, maybe, just a little permissiveness (which is
good for mental health).
Relationship-Driven Effects
In statistical terms, we
can think of child-driven and parent-driven effects as main
effects. And with two main effects comes the possibility of an interaction
between them. Relationship-driven effects have to do
with the "fit" between child and parent in terms of appearance
and temperament. For example:
- A quiet child may elicit quite different behaviors from
a parent who is also quiet, as opposed to one who is
active. In fact, home and school can be quite unhappy for
many introverted children, who are pushed by their parents
and teachers into social activities and group work
because the adults think they need to "get out and socialize
more".
- Similarly, an introverted parent may react quite
differently to a quiet child, as opposed to one who is
active.
Relationship-driven effects are related
to the selection mode of the person-by-situation
interaction, in that they involve the degree of "fit" between
the child and the other people who make up his or her social
environment.
For a journalistic account of children
whose characteristics did not "match" with those of their
parents, either by virtue of some disability (e.g.,
deafness, Down syndrome, autism, or schizophrenia) or some
talent, see Far from the Tree by Andrew Solomon
(2012; reviewed by Nathan Heller in "Little Strangers", New
Yorker, 11/19/2012; also by Julie Myerson in "Coming
Into Their Own", New York Times Book Review,
11/25/2012). Parents and children adjust to each
other, and that's Solomon's point, but it isn't always
easy. For one thing, children may have an identity
that is different from their parents'. The parents
might be able-bodied, the child disabled in some way; the
parents might be hearing, the child deaf; the parents might be
straight, but the child gay; the parents might be Catholic,
but the child has converted to Islam. These
parent-child differences in horizontal identity can
make it hard for parents and children to get along.
An interesting twist on
relationship-driven effects is illustrated by a recent
behavior-genetic study which shows how child personality can
affect parenting behavior (Ayoub et al., Social
Psychological and Personality Science, 2018). In a
study employing 1,411 children, twins and triplets, enrolled
in the Texas Twin Project, the researchers (using an
alternative genetic model to the one discussed earlier in
these lectures), first confirmed that there is a significant
genetic contribution to the Big Five personality traits -- but
also that the nonshared environment accounted for 62-72% of
the variance in childhood personality. But in a variant
on the usual twin-study method, the investigators also
examined MZ and DZ correlations on parental warmth and
stress. The finding was that identical twins received
more similar parenting than fraternal twins did, especially
with respect to the parental stress variable. For example,
children who scored high on agreeableness received more
parental warmth and less parental stress than did those who
scored low. The children elicited different parental
behaviors by virtue of personality characteristics that were
partly heritable. Childhood personality accounted for
only a relatively small proportion of variance in parenting
warmth and stress, but the effect was significant. As
the authors conclude, "parenting is a dyadic and dynamic
process, whereby both parents and children influence each
other".
Family-Context Effects
Family context effects relate to the
children's "microenvironments" within a family. For example,
in my family, there was a father, a mother, a girl and two
boys. Therefore, my family microenvironment consisted of my
parents, my sister, and my older brother. But my brother's
family environment was different -- it consisted of my parents
(who of course were also his parents), but it also included my
sister and me, his younger brother. Similarly, my sister's
family environment consisted of our parents and my brother and
me. Different people in each environment. Put bluntly, I grew
up in an environment that included a very popular cheerleader
and a varsity basketball player; my sister and brother didn't.
It's the same for every child in every family.
Serendipity
Aspects
of the nonshared environment can be classified in many ways,
but one that almost defies classification is serendipity
-- chance encounters that shape our attitudes and
personalities, and almost by definition constitute unique
experiences. The word serendipity was coined in 1754
by Horace Walpole, an English writer, after a folktale
about three princes of Serendip, or Sri
Lanka, who "were always making discoveries, by accident and
sagacity, of things which they were not in quest of". In The
Travels and Adventures of Serendipity (Princeton,
2004), the late Robert K. Merton and Elinor Barber trace
both the origin of the word and the role that serendipity
has played in the history of science -- for example,
Alexander Fleming's accidental discovery of penicillin. In
much the same way that simple chance will lead a scientist
one way as opposed to another, simple chance can profoundly
affect our lives and the way we lead them.
Birth-Order Effects?
Among
the most controversial family-context effects involve
birth order -- that is, systematic differences in
personality between first-born and latter-born siblings in a
family. Because there are no systematic genetic differences
between first-borns and latter-borns (all brothers and sisters
share a random 50% of their genes in common), any systematic
differences between them must be due to their position in the
family constellation.
But are there any such
systematic differences owing to family constellation?
Until recently, most
researchers held that birth-order effects were weak or
inconsistent (Schooler, 1966; Ernst & Angst, 1983). To be
sure, there were occasional studies that demonstrated
personality differences between first-borns and latter-borns,
but there were lots of confounding variables that made the
studies difficult to interpret:
- By definition, first-borns are older than latter-borns,
so any differences between them might be a product of age,
not family constellation.
- Also by definition, birth order is correlated with
family size. You can't be a latter-born unless there are
at least two children in the family, and you can't be the
fifth-born unless there are at least five. Family size, in
turn, is correlated with parents' education, occupation,
and socioeconomic status. As a general rule, in Western
countries at any rate, highly educated, wealthy,
professional people have fewer children than poorly
educated, poorer, working-class people. There are
exceptions, of course: for example, members of the Mormon
religion (Latter-Day Saints) are encouraged to have as
many children as they can afford. But the fact that family
size tends to be negatively correlated with socioeconomic
status means that, in most populations, subjects who are
first-borns will be from wealthier families, on average,
than subjects who are latter-borns (it takes a little
while to get your head around this, but you can do it).
For that reason, differences between early-borns and
latter-borns may be an artifact of differences in
socioeconomic status.
Birth Order and Personality
None of these problems are
intractable, however, provided that your sample sizes are
large enough. And there are reasons for being interested in
the possibility of birth-order effects on personality. For
example, Frank Sulloway (1996), a historian of science who
dabbles in evolutionary psychology, has argued that, in
Darwinian terms, siblings compete with each other for their
place in the family environment -- just like species and
organisms compete for their environmental niches in
nature. This is known as the Family Niche Theory.
- At least among males, Sulloway argues that first-borns
have first choice, which makes them more traditional and
acquiescent to authority.
- By contrast, Sulloway argues, latter-borns have to find
other ways of distinguishing themselves, making them more
egalitarian and anti-authoritarian. From Sulloway's point
of view, laterborns are "born to rebel" (which, not
coincidentally, is the title of his book on birth-order
effects).
Primogeniture
As an example of the sort of "competition"
process that Sulloway has in mind, consider the practice of
primogeniture, quite common among the titled lords and
landed gentry in England and elsewhere in Europe. In this
practice, the first-born son inherited the father's estate,
leaving the other sons to fend for themselves. As the saying
went (more or less): the first-born got the title, the
second-born son went into the military, and the third-born
son went into the church. Parents wanted to marry their
daughters off to first-born sons, so that they would not
have to provide so much of a dowry (see, for example, Little
Women or almost any 19th-century English novel). Note,
too, that in royal successions, the crown passes from the
king or queen to his or her eldest child (usually, in fact,
the eldest son) -- regardless of his abilities or desire for
the job (think about the House of Windsor in England:
Charles gets to be King when Elizabeth dies, while Andrew
went into the Royal Navy and Edward became a filmmaker -- not
exactly the Church, but you get the picture). If the
first-born son was disobedient, he could be disowned, and
have to work for a living. No wonder, if Sulloway is right,
that first-borns were more conservative and obedient to
authority!
Sulloway, trained as a
historian of science, found that scientists who made
revolutionary contributions to their fields tended to be
latter-borns. This led him to become interested in the wider
psychological literature on birth-order and personality. In
fact, a "meta-analysis" (i.e., a form of quantitative
literature review that summarizes and aggregates the outcomes
of many studies) performed by Sulloway revealed systematic
birth-order effects on personality:
- Neuroticism: firstborns > laterborns:
Firstborns tend to be more jealous, more anxious, more
neurotic, more fearful, and more likely to affiliate with
others under conditions of stress.
- Extraversion: firstborns > laterborns:
Firstborns tend to be more extraverted, more assertive,
and more likely to exert leadership in groups.
- Agreeableness: laterborns > firstborns:
laterborns tend to be more easygoing, more cooperative,
and more popular with others.
- Conscientiousness: firstborns > laterborns:
Firstborns tend to be more responsible, more achievement
oriented, more organized, and more planful.
- Openness: laterborns > firstborns:
Firstborns are more conforming, more traditional, and more
closely identified with their parents' attitudes, beliefs,
values, and goals.
Technically speaking, Sulloway counted
the number of comparisons on each dimension that gave
positive, negative, or null findings with respect to his
Darwinian hypotheses. For each Big Five dimension, he found
that far more studies supported his hypothesis than
contradicted it. Many studies yielded ambiguous findings,
though, leading to some controversy over his interpretations.
- Of the total of 196 comparisons, only a minority of the
studies (72, or 36.7%) confirmed his hypotheses. However,
if you adopt the standard criterion for statistical
significance -- p < .05, meaning that a result of that size
would occur by chance only 5% of the time -- 72 confirmations
is far more than we'd expect by chance (196 x .05, or 9.8; see
the back-of-the-envelope check after this list).
Conversely, even fewer studies (14, or 7.1%)
yielded clearly negative results.
- Of the 86 comparisons that yielded definitive findings
one way or another, the vast majority (74, or 86.1%) were
positive, confirming his hypothesis.
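As promised, here is a back-of-the-envelope check on that chance
argument -- not Sulloway's actual statistical procedure, just the
binomial arithmetic, assuming each of the 196 comparisons had an
independent 5% chance of spuriously "confirming" the hypothesis:

```python
# How surprising are 72 confirmations out of 196 comparisons if each
# comparison has only a 5% chance of a spurious "hit"?
from math import comb

n, p, k = 196, 0.05, 72
expected = n * p                     # 9.8 spurious confirmations expected
p_at_least_k = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"expected by chance: {expected:.1f}")
print(f"P(72 or more by chance) = {p_at_least_k:.2e}")   # vanishingly small
```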
The power of Sulloway's meta-analysis
comes from the fact that he made his predictions in
advance, based on his reading of Darwinian theory.
However, his analysis involved a lot of interpretation, and it
would be nice to see his findings confirmed in a study
expressly designed to test his hypotheses.
Such a study was performed by
Paulhus et al. (1999), based on samples drawn from student
populations in both California and Canada. Paulhus asked
subjects to think about themselves and their siblings, and to
nominate who in their family was the "achiever" and who was
the "rebel". In both samples, Paulhus found that subjects were
more likely to nominate the first-born child in their family
as the "achiever", and a laterborn child as the "rebel", than
we would expect by chance.
- For example, if there were two children in a family, we
would expect 50% (1/2) of firstborns to be nominated as
"achievers", and 50% of laterborns (the remaining 1/2) to
be nominated as "rebels", just by chance. In fact,
in the California sample Paulhus found that 65% of "achiever"
nominees were firstborns, and 61% of "rebel" nominees were
laterborns.
- If there were three children in a family, we would
expect 33% (1/3) of firstborns to be nominated as
"achievers", and 67% of laterborns (the remaining 2/3) to
be nominated as "rebels", just by chance. In fact,
in the California sample 37% of the "achiever" nominees
were firstborns, and 71% of the "rebel" nominees were
laterborns.
- The findings originally obtained in the California
sample were subsequently replicated in a Canadian sample.
In two other studies, Paulhus and his
colleagues also found other respects in
which firstborns differed from laterborns. For example, there
were more firstborns nominated as the "scholastic achiever" in
the family, and more laterborns nominated as the "liberal" in
the family, than we would expect by chance. These departures
from statistical expectations are sometimes small, but these
small effects accumulate to provide significant support for
Sulloway's hypotheses.
Note that Sulloway and Paulhus found
significant personality differences between first- and
laterborn children, but these differences do not necessarily
validate Sulloway's "Darwinian" theory. The personalities of
first- and laterborn children may differ in significant ways
for reasons that have nothing to do with competition for an
environmental niche. There may be other explanations. For
example, parents may impose their own expectations more
strongly on the firstborn, and give laterborns more freedom.
Royal families (like the House of Windsor in England) want
to produce "an heir and a spare", but once the heir proves
up to his assigned job, his younger sibling may be given a
great deal of freedom to pursue his own interests. In
England, Edward looks like a rebel only because Charles is
doing his duty.
Birth Order and Intelligence
Perhaps the most
controversial claim about birth-order is that firstborns are
more "intelligent", as measured by standard "IQ" tests, than
laterborns. It was this hypothesis that was specifically
rejected by the Schooler (1966) and Ernst & Angst (1983)
studies cited earlier. However, a provocative study by Zajonc
and his colleagues has revealed an interesting (if small)
effect of birth order on general intelligence. These effects,
however small, have led Zajonc to develop a confluence
model of development that recapitulates the major themes
of these lectures:
- The person is a part of his or her own environment.
- The child is an agent of his or her own development.
The first study involved an analysis of
data collected in the Netherlands as part of a study of the
effects of the Dutch famine of 1944 (Zajonc & Markus,
1974). As part of routine testing for the military draft, the
Dutch government administered a nonverbal IQ test (Raven's
Progressive Matrices) to every Dutch male who reached age 19 in
the years 1963-1966. Zajonc and Markus then plotted mean IQ
scores as a function of both family size and birth order.
The
results revealed a significant interaction of birth order with
family size on IQ. Specifically:
- Average IQ declines with family size.
- Within each family size, average IQ declines with birth
order.
- The last-born child shows a greater decline in IQ than
any other birth rank.
- The rate of decline in IQ diminishes with later birth
ranks.
- An only child has a lower average IQ than the firstborn
of a two-child family.
Before we go on, please note that the
effects just described are very small. Note the Y-axis
on the accompanying figure: the difference between the top and
bottom means amounts to only about 10 IQ points. The
differences noted above achieve statistical significance only
by virtue of the huge sample size involved.
Given the fact that individual differences in
IQ are only weakly related to social outcome in the first
place, the differences revealed in Zajonc & Markus's
analysis are of no practical significance. However, as we
will see, they are of considerable theoretical
import. Big theories can be built on small effects, and that
is as true in psychology as it is in physics.
The Confluence Model
In
order to explain the joint effects of family size and birth
order on IQ, Zajonc and Markus proposed a confluence model
of intellectual development. This model traces the
mutual intellectual influences among children, and their
parents, as they develop. The major features of the model are
as follows (a toy numerical illustration follows the list):
- The Dilution Effect: A newborn child effectively
diminishes the intellectual resources available within a
family. Newborns literally don't know very much, and their
lack of declarative and procedural knowledge drags the
family down.
- The Growth Effect: Each child contributes more
intellectual resources to the family as he or she grows
up. Over time, this growth brings the family average back
up.
- But if more siblings come into the family, each new
child is born into a progressively diminished
environment. This is an extension of the dilution
effect.
- At the same time, each child in the family is growing
up. The intellectual growth of early-born siblings
progressively enhances the intellectual environment for
the whole family, and counteracts the dilution effect
created by the laterborns.
The
actual effects of dilution and growth depend on the spacing
of the siblings.
- If siblings are spaced closely together, the
dilution effect is increased.
- If siblings are spaced farther apart, the dilution
effect is weakened.
- In large families, some earlyborns are much older
than some laterborns. Therefore, the dilution effect
is weakened for these laterborn children.
- The Teaching Effect: Earlyborns also profit from
the presence of laterborns, because they get intellectual
stimulation from teaching younger siblings.
- The Last-Child Handicap: The last-born child
doesn't get the benefit of the teaching effect, simply
because there are no younger siblings for him or her to
teach. Therefore, he or she is at a special disadvantage.
- The Only-Child Handicap: For the same reason, an
only child also doesn't get the benefit of the teaching
effect. This puts only children at a disadvantage compared
to the firstborns of small families. In a sense, the only
child is both a firstborn and a last-born.
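A toy numerical illustration may help here. This is not Zajonc and
Markus's actual model -- the intellectual "levels" and the growth
function below are arbitrary -- but it captures the dilution logic:
each newborn pulls the family average down, and the drop is smaller
when older siblings have had more time to grow:

```python
# Toy version of the confluence model's dilution effect.
def level(age, ceiling=100, rate=6):
    """Crude stand-in for a family member's absolute intellectual level."""
    return min(ceiling, rate * age)

def family_environment(child_ages, n_parents=2):
    """Average intellectual level of everyone in the household."""
    members = [100] * n_parents + [level(a) for a in child_ages]
    return sum(members) / len(members)

for label, spacings in [("close spacing (2 years apart)", [[], [2], [4, 2]]),
                        ("wide spacing (6 years apart)", [[], [6], [12, 6]])]:
    print(label)
    for birth_order, older_sib_ages in enumerate(spacings, start=1):
        env = family_environment(older_sib_ages + [0])   # newborn enters at level 0
        print(f"  child {birth_order} is born into an environment of {env:.1f}")
# Each newborn dilutes the family average, but laterborns fare better
# when their older siblings are widely spaced and have had time to grow.
```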
The
theory makes the interesting prediction that twins and
triplets should be even more disadvantaged than only children,
because their birth produces a big dilution effect. This is,
in fact, the case, but of course the outcome depends on details
of birth order, spacing, and the like. Some other implications
of the theory:
- Children from single-parent households may be at a
special disadvantage, because there is a stronger dilution
effect with only one parent in a household. Of course,
there may be other adults present, such as grandparents or
paramours, who can substitute for the missing parent.
- Children from extended families may be at a special
advantage, because there are lots of adults around to
counteract the dilution effect.
A Continuing Controversy
Although I use birth-order effects on
personality and intelligence to illustrate the general idea
of within-family differences, it has to be emphasized that
the effects of birth order are very small in absolute
terms. Judith Rich Harris, for example, thinks that
birth order is a relatively unimportant influence on
personality.
One thing is certain: birth-order
research is complicated.
In the first place, we have to distinguish
between "between-family" designs and "within-family" designs like
Paulus's. In between-family designs, the individuals
are unrelated to each other. For example, in the
Zajonc and Markus study, all the subjects were 19 years old,
so one subject would be a first-born from one family while
another subject would be a later-born from a different
family. Between-family designs are, typically, unable
to control for between-family differences such as
socioeconomic status, family structure, and number of
siblings (though Zajonc and Markus did). In
within-family studies, all these factors are controlled for,
except that birth order is necessarily confounded with age:
first-borns must, necessarily, be older than
later-borns. There's no perfect study.
In the second place, birth-order studies are
necessarily large, involving a huge number of subjects, and
therefore have to make some compromises. For example,
most rely on self-reports of personality, rather than actual
behavior (though Zajonc and Markus had objective IQ test
data).
Zajonc's
essential findings were confirmed in a sample of
252,799 male Norwegian military draftees (Bjerkdahl et
al., Intelligence, 2014). The results were the
same regardless of whether the study analyzed data between
families (as Zajonc's study did), or within families (which
removed any potential confounds). Mean IQ declined
with birth order -- although, again, the differences were
too small to be of any practical significance.
The largest study of birth order to date, by
Damian and Roberts (J. Res. Personality, 2015), was a
between-family study involving some 377,000 American
high-school students who participated in Project Talent, a
long-standing, nationwide, longitudinal study, and found
that the average correlation between birth order and
intelligence was .04, and the average correlation between
birth order and personality was .02. Although these
correlations are small in absolute terms, at the level of
the population they are not insignificant. For
example, the correlation between aspirin and reduction in
heart attacks is actually relatively low, corresponding to a
correlation of .034 (Rosenthal, 1990) -- yet, on a
population level, the correlation is high enough that
physicians typically put patients at risk for heart attacks
on an aspirin regimen, and the first thing you should reach
for if you're having a heart attack is the aspirin
bottle.
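One way to see why such tiny correlations can still matter at the population level is Rosenthal's "binomial effect size display", which re-expresses a correlation r as the difference between two "success rates" of 50% plus r/2 and 50% minus r/2. A minimal sketch in Python, using the correlations quoted above:

# Rosenthal's binomial effect size display (BESD): a correlation r
# corresponds to "success rates" of 50% + r/2 versus 50% - r/2.

def besd(r):
    """Return the two BESD success rates implied by a correlation r."""
    return 0.5 + r / 2, 0.5 - r / 2

for label, r in [("aspirin vs. heart attacks", 0.034),
                 ("birth order vs. IQ (Damian & Roberts)", 0.04)]:
    hi, lo = besd(r)
    print(f"{label}: r = {r:.3f} -> {hi:.1%} vs. {lo:.1%}")

On this display, r = .034 corresponds to roughly 51.7% versus 48.3% -- about 3.4 fewer heart attacks per 100 people, trivial for any one patient but substantial across millions of them; the birth-order correlations are of the same order.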
Another large-scale study, involving three
multinational samples totaling more than 20,000 subjects,
confirmed a small correlation between birth-order and IQ
(Rohrer et al., 2015). There were no correlations
between birth order and personality, as measured by the Big
Five, except for openness to experience, where there was a
small effect of birth order on the "intellect" facet only,
paralleling the effect on objectively measured
intelligence. This study was notable for its inclusion
of both between-family and within-family analyses.
Getting back to birth order, nobody says that
first-borns are necessarily smarter than later-borns, or
that they're more conscientious and less rebellious.
You have to look at the family dynamics in question.
The trends that we see at the population level, small as
they are (and they are very small), nevertheless
illustrate a larger point about within-family
differences. There are lots of other potential
within-family differences besides birth order.
Like the question of birth order effects
itself, Sulloway's work is highly provocative, but it is
also highly controversial -- not just for the Darwinian
interpretation he puts on his findings, but also with
respect to the findings themselves. For critiques of
Sulloway's work, see:
For another view of within-family
differences, see The Pecking Order: Which Siblings
Succeed and Why by Dalton Conley (Pantheon, 2004).
The confluence theory
tells only part of the story of intellectual development.
There are lots of other things going on, both genetic and
environmental. But the major assumptions of the theory
illustrate the basic points made throughout these lectures on
personality, social interaction, and psychological
development:
- The individual is a part of his or her own environment.
- The environment itself is dynamically changing as
different individuals enter and leave it.
- The individual is constantly influenced by these
environmental changes.
- The individual reciprocally influences the environment
by virtue of his or her own changes, regardless of where
these changes come from.
The person is a part of his or her own
environment.
The person is an agent of his or her own
development.
The nonshared environment
is such a powerful force in personality development because
everyone creates a unique environment for him- or herself, in
interaction with other people, through the various modes of
person-by-situation interaction:
- Evocation;
- Selection;
- Manipulation;
- Transformation.
The uniqueness of the individual's
environment is the sum total of many different effects, acting
alone and in combination. And the uniqueness of the
environment, as it is shaped by the individual him- or
herself, in turn contributes to the uniqueness of the
individual.
There is a sort of paradox in
development: the processes of development are universal, yet
these widely shared processes come together in such a way as
to produce a unique individual.
The implications of these interactions
for development are profound, because they mean that, in
psychological terms:
Every child is born
to different parents, raised in a different family,
lives in a different
neighborhood, attends a different school, and worships in a
different church.
In medieval times,
philosophers debated the idea of contingentia
mundi -- the idea that the world we actually live
in is not the only one that is possible. And we now
know, scientifically, that this is the case: if that comet had
not struck the Earth, killing the dinosaurs, things would have
been very different for us humans (and other mammals!).
What goes for the Earth goes for the individual as well.
The people we are are not the only ones that are
possible. Each of us is contingent: we have been shaped
by a whole host of forces, each of which could have been
different from what it was; and we have shaped ourselves, and
each of us could have made different choices than the ones we
made. Personality, like the world itself, is contingent.
Consider the Snowflake...
The
individual's unique personality is shaped by its
environment, which in turn is shaped by the person, in a
complex dynamic system. If you're looking for a metaphor for
this process, you might consider the snowflake, as Adam
Gopnik did in the following comment, which appeared in The
New Yorker ("All
Alike", 01/03/2011).
In the cold, thoughts turn to snowflakes, heralds of
winter. For the past three decades, at this time of year, a
twinkling snowflake has been hoisted above the
intersection of Fifth Avenue and Fifty-seventh Street.
It's a giant, galumphing thing, which makes the crossroads
of the world resemble the main intersection of a Manitoba
town. Closer [to the New Yorker offices], the Starbucks
on Forty-seventh and Sixth even has a sign that reads,
"Friends are like snowflakes: beautiful and different".
This thought seems so comforting, so improving and
plural-minded, that one begins to wonder whether it is
truly so. Are snowflakes really different -- or, rather, how
different are they, really?
A quick trip to the New York Public Library
and a few request slips (and, let it be said, a little
Googling) later, one arrives at the compelling figure of
Wilson (Snowflake) Bentley, the great snowflake-ologist,
hero of the best movie Frank Capra never made. Bentley was
a Vermont semi-recluse who had a lovely and inexplicable
devotion to snow. In 1885, at the age of nineteen, he
photographed his first snowflake, against a background
made as dark as black velvet.... Bentley, over his
lifetime, took portraits of five thousand three hundred
and eighty-one snow crystals (to give them their proper
scientific name; flakes are crystals clumped together) and
inserted into the world's imagination the image of the
stellar flower as the typical, "iconic" snowflake, along
with the idea of a snowflake's quiddity, its uniqueness.
It turns out, however (a few more slips, a
bit more Googling), that Bentley censored as much as he
unveiled. Most snow crystals -- as he knew, and kept quiet
about -- are nothing like our stellar flower: they're
irregular, bluntly geometric. They are as plain and as
misshapen as, well, people. The Fifth Avenue snowflakes
are the rare ones, long and lovely, the movie stars and
supermodels, the Alessandra Ambrosios of snow crystals.
The discarded snowflakes look more like Serras and
Duchamps; they're as asymmetrical as Adolph Gottliebs, and
as jagged as Clyfford Stills.
But are they all, as Starbucks insists, at
least different? Another flurry of catalogue searching
reveals a more cheering, if complex, truth. In 1988, a
cloud scientist named Nancy Knight (at the National Center
for Atmospheric Research -- let's not defund it) took a
plane up into the clouds over Wisconsin and found two
simple but identical snow crystals, hexagonal prisms, each
as like the other as one twin to another, as Cole Sprouse
is like Dylan Sprouse. Snowflakes, it seems, are not only
alike; they usually start out more or less the same.
Yet if this notion threatens to be
depressing -- with the suggestion that only the happy eye
of nineteenth-century optimism saw special individuality
here -- one last burst of searching and learning puts a
brighter seasonal spin on things. "As a snowflake falls,
it tumbles through many different environments," an
Australian science writer named Karl Kruszelnicki
explains. "So the snowflake that you see on the ground is
deeply affected by the different temperatures, humidities,
velocities, turbulences, etc, that it has experienced on
the way." Snowflakes start off all alike; their different
shapes are owed to their different lives.
In a way, the passage out from Snowflake
Bentley to the new snowflake stories is typical of the way
our vision of nature has changed over the past century:
Bentley, like Audubon, believed in the one fixed image; we
believe in truths revealed over time -- not what animals
or snowflakes are, but how they have altered to become
what they are. The sign in Starbucks should read, "Friends
are like snowflakes: more different and more beautiful
each time you cross their paths in our common descent."
For the final truth about snowflakes is that they become
more individual as they fall -- that, buffeted by wind and
time, they are translated, as if by magic, into ever more
strange and complex patterns, until, at last, like us,
they touch earth. Then, like us, they melt.
The metaphor isn't perfect, because the environment acts on
the snowflake, but the snowflake doesn't really act on the
environment. But it's not a bad start. The uniqueness
of the individual's personality is a product of the
individual's interaction with the environment, chiefly the nonshared
environment -- an environment that is very much of the
individual's own making.
Gender Dimorphism
The interaction of nature and nurture can
be seen when we look at the development of gender dimorphism:
- gender identity (one's sense of oneself as male
or female);
- gender role (the person's adoption of
characteristically masculine or feminine aspects of
thought and action); and
- erotic orientation, also known as sexual
orientation or sexual preference (as heterosexual,
homosexual, bisexual -- or, for that matter, not sexual at
all).
Some role differences, known as the
procreative imperatives, are built into us by our
biology:
- impregnation in males,
- menstruation, gestation, and lactation in females.
But gender role goes beyond the demands
of reproduction to include, at least in this culture:
- typically masculine characteristics of agency and
instrumentality; and
- typically feminine characteristics of communality and
expressiveness.
It is now very clear that this
developmental process is not a simple matter of genetic
determination. Rather, it reflects a complex interaction
between genetic/biochemical processes (the phyletic
imprimatur) and social/environmental processes (the social
imprimatur) -- an interaction that is made more complex by the
fact that
the developing child
is both a target and an instigator of his or her own
development.
Gender Differentiation in Fetal Development
As
noted earlier, the normal human cell possesses 46 chromosomes,
arranged in 23 pairs. Two of these, making up a single pair,
are the sex chromosomes, known as X and Y. Normally, males
carry one X and one Y chromosome (XY), while females carry two
X chromosomes (XX). Genes for sex-linked traits are located on
these sex chromosomes. Note that because each parent
contributes one chromosome of each pair to his or her
offspring, and the mother can contribute only X chromosomes to
this process, in the final analysis the father determines the
sex of his child: if he contributes an X chromosome, the child
will be genetically female (XX); if he contributes a Y
chromosome, the child will be genetically male (XY).
Although the fetus is genetically male or
female from the beginning (because its cells carry the XX or
XY chromosome pairs from the beginning), early in gestation
the fetus is otherwise undifferentiated with respect to
gender. That is, although it carries the XX or XY chromosomal
endowment, it has no outward appearance of being male or
female. This is because at this stage the fetus's gonadal
tissue is undifferentiated. Remember the debate
between recapitulation and differentiation as basic themes in
development? In these terms, the undifferentiated gonadal
tissue of the early fetus is a primordial structure
which will eventually differentiate into the more complex
structures representing the male and female reproductive
anatomy.
In
technical terms, the structures in this undifferentiated
gonadal tissue contain the anlagen (or foundation) of
the male and female reproductive systems:
- the outer cortex will become the ovaries
of the female;
- the inner medulla will become the testes
of the male;
- the Mullerian ducts will become the internal
reproductive organs of the female -- the uterus, fallopian
tubes, and inner portion of the vagina;
- the Wolffian ducts will become the internal
reproductive organs of the male -- the vas deferens,
seminal vesicles, and ejaculatory ducts; and
- the genital tubercle, situated above a single urogenital
slit (itself surrounded by urethral folds
and labio-scrotal swellings) will become the
external genitalia: the vagina and clitoris of the female,
the penis and scrotum of the male.
After about six weeks of gestation,
sexual differentiation begins. In response to genetic messages
(carried on the X and Y chromosomes), one set of structures
begins to develop while the other one becomes vestigial. If
the fetus carries the XY genotype, the inner medulla will grow
into the testes of the male, and the outer cortex regresses;
if the fetus carries the XX genotype, the outer cortex will
grow into the ovaries of the female, while the inner medulla
becomes vestigial.
The
genes themselves appear to play no further role in what
happens. Rather, further sexual differentiation occurs by
virtue of hormones secreted by the gonads -- and in
particular, those male hormones secreted by the testes. There
may be a role for the female hormones in genetically XX
fetuses, but this is not clear at present. As a rough
approximation, further sexual differentiation appears to
reflect what the biologists call "nature's rule":
add something to
masculinize.
Without masculinization instigated by
the male gonadal hormones, the remaining gonadal tissue will
naturally differentiate into the female reproductive system.
So, in a sense, at this point the program for sexual
dimorphism passes from the genes to the hormones.
Simone de Beauvoir and The Second Sex
In 1949, Simone de Beauvoir (1908-1986), the
French writer and existentialist philosopher (and longtime
companion of Jean-Paul Sartre), published a book, The
Second Sex (1949; English edition 1953), which is
rightly regarded as instigating the feminist revolution of
the 1960s (these things take time: Mary Wollstonecraft
published A Vindication of the Rights of Woman in
1792, and the feminist revolution among middle-class women
in the United States didn't really begin until Betty
Friedan, who had once been a graduate student in psychology
at UC Berkeley, published The Feminine Mystique in
1963).
In her book, de Beauvoir begins with, and
details the various ways in which, throughout history and
across cultures, women have been relegated to subordinate
status. For example, in the Genesis myth, Eve was
created from one of Adam's ribs, as a kind of afterthought
by God. Closer to our own time, Freud held that women were
diminished men (that's why he thought they were obsessed
with penis envy). As de Beauvoir put it, so far as history
and culture is concerned, man was the essential "Subject",
an "Absolute"; woman the inessential "Other". In expressing
the fundamental doctrine of existentialism, Sartre had
written that "Existence precedes essence". Similarly (in
perhaps her most famous passage), de Beauvoir wrote "One is
not born, but rather becomes a woman". For de Beauvoir,
there is no "essence" to womanhood or femininity; the
details of gender role are imposed on the individual by the
culture, and can be accepted or declined by the individual
as a matter of free choice. Of all the many good books of
feminist theory and doctrine, de Beauvoir's remains perhaps
the most thorough and convincing, but in a sense she got the
title wrong. Biologically speaking, anyway, the female is
the "first" sex.
A Note of Caution: It turns out that
the only English edition of de Beauvoir's book is seriously
deficient, with many technical words and phrases simply
mistranslated, and large sections of the French original
simply cut out. The problem is that the editor who bought
the English rights thought that she was buying a sort of
French sex manual, and the person who took responsibility
for the translation was a zoologist whose knowledge of
philosophy was practically nonexistent and whose knowledge
of French dated from high school and college. You get the
gist, especially if you already know something about the
argument (or de Beauvoir, or existentialism), but if you
read the book very closely important parts of it don't
really make sense, which does a disservice to the quality of
de Beauvoir's thought and writing. See "Lost in Translation"
by Sarah Glazer, New York Times Book Review,
08/22/04.
The hormones secreted by
the testes have effects on other structures in the initially
undifferentiated gonadal tissue (again, there may be
independent effects of female hormones in genetically XX
individuals, but this is a controversial point in
endocrinology).
- In the third month of gestation, a Mullerian
inhibiting substance appears to stop the development
of the Mullerian duct system (I say "appears to" because
the MIS is at present known only by inference -- we know
this happens, but we don't exactly know what does it). At
the same time, fetal androgen promotes the
development of the Wolffian duct system into the male
internal reproductive system. In the absence of MIS and
androgen, the Mullerian ducts develop into the female
internal reproductive system.
- In the third and fourth months of gestation, we observe
more effects of fetal androgen. The genital
tubercle forms around the urethra into a penis rather than
a clitoris; and the labio-scrotal swelling fuses into a
scrotum rather than a vagina. Again, in the absence of
this dose of androgen, these structures will develop into
the clitoris and vagina of the female.
- In successive months, the vaginal canal will connect the
external and internal reproductive anatomy of the female.
- In the seventh month of gestation, the testes descend
from the abdomen into the scrotum of the male.
When everything goes as programmed, after
nine months of gestation a human baby is born with a set of
external genitalia that are recognizably male or female, and a
corresponding set of male or female internal reproductive
organs.
Anomalies of Gender Differentiation
But sometimes things don't run quite the
way they're programmed, and the child is born sexually
ambiguous -- such individuals are known technically as pseudohermaphrodites.
Chromosomal XX
Individuals. If a genetic female somehow experiences an
environment to which androgen has been added, she will be born
with female internal genitalia, but most likely an enlarged
clitoris and fused vaginal labia; rarely, such a girl will be
born with a normal penis and scrotum (of course, the scrotum
will be empty, because there are no testes to descend into
it). This occurs in two principal ways.
- In the female adrenogenital syndrome, there is a
natural failure of the adrenal glands to function
properly, resulting in the circulation of androgen to a
fetus that is genetically female. There are no effects on
the internal reproductive anatomy, but the external
genitalia are masculinized. These children receive
surgical correction of the external genitalia. At puberty
(because they have malfunctioning adrenal glands) they
also receive cortisone therapy to counter the adrenal
failure. As a result of this therapy, the girl develops a
characteristically feminine physique, menstruates, and can
conceive and bear children.
- The female adrenogenital syndrome is also known as congenital
adrenal hyperplasia. It is very rare, but
sometimes pregnant women whose fetuses are at risk for
FAS/CAH are prescribed dexamethasone, a steroid drug, to
prevent the condition. Although this drug is often
effective, such prescriptions are currently "off-label",
and not approved for this purpose by the Food and Drug
Administration.
- In progestin-induced pseudohermaphroditism, a
pregnant woman (with a personal or family history of
difficult pregnancy) receives synthetic hormones to
prevent miscarriage. In some cases, the hormone treatment
results in a masculinization of the external genitalia,
which is corrected surgically. Because there is no problem
with the endogenous hormones, there is no need for
cortisone therapy to feminize the physique or induce
menarche.
In both cases, the children are raised
as girls.
Chromosomal XY
Individuals. In a genetic male, the failure of the
Mullerian-inhibiting substance can leave the fetus with a set
of male external genitalia, but both the female and male
internal reproductive systems. Note, however, that such an
individual has only testes (the gonadal tissue becomes either
testes or ovaries); thus, he cannot menstruate or gestate. The
children are raised as boys.
- In the androgen-insensitivity syndrome, a
genetic defect causes the androgen which circulates
naturally to the male fetus to have no effect. The result
is that the child is born without male external genitalia
(except, perhaps, an enlarged clitoris). Following surgical
correction, including removal of the testes, the children
are raised as girls. At the time of puberty, natural
estrogen (which circulates to males as well as females,
but which is suppressed by androgen in hormonally normal
males) feminizes the physique. However, because these
girls do not possess the internal reproductive anatomy of
females (under genetic control, the inner medulla
differentiated into testes while the outer cortex
regressed, and the Mullerian inhibiting substance works
even if the androgen does not!), they have no ovaries.
Therefore, they will not menstruate, and will be
infertile.
Guevodoces. An interesting
syndrome, originally discovered in an isolated area of the
Dominican Republic (but also documented in an isolated village
in Puerto Rico), involves genetically male individuals
(chromosomal XY) who are born with a particular defect in
their androgen system known as 5-alpha-reductase
deficiency syndrome. Because they do not undergo
masculinization in utero, these children are born with
apparently female external genitalia; if the condition is
undiagnosed, they are raised as girls. At puberty, however,
the flow of natural testosterone induces masculinization: the
voice deepens, the child develops a typically masculine
muscular structure, breasts do not develop as expected and --
surprise! -- the child's scrotal tissue balloons, testes
descend, and what originally appeared to be a clitoris
enlarges into a functioning penis. Hence the popular name for
this condition -- guevodoces.
What's especially interesting about
this syndrome is that the children readily shift their gender
identities, and corresponding gender roles, from feminine to
masculine. This is because their culture is prepared for the
possibility of a spontaneous "sex change" from previous cases
known in the village -- it's as if they say "Oh, that happened
to Uncle Jose, too!". The boys' adolescent and adult behavior,
including sexual behavior, is not appreciably different
from that of "normal" males.
Before the arrival of modern medicine,
the condition went undiagnosed until adolescence. Now, the
condition is diagnosed at birth, either through chromosomal
testing or through palpation of the groin (which reveals the
undescended testes), and the children are identified and
raised as boys, from birth -- appearances to the contrary
notwithstanding.
Jeffrey Eugenides's novel Middlesex
(Farrar, Straus & Giroux, 2002; he also wrote The
Virgin Suicides) uses the fictional life of Cal (nee
Calliope) Stephanides, a Greek-American with "guevodoces
syndrome", as a metaphor for the identity crises of immigrant,
"hyphenated" ethnic Americans, as well as places "like Berlin,
like Korea, like all the other places in the world that were
no longer one thing or the other".
Klinefelter's Syndrome, 47 XXY.
In another condition, a chromosomal male has an extra X
chromosome, thus 47XXY rather than 46XY (this occurs in fewer
than 1/500 male births). Common consequences include
"feminized" physique, infertility, delayed motor and speech
development, and difficulties with reading and writing. As adults,
these individuals often gradually lose both sexual potency and
interest. Many of these symptoms can be reversed with hormone
replacement therapy, replacing the testosterone that is
missing naturally.
These anomalies of gender
differentiation sometimes raise the question of gender
identity: is a person male or female? And not just a question
about how the individual identifies him- or herself -- but
also issues of how s/he is identified by other people. Because
of previous gender-related controversies -- including the fact
that, in the 1936 Berlin Olympic Games, the Germans cajoled a
male athlete, Hermann Ratjen, into living as a woman for three
years before entering "her", renamed Dora, into the high jump
competition ("she" lost); and the practice, in certain
countries of Communist Eastern Europe, of doping female
athletes with testosterone and other steroids in order to
enhance their performance -- in the 1970s the International
Olympic Committee began testing female athletes to confirm
their "femaleness" by inspecting their chromosomal material
for the presence of a telltale second X chromosome. But Stella Walsh,
a Polish sprinter who also competed in the 1936 Olympic games,
apparently suffered from androgen-insensitivity syndrome --
although chromosomally male, she identified herself as a woman
and had lived as a woman all her life. Under current rules,
she would have been disqualified from competition. At the same
time, since 2004 the IOC's rules have allowed transsexual
women -- that is, chromosomally male individuals who identify
themselves as women and have undergone sex-reassignment
surgery and post-operative hormone-replacement treatment -- to
compete as women (though as of 2008, no openly transsexual
individuals have qualified for the competition). Like Stella
Walsh, these individuals would also have failed a chromosomal
test of gender [see "The XY Games" by Jennifer Finney Boylan, New
York Times, 08/03/08].
All of which raises the questions: What
is the proper criterion for being male or female?
Chromosomal sex? Body morphology? Gender identity?
And how many categories of gender are there, anyway?
The usual two? Or are there at least two more, to cover
conditions like androgen-insensitivity?
The Case of Caster Semenya...
The
issue of gender ambiguity and gender determination crops up
in athletics from time to time. Consider, for example, the
case of Caster Semenya, a South African athlete who won the
Gold Medal in
the women's 800-meter race at the World Track and Field
Championships, held in Berlin, August 2009. She beat the
previous world record by a full two seconds, after which
some other competitors, and their coaches, revived the
question, which had long been on many observers' minds, of
whether she was, in fact, female. The IAAF, the sport's
governing body, conducted an extensive series of
biological and psychological tests, and decided that she
could continue to compete as a woman. However, the
Federation did not release the actual results of the
testing -- an admirable (and ethical) protection of
Semenya's privacy rights, but a move that deprived her
competitors of the right to understand the ruling,
professionals of an opportunity to scrutinize the tests
themselves and their findings, and the public at large of
a "teachable moment" to learn about the complexities of
gender dimorphism.
Semenya took some time off from elite competition, but
returned to the track in August 2010, again winning her
event, though by less of a margin than in 2009. Still, the
questions persisted, and it probably would have been
better, in the long run, if the Federation had explained
its ruling in more detail. Diane Cummins, one of Semenya's
competitors in the 2010 meet, suggested that the issue
will not go away for a while:
"We have levels that we are not
allowed to test over, so even if she's a female, she's
on the very fringe of the normal female athlete
biological composition from what I understand in terms
of hormone testing. So from that perspective I think
most of us sort of just feel like literally we are
running against a man because what we know to be female
is a certain testosterone level. And if that isn't the
case, they need to change everything" [quoted in
"Semenya Returns, and so Do Questions", by Christopher
Clarey, New York Times, 08/23/2010].
Semenya
competed for South Africa in the 2012 Summer Olympic
Games, in London, carrying her country's flag in the
opening ceremony, and taking home a silver medal.
For a good article on the Semenya case, see "Either/Or:
Sports, Sex, and the Case of Caster Semenya" by Ariel
Levy, New Yorker, 11/30/2009.
See also "On the Basis of Testosterone" by Grace
Huckins, Scientific American, 02/2021.
Current
Olympic standards employ a cutoff of 10 nanomoles of
testosterone per liter of blood, but there is still some
overlap, with some elite female athletes falling within
normal limits for males (remember "The Rule of 66, 95, and 99"),
and vice-versa. In 2018, facing increasing
challenges from female athletes with excessively high
levels of natural testosterone (a condition called
hyperandrogenism), the International Association of
Athletics Federations issued a new standard for women
competing in certain track and field events where high
levels of testosterone (between 5 and 10 nmol/L) give women
a clear advantage compared to women with lower
levels. Such athletes must keep their testosterone
levels within the "normal" range of 0.12 to 1.79 nmol/L,
compared to the normal range for males of 7.7 to 29.4 nmol/L
(remember 95% confidence intervals?) -- employing hormone
therapy, essentially doping with estrogen (which itself
may not be healthy), if necessary. If they cannot
keep within these standards, they will be disqualified
from international competition, or be required to compete
against men, or to change to events that are not covered
by the new rule.
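To make the statistical aside concrete, the quoted "normal" ranges can be read as approximate 95% reference intervals, that is, roughly the mean plus or minus two standard deviations. The little Python sketch below is only a back-of-the-envelope calculation under that assumption, using the numbers quoted above; it is not the IAAF's own analysis.

# Back-of-the-envelope sketch: treat each quoted "normal" range as an
# approximate 95% interval (mean plus or minus 2 standard deviations).

def implied_mean_sd(lo, hi):
    """Infer an approximate mean and SD from a 95%-style reference range."""
    return (lo + hi) / 2, (hi - lo) / 4

female_mean, female_sd = implied_mean_sd(0.12, 1.79)  # nmol/L, from the text
male_mean, male_sd = implied_mean_sd(7.7, 29.4)       # nmol/L, from the text

threshold = 5.0  # lower bound of the 2018 "advantage" band, in nmol/L
print(f"Implied female mean ~{female_mean:.2f} nmol/L, SD ~{female_sd:.2f}")
print(f"Implied male mean ~{male_mean:.1f} nmol/L, SD ~{male_sd:.1f}")
print(f"{threshold} nmol/L is ~{(threshold - female_mean) / female_sd:.0f} SDs above the implied female mean,")
print(f"and ~{(male_mean - threshold) / male_sd:.1f} SDs below the implied male mean.")

On this reading, 5 nmol/L lies far outside the female reference range but just below the lower edge of the male one, which is why levels in the 5-10 nmol/L band are treated as an issue at all.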
Perhaps
a better standard is to have no standard at all, but
simply to distinguish between natural testosterone and the
artificial variety which might be used in doping (they can
be distinguished in the laboratory). Males, after all, aren't
disqualified for having high testosterone -- why should
females? Maybe you should only be disqualified for
doping. But isn't taking hormone therapy to suppress
one's natural levels of testosterone a form of
doping?
On
the other hand... there's the other hand. Which is
to say that this is one of those controversies that
distinguishes between sex as a biological category, and
gender identity and role as psychosocial categories.
Although the ovaries do release some small amount of
natural testosterone, pretty much the only way to get
really high levels of natural testosterone is to be
endowed with testes, and the only way to be endowed with
testes is to be genetically XY. And really high
levels of natural testosterone (i.e., in the typically
male range) have consequences for other "secondary" sex
characteristics, such as increased strength, muscle type
and mass, heart size, oxygen carrying capacity of the
blood, and muscle-to-fat ratio -- and a 10-12% performance
gap favoring males in track and field. For
this reason, Doriane Lambelet Coleman, a legal scholar who
herself was a world-class competitor in track and field,
has suggested that the IAAF is on the right track.
She writes ("Sex, Sport, and Why Track and Field's New
Rules on Intersex Athletes are Essential", New York
Times, 05/01/2018):
Advocates for intersex athletes like to say that sex
doesn't divide neatly. That may be true for gender
studies departments, but at least for competitive sports
purposes, they are simply wrong. Sex in this
context is easy to define and the lines are cleanly
drawn: You either have testes and testosterone in the
male range or you don't.... Pick your body part,
your geography, and your socioeconomic status and do
your comparative homework. Starting in puberty
there will always be boys who can beat the best girls
and men who can beat the best women. Because of
this, without a women's category based on sex, or at
least these sex-linked traits, girls and women would not
have the chance they have now to develop their athletic
talents and reap the many benefits of participating and
winning in sports and competition. Eric Vilain, a
geneticist who specializes in differences of sex
development, has been blunt about it: removing sex from
the eligibility rules would "be a disaster for women's
sport... a sad end to what feminists have wanted for so
long".
In
any event, in 2019 the Court of Arbitration for Sport, a
kind of Supreme Court for the athletic world, ruled that,
henceforth, women like Semenya, who have very high
testosterone levels, cannot compete in middle-distance
races, such as the ones she excels in, unless they take
medication to reduce their testosterone levels to the
"normal" range for women. Semenya herself sees this
as, essentially, a bill of attainder -- a ruling
specifically enacted to apply to her ("Caster Semenya,
Hero in South Africa, Fights Hormone Testing on a Global
Stage" by Karen Zraick, New York Times,
05/02/2019).
In
2020, in the run-up to the Tokyo Olympics (rescheduled to
2021 because of the Covid-19 pandemic), Human Rights
Watch, a non-governmental organization, issued a 120-page
report on athletes like Semenya with differences of
sexual development (DSDs), demanding that sex
testing of female track and field athletes be stopped (see
"Rights Group Demands End to Sex Testing of Female Track
Athletes" by Jere Longman, New York Times,
12/05/2020).
For Semenya's own, very moving (and compelling),
perspective on this, see "Running in a Body That's My
Own" by Caster Semenya, New York Times
10/22/2023.
...and Johnny Saelua...
An
athlete who has gotten much less attention, not least
because he plays for a team that perennially loses, is
Johnny Saelua, a soccer player from American Samoa. He's
what's known in Polynesian culture as a fa'afafine
-- a "third sex" of biological males who identify
themselves as female. Unlike Western cultures, however,
Polynesian culture readily accepts this
gender-related category. Still, there are some
difficulties: Saelua plays for the men's team
(which won a World Cup qualifying match for the first time in 2011).
As Saelua told the Sydney
Morning Herald,
"When I go out into the game, I put aside the fact that
I'm a girl, or a boy, or whatever, and just concentrate on
representing my country."
...and Bradley/Chelsea Manning...
In
August 2013, Army
Pfc. Bradley Manning was convicted on charges of leaking vast
quantities of US government secrets and sentenced to 35 years
in prison. Manning immediately announced that he had
always felt that he was "female", and that his transgender
status had caused him to experience considerable amounts of
emotional stress during two deployments in the war in
Iraq. Henceforth, he asked to be addressed as "Chelsea"
Manning. He also asked to receive hormone therapy to
help him achieve a more feminine physique. The Army has
no provisions for transgender soldiers, and neither military
nor civilian prisons typically provide hormone therapy for
transgender prisoners ("'I am a Female,' Manning announces,
Asking Army for Hormone therapy" by Emmarie Huetteman,
New
York Times, 08/23/2013).
The case also raised issues of journalistic practice: given
that Private Manning considers himself to be a woman in a
man's body, what pronoun should journalists use? The public
editor for the New York Times argued that,
henceforth, the Times should refer to Pvt. Manning
with "she" ("'He'? 'She'? News Media Are Encouraged to
Change" by Christine Haughney, New York Times,
08/23/2013).
...and Andrej/Andreja Pejic...
For
several years, beginning in 2010, the Australian fashion
model Andrej Pejic cultivated a distinctly androgynous look
modeling menswear for several European designers, including
Marc Jacobs and Jean Paul Gaultier -- and, for that matter,
modeling bridal wear for Gaultier as well. What Pejic
did not tell people was that he was transgender, taking
synthetic hormones to suppress his masculine
development. In 2014, Andrej completed
sex-reassignment surgery, and now seeks work as a women's
fashion model under the name Andreja. For more, see
"Will the Fashion World Accept Andreja Pejic as a Woman?" by
Matthew Schneier, New York Times Style Section,
09/07/2014.
...and Bruce/Caitlyn Jenner...
In 1976, Bruce Jenner
went on the Wheaties box after winning the
gold medal in the Olympic decathlon. He
subsequently married into the Kardashian family, of
reality-TV fame ("Keeping Up With the Kardashians"), having
two children with his wife, Kris. In 2015, after years
of rumors about his changing, more femininized appearance,
Jenner announced that he was transgender and renamed himself
Caitlyn. Apparently, he had experimented with
cross-dressing and hormone replacement therapy before
marrying Kardashian; they separated in 2013 and divorced in
2015, citing "irreconcilable differences", before Jenner's
announcement; still, he remains part of the extended
Kardashian clan. The widespread publicity given to
Jenner's announcement, including an interview on ABC's 20/20
program with Diane Sawyer, a two-part special of Keeping
Up With the Kardashians, a fashion spread in Vanity
Fair magazine, and an eight-part TV documentary
planned for July 2015, and the overwhelmingly positive
reaction to Caitlyn's announcement, marks the growing
public acceptance of transgender individuals.
...and Jonas and Wyatt/Nicole...
In
many ways, interest in sex-reassignment and transgender
individuals began with the famous John/Joan case presented
by John Money. You remember, John lost his penis
during a botched circumcision, and the decision was made to
raise him as a girl, while his identical twin brother
Brian was raised as a boy -- a decision that did not work out as planned.
The John/Joan case has an interesting parallel, with --
apparently -- a much better outcome: the case of Nicole
Maines, recounted in Becoming Nicole: The Transformation
of an American Family by Amy Ellis Nutt (2015).
Whereas
John lost his penis accidentally, and never wanted to become
Joan, Nicole was clear from a very early age: at the tender
age of 3, Wyatt told his father, "I hate my penis".
Later, though still a child, Wyatt hit his identical twin
brother Jonas: when asked why, he said "Because he gets to
be who he is and I don't". Later, with support from
her parents (and her brother), Nicole sued the state of
Maine for sex discrimination, because she was not allowed to
use a women's rest room at school, and used the award to pay
for gender-reassignment surgery.
...and John...
One
of the most interesting cases of gender fluidity appeared in
the "Ask Amy" personal advice column, written by Amy
Dickinson and syndicated by the Los Angeles Times,
published in the East Bay Times (03/09/2013).
The writer's gay son had married a transsexual man who had
not had surgical correction of his (female) genitalia, and
had become pregnant. This case makes it clear that we
have moved way beyond the Manichean dichotomies of male and
female, masculine and feminine, heterosexual and homosexual.
...and Pauli...
Anna
Pauline ("Pauli") Murray (1910-1985) was a pioneer in both
the civil rights and women's movements. Born in North
Carolina, she graduated from Hunter College (Jim Crow laws
kept her out of UNC), and as a law student at Howard
(Harvard Law rejected her, because at the time it did not
admit women, though she did obtain a master's degree in law
from UC Berkeley) devised the legal strategy that Thurgood
Marshall and other top civil-rights attorneys employed in Brown
v. Board of Education (1954) to overturn the Supreme Court's notorious
decision in Plessy v. Ferguson (1896), which permitted
racial segregation in public facilities. A friend of
Langston Hughes and Eleanor Roosevelt, Murray was one of the
founders of the National Organization for Women. She
was the first African-American to receive the doctoral
degree in jurisprudence from Yale Law School -- and, later,
the first African-American woman to be ordained a priest in
the Episcopal Church. A residential college at Yale is
now named for her, and her childhood home in Durham, North
Carolina, is listed on the National Register of Historic
Places. And it may well be that Murray was also what
we would now call transgender. She was sexually
attracted to women -- which, at the time, was enough for her
to be labelled as "lesbian". But given the state of
medical and psychological knowledge at the time, that was
pretty much the only option. But Murray saw herself as
a blend of genders. "Maybe two got fused into one with
parts of each sex, male head and brain (?), female-ish body,
mixed emotional characteristics", she wrote of herself, "one
of nature's experiments; a girl who should have been a boy",
"very natural falling in love .e., as a man] [with the
female sex". As if to confirm the diagnosis, Murray
herself sought hormone treatments; and when she underwent an
appendectomy, she instructed the surgeon to look for signs
of male internal genitalia. For more information, see
The Firebrand and the First Lady by Patricia
Bell-Scott (2016) and Jane Crow: The Life of Pauli Murray
by Rosalind Rosenberg (2017; reviewed in "Saint
Pauli" by Kathryn Schulz, New Yorker, 04/17/2017,
from which the quotations are drawn).
...and Lisa Davis's Daughter (a Cautionary
Tale)...
It's important not to confuse gender role with gender
identity. These days (I'm writing this in 2017),
with all the media attention on transgender individuals
(e.g., Caitlyn Jenner), and the implications of the
increasing visibility of transgender individuals for public
policy (e.g., bathroom laws), it's sometimes tempting to
assume that someone who does not conform to traditional
gender roles must be either homosexual or transgender.
But this isn't true, not by a long shot. Most boys
and men have a male gender identity and adopt a masculine
gender role, just as most girls and women have a female
gender identity and adopt a feminine gender role. Lisa
Selin Davis has written about the assumption, made by many people, including
teachers and physicians, as well as longtime family
acquaintances, that her unnamed, 7-year-old, admittedly
"tomboyish" daughter is transgender ("My Daughter is Not
Transgender. She's a Tomboy", New York Times,
04/19/2017). One of her teachers -- this is a
7-year-old -- put it this way: "I just wanted to
check. Your child wants to be called a boy,
right? Or is she a boy who wants to be called a
girl? Which is it again?". Davis points out that
her daughter is not gender nonconforming, but
rather is gender role nonconforming. She's
what used to be called, and still is called, a
"tomboy". She's very clear about being a girl.
She's just not a girl the way many other girls are
girls. As Davis puts it (and the same goes for boys):
"The kids get it. But the grown-ups do
not. While celebrating the diversity of sexual and
gender identities, we also need to celebrate tomboys and
other girls who fall outside the narrow confines of gender
roles. Don't tell them they're not girls."
...and Alicia Roth Weigel's Surgery
(Another One)...
Earlier in these lectures, I discussed cases of children born
with ambiguous genitalia, who underwent gender-reassignment
surgery at a very young age, and in some cases were not
informed of this fact until much later in childhood. A
first-person account of this experience was provided by Alicia
Roth Weigel, a genetic male (XY) with complete androgen
insensitivity syndrome ("Intersex, and Erased Again",
New
York Times, 10/24/2018).
Imagine knowing that every aspect of your
physiology, from your height to your cup size, was chosen
off a menu — not by nature but by doctors and family
members.
From the second I was
born, decisions were made by medical professionals
about which of two gender categories my body should
fit into. For me, surgery to remove my gonads as an
infant was the first stop on the track to female — but
the train didn’t stop there. My family was consulted
about how 5 feet 8 inches seemed like an optimal
height, and informed on how hormone levels and
sequences could be measured to achieve just that. The
ideal breast size for my frame was also discussed; I
can still remember the male doctor nodding
approvingly. I was also given a dilator before even
hitting my teens, so my vagina would be ready for
penetrative sex.
“Disconcerting” would be
one — euphemistic — way to put it.
I was born intersex,
with XY chromosomes but Complete Androgen
Insensitivity. If you’re not sure what that means, I
don’t blame you. By some estimates, almost
2 percent of the world’s population is intersex
like me but is still living in the shadows because of
societal stigma and shame. Stigma knows no borders,
and neither did my body, apparently: I didn’t respond
to androgen hormones in the womb, and thus stopped
developing at a certain point — a point between what
we consider to be the binary sexes, hence “intersex.”
I was ultimately born with female anatomy on the
outside but with internal testes instead of ovaries.
As a result, doctors, alongside my parents, decided
when I was still a baby that I would be raised as a
girl. This decision has shaped the course of my entire
life but was made without my consent.
The gonadectomy surgery
performed on my body was internal but opened the
floodgates for a sequence of physical alterations that
would affect my appearance and identity. Any
subsequent decisions made about my body that involved
me, at an age of informed consent, were constrained by
this first choice: to render me traditionally female.
Regardless of my liminal genetic code — or rather,
regarding it as a threat to societal norms — the train
to my idealized gender presentation had already left
the station. Why were all these decisions fast-tracked
onto my body? Not because they were medically
necessary — I would have been perfectly healthy just
living and growing as little old me — but because they
were vital to “normalize” me.
The desire to force-fit
people into societally conditioned boxes has led to
sterilizing children and enacting medically
unnecessary surgeries on them. These surgeries are
irreversible, lead to physical and emotional scarring,
and their subjects are un-consenting. They are, to put
it bluntly, the coercive application of Western
cultural ideals to everyday human bodies....
I’ve experienced
firsthand the consequences of the gender binary in
what’s often a non-binary world. It isn’t good for
anyone. Certainly not trans people, but also not for a
population that’s larger than many think — and that
has spent years trying to convince people that our
bodies are good enough as they are.
...and Pamela Paul's Cautions...
The point of this is not that
gender-reassignment, including hormones and surgery,
is a bad thing. It may very well be the right
thing in a particular case. The point is that
gender and sexuality are confusing enough for many
people, and there's no one-size-fits-all
solution.
Personal Change and Social Change
Although transgender individuals are
relatively rare, their increasing public visibility has led
to increasing social acceptance. What was once a
movement for "gay rights" became a movement for "gay and
lesbian rights", which then added bisexuality, and then
transgender -- yielding the acronym LGBT. And these
changes have also been reflected in the law and social
policy -- especially here in California.
- Beginning in 2014, in what is thought to be the first
such law in the United States (the "School Success and
Opportunity Act", so named because it is intended to make
school environments more comfortable for
gender-nonconforming students), the State of California
required all public schools, from kindergarten to grade
12, to allow transgender students to use freely
whichever restrooms or locker rooms they choose, according
to their gender identity -- whether formally designated
for boys or girls, men or women. The law also
allowed transgender students to join any athletic team --
whether boys' football or girls' field hockey.
- In September 2014, the University of California
announced a new policy to create single-stall
gender-neutral bathrooms on each of its 10 campuses.
The University also began to allow transgender students to
use a name (like Andreja) other than the one (like Andrej)
listed in their official campus records.
- Mills College, a private, women's college in Oakland
(the oldest such college in the West), which declined to
go coeducational when many other women's schools (such as
Vassar) did, announced new policies on transgender
students.
- Anyone who identifies as a woman, regardless of gender
assignment at birth, is eligible for admission.
- Applicants who received a female gender assignment
at birth, but who now consider themselves neither male
nor female, may also apply.
- A person who was born female but is now legally
considered male may not apply.
- Enrolled students who change their gender identity
from female to male may continue to attend the school.
In 2015, the status of transgender individuals reached
the level of public policy, when the US Department of
Education (DoE) ordered that schools "generally must treat
transgender students consistent with their gender
identity". In 2016, is strengthened the policy,
indicating that schools which discriminated against
transgender students, including in matters such as bathrooms
and locker rooms, could lose their federal subsidies.
Subsequently a federal judge in Texas issued an injunction
preventing the Obama administration from enforcing these
guidelines. Some other states followed suit, setting
up the conditions for bringing transgender rights before the
US Supreme Court.
In
2016, the Court agreed to hear just such a case: Gloucester
County v. G.G. (Docket #16-273). The case involved
a high-school student, Gavin Grimm, who was designated female at
birth but identifies as male. Originally, Grimm's high
school allowed him to use boys' bathrooms, but the local
school board overruled this decision. Interestingly,
the girls in the high school objected to this decision, on
the grounds that they identified Grimm as male, and didn't
want any boys in their bathrooms and locker rooms.
Grimm complained that the requirement that he use a private
bathroom was humiliating, and brought suit against the
school board. At issue is a 1975 regulation adopted
under Title IX, which prohibits discrimination on the basis
of sex (the sex provision itself was a "poison pill" originally
slipped into the Civil Rights Act of 1964 by opponents of racial
desegregation, in an attempt to prevent passage of the bill).
The regulation clarified the prohibition on sex discrimination
by allowing institutions to provide sex-specific separate bath
and dressing rooms. In a split decision, the US Court of
Appeals, agreeing that the regulation was ambiguous with
respect to gender identity (not a big issue in 1975!), sided
with Grimm and ordered the school to allow him to use the
boys' bathroom. The school board
appealed, and the Supreme Court agreed to hear the
case. In its decision, the Supreme Court vacated the
Appeals Court ruling, sending the case back to the lower
courts for reconsideration in light of the DoE's new 2016
policy guidance.
As another
example, consider Amy Schneider, who in 2021 and 2022 won 40
games and more than $1.3 million on Jeopardy!, the
long-running classic television game show -- the second-longest
winning streak in show history, and the longest winning
streak ever by a woman. After she had won a few games,
media reports revealed that Schneider was a trans woman,
having made the male-to-female transition in 2017. But
over 40 episodes, nothing was ever said about her status on
the program itself. For Jeopardy!, she was
just another contestant. “I am a trans woman, and I’m
proud of that fact,” she posted on Twitter, “but I’m a lot
of other things too!”
For more, see "The
Radical Normalcy of a Trans "Jeopardy!" Winner" by
Jennifer Finley Boylan, New York Times, from which
the quote is taken (011/07/2022); also ";Jeopardy!' Hasn't
Had a Player Like Amy Schneider" by Shane O'Neill, New
York Times, 01/27/22; images taken from other NYTimes
articles about Schneider's run.
So, rather than gender-nonconforming people having to
accommodate to cultural norms, cultural norms are assimilating
new gender identities.
For a journalistic account of how one all-women's
college, Wellesley, is adjusting to transgender students
(and vice-versa), see "Sisterhood is Complicated" by Ruth
Padawer, New York Times Magazine, 10/19/2014.
For a cross-cultural perspective on these issues, see Gender
Revolution, a special issue of National
Geographic magazine, January 2017.
Or maybe not. In 2018, it became known that the Trump
Administration was considering a new DoE policy that would
adopt a new, narrow definition of gender in strictly
biological terms ("'Transgender' Could Be Defined Out of
Existence Under Trump Administration" by Erica L. Green, Katie
Benner, & Robert Pear,
New York Times,
10/22/2018). According to the draft policy, "Sex means a
person's status as male or female based on immutable
biological traits identifiable by or before birth. The
sex listed on a person's birth certificate, as originally
issued, shall constitute definitive proof of a person's sex
unless rebutted by reliable genetic evidence."
The Times article notes that "The new
definition would essentially eradicate federal recognition
of the estimated 1.4 million Americans who have opted to
recognize themselves -- surgically or otherwise -- as a
gender other than the one they were born into.... The
move would be the most significant of a series of
maneuvers, large and small, to exclude the population from
civil rights protections.... For the last year, the
Department of Health and Human Services has privately argued
that the term 'sex' was never meant to include gender
identity or even homosexuality, and that the lack of clarity
allowed the Obama administration to wrongfully extend civil
rights protections to people who should not have them."
Writing in response to the Trump Administration's decision,
Alicia Roth Weigel (see above) wrote:
I woke up Sunday morning to the news that the
Trump administration is planning changes to federal civil
rights laws that would define sex “as either male or female,
unchangeable, and determined by the genitals a person is
born with,” and that any confusion would be clarified
through genetic testing. Most people have interpreted this
effort as a blow to transgender rights — and it is. But amid
all this, the fate of intersex people seems to have been
forgotten.
Where would such a change
leave me? My body would throw this Trumpian test for a
loop — my naturally occurring genitalia don’t match the
“correct” genetic code in this forced-binary paradigm that
seeks to override biology.
Here’s another curveball:
What Mr. Trump’s memo defines as “unchangeable” is
anything but. I know this because the process of realizing
a gender via hormones and surgeries, analogous to the
process the administration is seeking to marginalize and
discourage among trans people, is one imposed on intersex
children all the time — but in our case, it’s done before
we can understand or agree. It’s not just the government
that is forcing an unnatural gender binary; medicine has
been doing so for ages.
Meanwhile, in 2019 the
Vatican's Congregation for Catholic Education, the office
which regulates Catholic education, issued a guidance
document entitled "Male and Female He Created Them: Towards
a Path of Dialogue on the Question of Gender Theory in
Education" which was less dialogical than doctrinaire in
nature. It argued that gender flexibility,
intersexuality, and transgenderism ignored essential
biological differences between the sexes, led to ambiguous
notions of masculinity and femininity, confused young
people, and threatened traditional families.
With all the controversy, it
sometimes seems as if transgenderism (or gender fluidity) is
a new cultural phenomenon, even some kind of fad, or perhaps
a symptom of an ultra-"liberal" society run amok. And it's
true that the term transgender
dates back only to the mid-1960s. But in fact,
transgenderism and gender fluidity have been with us for a
long time, in both Western and non-Western cultures -- as
the historian Kit Heyam makes clear in Before We Were
Trans: A New History of Gender (2023). Heyam
shows that women who lived as men, and men who lived as
women, were known in Africa as long ago as the 17th century
in what is now Angola, and in the 20th century in what is
now Nigeria. In 17th-century Virginia, a person named
"Thomas or Thomasine Hall" was declared by the courts to be
both "a man and a woman". In the 18th and 19th
centuries, women served as men ("passing") in the British
army and navy. Reviewing the book in The Nation,
Stephanie Burt notes that "Trans people seem more visible
now than ever, and there's a larger-than-ever target on our
back" ("Beyond the Binary", 06/24/2023).
Hormonal Effects on Mind and Behavior
Pseudohermaphroditic
children are of interest because they are genetic females who
experience the effects of male hormones, and genetic males who
do not experience these effects. Therefore, at least in
principle, these children provide an opportunity to
observe the effects of prenatal hormones on behavior --
observations that might provide evidence of a hypothetical
"masculinization of the brain" underlying the differences
between masculine and feminine gender roles. And, in fact,
there is some evidence for hormonal effects on behavior. Thus,
for example, girls with the female adrenogenital syndrome and
progestin-induced pseudohermaphroditism generally appear more
vigorous and aggressive -- in terms of gender role
stereotypes, more "tomboyish" -- than control girls; by
comparison, the genetic males with the androgen-insensitivity
syndrome, who are raised as girls, are behaviorally
indistinguishable from control girls. This evidence is
controversial, however, and shouting matches regularly erupt
when it is presented and discussed at scientific meetings.
Sometimes, pregnant women who suffer
from severe diabetes will be treated with exogenous estrogen
and progesterone to prevent miscarriage. These female hormones
do suppress the action of the androgen that would normally
circulate to genetically and hormonally male fetuses, but
appear to have no effect on the external or internal
reproductive anatomy. These children are, of course, raised as
boys. They are relevant to this discussion only because some
early literature indicated that they were somewhat less
aggressive and competitive than other boys. However, it should
also be noted that the mothers of these children are also
sicker than the mothers of control boys, and this by itself
may inhibit vigorous play -- an effect which has sometimes
been attributed to the lack of "masculinization of the brain".
When we add controls for race, age, social class, and
especially maternal illness during pregnancy, however, the
difference disappears. Therefore, the behavioral differences
appear to be caused by environmental, rather than biological
factors.
In
any event, this process, sometimes called the phyletic
imprimatur, leaves the developing fetus and newborn
child with a set of external genitalia, and an internal
reproductive system, that are more or less recognizably male
or female. At this point, the influence of biological factors
stops, temporarily, and the individual's biographical history
takes over as the parents and others structure an environment
corresponding to their conceptions of how boys and girls
should be raised -- an important part of gender-role
socialization referred to as the social imprimatur.
In other words, the program for sexual
differentiation (or gender dimorphism) passes from the
hormones to the (social) environment. However, the program
will be passed back to the hormones later, at the time of
adolescence (for both sexes), and again at the time of
menopause (for females). Actually, from birth on, it is
probably best to think of the program being passed back and
forth between the hormones and the environment. This is what
is meant by the interaction of nature and nurture.
I've Seen Both Sides Now....
In classical mythology, Hermaphroditus, the
son of Hermes and Aphrodite (get it?), shared a body with
Salmacis, a nymph.
Another mythological character, Tiresias, was
a blind soothsayer, the most famous prophet in ancient
Greece. His prophecies play a central role in the story of
Oedipus, who unknowingly killed his father and married his
mother (the story of Tiresias is told in Ovid's Metamorphoses,
and he appears in many other Greek tragedies, Homer's Odyssey,
and T.S. Eliot's poem, The Waste Land). As the legend
goes, Tiresias was out for a walk when he came upon two huge
snakes who were copulating. When he struck the female with
his staff, he was instantly turned into a woman. Seven years
later, he saw the same two serpents copulating again; this
time he struck the male, and was changed back into a man.
Because he had experienced both sides of love, Tiresias was
called upon to settle an argument between Hera and Zeus, as
to who enjoyed lovemaking more. Tiresias agreed with Zeus
that women enjoyed sex more than men, whereupon he was
blinded by Hera. To compensate Tiresias for his loss, Zeus
gave him the gift of prophecy and a long life. The gift was
apparently heritable: one of Tiresias' daughters became an
oracle at Delphi (or maybe this was an effect of the
environment).
In another version of the legend, Tiresias
was blinded by Athena when he accidentally saw her
bathing; as compensation, she gave him the gift of
prophecy. In another, Tiresias was blinded not by Athena
but according to the laws of Cronos, as punishment for
beholding an immortal without his or her consent.
- An excellent fictional treatment of intersexuality is
Middlesex by Jeffrey Eugenides, which traces the
life of Cal (also known as Calliope), who is
hermaphroditic due to 5-alpha-reductase deficiency. The
book won the Pulitzer Prize for Fiction in 2003.
- Another novelistic treatment of bisexuality and
intersexuality is John Irving's In One Person
(2012).
- Yet another, this one aimed expressly at young adults,
is None of the Above (2015) by Ilene Wong, a
physician who was inspired by a patient with
androgen-insensitivity syndrome. Kristin Lattimer,
the protagonist, is a genetic male with AIS, raised as a
girl with the aid of hormone treatments, who begins to
discover the truth about herself after a particularly
unfortunate incident in the back seat of her boyfriend's
car.
Tiresias is a myth, and Eugenides'
Calliope/Cal is a fictional character, and there are no true
hermaphrodites, but some cases of male pseudohermaphroditism
come close, at least in some respects:
- In one case described by John Money and Anke Ehrhardt
(Man and Woman, Boy and Girl, 1972), a child was
actually given the choice as to whether s/he would be a
boy or a girl.
- Some "transgendered" individuals undergo sex-change
operations after they have lived for some time as
adults. Famous cases include Christine (nee
George) Jorgensen, James/Jan Morris, the travel writer,
and Deirdre (nee Donald) McCloskey, the
economist.
- Jorgensen (1926-1989) was the first case of
gender-reassignment surgery to come to wide public
attention in the United States. After serving
in the US Army in World War II, Jorgensen underwent
sex-reassignment surgery, and hormone treatment, in
Denmark in 1952. She became engaged in 1959, but
was denied a marriage license because she was listed
as male on her birth certificate. See her book,
Christine Jorgensen: A Personal Autobiography
(1967).
- Morris (1926-2020), a wonderful travel writer (check
out books on Venice, Trieste, and Hong Kong, among
other cities -- plus my personal favorite, Last
Letters from Hav, a "travelogue" about a
fictional city) was the first to report (from 22,000
feet!) the 1953 conquest of Mt. Everest by Edmund
Hillary and Tenzing Norgay -- one of the great
journalistic scoops of the 20th century. See her
series of memoirs, Conundrum (1974), Pleasures
of a Tangled Life (1989), In My Mind's Eye
(2019), and Thinking Again (2021).
Reviewing this last book, Hermione Lee writes ("The
Wanderer", New York Review of Books,
02/25/2021):
Jan Morris's remarkable life was made up of
many journeys....[At the time James Morris joined the
Hillary expedition, having never climbed a mountain
before,] [t]his young man had been aware from the age
of about three of being in "the wrong body".
From 1964 Morris began the transition that culminated
in the publication of Conundrum in 1974 under
the name Jan Morris. Apart from the Everest
scoop [just in time for the coronation of Queen
Elizabeth II], the other thing Morris was famous for
was the journey to becoming Jan Morris, and Conundrum
gives a brave, tender, candid, and pragmatic account
of that difficult process, long before gender
dysphoria was a well-known condition, and long before
transitioning was the much-discussed public and
political issue that it is now. The painful
struggle she had in the first thirty years of her life
to understand and deal with her condition, when she
"was dark with indecision and anxiety," often involved
-- as for many others wrestling with this "conundrum"
-- profound depression and thoughts of suicide: "For
if there had been no hope of ending my life as a
woman, I would certainly have ended it for myself as a
man".
- McCloskey (1942-), an economic historian and an
expert on cliometrics, the application of
statistics to the study of history (Clio was the Greek
muse of history); after her transition, she also made
important contributions to feminist economics.
See Crossing: A Memoir (1999).
In the famous "John/Joan" case, also treated
by Money, "John", an infant boy (real given name: Bruce),
lost his penis through an accident during circumcision. He
was subsequently "re-assigned" to be raised as a girl,
renamed "Joan" (actually "Brenda"), castrated, and his
external genitalia surgically corrected. In their book,
Money and Ehrhardt describe this case as a successful
instance of gender-reassignment, but they lacked long-term
followup. The case was all the more interesting because the
child was one of a pair of identical twins; "her" identical
twin was uninjured during the circumcision, and was raised
as a boy. The case was also highly controversial, because
its early apparent success implied that masculinity and
femininity were learned, rather than based on biology --
thus contravening the doctrine (favored by both
psychoanalysis and evolutionary psychology) that "biology is
destiny", and that gender identity and role are encoded in
the genes (Milton Diamond was an especially vigorous
critic). But was it successful? Apparently, Brenda was never
comfortable as a girl, and as an adolescent chose to live as
a boy, changed his name to "David", and later underwent what
might be called "re-correction" surgery; he subsequently
lived as a man, married, and went public about his own case
(he appeared on "Oprah" in 2000). When he committed suicide,
at age 38, his family implied that he had been a victim of a
"botched medical experiment"; they also noted that he had
been depressed by the suicide, two years previously, of his
twin brother Brian, who suffered from schizophrenia, as well
as the recent loss of his job and separation from his wife
("David Reimer, 38, Subject of the John/Joan Case",New
York Times, 05/12/04; see also "Being Brenda" by
Oliver Burkeman & Gary Younge,the Guardian,
05/12/04).
The sad fate of David Reimer is commonly held
up as a demonstration that gender identity and role are
encoded in the genes, and can't be changed by environmental
manipulation. However, in considering the outcome of the
Reimer case, a few points should be borne in mind:
- Reimer's identical twin brother Brian suffered from
schizophrenia; because schizophrenia is to a large
extent heritable, David likely inherited some disposition to
schizophrenia as well. Although neither his parents nor
his doctors could have known this at the time (both
children were less than 2 years old), Bruce was probably
not the best candidate for involuntary sexual
reassignment surgery.
- Reimer was about 1 year old when he underwent
sex-reassignment surgery, but close to 2 years old when
the parents began to treat her as if she were a girl --
by, for example, making her wear dresses. However,
children start noticing their own and others' gender by
about 2 years of age. Brenda's initial resistance to
dresses might have been a product of her initial gender
identity as male. Or, more simply, she may have
wanted to be dressed in the same way as her brother
Brian. Little girls often express an interest in the more
functional clothing, and the greater freedom in play, given
to their brothers.
- Despite Brenda's parents' valiant efforts to treat
her as a girl, they may have been unsure about the
success of the "experiment", and thus given her mixed
messages. Along these lines, Money's own post-surgical
treatment of Brenda, which included psychotherapy
sessions intended to reinforce her new gender
assignment, may have backfired by drawing attention to
the fact that she was not, in fact, a "normal" girl.
The point of all of this is not to say that
Money was entirely right after all, but only that the
critics might not be entirely right, either, that gender
identity and role are encoded in the genes. The case of
David Reimer is more complex, on both sides, than it would
initially appear to be.
The John/Joan case is related in depth by
John Colapinto in an article, "The True Story of John/Joan" (Rolling
Stone, 11/11/97), and a book, As Nature Made Him:
The Boy Who Was Raised as a Girl (HarperCollins,
2000). In general, Colapinto views Reimer as a victim of
radical environmentalism, if not (early) radical feminism.
For insight into the life and mind of an
adolescent female-to-male (FTM) transgender person, see
"About a Boy: Transgender Surgery at Sixteen" by Margaret
Talbot (New Yorker, 03/18/2013). Written at a time when
the US Supreme Court was hearing two cases regarding
same-sex marriage, Talbot writes that "Transgenderism has
replaced homosexuality as the newest civil-rights
frontier...".
Joanne Meyerowitz has provided an authoritative
overview of transsexualism in How Sex Changed: A History
of Transsexuality in the United States (Harvard,
2002).
Michael Bailey has argued that transsexualism
is not, as commonly portrayed, a case of "a woman trapped in
a man's body" (or the reverse). Bailey argues that
transsexualism comes in two forms: homosexual transsexuals,
gay men who are so effeminate, in terms of gender-role
behavior, that they want to take on a female gender identity
as well -- and the body that goes with it (Bailey also
argues, from his extensive survey data, that even gay men
who are not transsexuals tend to be extraordinarily
effeminate); and autogynephilic transsexuals,
heterosexual men who are sexually stimulated by the thought,
or the act, of a male-to-female sex change. Bailey presents
his arguments and data in the provocatively titled book, The
Man Who Would Be Queen: The Science of Gender-Bending and
Transsexualism (Joseph Henry Press, 2003).
Why do I spend so much time on questions
of gender identity and role, when other intro courses
spend most of their time on cognitive development?
Because, as a personality psychologist, I believe that
gender identity and role are central issues for self and
social interaction.
Postnatal Hormonal Influences
The
effects of the sex hormones do not stop with the
differentiation of the internal and external genitalia. They
come back on the scene at least two more times, at puberty and
in old age, each time interacting with the social environment.
Puberty. At puberty, the
program for gender dimorphism passes back to the hormones, as
indicated by such milestone events as menarche (onset of
menstruation) in girls and nocturnal emissions ("wet dreams")
in boys. The most obvious post-natal effects of the sex
hormones are the development of such secondary sex
characteristics as the deepening of the voice in males
and the development of breasts in females, and the
masculinization or feminization of overall body shape. These
physical changes are instigated by sex hormones, testosterone
and estrogen, secreted by the testes and ovaries,
respectively.
Interestingly, there is now evidence
that puberty may begin long before adolescence. In girls, for
example, the pituitary hormones associated with puberty, along
with secretions from the ovaries, are known to begin at about
age 9, and breast development often begins between 10 and 11
years of age. Martha McClintock and Gilbert Herdt, researchers
at the University of Chicago, have noted that children
experience a spurt in physical growth around age 6,
accompanied by the appearance in the skin of oil-producing
sebaceous glands similar to those associated with pimples in
adolescence (Current Directions in Psychological Science,
12/96). They also reported that signs of sexual attraction,
heterosexual or homosexual, can be observed in children as
young as 9 or 10 years of age; interestingly, this is also
about the time that girl-boy teasing begins. This attraction
should not be confused with sexual desire, much less sexual
activity; these kick in later, as the individual approaches
and enters adolescence. Rather, at this early age there
appears to be only a more or less clear "leaning" toward one
sex or the other. McClintock and Herdt suggest that these
changes are related to the secretion by the adrenal glands of
a form of androgen known as dehydroepiandrosterone
(DHEA), which begins at about age 6 and increases to a
critical level at about age 10 -- a point which they call adrenarche,
by analogy to menarche. DHEA reaches adult levels at about age
18 before diminishing over the rest of the life course.
In any event, the hormonal effects of
puberty interact with the social imprimatur as parents and
others impose culturally bound standards for adolescent
behavior. In some cultures, for example, sexual
experimentation is permitted, even encouraged; in others, sex
is strictly prohibited until marriage. In some cultures, boys
and girls are permitted to date, and even engage in light
sexual activity ("petting"); in other cultures, boys and girls
can meet only under conditions of strict supervision; in still
other cultures, marriages are arranged by the parents, and the
engaged couple may have only minimal contact with each other
before their wedding day. In all cultures, parents and others
scrutinize adolescents for signs of "normality", and the
adolescents' gender identities, gender roles, and erotic
orientations are strengthened and challenged.
Middle and Old Age. Later in
life, there are further dramatic changes in hormone levels.
These are most obvious in women, particularly the sudden drop
in estrogen levels, and cessation of menstruation, known as menopause.
Recent evidence suggests that there may be a male version of
menopause as well, known as partial androgen deficiency in
aging men (PADAM), or andropause. Although men
produce sperm throughout their adult lives, they experience a
gradual decline in testosterone levels as they age (about 0.5%
per year after age 30), which in turn may be associated with
fatigue, loss of muscle tone and bone density, and "decreased
libido" (a term derived from Freud's term for the sexual
drive). Note, however, that the diagnostic criteria for
andropause overlap greatly with those for depression. In the
absence of laboratory tests showing abnormally low levels of
testosterone, it is not at all clear that andropause is a
legitimate diagnosis, or that hormone replacement therapy (HRT) is a legitimate treatment.
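To get a feel for what a gradual decline of roughly 0.5% per year amounts to over the decades, here is a back-of-the-envelope calculation. It is only a sketch: the 0.5% figure is simply the rate cited above, applied here as a compound annual decline for illustration.

```python
# Illustrative arithmetic only: the ~0.5%-per-year decline cited above,
# compounded from age 30 onward.
for age in (40, 50, 60, 70, 80):
    remaining = 0.995 ** (age - 30)
    print(f"age {age}: ~{remaining:.0%} of the age-30 testosterone level")
```

Even by age 80, the cumulative drop is modest and gradual -- nothing like the abrupt hormonal change of menopause, which is part of why the "andropause" label is contested.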
Here again, the hormones interact with
the social environment. For example, the cessation of
menstruation, and the loss of child-bearing capacity, may
challenge women's gender identities and roles. Similarly,
age-related erectile difficulties may challenge those of men.
Natural Condition or Manufactured Illness?
As with menopause, it has been suggested that
andropause be treated with hormone replacement therapy, and
several drugs, such as Androgel (a product of Unimed
Pharmaceuticals), have been marketed for that purpose. In
fact, between 1997 and 2001 prescriptions for testosterone
almost doubled ("Male Hormone Therapy Popular But Untested"
by Gina Kolata, New York Times, 08/19/02). As with
menopause, this suggestion has most frequently been made by
the pharmaceutical industry -- giving rise to the suggestion
that, like menopause and its pharmaceutical treatment,
andropause and male HRT are diseases and treatments that
have been "manufactured" by Big Pharma for economic gain
(see "Hormones for Men" by Jerome Groopman,New Yorker,
07/29/02).In 2002, the results from a major study of
HRT in healthy women suggested that the treatment did more
harm than good, increasing the risk of heart problems, and
leading many physicians to discontinue the treatment and
many of their patients to seek alternatives. Similarly,
because testosterone can promote prostate cancer
and increase the risk of heart attacks and strokes, a
long-term study of HRT in men was also discontinued in 2002.
In women, menopause is a real condition, but
it is something that occurs naturally, in the course of
aging, and it is not at all clear that it should be treated as
if it were a disease that could be cured with the right drug
-- especially when, as in the case of HRT, the risks are so
great. This would constitute a "medicalization" of normal
aging. The situation is even less clear, and the danger of
inappropriate medicalization even greater, in the case of
andropause, in that levels of testosterone do not drop as
quickly, or with such dramatic consequences (such as hot
flashes), as estrogen levels do in women. HRT is a valid
treatment for hypogonadism, which can result from a genetic
condition such as Klinefelter's syndrome (chromosomal XXY),
from chemotherapy or radiation therapy for cancer, and
from certain disorders of the pituitary gland. But it is not
at all clear that HRT should be considered a pharmaceutical
"fountain of youth" -- for either women or men.
In summary, the "program" for gender
dimorphism of identity and role begins with the sex
chromosomes (XY or XX), which differentiate the gonadal tissue
into testes or ovaries; and continues with hormones secreted
by the testes, which differentiate the internal and external
reproductive anatomy. This phyletic imprimatur, a
process of gender dimorphism that is common to all humans, and
is under direct biological control, endows the fetus with
characteristically male or female reproductive anatomy.
At birth the program for gender
dimorphism passes from the genes and the hormones to the
social environment, as the parents classify the newborn child
as male or female, and begin the process of raising him or her
according to cultural concepts of masculinity and femininity.
Early in childhood, the child recognizes his or her own
gender, identifies him- or herself as male or female, and
begins to model his or her attitudes, beliefs, and behavior on
others of his or her "own kind". These social learning
processes, which are under environmental control and vary from
one culture to another, are known as the social imprimatur.
Everyone undergoes gender dimorphism of identity and role, but
the outcome differs from one individual to the next, and from
one culture to the next, depending on the details of the
social imprimatur.
The social imprimatur
encompasses many different elements:
- The individual's biographical history.
- Gender-role socialization, in which parents and
others impose on the child culturally specific concepts of
masculinity and femininity.
- The development of the child's self-concept as
male or female, and consequent identification of others of
his or her "kind".
- Social learning processes, including:
- the direct experience of rewards and
punishments for gender-appropriate and -inappropriate
attitudes and behaviors;
- vicarious learning from the example of others;
- learning by precept, or deliberate instruction
in how to think and behave.
The point of all
this is that gender dimorphism extends far beyond the obvious
matters pertaining to reproductive anatomy. There are at least
three other aspects of gender dimorphism that we have to
consider.
- Gender identity, or the individual's private
experience of being male or female (or, in some cases,
transgender).
- Gender role, or the public expression of
masculine or feminine characteristics, as deemed
appropriate in one's culture.
- Erotic orientation, or one's sexual attraction
towards other people, whether heterosexual, homosexual,
bisexual, or asexual.
Gender Identity
Gender role socialization is not simply
imposed on the child from outside: the child is also an active
agent of his or her own gender socialization, as he or she
acquires an identity as a little boy or little girl.
Beginning at
about age 2, children notice (as it were) their own genitalia,
and identify themselves as boys or girls. (The cartoon on the
left captures the situation beautifully: In the original, the
caption reads "There is a difference!". I first
encountered this cartoon in the 1960s in a "QSL" card mailed
by an amateur radio operator, whose identity I have since
forgotten. Apparently, it's now available as a tattoo.)
This self-identification
has a number of consequences:
- The child divides people into two categories, according
to sex, and identifies him- or herself as the same as
some, and different than others.
- The child attaches a positive affective valence to his
or her own gender.
- The child begins to actively model him- or herself on
others who are similarly endowed.
Children's recognition of gender is
perfected by about age 3. By this time, they strongly prefer
objects labeled as "for" their own gender.
Differences in gender-role
behavior are not reliably observed before age 2, but they are
well established by the time the child goes to school:
- Children prefer playmates of the same gender.
- Girls prefer to play in smaller groups than boys.
- Boys engage in more roughhousing than girls.
- Boys playing in groups are more likely to fight than
girls.
- Girls are more likely than boys to turn to adults as
resources.
Actually, children learn
both gender roles, and adopt the role that is appropriate to
their gender identity as male or female. Even so, there is
some asymmetry in their preferences:
- Boys actively avoid activities that have been
stereotyped as "for girls".
- Girls tend to show less stringent gender-role
differentiation.
- Similarly, fathers enforce stricter gender boundaries
than mothers do.
The active participation of the child
in his or her own gender-role socialization illustrates the
principle that the child is an active agent of his or her own
development. Once the child has categorized him- or herself as
male or female, he or she begins the active process of
learning and performing the roles deemed appropriate for his
or her gender. In this way, gender role socialization
illustrates the complex interactions that play out between
nature and nurture, and between the person and the
environment.
Gender Role
In western society, gender roles have
traditionally been divided into two major dimensions.
- The traditional masculine gender role can be
summarized in terms of agency and instrumentality.
Men, and even boys, are expected to be active and
independent, to exercise leadership, to be competitive,
and to be oriented toward achievement outside the family.
- The traditional feminine gender role emphasizes
communality and expressiveness. Women, and
even girls, are expected to be sensitive and empathic, to
be concerned for others' welfare, to seek cooperation and
interpersonal harmony, and to be oriented toward
achievement inside the family.
Link
to an interview with Janet Taylor Spence.
Bacha Posh in Afghanistan...
The
distinction between gender identity and gender role is
dramatically illustrated by the common practice, in
Afghanistan, in which pre-adolescent girls masquerade as
boys. In Afghanistan, there is considerable social pressure
on families to produce sons, and pity and even contempt
directed toward families that have none. Sons are valued
more highly than daughters, on sheer economic grounds; and
in Afghanistan's tribal culture, only sons can inherit the
family's wealth or continue the family name. So when a son
doesn't happen, parents will sometimes take a daughter
(sometimes the youngest, who may be seen as a kind of "last
try"), dress her as a boy, and present her a such to
outsiders. (There is even a superstition that doing so will
increase the family's chance of actually producing a male
child.) The children are generally referred to as bacha
posh, a Dari phrase meaning "dressed as a boy".
Inside the home, unless there are visitors,
the bacha posh are dressed and treated as girls. Outside the
home, or when there are visitors, they are dressed and
treated as boys. They will play with other boys, go to
school with boys, and even work as boys outside the home.
Nobody is fooled, exactly, but it does relieve some of the
social pressure. At puberty, most bacha posh return to their
culturally sanctioned roles as girls -- partly because the
fact that they're not boys becomes pretty obvious, and
partly because their parents fear the consequences of
pubescent girls being around pubescent boys (and men). But,
apparently, most of them never lose their female gender
identities. How the boys they play with (and the men who
teach them in school) deal with this is an interesting
question.
While most bacha posh revert to a
feminine gender role, some -- having gotten a sense of the
privileges accorded to boys and men -- resist the
traditional feminine gender role to which they have been
re-assigned. Shukria Siddiqui, a former bacha posh
who is now employed as an anesthesiology nurse, had a
difficult time making the adjustment. "She had no idea how
to act in the world of women.... For years, she was unable
to socialize with other women and uncomfortable even
greeting them.
"I had to learn how to sit with women, how
to talk, how to behave", she said.... When you change
back, it's like you are born again, and you have to learn
everything from the beginning. You get a whole new life.
Again."
The bacha posh in the illustration is
Mehran Rafaat, age 6, pictured with her older twin sisters
Benefsha and Beheshta (when Mehran's mother produced three
girls in a row, the negative attitude from her mother-in-law
led her to raise Mehran as a bacha posh). At the
time the photo was taken, their mother, Azita Rafaat, was an
elected member of Afghanistan's parliament.
For her part, Mehran seems to have taken to the masculine
gender role quite well. Azita said of her daughter, Mehran,
"My daughter adopted all the boys' traits very soon.... You've
seen her -- the attitude, the talking -- she has nothing of
a girl in her".
Like the guevodoces, the bacha
posh constitute a natural, living laboratory for the
study of gender issues. It will be interesting to follow
them, and see if they constitute a force for social change
within Afghanistan. Once girls have had a taste of what it
is like to be boys, and once boys have had a taste of girls
who can behave like boys, what will happen to traditional
gender roles in Afghan culture? Interestingly, Mehran's
mother, Azita, herself functioned as a kind of bacha
posh when she was a girl, helping out in her father's
shop, and dressing like a boy to run errands. With the
overthrow of the Taliban, she became an elected member of
Afghanistan's parliament, with a salary of $2,000 per month
(Mehran's father, by contrast, was unemployed, and
functioned as a "house-husband"). Coincidence? Maybe not.
(Photo by Adam Ferguson, from "Where Boys
Are Prized, Girls Live the Part" by Jenny Nordberg, New
York Times, 09/21/2010, which was the source for
this sidebar and is a terrific article, highly
recommended).
...and Wakashu in (Edo) Japan
Another example of gender fluidity was
a sort of "third gender" in Japan during the "Edo period",
beginning in 1603 and lasting until the "opening" of Japan
to Western influences in 1868. These wakashu
were the "beautiful youths", adolescent males, appreciated
as representing the ideal of beauty, for whom it was
permissible to engage in sexual relations with both men and
women. The wakashu were not necessarily homosexual
males. This cultural practice was
discontinued in the late 19th century, as Japan adopted
Western notions of gender and sexuality. Homosexuality
(though not same-sex marriage) is legal in Japan, which has
a thriving gay subculture, but even homosexual men are
expected to marry women and produce offspring. Some
androgynous Japanese men identify as "genderless danshi".
For more, see "A
Third Gender: Beautiful Youths in Japanese Prints", an
exhibition at The Japan Society in New York, 2017, reviewed
in "The 'Indescribable Fragrance' of Youths" by Ian Buruma,
New York Review of Books, 05/11/2017.
In many ways, it would seem that
masculinity and femininity are polar opposites -- that they
anchor opposite ends of a single dimension of gender role.
And, indeed, that's how these personality characteristics are
measured in traditional questionnaire measures of personality,
such as the Mf (Masculinity-Femininity) scale of the
Minnesota Multiphasic Personality Inventory (MMPI) or the Fe
(Femininity) scale of the California Psychological Inventory
(CPI).
But, more recently, some feminist
psychologists have argued convincingly for an alternative
conception in which masculinity and femininity are construed
as independent dimensions of personality (recall that
a similar debate took place with respect to the relation
between positive and negative emotionality). Such a scheme
yields four basic categories of gender role:
- Masculine sex-typed individuals score high on
traditional measures of phenotypically "masculine" agency
and instrumentality, and low on traditional measures of
phenotypically "feminine" communality and expressiveness.
- Feminine sex-typed individuals score high on
phenotypically feminine traits, and low on phenotypically
masculine traits.
- Androgynous individuals score high on both
masculinity and femininity.
- Undifferentiated individuals score low on both
scales.
- Perhaps this is because they simply have not developed
a particular stance with respect to gender role.
- Alternatively, it is possible that these individuals,
or at least some of them, have transcended traditional
gender roles entirely.
Androgyny
was a new scientific concept in the 1970s, but it was
foreshadowed by one of the earliest documents of feminist
theory: Woman in the Nineteenth Century (1845) by
Margaret Fuller, the lone woman in the "Transcendentalist
Club" that included Ralph Waldo Emerson and Henry David
Thoreau. As Judith Thurman writes (reviewing a number of
biographies of Fuller):
Margaret was a strapping girl who preferred boys'
strenuous activities to girls' decorous ones.... [Her
platonic but] amorous friendships informed Fuller's
prescient notion of gender as a bell curve -- the idea that
there are manly women, womanly men, and same-sex
attractions, all of which would be considered perfectly
natural in an enlightened society.... It was an "accursed
lot", Fuller concluded, to be burdened with "a man's
ambition" and "a woman's heart", though the ambition, she
wrote elsewhere, was "absolutely needed to keep the heart
from breaking". ("An Unfinished Woman", by Judith Thurman, New
Yorker , 04/01/2013.)
The terms "tomboy" and its rough male equivalent, "sissy",
have long been used to label gender-nonconforming girls and
boys (though, in practice, "sissy" is generally more
pejorative). For a historical survey of "tomboyism" (with
occasional forays into sissyhood), see Tomboy: The
Surprising History and Future of Girls Who Dare to be
Different (2020) by Lisa Selin Davis (reviewed by Lisa
Damour in "'Tomboy' Looks at Gender Roles, and Role-Playing,
Through the Ages", New York Times Book Review,
11/08/2020).
How Big Are Gender Differences,
Really?
Men and women, and boys and girls,
differ from each other psychologically in obvious and subtle
ways. In addition to obvious differences in masculinity
(agency) and femininity (communality), it's also been claimed
that there are gender differences in cognitive ability: males
are generally held to be superior in mathematical and spatial
ability, for example, and females to be superior in verbal
ability. All of these differences fit the cultural
stereotypes (at least so far as Western culture is concerned),
but how big are these differences, really? Are
they big enough to justify obvious differences in social
outcomes, such as the fact that there are many more men than
women teaching math and science at the college level?
Oddly enough, for all the studies -- and
there are umpteen thousands of them -- documenting gender
differences in performance on this or that task, it was only
relatively recently that anyone viewed this literature from
the standpoint of effect size -- which, as you'll
remember from the lectures on Methods and Statistics,
measures the strength of an effect -- in this case, the
difference between two groups classified by gender. The
first comprehensive review of gender differences was published
by Eleanor Maccoby and Carol Nagy Jacklin in 1974, but these
researchers did not have the statistical tools of
meta-analysis available to them at the time, and so they could
present only a verbal, impressionistic summary of the
literature. Since then, however, the technique of
meta-analysis has been developed, allowing researchers to
combine the results of a large number of studies, and
summarize their findings in a single quantitative score
representing effect size -- that is, taking all of the studies
together, the magnitude of the difference between males and
females on various measures. These meta-analyses of the
literature confirm that many of these gender differences exist
-- but they also show how remarkably small most of them really
are.
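For readers who want to see the arithmetic, here is a minimal sketch of how a fixed-effect meta-analysis combines standardized mean differences (Cohen's d). The individual study values and sample sizes below are hypothetical, chosen only to echo the magnitudes discussed in this section; the weighting uses the standard inverse-variance approximation for d.

```python
import math

def cohens_d(mean_1, mean_2, sd_1, sd_2, n_1, n_2):
    """Standardized mean difference (Cohen's d) between two groups."""
    pooled_sd = math.sqrt(((n_1 - 1) * sd_1**2 + (n_2 - 1) * sd_2**2) / (n_1 + n_2 - 2))
    return (mean_1 - mean_2) / pooled_sd

def d_variance(d, n_1, n_2):
    """Approximate large-sample sampling variance of d."""
    return (n_1 + n_2) / (n_1 * n_2) + d**2 / (2 * (n_1 + n_2))

def fixed_effect_meta(studies):
    """Combine (d, n_1, n_2) tuples into one inverse-variance-weighted d."""
    weights = [1.0 / d_variance(d, n_1, n_2) for d, n_1, n_2 in studies]
    return sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)

# Hypothetical per-study effect sizes and sample sizes (illustration only).
studies = [(0.43, 120, 130), (0.11, 200, 210), (0.05, 500, 480)]
print(round(fixed_effect_meta(studies), 2))  # a single summary d across studies
```

The point of the weighting is simply that larger, more precise studies count for more in the combined estimate; the result is the kind of single summary d reported in the meta-analyses below.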
Among the first researchers to notice
this, and to make a big deal out of it, was Janet Shibley
Hyde, a professor at the University of Wisconsin (whose PhD
was from UC Berkeley).
- Linn & Peterson (1984) conducted a meta-analysis of
172 studies of gender differences in spatial ability, and
found that there were, overall, moderate differences
favoring men, with an overall effect size of about d
= .43.
- But Hyde & Linn (1988) conducted a meta-analysis of
165 studies of gender differences in verbal ability, and
found that the effect size (d) was a "small" 0.11
favoring females.
- And Hyde et al. (1990) conducted another meta-analysis
of mathematics ability, and found an even smaller effect
-- a truly "trivial" effect -- favoring males, d =
.05.
- Hyde and Plant (1995), responding to Eagly (1995), found
that 60% of meta-analyses of gender differences yielded
only "very small" and "small" effect sizes, compared to
35% of meta-analyses of various psychological,
educational, and behavioral interventions; and only 13% of
gender studies yielded "large" or "very large" effects,
compared to 26% in the other topic areas.
OK, so there are differences, on
average, in spatial, verbal, and mathematical abilities
between males and females. The economist Lawrence
Summers, who was Secretary of the Treasury during the Clinton
Administration and later President of Harvard University (and
author of a paper famously entitled "There Are Idiots"),
pointed to this last difference, especially, in explaining why
there were so few women among the math and science faculty and
graduate students at Harvard -- implying that women just
didn't have the quantitative chops to succeed at the highest
level in these fields (in this he was supported by Steven
Pinker, a distinguished Harvard psychologist; in the resulting
brouhaha, Summers was forced to resign from the presidency,
though he remains a professor in the Economics
Department). But Summers left out two important points.
- Although there were more men than women on the faculty
of math and science departments at Harvard, there were
also more men than women on the faculty in the English
Department! So whatever accounts for the gender
imbalance, it has to be more than raw ability. Maybe
there's some gender discrimination as well (you think?).
- Although there are gender differences on average,
they're pretty small. Even allowing for the gender
difference, there are plenty of women available with
strong math abilities -- enough to occupy a fair share of
faculty slots.
Hyde's papers set
off an avalanche of meta-analyses of gender differences in
various abilities and traits. In 2005, Hyde summarized
the results of these analyses with her gender similarities
hypothesis, "that males and females are similar on most,
but not all, psychological variables. That is, men and women,
as well as boys and girls, are more alike than they are
different" (p. 581). Surveying the 46 published
meta-analyses available to her at the time, she found a mean
(unsigned) difference of d = .21 between males and
females. The figure at left depicts what a d of
0.21 looks like, in terms of the normal distribution.
Fully 78% of the psychological gender differences uncovered in
the literature were, in terms of effect size, "very small" or
"small" in magnitude; only 9% counted as "large" or "very
large". As Hyde pointed out, these differences hardly
justify the title (and argument) of John Gray's 1992
best-selling book, Men Are From Mars, Women Are From Venus!
(see also Hyde, 2007).
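To see what differences of this size mean in distributional terms, the sketch below (assuming the usual equal-variance normal model) computes two standard translations of d: the proportion of overlap between the two distributions, and the "common-language" probability that a randomly chosen member of the higher-scoring group outscores a randomly chosen member of the other group. The d values plugged in are those discussed in this section.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def overlap_coefficient(d):
    """Overlap of two unit-variance normal distributions whose means differ by d."""
    return 2.0 * norm_cdf(-abs(d) / 2.0)

def prob_superiority(d):
    """Probability that a random member of the higher group outscores one from the lower group."""
    return norm_cdf(d / math.sqrt(2.0))

for d in (0.21, 0.43, 0.73):
    print(f"d = {d}: overlap = {overlap_coefficient(d):.0%}, "
          f"P(higher-group member scores higher) = {prob_superiority(d):.0%}")
```

Run as written, the sketch shows that at d = 0.21 the two distributions overlap by roughly 92%, and the "favored" group member wins only about 56% of random pairings -- barely better than a coin flip. That is the picture the figure referred to above is meant to convey.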
And to put the icing on the cake, Zell
and his colleagues (2015) reviewed a total of 106
meta-analyses, and confirmed Hyde's essential findings: an
unweighted average d score of 0.21, right in the
middle of the range of "small" effects; fully 85% of these
effects classified as "very small" or "small", and fewer than
3% classified as "large" or "very large".
- The largest effect size was observed in -- wait
for it! -- "masculine vs. feminine traits", with an
average d score of 0.73 -- right in the middle of
the "large" range.
- Actually, no other effect sizes classified as "large",
and none classified as "very large".
- The gender difference for aggression, favoring (if
that's the proper word) males, yielded an average d
of 0.45, classifying as a "medium" effect.
Zell et al. conclude, "We utilized data
from over 20,000 individual studies and over 12 million
participants to reevaluate the gender similarities hypothesis
and found that its core proposition receives strong support"
(p. 18).
So, males and females are "far more
similar than different" (Zell et al., p. 18) -- except on a
few variables, the most salient being gender role: masculinity
and femininity. So now let's ask where these differences
come from.
The debate over gender differences
just won't go away. In 2017, James Damore, a software
engineer at Google, took a page from Lawrence Summers's book
and argued that calls for increased representation of women
in Silicon Valley ignored scientific evidence of gender
differences in cognitive abilities relevant to software
engineering. His
manifesto quickly went viral. Damore was
subsequently fired on grounds of incompatibility with
Google's culture, but the editors of The Economist
argued (08/12/2017) that Larry Page, the co-founder of
Google and the chief executive officer of Alphabet, Google's
parent company, should have written a "ringing, detailed
rebuttal" to Damore's manifesto instead. Page didn't
do that, but The
Economist did it for him (08/19/2017),
essentially echoing the points made here.
Heredity and Environment in Gender Role
A number of twin studies have been
conducted to determine the various sources of individual
differences in gender role. Unfortunately, these studies have
construed masculinity and femininity as polar opposites, not
independent dimensions of gender role, but their results are
still interesting.
In a pioneering study, Irving
Gottesman et al. (1965, 1966)
administered the masculinity-femininity scales of the MMPI and
CPI to a sample of adolescent twins. Robert Dworkin and his
colleagues repeated this testing with the same sample some 10
years later, when the subjects were adults. Although these
investigators did not calculate components of variance, the
fact that the correlations for MZ twins were consistently
higher than those for DZ twins gives prima facie
evidence for a genetic contribution to individual differences
in gender role. This isn't terribly surprising. But the
relatively low magnitude of the MZ correlations suggests that
the environment also plays a role in shaping this aspect of
personality. In fact, if you apply the formulas discussed
earlier, it's clear that the nonshared environment is
by far the most powerful determinant of gender role. So much
for the easy equation of biological sex (which, of course, is
almost completely determined by the genes) and psychosocial
gender.
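The "formulas discussed earlier" are, presumably, the classic Falconer approximations for decomposing twin correlations into variance components: heritability is roughly twice the difference between the MZ and DZ correlations, the shared environment is what remains of the MZ correlation after heritability is removed, and the nonshared environment is everything the MZ correlation fails to capture. Here is a minimal sketch; the correlations plugged in are hypothetical, chosen only to be of the rough magnitude reported for masculinity-femininity scales, not Gottesman's or Dworkin's actual values.

```python
def falconer_decomposition(r_mz, r_dz):
    """Classic Falconer approximations for variance components from twin correlations."""
    h2 = 2.0 * (r_mz - r_dz)   # heritability (genes)
    c2 = 2.0 * r_dz - r_mz     # shared (family) environment
    e2 = 1.0 - r_mz            # nonshared environment (plus measurement error)
    return h2, c2, e2

# Hypothetical correlations of the rough magnitude seen for M-F scales (illustration only).
h2, c2, e2 = falconer_decomposition(r_mz=0.45, r_dz=0.25)
print(f"genes ~ {h2:.2f}, shared environment ~ {c2:.2f}, nonshared environment ~ {e2:.2f}")
```

With numbers like these, the nonshared environment accounts for more of the variance than genes and shared environment combined, which is the pattern the text describes.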
A
more recent study by Loehlin et al., with a larger, more
representative sample of subjects, came to much the same
conclusions (Loehlin et al. tested both Americans and
Europeans, but for purposes of comparison I show only the
American results here). MZ twins were more similar than DZ
twins, in terms of masculinity-femininity, but the nonshared
environment was again a much more powerful determinant than
either the genes or the shared environment.
So, it seems that the
biological and social imprimaturs interact to produce the
individual's gender identity and gender role, but (setting
aside the issue of the masculinization of the brain) the chief
effects of the genes and hormones are anatomical and
physiological, not psychological. They endow the developing
fetus with reproductive anatomy that is more or less
recognizably male or female, and that is just about it.
- At birth, the physical appearance of the child's
genitalia literally structures the environment. The child
is identified as a little girl or a little boy, and raised
accordingly. In the process, the social environment
organizes itself so as to bring up a masculine boy or a
feminine girl. In a classic example of the evocation mode
of person-by-situation interaction, the appearance of the
child's genitalia activates gender-role socialization processes by which
the environment constrains and supports the child's
development of the appropriate gender role.
- A lot of this socialization is imposed on the child from
outside forces:
- Based on the external genitalia, parents and others
(e.g., older siblings) perceive the child as a boy or
girl and raise him or her in accordance with cultural
concepts of masculinity and femininity.
- Parents and others engage in differential modeling of
gender roles.
- Differential socialization continues outside the home,
especially in the hands of peers (and their parents),
teachers, and other authority figures. In the last
several decades, television and other media have become
increasingly important to gender-role socialization.
In these and other ways the environment
constrains and supports the development of "appropriate"
gender roles.
Evidence with respect to gender comes
from classic studies of child-rearing practices (summarized by
Maccoby and Jacklin, The Psychology of Sex Differences,
1974).
- Furnishing of Rooms: Even before age 6, boys'
rooms have a wider variety of furnishings in them. Boys'
furnishings tend to be directed away from the home (e.g.,
sports, cars, animals, the military). Girls' furnishings
tend to be directed toward the home (e.g., dolls,
dollhouses, toy kitchens).
- Household Chores: Children are asked to help with
those tasks that are performed by the parent of the same
sex.
- Differential Rewards and Punishments: Girls
receive smiles, praise, and attention for dancing,
dressing up, playing with dolls, asking for help, and
following their parents; they receive criticism for
running, jumping, and climbing. The opposite trend holds
for boys, who receive praise for building with blocks, and
criticism for playing with dolls and asking for help.
- Differential Modeling by Parents and Other
Authorities: This is especially the case for
fathers, and especially for boys. Fathers make stronger
discriminations between the sexes: They are more concerned
about gender-role socialization, and more likely to issue
differential rewards and punishments for gender-typed
behavior. Siblings, peers, and teachers are also important
in this process. Recently, television has become central
to gender-role socialization, as characters on TV present
additional models for gender-role socialization.
- Differential Socialization Outside the Home: The
people and institutions whom children encounter outside
the home also support their development of "appropriate"
gender roles.
Link
to an interview with Eleanor Maccoby.
In the wake of the women's liberation
movement that began in the late 1960s, the social environment
has become somewhat less rigidly structured according to
gender. Many people would like to think that gender-role
socialization has been loosened in these more "enlightened"
times. But while traditional concepts of masculinity and
femininity may have been loosened, they have not been
abolished.
As an
example, consider these toddlers' "training pants" bought in
2000 -- more than 30 years after the feminist revolution swept
America. The girls' version, in pink, portrays Minnie Mouse
singing songs with Daisy Duck; the boys' version, in blue,
portrays Mickey Mouse driving a car and flying a plane with
Donald Duck. These differences have nothing to do with the
physical differences between the sexes (which might well make
structural differences in training pants desirable), and
nothing to do with behavioral differences either. They are a
simple, and not too subtle, reminder of the social differences
-- the differences in gender role -- between boys and girls.
A 2012 doctoral dissertation by
Elizabeth Sweet, a sociology graduate student at UC Davis,
surveyed advertisements for toys in the Sears catalog over the
20th century. She summarized her findings as follows
("Guys and Dolls No More?", New York Times, 12/23/2012):
Gender has always played a role in
the world of toys. What's surprising is that over the last
generation, the gender segregation and stereotyping of toys
have grown to unprecedented levels. We've made great
strides toward gender equity over the past 50 years, but the
world of toys looks a lot more like 1952 than 2012.
Gender was remarkably absent from the toy ads at the turn of
the 20th century but played a much more prominent role in
toy marketing during the pre- and post-World War II
years. However, by the early 1970s, the split between
"boys' toys" and "girls' toys" seemed to be
eroding.... I found that in 1975, very few toys were
explicitly marketed according to gender, and nearly 70%
showed no markings of gender whatsoever. In the 1970s,
toy ads often defied gender stereotypes by showing girls
building and playing airplane captain, and boys cooking in
the kitchen. But by 1995, the gendered advertising of
toys had crept back to mid-century levels, and it's even
more extreme today. In fact, finding a toy that is not
marketed either explicitly or subtly (through the use of
color, for example) by gender has become incredibly
difficult... For example, last year [2011] the Lego
Group, after two decades of marketing almost exclusively to
boys, introduced the new "Friends" line for girls....
Critics pointed out that the girls' sets are more about
beauty, domesticity and nurturing than building --
undermining the creative, constructive value that parents
and children alike place in the toys.
A 2014 "Big Data" analysis of anonymous
Google searches revealed persisting gender differences in
parents' concerns and expectations ("Google, Tell Me. Is
My Son a Genius?" by Seth Stephens-Davidowitz, New York
Times, 01/19/2014).
- Parents were about 2-1/2 times more likely to ask "Is my
son gifted?" than 'Is my daughter gifted?".
- There were similar disparities with other terms
related to intellectual ability.
- Parents were about 1-1/2 times more likely to ask
whether their daughter is beautiful, and 3 times more
likely to ask if their daughter was ugly.
- There were similar disparities with other terms
related to physical appearance.
Other modes of the person-by-situation
interaction require more than the mere presence and appearance
of the person. In these modes, the person must do
something:
- either overtly, in terms of publicly observable
behavior;
- or covertly, in terms of privately experienced thought.
The Social Construction of Gender Role
Simone de Beauvoir was onto something.
The environmental forces shaping children's gender roles are
not by any means subtle. Consider, for example, the
different dolls -- because that's what they are -- offered
for girls and boys to play with.
- Girls get Barbie.
- Boys get GI Joe.
Barbie in particular raises concerns because her physical
proportions may give girls (and, for that matter, boys) an
idealized body image that is impossible to achieve (even if it
were desirable).
But the effects
of Barbie on gender identity and role are not restricted to
physical features of the body image. They also extend to
mental and behavioral aspects of the gender role. Consider,
for example, the controversy that arose in 1992 over
"Teen Talk" Barbie, who was famous for saying "Math
class is tough!". A talking Barbie like this plays directly into
gender-role stereotypes about sex differences in
mathematical ability, and may discourage girls from taking
advanced courses in mathematics.
Although certain aspects of masculinity and
femininity may seem "natural", many of them are socially
constructed.
Consider, for example, how little boys and
girls are dressed. Only a century ago, it was very common
for boys to be clothed in dresses until they were six or
seven years of age, at which time they would also get their
first haircut. Up until that time, a casual observer might
be forgiven for mistaking them for girls.
For example, here is a photograph of
Ernest Hemingway, the 20th-century American author,
winner of the Nobel Prize for Literature, who might
be considered a paragon of masculinity.
And here is a photograph of Hemingway
as a child. Some psychoanalytically inclined authors
have suggested that Hemingway's rather aggressive
adult masculinity -- he seems never to have seen a
wild animal that he didn't shoot -- emerged as a
kind of reaction formation to the fact that
he was treated "like a girl" as a child. But the
point is that all little boys were treated
in the same way.
More generally, it is now commonplace, almost
a cliche, for newborn boys to be swaddled in blue, and
newborn girls in pink. This somehow seems "natural". But
it's not, and the proof of this is that, until well into the
20th century, the colors were reversed. For most of history,
male and female infants were both dressed in white. In 1927,
an article in Time magazine actually recommended
pink for boys and blue for girls. Only later did the color
preferences reverse. Later, under the influence of the
feminist revolution of the 1960s and 1970s, the clothing
industry stopped promoting gender-specific colors, but the
rule of "blue for boys, pink for girls" began to make a
comeback in the 1980s. For more details, see "When Did Girls
Start Wearing Pink?" by Jeanne Maglaty,Smithsonian,
April 2011.
Growing Up Male and Female
For authoritative reviews of the literature
on gender-role socialization, see:
- John Money & Anke Ehrhardt, Man and Woman, Boy and
Girl (Johns Hopkins University Press, 1972). Much of
my treatment of gender dimorphism is drawn from this book.
- Rebecca Jordan-Young, Brain Storm (Harvard, 2009).
- Donald Pfaff, Man & Woman: An Inside Story
(Oxford, 2010).
- Eleanor E. Maccoby & Carol Nagy Jacklin, The
Psychology of Sex Differences (Stanford, 1974).
- Eleanor E. Maccoby, The Two Sexes: Growing Up Apart,
Coming Together (Harvard, 1998).
- Carol Tavris, The Mismeasure of Woman (Simon &
Schuster, 1992).
- Jo B. Paoletti, Pink and Blue: Telling the Girls From
the Boys in America (2011).
- Alice H. Eagly & Wendy Wood, "The Nature-Nurture
Debates: 25 Years of Challenges in Understanding the
Psychology of Gender" [with commentary] (Perspectives
on Psychological Science, 2013).
Erotic Orientation
The last aspect of gender
dimorphism is erotic or sexual orientation --
whom one is attracted to as a sexual partner. Again, the
simpleminded story is that genetic males identify themselves
as boys, grow up to become masculine men, and are sexually
attracted to women; and genetic females identify themselves as
girls, grow up to become feminine women, and are sexually
attracted to men. But everything we've discussed about
gender dimorphism so far has proved to be more complicated
than that. Genetic males don't always grow up with the
corresponding reproductive anatomy; they don't always identify
themselves as little boys; and they don't always become
stereotypically masculine. Genetic females are no different in
these respects. And so, as we might expect, erotic orientation
is also pretty complicated.
- Most men and women are, indeed, heterosexual in their
erotic orientation: these men are attracted to women but
not men, and these women are attracted to men and not
women.
- But some men and women, about 10% of the population, are
homosexual in their orientation: gay men attracted to men
but not women, and lesbian women attracted to women but
not men.
- Some individuals are bisexual -- for example, some men
are attracted to both women and men, and vice-versa for
women.
- And other individuals are, frankly, asexual. They may be
biologically normal, masculine men or feminine women, but
just don't feel that spark -- for anyone.
Are Bisexuals Really Bi?
The existence of bisexuality has been a
subject of considerable controversy. Manicheanism is very
attractive, and the same Manichean view that people are
either male or female, boys or girls or men or women, or
masculine or feminine, extends to the view that people are
either heterosexual or homosexual. Certainly, a nontrivial
number of men and women claim to be attracted to, and
sexually aroused by, partners of both sexes. But it's taken
a surprisingly long time to put these self-reports to
empirical test.
A pioneering study reported in 2005 by
Rieger, Chivers, and Bailey used a penile plethysmograph --
a psychophysiological device that uses a strain gauge to
quantify penile erections, which occur when the penis
becomes engorged with blood -- to monitor the sexual arousal
of a number of self-identified bisexual men. The finding
was that 75% of the subjects were aroused exclusively by
images of male homosexual sexual activity, while 25% were
aroused exclusively by images of heterosexual
activity. These investigators concluded, controversially,
that male bisexuality wasn't a distinct pattern of sexual
arousal, but rather an interpretation that some men placed
on their homosexuality -- that they weren't really
homosexual, or weren't homosexual exclusively.
However, later research, using the same sorts
of procedures, appears to have identified genuine
bisexuality after all. Another study from Bailey's
laboratory, by Rosenthal et al. (2011), found that some
bisexual men were, indeed, sexually aroused by both
homosexual and heterosexual imagery. These later findings
were supported by a second study by Cerny et al. (2011).
What made the difference between the earlier
and later studies? Apparently, differences in subject
selection. The subjects in Rieger's 2005 report were
selected based on their self-reported patterns of sexual
arousal; but in Rosenthal's 2011 report, the subjects were
required to have had sexual experiences with at least two
partners of each sex, and a romantic relationship with at
least one person of each sex.
To date (2011), there have been no comparable
studies of bisexual women -- although, to answer your
question, there does exist a vaginal version of the
plethysmograph, technically a photoplethysmograph, shaped
like a tampon, capable of recording blood flow in the vagina
as an index of female sexual arousal.
By the way, there's no comparable research on
asexuals -- apparently this topic doesn't arouse much
scientific interest.
For an article about the controversy over
bisexuality, see "The Scientific Quest to Prove -- Once
and for All -- That Someone Can Be Truly Attracted to Both
a Man and a Woman" by Benoit Denizet-Lewis, New York
Times Magazine, 03/23/2014.
There was a time when homosexuality was
classified as abnormal behavior, a sign of mental illness, and
homosexuals were frequently sent to psychiatrists and
psychologists in an attempt to "cure" them of their deviance.
In 1973, following a vote of the members of the American
Psychiatric Association, homosexuality was removed from the
listing of "sexual disorders" in the Diagnostic
and Statistical Manual of Mental Disorders -- a change
reflected in the 3rd edition (DSM-III, 1980).
Individuals can still seek psychotherapy if they are bothered
by their homosexual orientation (or by their heterosexual
orientation, for that matter). But that is not the same thing
as considering homosexuality per se to be a mental
illness.
Transsexualism, or transgender identity, has
undergone a similar evolution. In DSM-II,
published in 1968, transgender identity was listed as a
"sexual deviation". In DSM-III (1980), the
edition which eliminated homosexuality as a category of mental
illness, transsexualism was listed as a "psychosexual
disorder". In DSM-IV (1994) it was listed as a
"sexual and gender identity disorder".
In DSM-5 (2013), the listing was
changed to "gender dysphoria" and limited to individuals who
distress or dysfunction with respect to their gender identity,
male or female, trans or not. In 2017 the World Health
Organization proposed to remove transsexualism, or transgender
identity from its International Classification of Diseases
(ICD), the worldwide equivalent of the DSM.
Homosexuality was also commonly
considered to be criminal behavior -- which is why so many
homosexuals, especially homosexual men, stayed "in the closet"
until relatively recently. As recently as 2003, 14 states
still had "sodomy laws" on their books, proscribing such
"unnatural" sexual behaviors as oral or anal sex -- although,
frankly, these had rarely been enforced when practiced by
heterosexual couples). These remaining laws were invalidated
in a landmark Supreme Court ruling in Lawrence v. Texas
(2003).
The origins of homosexuality are a
persisting puzzle for psychologists and biologists. At first
blush, homosexuality would appear to be maladaptive, because
homosexuals do not procreate, the trait doesn't contribute to
the survival of the species, and one would think it would have
been erased from the human genome by now. For that reason,
evolutionary psychologists twist themselves inside out trying
to concoct "just so stories" to explain how homosexuality is
adaptive after all.
Evolution of
Homosexuality
Homosexuality poses
problems for evolutionary psychology, because, like altruism,
it seems maladaptive. If evolution favors traits that increase
reproductive fitness, how could a trait evolve that doesn't
lead to reproduction at all? Over the years, a number of
hypotheses have been offered to explain how a genetic basis
for homosexuality might have evolved:
- Based on the observation that bonobos, close relatives of
chimpanzees (and thus closely related to humans), engage in
homosexual and bisexual behavior in order to form
strategic alliances, it has been suggested that
homosexuality might enhance fitness in a similar way for
humans.
- Homosexuals may have such strong sex drives that they
engage in heterosexual as well as homosexual activity, and
thus pass genes for homosexuality to the next generation
in the usual way.
- It may be that homosexuality is determined by not one
gene but many, and that the genes for homosexuality are
only activated in a particular intrauterine environment
(such as low levels of androgens). Thus, some
heterosexuals could carry genes for homosexuality the same
way some brown-eyed individuals carry genes for blue eyes.
Homosexuality would only occur if the person inherited a
critical number of homosexuality genes, and if these genes
encountered the "right" intrauterine hormonal environment.
- While homosexuals might not produce many children
themselves, they may gain an adaptive advantage by serving
as guardians of their kin. Because genes are shared among
family members, anything that homosexuals do to increase
the reproductive fitness of their heterosexual siblings
and cousins will also pass a genetic tendency toward
homosexuality into the next generation (a similar argument
from kin-selection has been offered for altruism).
- A similar explanation has been offered for the
existence of grandmothers, who, being post-menopausal,
can't procreate anymore either. Which, in my view, just
goes to show you how stupid both ideas are.
These hypotheses are all very
interesting, but they all appear to be predicated on the same
adaptationist fallacy -- the notion that traits must be
adaptive to evolve, and that traits evolve by virtue of their
adaptive value.
The answer to the mystery of
homosexuality may be simply that it is a mystery. If you think
about it, setting reproductive issues aside, the sexual
attraction that two people of the same sex feel for each other
may be no different, no more mysterious, than the sexual
attraction that two people of opposite sex feel for each
other.
As Hanne Blank puts it in her book, Straight:
The Surprisingly Short History of Heterosexuality
(2012):
"We don't know much about heterosexuality. No
one knows whether heterosexuality is the result of nature or
nurture, caused by inaccessible subconscious developments,
or just what happens when impressionable young people come
under the influence of older heterosexuals."
Development of
Homosexuality
In many cases, the precursors of homosexuality can be seen
long before adulthood, or even adolescence, in childhood
behavior patterns (Bailey & Zucker, 1995). As a
group, both male and female homosexuals tend to show
cross-sex-typed role behaviors fairly early in
childhood. These children have sex-appropriate gender
identities, in that chromosomal boys identify themselves as
boys and chromosomal girls identify themselves as girls.
It's just that they tend to display characteristics associated
with the other gender role -- the boys more "feminine", the
girls more "masculine" in sports, play, toy choice, and
"pretend" play. However, not too much should be made of
these signs of "prehomosexuality". Children also engage
in a fair amount of role-experimentation, as they begin the
process of figuring out who they are.
Let's
look at our tried-and-true way of examining the origins of
some trait, which is the twin study. Michael
Bailey and his colleagues reviewed 12 large twin studies,
looking at the concordance rate for homosexuality. The figures differ
a little depending on whether the study specifically
recruited homosexual subjects, which injects some bias into
the sample, or rather was based on more representative
samples of the population.
In either case, the MZ concordance rate is higher
than the DZ concordance rate, which again provides prima
facie evidence for a genetic contribution to homosexuality. When the
researchers calculated an overall estimate, correcting for
sampling bias, it was very clear that when it comes to erotic
orientation, as with almost everything else we know about
personality and attitudes, genes are important but not
decisive: the environment, and particularly the nonshared
environment, makes a big difference.
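Just to make the logic concrete, here is a minimal sketch, in Python, of the back-of-the-envelope "Falconer" formulas that lie behind twin comparisons of this sort. The twin-similarity values below are hypothetical, not Bailey's actual figures, and real behavior-genetic analyses are considerably more sophisticated (they work with tetrachoric correlations and formal model-fitting).

def falconer(r_mz, r_dz):
    # Rough decomposition of trait variance from twin similarities:
    h2 = 2 * (r_mz - r_dz)   # additive genetic influence ("heritability")
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # nonshared environment (plus measurement error)
    return h2, c2, e2

# Hypothetical MZ and DZ twin similarities, for illustration only:
h2, c2, e2 = falconer(r_mz=0.30, r_dz=0.15)
print(round(h2, 2), round(c2, 2), round(e2, 2))   # 0.3 0.0 0.7

Even with genes "mattering" in this toy example, most of the variance is left over for the environment -- which is the general pattern described above.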
More
recently, Andrea Ganna and her colleagues documented genetic
influences on homosexuality -- or, at least, same-sex sexual
behavior -- using the GWAS methods described earlier (Science,
08/30/2019, from which the graphics are taken). Based
on two large samples from the UK (the Biobank database of
408,995 individuals) and the US (68,527 individuals who
subscribed to the "23andMe" service), as well as three
smaller samples, they looked for specific gene loci
(technically, single-nucleotide polymorphisms,
or SNPs; see more below) associated with subjects reporting
that they had ever had sex with a partner of the same
sex. But the investigators also had information about
the proportion of same-sex to total sexual partners, sexual
attraction, and sexual identity. After all this, they
found 5 such SNPs: two for men (on chromosomes 11 and 15),
one for women (on chromosome 4), and two for both men and
women (on chromosomes 7 and 12). Of course,
having sex with a partner of the same sex isn't the same
thing as being homosexual -- there's a fair amount of
same-sex sexual experimentation that never gets beyond
that. Interestingly,
none of these loci were on the X chromosome, where a region
known as Xq28 had famously been identified as the "gay gene" by Dean
Hamer, a researcher at the National Institutes of Health, in
1993 (one wonders, then, how these 5 SNPs will fare in the
next study). That's five -- 5 -- genes out of 20,000 in the
entire human genome. And taken together, these genes
account for only 1% of population variance in same-sex
behavior. As the authors put it, "same-sex sexual
behavior, like most complex human traits, is influenced by
the small, additive effects of very many genetic variants,
most of which cannot be detected at the current sample
size".
The
researchers also examined the "genetic correlations" between
same-sex sexual behavior and other personal characteristics
-- that is, the amount of variance that two characteristics
share due to genetic influences. Among the strongest
associations were marijuana use, openness to experience (one
of The Big Five personality traits) and the number of sexual
partners. Interestingly, there was also a genetic correlation
between same-sex sexual behavior and the ratio of the
lengths of the subjects' 2nd and 4th fingers -- the "2D:4D
digit ratio", famously reported to be a physical trait
correlated with androgen levels in men (leading perhaps
millions of men to get out their rulers and measure
themselves -- again).
It is
possible that hundreds or thousands of genes make a
contribution to same-sex behavior, but only these five -- 5!
-- turned up significant in a GWAS involving half a million
people. And again, they account for only 1% of the
variance. That's a long way from "the gay gene".
And when the investigators took account of all the
genome-wide correlations, even the ones that didn't reach
statistical significance, they obtained a "SNP-based"
heritability coefficient ranging between 8 and 25%.
That is, they estimated that all genetic influences, when
aggregated, accounted for as little as 8%, or as much as
25%, of population variance in same-sex sexual behavior,
broadly construed to include fantasies, attraction, and
identity as well as actual behavior. That's about the
same as for other personality characteristics, as estimated
by more conventional twin studies (and without all the
high-tech hoopla). And, it leaves the vast bulk of
population variance to be explained by environmental
influences, both shared and nonshared. Some of the
nonshared environment may, in turn, be influenced by genetic
tendencies. For example, there is a genetic
contribution to the tendency to have multiple sexual
partners -- and the more sexual partners one has, arguably,
the more likely at least one of them will be of the same
sex. And there is a genetic contribution to openness
to experience, and one of those experiences might be
same-sex sex.
The influence of genetics has been used
by some advocates to argue that homosexuality is not a choice
-- it's given by nature, just like our other physical
characteristics; and therefore, homosexuality should not be
criminalized, and homosexuals should not be discriminated
against (for example, they should have the same rights to
marry as heterosexual individuals). And that's a fair
point. But does the role of the (nonshared) environment
indicate that there's a role for personal choice as well --
that, in the final analysis, homosexuals choose to be
that way? Not necessarily.
At the same time, it's pretty clear
that nature and nurture interact to create an individual's
erotic orientation. How could this happen? We have no idea,
but think about how, as we've shown, nature and nurture
interact to produce other aspects of gender dimorphism --
biological sex, gender identity, gender role -- and erotic
orientation as well.
Here are two very interesting theories
that suggest how nature and nurture could interact to
determine whether one is heterosexual or homosexual. Even if
these theories are wrong in detail, they are important because
they suggest a way of thinking about the origins of
homosexuality that escapes the rigid confines of thinking of
it as a biologically determined trait on the one hand, or as a
personal choice on the other.
Michael Storms (1981) based his theory
on the phenomenon of imprinting, and the notion of a critical
period, discussed in the lectures on Learning.
Storms began with the assumption, for which there is some
supporting data, that gay men tend to reach puberty earlier
than heterosexual men (this claim is controversial, but it
doesn't matter for the purposes of this example). That is to
say, some boys reach sexual maturity, start getting sexually
attracted to anything, at a time in their lives when
they're hanging mostly with other boys (because, at that age,
girls are still yucky creatures to be avoided). Just as a
gosling follows the first thing that moves after it hatches,
so (the theory goes) a boy who enters puberty when there are
mostly other boys in his environment will become erotically
oriented to other males. A similar explanation would hold for
lesbian women.
More recently, Daryl Bem (1996, 1998)
proposed a theory of erotic orientation that is similar to
Storms' in form, but differing in detail. Bem began with the
assumption that we are sexually attracted to people who are
very different from ourselves. Yes, similarity breeds liking,
but -- to quote Bem's phrase -- "the exotic becomes erotic".
So, a boy with stereotypically masculine personality
characteristics will, when he reaches puberty, begin to be
attracted to someone with stereotypically feminine personality
characteristics -- which is, in all likelihood, a female. But
a boy with stereotypically feminine personality
characteristics will, when he reaches puberty, be
attracted to someone with stereotypically masculine
personality characteristics -- which will, in all likelihood,
be a male. The same kind of process unfolds for girls. You can
work out other possibilities for yourself.
Note that both Storms' and
Bem's theories are compatible with a genetic contribution to
homosexuality -- but, critically, neither one assumes that
there is anything like a "gene for homosexuality"; and, thus,
neither gets caught up in the Darwinian paradox of same-sex
attraction.
- Rather, for Storms, it might be that certain genes code
for age of sexual maturity: a boy who gets a genetic
endowment that leads to an early puberty might be more
likely to develop homosexuality.
- And similarly, for Bem, it might be that certain genes
code for stereotypically masculine or feminine personality
traits -- for agency and instrumentality or communality
and expressiveness -- or, at least, for activity level and
aggressiveness.
And also note that both
theories are compatible with a large contribution of the
nonshared environment.
- For Storms, whether a precociously pubescent boy becomes
homosexual will depend on whether his environment is full
of boys or has some girls in it.
- And for Bem, whether a stereotypically feminine boy
becomes homosexual will depend on whether there are some
stereotypically "masculine" girls around to be perceived
as exotic, and thus eroticized.
It should be noted that, in part, the
development of homosexuality has a cultural component to
it. It's been argued that gay people -- much like deaf
people or, for that matter, Catholics and Jews and
New Yorkers -- inhabit a distinctive culture that is different
from the one inhabited by straights (Protestants, Muslims, San
Franciscans). Part of identifying yourself as a member
of a group involves learning and partaking of that culture
through the process of social learning. This would be
true even if homosexuality were completely determined by
genetic endowment. This is not to say that vulnerable
children are recruited into the "gay lifestyle" by
scoutmasters and gym teachers. Wherever homosexuality --
or any other part of one's identity and personality -- comes
from, you have to learn how to be who you are. It is
only to say, paraphrasing David M. Halperin, an English professor
at U. Michigan who has taught a course entitled "How to be
Gay: Male Homosexuality and Initiation", which became infamous
among conservative pundits, that "gay [people] acquire a
conscious identity, a common culture, a particular outlook on
the world, a distinctive sensibility....Queer [people] are
different, and we should hold on to our culture" ("How to Be
Gay", Chronicle of Higher Education, 09/07/2012; see
also his 2012 book, How to Be Gay).
It should be noted that this general
framework -- nature and nurture interacting, through genes and
the nonshared environment -- is not specific to aspects of
gender dimorphism. Other aspects of personality are probably
acquired in much the same way. In other words, the development
of gender differences in identity and role (not to mention
erotic orientation) serves as a model for the development of
personality in general.
For a comprehensive overview of the
development of homosexuality, and the implications of this
research for public policy, see "Sexual Orientation,
Controversy, and Science" by J. Michael Bailey et al.,
published in Psychological Science in the Public
Interest (2016).
Gender Polymorphism
All of this should have made
clear that, when it comes to issues of sex and gender, the
story is not merely one of straightforward genetic
determinism. Contrary to what Freud said, anatomy isn't
destiny after all.
- There may be as many as five categories of biological
sex.
- And, if you include transgender individuals, there are
at least four categories of gender identity.
- And if you include androgynous and undifferentiated
individuals, there are at least four categories of gender
role.
- And if you include bisexuality and asexuality, there are
at least four categories of sexual orientation.
So, if you do the math, that's 5 x 4 x
4 x 4 = 320 different gender-related categories. And
that's a minimum estimate -- especially when you consider that
these aspects of gender dimorphism may not be organized as
discrete categories at all, but rather as continuous
dimensions, so that there is an infinite number of locations
in the four-dimensional "gender space". Never mind that some
children and adults are gender fluid, moving back and
forth between male and female gender identity and masculine
and feminine gender role.
And, to make things even
more interesting, very recent research adds a fifth
set of gender-related categories, what you might call erotic
or sexual identity, analogous to gender identity,
but having to do specifically with sexual attraction and
behavior. So, just as you can have a person who is
biologically male but has a feminine gender identity, or
someone with a male gender identity who adopts a feminine
gender role, so there appear to be individuals who are, say,
homosexual in erotic orientation but who identify themselves
as heterosexual (Haldeman, 2003, 2004). That is, they
acknowledge their homosexual leanings, but for some reason
wish to identify themselves as heterosexuals -- and sometimes
seek treatment to help them conduct themselves accordingly.
This is not the same as forcing a homosexual to undergo
"conversion therapy". Rather, this appears to be a matter of
how individuals choose to identify themselves -- often for
religious reasons, but sometimes for nonreligious personal
reasons.
- So, if we add four categories of erotic identity to the
categories listed above, that makes 320 x 4 = 1,280
different gender-related categories (see the sketch
below). As I said at the beginning, it's complicated.
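Here is the arithmetic, as a small Python sketch. The labels are placeholders rather than a serious taxonomy; the point is only that the number of combinations multiplies quickly.

from itertools import product

biological_sex     = range(5)   # 5 categories
gender_identity    = range(4)   # 4 categories
gender_role        = range(4)   # 4 categories
erotic_orientation = range(4)   # 4 categories
erotic_identity    = range(4)   # 4 categories

combos = list(product(biological_sex, gender_identity, gender_role,
                      erotic_orientation, erotic_identity))
print(len(combos))   # 5 * 4 * 4 * 4 * 4 = 1280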
But it's complicated in very
interesting ways. And these complications have implications
for how we think about development, and especially personality
development, in general. For example, Hyde et al. (Am.
Psych., 2018) have argued that the emphasis on the "gender
binary" -- the idea that there are only two types of
people, male and female -- has thoroughly distorted scientific
research on all aspects of gender. They identify five
sets of research outcomes that challenge the gender binary,
and should lead us to think differently about gender in the
future (quoting from their abstract):
- neuroscience findings that refute sexual dimorphism of the
human brain;
- behavioral neuroendocrinology findings that challenge the
notion of genetically fixed, non-overlapping, sexually
dimorphic hormonal systems;
- psychological findings that highlight the similarities
between men and women;
- psychological research on transgender and nonbinary
individuals’ identities and experiences; and
- developmental research suggesting that the tendency to
view gender/sex as a meaningful, binary category is
culturally determined and malleable.
The Latest on Sex and Gender
For more on
biological, psychological, and sociocultural aspects
of sex and gender, see the Special Issue on Sex and
Gender published by Scientific American,
09/2017. As the editors say on the cover, "It's
Not a Women's Issue: Everybody has a stake in the new
science of sex and gender". For example:
- "Promiscuous
Men, Chaste Women, and Other Gender Myths", by
Cordelia Fine and Mark A. Elgar, takes issue with
the classic position of evolutionary psychology
that behavioral differences between males and
females are hard-wired into the nervous (and
endocrine) system by natural selection. On
the contrary, they show that environmental and
experiential factors play a major role in many of
these differences, and that many of them,
besides being small, are not immutable. In
their view, "progressive cultural shifts" can
"rewrite" nature.
- "Is
There a 'Female' Brain?", by Lydia Denworth,
answers its own question with a fairly vigorous
"No", concluding that "most brains are a mosaic of male
and female characteristics". As with
behavioral and psychological sex differences,
there is considerable overlap between males and
females in the distribution of various features of
the brain (e.g., the volume of the left
hippocampus).
- "When
Sex and Gender Collide", by Kristina R. Olsen,
summarizes the results of the TransYouth Project,
a study which has followed more than 300
transgender and gender-nonconforming children for
20 years. A major finding of the study is
that a fairly firm "trans" identity develops
fairly early in these children.
- "Beyond
XX and XY", by Amanda Montanez, consists of a
fabulous chart, much expanded from Money and
Ehrhardt's charting of the phyletic and social
imprimatur, showing all the different biological
factors that can affect biological sex.
- "The Brilliance Trap", by Andrei
Cimpian and Sarah-Jane Leslie, takes on the
question of whether the (small) sex differences
question of whether the (small) sex differences
in math ability justify the under-representation
of women in STEM fields. It doesn't -- not
least because a whole host of psychosocial
factors, including gender stereotyping (and the
sex discrimination it leads to) and stereotype
threat (and the self-handicapping it leads to)
are probably more important. They also
make the interesting argument that
stereotypes about scientific or artistic
"brilliance", coupled with the myth of gender
difference (and the stereotype threat that comes
with it) discourage girls and women from even
entering some fields -- even though most of
those who work successfully in these fields are
not "brilliant" -- only very smart. The
bottom line is that there are plenty of women
soldiers, pilots, and engineers to go around, if
only the psychosocial context were more
favorable.
In
their introduction, the Editors note that "Sex is
supposed to be simple -- at least at the molecular
level. The biological explanations that appear in
textbooks amount to X + X = [female] and X + Y = [male], Venus
or Mars, pink or blue. As science looks more
closely, however, it becomes increasingly clear that a
pair of chromosomes do not always suffice to distinguish
girl/boy -- either from the standpoint of sex
(biological traits) or gender (social identity)."
They're right. The biology of sex and gender,
taken alone, is complication enough. Add the
psychology, not to mention the anthropology, sociology,
and all the other social sciences, and you've got plenty
more.
Maturation: Development as Quantitative Change
The earliest theories of psychological
development focused on maturation and learning.
In general, these theories offered a view of the child as a short,
stupid adult who grows smarter as he or she grows
bigger. Viewed in this way, there is a continuum between
childhood and adulthood, with no abrupt, qualitative changes.
Maturation may be defined as the progressive,
inevitable unfolding of certain patterns of behavior under
genetic control. These behavior patterns occur in a regular
sequence, unaffected by practice or environmental change.
Maturation
is a good description of certain developmental processes, such
as walking. We speak of children "learning" to walk,
but we know from the stepping reflex in infants (see the
lecture supplement on Learning) that walking occurs naturally,
requiring only that the child be able to support itself. Thus,
walking occurs as soon as the skeletal musculature develops
sufficiently to provide that support.
A classic study of
maturation involved traditional Hopi and Navajo children, who
are swaddled and bound to a cradle for the first year of life.
This severely restricts motor behavior, but once released from
the cradle there is little retardation in the emergence of
walking.
In another classic study, by Arnold
Gesell, some children were trained in walking and climbing
stairs. They did, in fact, show these behaviors earlier than
untrained children. But the untrained controls quickly caught
up, and the further progress of the experimental group was not
accelerated by their training. Both groups advanced beyond
walking at the same pace.
The continuous view of development is
exemplified by the measurement of intelligence in terms of IQ:
- Alfred Binet simply estimated the individual's mental
age.
- William Stern calculated IQ as the ratio of the
individual's mental age to his or her chronological age,
imposing an artificial ceiling of 18 years on both ages.
An individual with an IQ of 100 is exactly as old mentally
as he is chronologically.
- This ratio IQ was also adopted by Lewis
Terman.
- David Wechsler substituted the deviation IQ for
Terman's "ratio IQ", so that an individual with an IQ of 100
has the same IQ test score as the average person in his age
group -- even if he's older than 18.
The implication of either method of
measuring intelligence is that children continuously grow
smarter as they grow older.
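To see the difference between the two scoring conventions just described, here is a minimal Python sketch. The multiplier of 100 and the standard deviation of 15 points are the usual conventions; the ages and test scores below are hypothetical.

def ratio_iq(mental_age, chronological_age, ceiling=18):
    # Stern-style ratio IQ: mental age over chronological age, times 100,
    # with the artificial ceiling applied to both ages.
    ma = min(mental_age, ceiling)
    ca = min(chronological_age, ceiling)
    return 100 * ma / ca

def deviation_iq(score, age_group_mean, age_group_sd):
    # Wechsler-style deviation IQ: distance from the mean of one's own age
    # group, scaled so that the mean is 100 and one SD is 15 points.
    return 100 + 15 * (score - age_group_mean) / age_group_sd

print(ratio_iq(mental_age=10, chronological_age=8))                  # 125.0
print(deviation_iq(score=130, age_group_mean=100, age_group_sd=20))  # 122.5

Either way, the working assumption is the same: intelligence is a quantity that simply increases with age.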
Despite debates over whether IQ is
heritable, the classical "continuity" view is that the child
gradually acquires knowledge through learning, where learning
was construed as tantamount to classical or instrumental
conditioning. John Locke, an English philosopher of the 17th
century, famously argued that the infant is a tabula rasa,
or "blank slate", which is "written on" by experience. In the
Lockean view, development is a matter of learning more than
you already know. This contrasts with the earlier view of
Descartes, who assumed that some knowledge was innate, a
gift from God.
Whether the theoretical focus is on
maturation or learning, the process of development is viewed
as a matter of continuous, quantitative change: the
infant starts out small, physically and mentally, and gets
bigger, physically and mentally, as he or she grows up.
Is Childhood a Recent Cultural Invention?
In
his book, Centuries of Childhood (1960), Philippe
Aries, a French social historian, created quite a stir by
arguing that what we call "childhood" did not exist for most
of history, and instead was a creation of modern liberal
thought (by which he meant the liberalism of the
Enlightenment of the 17th and 18th centuries). In medieval
times, Aries argued, children joined the adult world pretty
much as soon as they could walk, talk, and eat solid food.
They married early (think of Romeo and Juliet), and
they went to war (think of the Children's Crusade), and they
earned money (in the fields or as domestic servants or
apprentices). They were, in fact, short, stupid adults. It
was only in the modern era that children were seen as
different, as innocents who were sent to school, or
protected at home (in what became the nuclear family), as
opposed to going out to work with the rest of their extended
family. The prime mover in this shift, according to Aries,
was Jean-Jacques Rousseau, who promoted a sentimental view
of children. Other historians quickly picked up the general
thrust of his argument, particularly Edward Shorter (in The
Making of the Modern Family, 1975) and Lawrence Stone
(in The Family, Sex, and Marriage in England, 1500-1900,
1977).
Relatedly, it has been
argued that adolescence is largely an invention of the 20th
century, with its laws against child labor and compulsory
schooling.
However, there were also
dissenters, such as Steven Ozment (in Ancestors,
2001), who argued that children have always been children,
pretty much as they are today.
Eventually,
the dispute spawned a great deal of historical research on
childhood and the family. Joan Acocella, reviewing the
three-volume History of the European Family (2001,
2003, 2004), shows how family life did indeed change
radically in the 17th and especially the 18th centuries. But
it turns out that Aries generalized way too far beyond his
limited data, which he generally selected in ways that would
support his theory. The actual picture is pretty complex,
but in the end Acocella generally sides with Ozment: "there
had been a culture of childhood ever since there were
documentable children" ("Little People",New Yorker,
08/18-25/03, from which the illustrations, by Saul
Steinberg, are also taken).
Supportive evidence also comes from Nicholas Orme, a social
historian, in Medieval Children (2001) and Tudor
Children (2023), a study of childhood in Tudor England
-- between 1485 (the accession of Henry VII) and 1603 (the
death of Elizabeth I). Orme shows, for example, that
analyses of archeological digs and journal entries reveal a
considerable wealth of children's toys and games.
Reviewing the latter book, Catherine Nicholson writes that
"In addition to examining what childhood was like in the
sixteenth century, Orme allows us to glimpse what the
sixteenth century was like for children" ("Right Busy with
Sticks and Spales", New York Review of Books,
06/22/2023).
How About Adolescence?
At least with
adolescence, there's a physical marker -- the onset of
puberty. But even so, in pre-industrial societies --
including Western societies -- "adolescents" were often
engaged in adult-like work -- for example, as
apprentices. The concept of adolescence
was introduced by G. Stanley Hall, a pioneer of
developmental psychology, in 1904. Hall thought that
adolescence was universal, but cultural anthropologists find
that it is far from that: many cultures don't have anything
like it -- when puberty hits, "adolescents" take up adult
roles in work and family life.
Or "Emerging Adulthood"?
For better or worse, adolescence is
now firmly entrenched in Western culture. But some
theorists, taking their inspiration from Erik Erikson (whose
theory is detailed below), have argued for an intermediate
stage between adolescence and adulthood, covering 18-25
years of age. Jeffrey Jensen Arnett (2000) has
vigorously promoted this view in a series of books, such as
Adolescence and Emerging Adulthood: A Cultural Approach
(2009) and Debating Emerging Adulthood: Stage or
Process? (2011). Arnett argues that, just as
adolescence delays the transition from childhood to
adulthood, so "emerging adulthood" delays the transition
from adolescence to adulthood. For example, the median
age of first marriage increased by 4-5 years between 1970
and 1996, as did the median age of entry into the
workforce. The concept brings to mind the remark of
Paul Ryan, Republican vice-presidential candidate in 2012,
that “College graduates should not have to live out their
20s in their childhood bedrooms, staring up at fading Obama
posters and wondering when they can move out and get going
with life”. On the other hand, a 2012 study by the Pew
research Center indicates that most American
twenty-somethings don't live with their parents; they have
jobs and, in many cases, have formed stable
relationships. But still, as with adolescence, there
may be something to the notion. Just because a stage
of life is, to some extent, a cultural artifact doesn't make
it any less real.
As Opposed to "Established Adulthood"
After "emerging" adulthood, then what?
Mehta et al. (Am Psych 2020) have argued for a new
lifespan "stage", as it were, covering roughly 25-40 years
of age. During this period, in their view, adults must
deal with the "intersecting demands of progressing in a
chosen career, maintaining an intimate partnership, and
caring for children" -- and often, one might add, aging
parents as well. Individuals at this stage of life
often face what Mehta et al. call the "career-and-care
crunch" of conflicting obligations, responsibilities, and
desires. The crunch especially affects women who work
outside the home, but men aren't immune to it.
Lifespan
Development as Qualitative Change
Later theories of development focused
on qualitative changes. The child is not just a short,
stupid adult, but rather the young child is held to think
differently than older children and adults. Thus,
developmental differences are qualitative, differences in
kind, not just quantitative, differences in amount. Children
are not stupid, compared to adults, but their intelligence
needs to be appreciated on its own terms.
Piaget's Stage Theory of Cognitive Development
This was the view
propounded by Jean Piaget, who argued that the development of
intelligence proceeds through a sequence of stages, each
defined by cognitive landmarks:
- sensory-motor intelligence, running from birth to
about 2 years of age;
- preoperational thought, encompassing
approximately ages 2-7.
- concrete operations, approximately ages 7-12, and
- formal operations, the last stage of cognitive
development, beginning about age 12.
An important concept in
Piaget's theory is the schema, a term which should be
familiar from Bartlett's work on memory reconstruction and
Neisser's concept of the perceptual cycle. For Piaget, a
schema is an organized mental structure that produces a
particular coordinated response to a certain class (or
category) of stimulation. Thus, a schema is a kind of concept
that renders diverse objects functionally equivalent --
because they belong in the same class.
In a very real sense, cognitive
development is the development of schemata (the plural of
schema, although schemas is also an acceptable
alternative) -- their elaboration and differentiation.
Schemata are like categories, and guide the perception of (and
thus response to) particular objects and events. The
interaction between schemata and the objects with which they
come into contact is characterized by the twin processes of assimilation
and accommodation. By virtue of assimilation, the
mental representation of the stimulus is altered so that it
will fit into the schemata employed to process it; by virtue
of accommodation, the schema itself is altered so that it can
receive the stimulus. Thus, the final representation of the
stimulus is a sort of compromise between what was expected and
the actual stimulus itself. The neonate confronts the world
with a primitive set of innate schemata; through assimilation
and accommodation, the nature of these schemata gradually
changes. At certain points, however, the changes in mental
schemata are so dramatic that they appear qualitative rather
than merely quantitative; these shifts mark the child's move
from one stage of cognitive development to another.
The first of Piaget's stages is sensory-motor
intelligence (also known as the sensory-motor period).
In this stage, which encompasses the first two years of life,
the world of the child is one of unrelated sensory experiences
and reflex-like motor reactions to them. The child has no
ability to connect the present with the past or the future,
and no ability to distinguish between self and other. At
least, that is how the child starts out. To take a phrase from
William James, for the sensory-motor infant the world is a
"blooming, buzzing confusion". Over the first two years of
life, these capacities develop; according to Piaget, their
complete acquisition terminates this stage of cognitive
development.
One of the major accomplishments of the
sensory-motor period is the development of object
permanence. The newborn's behavior is tied to what comes
through its senses, which are processed by sensory-motor
schemata. Out of sight is out of mind. Eventually,
however, the child comes to behave as if it has internal
representations of objects that are not actually present in
its sensory environment. Early on, if a toy is hidden the child
will turn its attention to something else. Later, if a toy
is hidden the child will search for it. This searching behavior
shows that the child has an idea of the object that persists
despite its physical disappearance -- at this point, the child
has acquired the capacity for forming internal, mental
representations -- memories -- of the outside world.
At this point, about 24 months after
birth, the infant moves fully into the next stage of cognitive
development, the preoperational period. At this point,
the child is able to form and retain internal representations
of objects and events, but these representations exist as
individual mental units, unrelated to each other. The major
achievement of the preoperational period is the ability to
relate one representation to another, through higher-order
schemata called operations. This takes about the next
five years.
The development of operational modes of
thought is marked by the emergence of conservation,
which occurs by about the time the child is seven years of
age. In the earliest portion of the preoperational period, the
child does not conserve at all. If a short, wide cup of liquid
is poured into a tall, thin glass, he or she may well say that
there is more liquid in the latter than in the former. This is
because the child can track changes in either height or
diameter (actually, if you want to be technical, radius), but
not both simultaneously. Thus, the child cannot understand
that volume, which depends jointly on height and radius, remains
constant when the liquid is poured from one container into the
other. Similarly, if a young child is shown five objects lined
up over a short distance, and later the same five objects
arrayed over a longer line, he or she is likely to say that
there are more objects in the latter case than in the former.
Thus, the child confuses the number of the objects with the
linear distance over which they are arrayed. At some point,
however, the child no longer makes these mistakes: he or she
has acquired the ability to consider height and radius, number
and distance, simultaneously, and compensate for one with the
other. At that point, the child has acquired the ability of
conservation.
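A quick calculation shows why the conserving child is right: the volume of a cylinder depends on height and radius jointly, so when the liquid is poured the level rises but the amount does not change. The dimensions in this minimal Python sketch are hypothetical, chosen only for illustration.

import math

def cylinder_volume(radius, height):
    return math.pi * radius ** 2 * height

cup_volume = cylinder_volume(radius=4.0, height=5.0)   # short, wide cup
glass_height = cup_volume / (math.pi * 2.0 ** 2)       # same liquid, radius-2.0 glass

print(round(cup_volume, 1))                          # about 251.3
print(round(glass_height, 1))                        # 20.0 -- the level is 4 times higher
print(round(cylinder_volume(2.0, glass_height), 1))  # about 251.3 -- the volume is unchanged

The pre-operational child, tracking only the more salient dimension (height), concludes that the amount has changed; the conserving child compensates for the change in height with the change in radius.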
An
analogue of conservation failure in the interpersonal domain
is egocentrism, which is not to be confused with
selfishness. From Piaget's point of view, egocentrism reflects
the child's inability to take another's point of view, or to
appreciate the viewpoints of other people. Thus, an
egocentric child will say that other people view the world
from the same perspective as he or she does; and that others
will react to events as he or she does. By the age of seven,
however, the child has abandoned the egocentric attitude, and
is able to represent the world as others see it, as well as he
or she sees it.
At about age seven, the child enters the
stage of concrete operations, in which children are
capable of thinking and reasoning about objects and events
which they have actually experienced. Concrete-operational
children are actually pretty powerful thinkers. They conserve,
they can pay attention to objects other than the most salient
ones, they can take another's point of view, they can take
account of transformations in state, they can classify objects
into groups based on shared properties, and they can generate
and use hierarchical classification schemes. You can get along
pretty well in life with nothing more than concrete
operations, so long as you are reasoning about familiar
problems involving familiar objects and events.
Unfortunately, concrete operations aren't
always enough. It is often useful to go beyond our own past
experiences, and to reason about things we haven't seen or
touched, and about things that aren't visible or touchable.
Concrete operations don't suffice for this purpose. Good
thing, then, that around age twelve the child enters into the
last of Piaget's stages, formal operations. In formal
operations, the thinking can be purely symbolic, without
referring to anything at all by way of concrete objects and
events. It is necessary, for example, for the transition from
arithmetic to algebra -- where x, for example, is just
an abstract symbol and doesn't refer to anything at all.
The hallmark of formal operations is
scientific thinking, which is what lies behind Piaget's notion
of the child as a naive scientist. This is marked by four
different qualities: (1) hypothetico-deductive reasoning, in
which we can hypothesize about a certain state of affairs, and
then reason deductively from that hypothesis (that is, we can
assume that something is true without requiring that it
actually be true); (2) inductive reasoning, in which
the person generalizes from specific observations to general
principles; (3) reflective abstraction, in which the child is
able to reflect on their own thoughts to arrive at novel ideas;
and (4) propositional logic, in which the child can reason
about two or more abstract entities represented by statements
such as "If there is a P, then there is a Q". Children who
have developed the capacity for formal operations are able to
deal with several abstract variables at the same time.
If concrete operations are analogous to
arithmetic, formal operations are analogous to algebra.
Piaget traced cognitive development
only up to adolescence. One aspect of the debate about his
theory is whether cognitive development does in fact end with
the acquisition of formal operations, or whether there are
other, more advanced, stages of thought.
Piaget and the "Science of Creative Intelligence"
One proposal for further stages of cognitive
development, beyond Piagetian formal operations, has come
from Maharishi Mahesh Yogi, founder of the Transcendental
Meditation movement and guru to the Beatles. The Maharishi
now promotes a science of creative intelligence in
which meditation -- Transcendental Meditation, of course --
moves the practitioner to higher stages of cognitive
development.
Another question is whether Piaget's
stages are stages at all -- that is, whether it is true that
children progress from sensory-motor intelligence through
preoperational logic and concrete operations to formal
operations in the way that he thought. An important part of
this question concerns the lower boundaries of Piaget's
stages. Is it really true that children younger than age 7 are
generally incapable of abstract thought? A very large research
tradition has developed out of questions like these, the
general conclusion of which is that cognitive development is
probably more continuous than Piaget thought, and that even
very young children have amazing powers of thought, at least
in limited domains. For example, it has been shown that
five-month-old infants are capable of rudimentary arithmetic
operations of addition and subtraction, so long as they are
only asked to deal with very small numbers of objects.
Of course,
Piaget was not the only theorist to offer a conception of
development in terms of stages. Long before
Piaget, Sigmund Freud had offered a stage theory of
psychosexual development running essentially from birth to
adolescence. And
in the 1960s, Erik Erikson, a follower of Freud's, offered a
stage theory of psychosocial development that encompassed
the entire life cycle from birth to death. Piaget, Freud
before him, and Erikson afterward had an enormous influence
on thinking in developmental psychology. Their idea of
development as a progression through a succession of
qualitatively different stages was received quite
enthusiastically. And
you can see the legacy of these theorists in the
proliferation of stage theories of everything in
developmental psychology.
Kohlberg's Theory of Moral Development
Following in Piaget's footsteps, Lawrence
Kohlberg developed a theory of the stages of moral development
-- by which he meant moral reasoning, not moral
behavior. For Kohlberg, moral development followed a strict
trajectory of 3 stages, each consisting of two sub-stages,
through which the child proceeds from a heteronomous
(other-directed) to an autonomous (self-directed) orientation.
- In the preconventional stage, which runs from
birth to about age 9, moral reasoning is based on what is
rewarded and what is punished. Think of this as analogous
to Piaget's sensory-motor and pre-operational stages.
- In the first sub-stage, moral choices are based on a
principle of simple obedience and avoidance of
punishment.
- In the second sub-stage, moral choices are based on a
self-interested desire to gain reward.
- In the conventional stage, moral reasoning is
based on rule-following. The child knows the rule, and
follows it, without taking into account the reasoning
behind the rule, or whether the rule itself is reasonable.
This is analogous to Piaget's stage of concrete
operations.
- In the third sub-stage, the child is primarily
concerned with interpersonal accord, and in gaining
approval and avoiding disapproval through conformity to
conventional rules.
- In the fourth sub-stage, the child is guided by
notions of duty, and obedience to authority, and desire
to avoid feelings of guilt.
- In the post-conventional stage, moral reasoning
is based on more abstract principles, like the Golden
Rule. The child now understands the reasons behind the
rules, can make rational choices between alternative
rules, and can reason to his own rules.
- In the fifth sub-stage, moral reasoning is based upon
notions of agreed-upon rights, such as the social
contract.
- In the sixth and final sub-stage, moral reasoning is
based on universal ethical principles.
- There may be a seventh sub-stage, of transcendental
morality, but the nature of this stage isn't
entirely clear, and I think that Kohlberg made a place
for it simply on the chance that there really was some
fifth Piagetian stage of cognitive development, beyond
formal operations.
Kohlberg assessed subjects' stage of
moral reasoning by coding their responses to the Heinz
dilemma -- a story about a man, named Heinz, who has a
sick wife:
A woman was near death from a special
kind of cancer. There was one drug that the doctors thought
might save her. It was a form of radium that a druggist in the
same town had recently discovered. The drug was expensive to
make, but the druggist was charging ten times what the drug
cost him to produce. He paid $200 for the radium and charged
$2,000 for a small dose of the drug. The sick woman's husband,
Heinz, went to everyone he knew to borrow the money, but he
could only get together about $1,000 which is half of what it
cost. He told the druggist that his wife was dying and asked
him to sell it cheaper or let him pay later. But the druggist
said: "No, I discovered the drug and I'm going to make money
from it." So Heinz got desperate and broke into the man's
store to steal the drug for his wife.
Should Heinz have broken into the store to steal the drug for
his wife? Why or why not?
Like Piaget, Kohlberg believed that his
stages were universal, but in fact his studies were largely
confined to male subjects -- leading him to conclude, for
example, that women generally failed to reach the highest
stages of moral development (roughly equivalent to the views
of a liberal Democrat). Later, Carol Gilligan -- who was first
Kohlberg's student, and later his colleague at Harvard's
Graduate School of Education -- argued that Kohlberg's
procedures were flawed, and that women went through stages of
moral development that were qualitatively different from those
of men. In this way, Gilligan laid the foundation for
"difference feminism", based on the "essentialist" view that
women's mental lives follow different principles than those of
men.
In any event, Gilligan
revised Kohlberg's scheme, based on the idea that women are
primarily motivated by a concern for others, rather than
independence, and an ethic of care rather than an ethic of
justice. Unlike Kohlberg (and Piaget), Gilligan did not assign
particular age ranges to the stages, but she retained the
assumption that girls and women moved through these stages in
an invariant sequence. Whereas Piaget and Kohlberg assumed
that the transition from one stage to the other is based on
changes in cognitive capacity, Gilligan argues that the
transition is mediated by changes in the person's sense of
self.
- In her view, the preconventional stage is
concerned with individual survival, and is dominated by
what can only be called selfishness. At some point,
however, the person makes the transition from selfishness
to a concern for others.
- In this way, the conventional stage is oriented
toward self-sacrifice as the manifestation of goodness. Of
course, an ethic of self-sacrifice can be pretty
detrimental too, so at some point the girl or woman makes
the transition to the truth that she is a person too, and
worthy both of self-care and the sacrifice of others.
- So, finally, in the post-conventional stage,
moral reasoning is based on a principle of nonviolence,
and a goal to hurt neither others nor oneself.
Freud's Theory of Psychosexual Development
In the domain of personality, another
theory postulating qualitatively different developmental
stages is Freud's theory of psychosexual development.
The fundamental assertion of Freud's psychoanalytic theory
is that personality is rooted in conflict between certain
biological instincts (sex, aggression) and environmental
constraints and demands. This conflict must be resolved in
some way. The child's adaptation to conflict interacts with
other developmental events, and personality is formed from the
resulting habitual adaptation.
For Freud, sexual and aggressive
motives are at the center of personality. These instincts
arise from the id and are controlled by the ego
and superego. They are the urges that the defense
mechanisms defend us against. Now, nobody would argue
that sexual issues are not important for personality. Many of
the issues that confront adolescents and adults are sexual in
nature. Once you reach puberty, if not before, sex is an
issue. But Freud went further, by stating that sex is the
paramount issue, from birth. He believed that sexual impulses
were present in the newborn child, and that they continued to
seek expression and gratification until death. The theory of infantile
sexuality was Freud's most radical hypothesis. However,
the student should understand that for Freud, sexuality was
not confined to intercourse and orgasm. Rather, Freud defined
sex as anything that leads to pleasure. Thus, the theory of
infantile sexuality is a portrait of the infant as an active
seeker of pleasure. The precise form that this pleasure takes
is determined by the child's stage of development.
For Freud, all instincts have their
origins in some somatic irritation -- metaphorically, some
itch that must be scratched. At any time, a particular portion
of the body is the focus of that instinct -- the place where
arousal and gratification occur. These somatic loci change
systematically through childhood, and stabilize in
adolescence. These systematic changes comprise the stages in
psychosexual development, and the child's progress through
these stages is decisive for the development of personality.
- The oral stage comprises the period from birth
to approximately 12 months of age. According to Freud, the
newborn child begins as "all id", "no ego", experiencing
only pleasure and pain. In utero, nourishment was provided
automatically. After birth, with feeding, the child must
begin to respond to the demands of the external world --
what it provides, and the schedule with which it provides
it. Thus, instinct gratification is initially centered on
the mouth: sucking at breast or bottle. This sucking
activity has obvious nutritive value, in that it satisfies
hunger and thirst. But Freud asserted that it also
had sexual value, because the child gained pleasure from
sucking. In addition, aggressive instincts can be
displayed through the mouth, as in biting.
- The legacy of the oral stage is a complex of
dependency and separation anxiety. The child needs its
mother for instinct-gratification, and her absence leads
to feelings of frustration and anxiety. It also leads to
the development of the ego, the mental structure
whose job it is to separate fantasy from reality.
- The anal stage lasts from about 1 to 3 years of
age. Toilet training provides the child with his or her
first experience of the regulation of impulses -- the
child must learn to postpone the pleasure that comes from
the relief of anal tension.
- The legacy of the anal stage is the sense that one
can acquire desirable goods (e.g., praise) by giving or
retaining; the first pangs of loss; and, especially, the
first sense of self-control.
- The phallic stage lasts from about 3 to 5 years
of age. In this period, Freud believed the child, boy or
girl, was preoccupied with sexual pleasure derived from
the genital area -- curiosity, exhibitionism, and
masturbation. Why is this stage called phallic, when only
boys have a penis? His idea is that in different ways,
both males and females are interested in the penis. How
this is so leads us to one of Freud's most startling
theories, the Oedipus complex.
- During the phallic period, Freud thought there
occurred an intensification of sexual interest in the
parent of the opposite sex. In his terms, there is a
sexual cathexis (i.e., heightened attention)
toward the parent of the opposite sex, and an aggressive
cathexis toward the parent of the same sex. This is the
Oedipus complex, named after the Greek myth about the
man who unknowingly killed his father and married his
mother, and the phallic stage revolves around its
resolution.
- The beginnings of the Oedipus complex are the same
for males and females. They love their mother because
she satisfies their needs, and they hate their father
because he competes for the mother's attention and love.
- In the male, the Oedipus complex occurs as the
jealousy of the father combines with castration
anxiety. The child, remember, is engaging in
autoerotic activity, which is often punished by a
threat to remove the penis. This threat is reinforced
by observation of the female genitalia, which
obviously lack a penis. So the child gets the idea
that this threat is real. Nevertheless, the boy's love
for his mother intensifies, and these incestuous
desires increase the risk of being harmed by the
father. The father is too powerful, and must be
appeased. Thus, the boy represses his
hostility and fear, and converts it by means of reaction
formation into expressions of love. At the same
time, mother must be given up, though she remains
desirable. Thus, the child also represses his sexual
longings. The final solution of the boy's problem is identification
with his father. His father is now an ally instead of
an enemy, and through this identification the boy can
gain vicarious satisfaction of his desire for his
mother.
- In the female, the same sort of process works
itself out in the Electra complex, named after
the Greek myth of the princess, daughter of Agamemnon
and Clytemnestra, who conspires with her brother
Orestes to murder her mother and her mother's lover,
in order to avenge their father's death. But the
Electra complex is not the mirror-image of the Oedipus
complex. For example, the girl's problem is not
castration anxiety, since there is no penis to injure,
but resentment at deprivation. Initially, the girl
loves her mother for her role as caretaker, and has no
particular feelings toward her father. Nor is she
punished for autoerotic activity -- perhaps because it
doesn't occur, perhaps because it isn't discovered.
Eventually, however, the girl discovers that she lacks
the phallic equipment of the male: this leads to
disappointment and feelings of castration -- what
Freud called penis envy. She blames her mother
for her fate, which weakens her cathexis toward her;
and she envies her father's equipment, which
strengthens her cathexis toward him. The result is
that the girl feels love for her father, and hatred
and jealousy for her mother. She wants her father to
give her a penis, and accepts a baby -- represented by
a doll -- as a substitute. Thus, there is no clear-cut
resolution of the Electra complex in girls. Castration
is a fact, not a threat. In the end, however, the girl
identifies with her mother in order to gain vicarious
satisfaction of her love for her father.
- In any event, the first legacy of the phallic stage
is the superego -- the child internalizes social
prohibitions against certain object-choices, as well as
parental rewards and punishments.
- The second legacy is psychosexual identification: the
boy identifies with his father, the girl with her
mother, and each takes on the characteristic roles, and
personality, of the same-sex parent.
- During the latency period, which lasts from
about age 5 to age 11, Freud thought that the child's
instinctual impulses subsided, with a slowing of the rate
of physical growth, and the effects of the defenses
brought to bear in the resolution of the Oedipus/Electra
complex. During this period the child is not actively
interested in sex and aggression, but works on the task of
learning about the world, society, and his or her peers.
- During the genital period, which lasts from age
12 to death, the person moves into another period of
sexual interest. Sexual maturity reawakens the sexual
instincts which have been dormant. But with a difference.
There is a shift away from primary narcissism, in
which the child takes pleasure in stimulating his or her
own body, to secondary narcissism, in which the person takes
pleasure from identifying with an ideal. Sexuality shifts
from an orientation toward pleasure to an orientation
toward reproduction, in which pleasure is secondary. There
is strong attraction to the opposite sex, and an interest
in romance, marriage, and children. There is a continued
focus on socialization. However, the earlier stages of
psychosexual development can influence the nature of the
individual's genital sexuality, in terms of the preferred
locus of sexual foreplay, and the bodily focus of erotic
interest.
Freud believed that the individual's
passage through these stages left its imprint on adult
personality. If all goes well, the person develops a genital
character, as reflected in full sexual satisfaction
through orgasm. The genital character is able to effectively
regulate his (or her) sexual impulses for the first time. The
person need no longer adopt primitive defenses such as
repression, though certain adaptive defenses are still
operative. The person's emotions are no longer threatening,
and can be expressed. The person is no longer ambivalent, and
is able to love.
However, all doesn't usually go well --
else there wouldn't be a need for psychoanalysts! Freud
believed that people rarely, if ever, passed through the
psychosexual stages without incident, and people rarely
develop the genital character spontaneously. Usually, the
person experiences some sort of developmental crisis
at an earlier stage -- a crisis that prevents growth,
fulfillment, and the final achievement of genital sexuality.
- These difficulties are resolved through the aid of
additional defense mechanisms. For example:
- In fixation, the anxiety and
frustration experienced while advancing to a new stage
cause growth to halt, so that the individual remains at
the earlier stage.
- In regression, the anxiety and frustration
occur after the advance is completed; growth is lost as
the person defensively reverts to an earlier stage of
adjustment. The point at which fixation or regression
occurs determines the adult character.
By virtue of fixation and/or
regression, the person -- that means you, and me, and everyone
-- develops a particular neurotic character, depending on the
developmental stage at which the person has fixated, or to
which he or she has regressed.
- The oral character develops through the
resolution of conflicts over feeding and weaning. The oral-dependent
type relies on others for self-esteem and relief of
anxiety; he or she manifests oral preoccupations --
smoking, eating, drinking -- to overcome anxiety. The oral-aggressive
type expresses hostility towards those who are
responsible for his or her frustrations; this is expressed
not through physical biting, but rather through "biting"
sarcasm.
- The anal character develops through the
resolution of conflicts over toilet training. The anal-expulsive
type engages in retaliation towards those responsible for
his or her suffering. The person is messy, irresponsible,
disorderly, and wasteful; alternatively, by virtue of
reaction-formation, the person appears to be neat,
meticulous, frugal, and orderly; but, Freud asserted,
somewhere, something is messy -- revealing the person's
essential anal-expulsive character. The anal-creative
type produces things to please others, and also oneself.
He or she displays generosity, charity, and philanthropy.
The anal-retentive type shows a marked interest in
saving and collecting things. The basic traits are
parsimony and frugality; alternatively, again via
reaction-formation, a record of foolish investments,
reckless gambling, and spending.
- The phallic character reflects an overvaluing
of the penis. The male, compelled to demonstrate that he
has not been castrated, is reckless, vain, and
exhibitionistic ("Look what I've got!"). The female,
resentful at having been castrated, is sullen,
provocative, and promiscuous ("Look what I've lost!").
As you can see, this is
the kind of theory that only a particular kind of man could
conjure up. In discussing Freud's psychoanalysis, the student
is warned that there is hardly a shred of clinical or
experimental evidence in support of the theory. And a lot of
it will strike a contemporary reader as quaint, if not
downright silly. Nevertheless, psychoanalysis has had such an
enormous impact on our culture -- literature, cinema, and art
-- that it would be criminal to ignore it entirely. So you get
some introduction to psychoanalysis in this course, but this
is it -- a couple of crummy paragraphs buried in a lecture
supplement on development.
Erikson's Theory of Psychosocial Development
Traditionally, mental development was
thought to stop with adolescence -- or, at latest, the entry
to adulthood.
- In the earliest method of calculating IQ (the ratio of mental
age to chronological age, multiplied by 100), a ceiling of 18 years
was established for both variables. Thus, mental growth was assumed
to be complete in late adolescence (see the sketch after this list).
- In Freud's stage theory of psychosexual development,
the oral, anal, and phallic stages were all negotiated
before age 5, and the person was held to have achieved
full adult psychosexual development -- the genital stage
-- at adolescence.
- In Piaget's theory, the highest level of thought --
formal operations -- is also achieved at adolescence.
- Same for Kohlberg's and Gilligan's theories of moral
development.
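As a concrete illustration of the ratio-IQ point above, here is a minimal sketch of that formula with its 18-year ceiling; the function name and sample numbers are purely illustrative, not drawn from any actual test.

```python
# A minimal sketch of the classical "ratio IQ": 100 * (mental age / chronological age),
# with the 18-year ceiling applied to both terms (illustrative values only).
def ratio_iq(mental_age, chronological_age, ceiling=18.0):
    ma = min(mental_age, ceiling)
    ca = min(chronological_age, ceiling)
    return 100.0 * ma / ca

print(ratio_iq(13, 10))   # 130.0 -- a 10-year-old performing like a typical 13-year-old
print(ratio_iq(25, 40))   # 100.0 -- past the ceiling, the ratio can no longer change
```

Once both terms hit the ceiling, the quotient is frozen -- which is just another way of saying that, on this scheme, mental growth is assumed to stop in late adolescence.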
Even so, there has always been some
sense that development was not complete at adolescence -- that
change and growth were still possible in young adulthood,
middle age, and old age.
This concern with development throughout
the lifespan, from birth to death, is expressed in the psychosocial
theory of Erik Erikson, a disciple of Freud's. Erikson
argued that personality was the product of the social
environment as well as of biology. He de-emphasized the
instincts, and especially, infantile sexuality, and focused
instead on the social conditions of child development and
adult life. Mostly, Erikson focused on the issue of ego
identity -- one's awareness of oneself, and one's
meaning for other people. He also expanded the notion of
development by arguing that there is, indeed, life after
puberty. Not only did he propose stages of growth beyond the
genital, but he also introduced a social reinterpretation of
the classic Freudian stages -- hence the label "psychosocial",
rather than "psychosexual".
A Self-Made Man
Erikson was personally consumed by issues of
identity. He described himself as a "child of the borders".
His adopted name was Erik Homburger, but he changed it to
Erik Erikson -- literally, "Erik Son of Erik". As a
self-made man, Erikson remade himself as his own father and
his own son.
In the end, Erikson gave us an epigenetic
conception of development similar to Freud's. That is, the
individual progresses through an inevitable sequence of
stages; and, at each, meets and resolves some crisis. Each
stage builds on the one(s) that went before. Each stage has
several elements, including a crisis that must be met and a
strength that develops during the crisis. The resulting stage
theory is sometimes known as the "Eight Ages of Man".
- In the oral-sensory stage, from birth to 1
year, the child comes to recognize some objects as
familiar, experiences hunger for nourishment and
stimulation, and deals with teething and grasping. The
crisis is basic trust vs. mistrust: the child must
trust that his or her wants will be frequently satisfied,
while others must trust that the child will learn to cope
with its impulses (to cry or bite). The legacy of the
oral-sensory stage is hope, the enduring belief in
the attainability of wishes.
- In the muscular-anal stage, from 1 to 3 years,
the child learns to walk and talk, dress and feed itself,
and control elimination. The crisis is autonomy vs.
shame and doubt: the child learns to rely on its own
abilities, or that its efforts will be ineffectual and
criticized. The relevant strength is will, the
determination to exercise free choice and self-restraint.
- In the locomotor-genital stage, from 3 to 6
years, the child really begins to move about, find its
place in the group, and approach objects of desire. The
crisis is initiative vs. guilt: the child learns
to approach what seems desirable, and experiences the
contradiction between desires and restrictions. The
relevant strength is purpose, the courage to
envisage and pursue valued goals.
- In the latency period, from 6 to 11, the child
makes the transition to school life, and begins to learn
about the world. The crisis is industry vs.
inferiority: the child learns and practices adult
roles, but may conclude that it cannot operate the things
of the world. The strength is competence, the free
exercise of dexterity and intelligence.
- In the stage of puberty-adolescence, from 11 to
18, the young person experiences physiological growth,
sexual maturity, adolescent love, and involvement with
cliques and crowds. The crisis is identity vs. role
confusion: the idea that one's past has prepared one
for the future, as opposed to the failure to differentiate
oneself from others and find one's place in the world. The
strength is fidelity, the ability to sustain
loyalties.
In the previous stages, Erikson
reinterpreted Freud (as some of their names imply). In the
following stages, emerging after adolescence, Erikson added to
the basic Freudian conception.
- In the stage of young (or early) adulthood,
from 18-30, the person leaves school for the outside world
of work and marriage. The crisis is intimacy vs.
isolation: the ability to share oneself in an
intense, long-term relationship, as opposed to an
avoidance of sharing because of the threat of ego loss.
The strength is love, or mutuality of devotion.
- In the stage of (middle) adulthood, from 30 to
50, the person invests in the future through work and
home. The crisis is generativity vs. stagnation:
the ability to establish and guide the next generation, as
opposed to a concern only for one's personal needs and
comfort. The strength is care, a widening concern
for what has been generated by love, necessity, or
accident.
- In the stage of maturity (or late adulthood),
from 50 into the 70s or so, death enters one's thoughts on
a daily basis. The crisis is ego integrity vs. despair:
a strong sense of self and of the value of one's past
life, as opposed to a lack of satisfaction, coupled with
the sense that it is too late to start all over again. The
strength is wisdom, a detached concern with life
itself.
As he (actually, his wife and
collaborator, Joan Erikson) entered his (her) 9th decade,
Erikson (in The Life Cycle Completed, 1998) postulated
a ninth stage, in which the developments of the previous eight
stages come together at the end of life:
- The stage of very old age, beginning in
the late 80s, brings outcomes of the previous eight stages
together. The crisis is despair vs. hope and
faith, as the person confronts a failing body and
mind. If the previous stages have been successfully
resolved, the person will be able to transcend these
inevitable infirmities.
Observational studies have provided
some evidence for this ninth stage, but Erikson's original
"eight-stage" view is the classic theory of personality and
social development across the life cycle.
It should be noted that Erikson's
theory, like Freud's, is highly impressionistic, and not
necessarily based on a proper scientific analysis of lifespan
development. Also, like Freud's, it is strongly based on a
particular cultural experience, and a particular view of what
is important in life. Erikson's theory is not presented as
established scientific fact, but rather as a good example of
how a stage concept of development can be applied throughout
the life span. There are some pearls of wisdom here, but take
the whole thing with a grain of salt. The theory has been
extremely influential in popular culture, and it has fostered
an entire new discipline of life-span developmental
psychology.
Stage Theories of Everything
Piaget, Erikson, and Kohlberg, not to
mention Freud, popularized stages (rather than continuities)
as a framework for understanding psychological development,
and pretty soon stage theories began to appear in lots of
different domains -- stage theories of just about everything.
James Fowler
proposed a stage theory of the development of religious faith.
- Primal or Undifferentiated Faith, seen
from birth to about age 2, is focused on personal safety
-- that is, the development of faith that the environment
is safe and secure as opposed to neglectful and
abusive. This faith in other people (or not) sets
the stage for the development of faith in God (or gods).
- Intuitive-Projective Faith, from 3-7 years of
age, in which the child has difficulty distinguishing
reality from imagination, such that discussion of angels
and devils is taken very literally.
- Mythic-Literal Faith, in which God (or gods) is
(are) viewed in highly anthropomorphic terms.
- Synthetic-Conventional Faith, in adolescence, is
characterized by conformity to religious rules and
rituals.
- Individuative-Reflective Faith, in the 20s and 30s,
is a period of struggle with religious belief.
- Conjunctive Faith entails a "mid-life crisis"
elicited by awareness of the contradictions and paradoxes
of religious belief.
- Universalizing Faith, the resolution of these
paradoxes and contradictions into an "enlightened" faith
in God (or gods).
Elisabeth
Kubler-Ross proposed a stage theory of death and dying.
- Denial, either of illness or of death as a
consequence;
- Anger, feelings of betrayal and envy;
- Bargaining, an attempt to postpone or delay the
inevitable;
- Depression, in the face of the certainty of
death;
- and finally Acceptance, and the inner peace that
comes with it.
And, famously,
"Bill W." and "Dr. Bob", the founders of Alcoholics Anonymous,
proposed the "12 Steps" that people must take, in order, in
order to recovery from alcoholism -- a "self-help" approach
that has since been generalized to other forms of substance
abuse.
- Admit you are powerless over X (where X is
alcohol, drug addiction, sex addiction, whatever).
- Believe that a "higher power" is necessary to
restore proper function.
- Decide to turn your life over to God "as you
understand him".
- Make a moral inventory of yourself.
- Admit your wrongs to God, yourself, and someone
else.
- Ready yourself for God to remove character
defects.
- Humbly ask God to remove your shortcomings.
- List all the people you have harmed.
- Make direct amends to those people.
- Continue to take personal inventory and promptly
admit new wrongs.
- Meditate to improve contact with God "as you
understand him".
- Carry the 12-Step message to others.
As intuitively appealing as these stage
theories may be, they share the problems which beset Piaget's
theory (and, for that matter, Freud's and Erikson's and
Kohlberg's). So where does the intuitive appeal come
from? Mostly, I think, stage theories appeal to us
because we like stories, and stage theories provide a kind of
narrative structure that organizes what goes on in
development. Development proceeds from a starting
point (like Piaget's sensory-motor period) to
some endpoint (like formal operations). The story has a
beginning, a middle, and an end, with plot-points along the
way like the acquisition of object permanence and the loss of
egocentrism.
The only problem is that stage theories
aren't right. Development just doesn't proceed in the
lockstep stages envisioned by Piaget and Freud.
Still, Piaget's theory of cognitive
development has been enormously influential. Most
current theoretical approaches to psychological
development have their origins in extensions of, or
reactions to, Piaget's theory and research. For a
comprehensive account of Piaget's theory of cognitive
development, there's nothing better than The
Developmental Psychology of Jean Piaget by John
Flavell (1963).
For an overview of the development (pardon the pun) of
developmental theory since Piaget, see "The Evolution of
Developmental Theories Since Piaget" by Philippe Rochat,
(Perspectives in Psychological Science, 2024). |
Cognitive Development After Piaget
In all stage theories, the
stages of development are universal, obligatory, stereotyped,
and irreversible.
- All normal individuals must pass through them;
- The stages are passed in the same sequence for all
individuals;
- Once a stage has been successfully negotiated, there is
no going back;
- The achievement of one stage is a necessary condition
for advancement to the next.
These are essentially hypotheses about
the nature of development, and when they are tested, research
has usually failed to confirm them. In particular, research
testing Piaget's theory revealed a number of anomalies, which
led investigators to refocus their theories on continuities in
mental development.
The
first of these, one noted by Piaget himself, was the
phenomenon of decalage (pardon my French). The
implication of Piaget's stage theory of development is that
the child makes a wholesale, quantum leap from one stage to
the next. But Piaget and others recognized that this
transition wasn't as abrupt as initially believed. A child
moving from the pre-operational period to concrete operations,
for example, would conserve on some tasks but not others.
The second, and by far the more critical,
had to do with the lower boundaries of the various stages.
Again, the implication of Piaget's theory is that there are
certain kinds of tasks that a child of a certain age just
cannot do. And so developmental psychologists began to
ask whether, in fact, 3- and 4-year-olds might conserve -- if
only they were tested in the right way.
Among
the first to ask this question was Rochel Gelman, who examined
the child's conception of number. Piaget had argued that
pre-operational children just didn't have any concept of
number or quantity: they couldn't count, and they certainly
couldn't do arithmetic. But Gelman, working with Randy
Gallistel (a mathematical psychologist who was also her
husband), analyzed the concept of number into three
principles:
- One-to-one correspondence means that each number
term corresponds to a particular quantity.
- Stable order refers to the child's organization
of these terms into a consistent order, from lowest to
highest.
- Cardinality means that, when counting a group of
objects, the final number used refers to the number of
objects in the group.
So, suppose you show a 4-year-old a set
of 3 objects, and ask her how many there are, and she says
"five". On first blush, it looks like she doesn't know how to
count. But Gelman went further, and asked the child to count
the objects out loud. A child might say "One, four, five --
five things!"; asked again, she repeats her count, "One, four,
five -- five things". What this indicates is that the child
understands one-to-one correspondence, has a stable order, and
understands cardinality. She just doesn't assign the
conventional words to the various quantities. But she clearly has
a concept of number and a grasp of counting
principles. Gelman's study showed that "pre-operational"
children typically displayed an understanding of these
counting principles, even if they didn't count like the adults
did.
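To make the logic of Gelman's argument concrete, here is a minimal sketch, in Python, of how an idiosyncratic count list can still honor all three principles; the count list and object names are invented for illustration, not Gelman and Gallistel's actual materials.

```python
# The child's idiosyncratic but stable count list, always recited in the same order.
count_list = ["one", "four", "five"]

def count(objects):
    # One-to-one correspondence: each object gets exactly one tag, taken in order.
    tags = [count_list[i] for i in range(len(objects))]
    # Cardinality: the last tag used names the size of the whole set.
    return tags, tags[-1]

tags, answer = count(["duck", "ball", "block"])
print(tags)     # ['one', 'four', 'five']
print(answer)   # 'five' -- unconventional words, but principled counting
```

The words are wrong by adult standards, but the procedure respects one-to-one correspondence, stable order, and cardinality -- which is exactly Gelman's point.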
Karen Wynn pushed the envelope even
further, by asking if infants also had the ability to count.
This is clearly impossible, according to Piaget, because
numbers represent quantities, and children in the
sensory-motor period don't have representations. Wynn's
research was based on the idea that looking time --
the amount of time that children (and adults, for that matter)
spend looking at something -- is an index of surprise and
attention. She showed 4-5-month-old infants displays of the
following sort:
- First one object is placed in a tube, and then another;
when the tube is emptied, either one or two objects fall
out.
- First two objects are placed in a tube, and then one is
removed; when the tube is emptied, either one or two
objects fall out.
Wynn
found that infants were surprised -- they looked longer at the
display -- when it gave the "wrong" answer -- when 1+1=1
rather than 2, or (especially) when 2-1=2 rather than 1. While
the infants probably couldn't do differential calculus, they
did seem to have at least a rudimentary ability to add and
subtract. Not bad for babies who haven't even learned to talk
yet!
These are just two examples of a huge
body of post-Piagetian research that undermined Piaget's stage
theory of cognitive development. Young children simply had
greater cognitive skills than Piaget's theory allowed them.
These empirical findings, in turn, led developmentalists to
propose alternative theories of cognitive development.
Link
to an interview with Annette Karmiloff-Smith.
Similar findings have been
obtained in other areas, such as logical thinking.
Piaget argued that formal logic was a relatively late
cognitive accomplishment, but some psychologists have found
evidence of logical thinking even in very young infants.
For example, visual behaviors, such as a shift in one’s gaze
or a prolonged stare, can be diagnostic of internal thoughts.
Cesana-Arlotti et al. ("Precursors of Logical Thinking
in Preverbal Human Infants", Science, 03/16/2018)
asked whether infants (aged 1 to 1-1/2 years) can entertain a
disjunctive syllogism of the form Either A or B is true; A
is false; Therefore B must be true. When the
infants were presented with scenes depicting the syllogism,
they looked longer, expressing surprise, when B proved to be
false.
Novices and Experts
One
post-Piagetian approach construes development as the
development of cognitive skills. According to this view, the
infant starts out as a novice in all domains of
problem-solving, and acquires expertise through
learning -- through experience and practice. However, in this
view expertise is not just a quantitative difference of
"knowing more" than one did before. Rather, the argument is
that experts represent problems differently than novices do:
- Expert knowledge is cross-referenced, promoting easy
access to it in a variety of situations.
- Expert knowledge focuses on higher-order patterns, so
that experts think in bigger "chunks", and take larger
steps in problem-solving, than novices do.
One model for the development of
expertise is the difference between novice and expert chess
players. Both kinds of players know the rules, but experts
represent the game differently, and play differently, than
novices do.
Of course, all of this is
not very far from the theory of development as learning.
Recall the definition of learning as "a relatively permanent
change in behavior that results from experience"; in the same
way, expertise develops with experience and practice. However,
there are at least two important differences between the
theory of expertise and the theory of learning.
- The acquisition of expertise involves qualitative leaps
in skill that represent the individual's successive
reorganizations of task performance. These qualitative
leaps are somewhat analogous to Piaget's stages, but they
aren't the same as Piaget's stages, because even young
children can attain more expertise than adults (as an
example, consider young children's expertise in
dinosaurs).
- The theory of expertise does not consider the infant as
a blank slate to be written on by experience. Instead, the
child is viewed as bringing a rudimentary cognitive
apparatus into the world, such that learning experiences
modify his or her innate propensities.
These differences between
the theory of expertise and the theory of learning reflect the
lasting contribution of Piaget to developmental theory --
despite the fact that his theory appears to be wrong in many
salient details.
- Theories of development are no longer so concerned with
the cognitive "starting point" -- as in the debate between
nativism and empiricism.
- Now, the focus of developmental theory is on the
cognitive "end point" -- the outcome of psychological
development. The cognitive starting-point is viewed in
light of where the child is going.
Metacognition
Theories
of expertise seem to imply some degree of continuity over the
course of development, but studies of expertise reveal one big
difference between younger and older children: Put bluntly,
older children know what they're doing, while younger children
don't. Older children are not simply more expert than younger
ones: they're also more reflective, more deliberate, and more
strategic in their thought and action. In other words, what
older children possess, and younger children lack, is metacognition.
Metacognition
was defined by John Flavell (1979) as, literally, cognition
about cognition -- or, put another way, our ability to
monitor and control our own cognitive processes. Metacognition
is one's "knowledge about cognition and cognitive phenomena",
including:
- Knowledge of what is going on in your own mind:
- Whether you're perceiving something or just imagining
it;
- What knowledge you have stored in your memory, and how
your memory works (this fund of meta-knowledge is
sometimes called metamemory);
- Whether you actually understand something that you've
learned, or that has been explained to you.
- Appreciation of the rules governing mental processes:
- How to deploy attention effectively;
- How to use strategies for encoding and retrieving
memories (another aspect of metamemory);
- How to break down large problems into sub-problems.
Flavell has argued that
there are several different aspects of cognitive monitoring,
including:
- Goals or Tasks: knowing the objectives of a cognitive
enterprise.
- Actions or Strategies: cognitions and/or actions
employed to attain goals and complete tasks.
- Metacognitive Knowledge: knowledge about factors
influencing your own and others' cognition.
- Metacognitive Experiences: Conscious thoughts and
feelings pertaining to cognition.
His general idea is that with
development, children make more conscious, deliberate use of
their mental faculties.
The Theory of Mind
More
broadly, we might say that as they develop children come into
possession of a theory of mind -- a term coined by
Premack and Woodruff (1978; see also Premack, 1988) based on
their comparative study of cognition in humans and
chimpanzees. Put briefly, the theory of mind is the ability to
impute mental states to oneself and to others. It includes:
- Knowledge of Our Own Minds: understanding that we have
mental states, and realizing that our experiences are our
own -- that, somehow, our experiences are separate from
the world outside the mind, and that we can control our
own beliefs, feelings, and desires. Knowledge of our own
minds entails phenomenal awareness -- introspection -- of
what we think, feel, and want.
- Knowledge of Other Minds: understanding that our mental
states may differ from those of other people -- that
different people have different minds, and thus different
experiences. Knowledge of other minds entails an ability
to make inferences about what others think, feel, and
want.
The development of a theory
of mind is commonly indexed by what is known as the false
belief task. This task, typically, involves an
experimenter, a child, and a puppet. The puppet hides a ball
in an oatmeal container. After the puppet is put away in a
cupboard, the experimenter and the child together switch the
ball from the oatmeal container to a box. Then the puppet is
brought out of the cupboard, and the child is asked where the
puppet will look for the ball.
- Children younger than 4 years of age typically answer
that the puppet will look in the box, "because that's
where it is".
- Children older than 5 years of age typically answer that
the puppet will look in the oatmeal container, "because
that's where he thinks it is."
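The logic of the task can be captured in a minimal sketch; the container names and the function are illustrative, not the actual coding scheme used in these studies.

```python
# Where the ball really is, versus what the puppet last saw before being put away.
reality = {"ball": "box"}
puppet_belief = {"ball": "oatmeal container"}

def predicted_search(has_theory_of_mind):
    # A child with a theory of mind predicts from the puppet's (false) belief;
    # a younger child answers from reality itself.
    return puppet_belief["ball"] if has_theory_of_mind else reality["ball"]

print(predicted_search(False))  # 'box' -- the typical answer before about age 4
print(predicted_search(True))   # 'oatmeal container' -- the typical answer after about age 5
```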
An early study by Wellman et al. found
that children younger than 40 months typically failed the
false-belief test, while children older than 50 months
typically passed it. Somewhere between the ages of 4 and 5,
children get a theory of mind -- they understand that our
minds are our own, and that different people will have
different percepts, memories, knowledge and beliefs.
Of course, the moment researchers
established a landmark like this, other researchers began to
try to push the envelope back -- to determine whether even
younger children have a theory of mind, too, if only we tested
them the right way. One important feature of the standard
false-belief test is that it is verbal -- the experimenter
asks questions, and the child has to answer. So what would
happen if we concocted a nonverbal version of the
false-belief task?
To make a long story short, Onishi and
Baillargeon (2005) tested 15-month-old infants on a nonverbal
version of the false-belief task. The experiment itself is a
thing of wondrous beauty: they devised a totally nonverbal
version of the false-belief task, relying on the general finding that
even infants look longer at events that violate their
expectations.
First, the children were familiarized
with the test situation, in order to build up certain
expectations.
For this purpose, the infants were given three familiarization
trials.
On Trial 1, they saw an actor hide a
(plastic) slice of watermelon in a green box.
On Trials 2 and 3 the actor returned and
reached into the green box for the watermelon.
Then the infants were divided into four
groups for the belief-induction trial. The
infants saw, for just one trial:
In the True Belief Green condition, the
actor watched as the yellow box moved toward the
green box.
In the True Belief Yellow condition, the
actor watched as the watermelon moved from the green
box to the yellow box.
In the False Belief Green condition, the
actor was no longer present when the watermelon
moved to the yellow box.
In the False Belief Yellow condition, the
actor watched as the watermelon moved from the green
box to the yellow one; but was no longer present
when the watermelon moved back to the green
box.
On the Test Trial, the infants
watched as the actor opened the door and reached into
the green box or the yellow box.
And, in fact, the infants looked longer on trials
where an actor behaved in a way that seemed to
contradict her (the actor's) understanding of where
a plastic watermelon slice had been hidden.
Apparently, even infants have some sense of what
others believe, that others' beliefs might be
different from their own, and that those beliefs
might be incorrect. If infants expect others to
behave in accordance with their beliefs, and are
surprised (and pay extra attention) when they do not
do so, then it can be said that even infants, long
before age 4, have a rudimentary theory of mind.
Actually, it's not always the case that infants
look longer at events that violate their
expectations. In some cases, they look for less
time at counter-expectational events. Nobody quite
knows why. But in either case, a difference in
looking times indicates that, from the infant's
point of view, something has gone wrong. And
that's all the logic of the experiment requires.
So even very young children have some
rudimentary theory of mind, even if they can't express it
verbally.
I know what you're thinking: if we have
a nonverbal test of the theory of mind, maybe we can use it to
see if non-human animals have a theory of mind, too. In fact
we can, and Call and Tomasello did, and they found that
chimpanzees utterly failed the test. But that's a study for
another course (I discuss it in my courses on "Scientific
Approaches to Consciousness" and "Social Cognition" -- so if
you're really interested, go to those websites and click on
"Development").
Egocentrism and the Theory of Mind
In some ways, the theory of mind revives
Piaget's notion of egocentrism. For Piaget, the
preoperational child thinks that his experience is
universal. But the older child, having entered concrete or
formal operations, understands that others may not think the
way he does -- that others' percepts, memories, and
knowledge might not be the same as his own.
In fact, the theory of mind usually emerges
between 5 and 7 years of age -- exactly the same point as
Piaget's shift from preoperational thought to concrete
operations.
The "Theory" Theory
In some respects, cognitive development
is the development of social cognition -- the ability
to think about oneself and other people. The acquisition of a
theory of mind is a qualitative change, like the shift from
one of Piaget's stages to another. But it is a qualitative
shift that takes place against a continuous acquisition of
knowledge.
But it turns out
that the child isn't just developing a theory of self and
others. The child is developing a theory of the whole world --
of physics and biology as well as psychology -- and testing it
out in much the same way that a scientist would (the metaphor
of child as scientist is another legacy of Piaget's theory).
This is the essential proposition of what has come to be known
as the "theory theory" of development -- that the
developing child is engaged in a continuous process of
proposing, testing, revising, and rejecting theories of how
the world works -- including theories of how minds work, as
part of the world.
The "theory theory" of cognitive
development takes seriously Piaget's notion that the child is
operating as a naive scientist, actively exploring the
world and experimenting with it -- formulating hypotheses ("I
wonder if it works this way"), gathering evidence ("Let's see
what happens if I do this"), and revising hypotheses based on
the outcome ("OK, that didn't work, maybe it works this
way, instead"). In this way, the child develops theories of
the world -- abstract, coherent systems of knowledge that
enable him or her to predict or control events, and also to
interpret and explain events.
Like the view of
development as the acquisition of expertise, the "theory
theory" emphasizes continuities rather than changes in
cognitive development. The child is constantly experimenting,
right from birth. Put another way, the child is constantly learning
-- learning to predict and control the world around him.
Learning, in fact, is central to theory-formation.
- The child learns about the conditional probabilities
linking events, through something like classical
conditioning.
- And the child learns about the outcomes of
interventions, through something like instrumental
conditioning.
But while some
conditioning theorists viewed the learning organism as a tabula
rasa, or blank slate -- what one 18th-century French
philosopher called a "perfect idiot" -- which is written on by
experience, that's not the view of the "theory theorists".
Instead, they believe that the child comes into the world with
an innate theoretical capacity -- a rudimentary ability to
form, test, and revise its understanding of the world. So,
like Piaget, they believe that the child comes into the world
already prepared with a rudimentary cognitive schema. This
position is known as starting-state nativism, and it
holds that in some nontrivial sense the child comes into the
world with "substantive innate theories" of various domains --
of physics, biology, a theory of the mind, and, possibly, a
theory of society as well. The child develops as it puts these
innate theories into action, testing them in the real world,
and revising them when the test results prove to be
surprising. Just as in science, what starts out as a
very narrow, primitive theory of some domain becomes
progressively more expansive, refined, and robust -- a theory
that actually predicts, and explains, what goes on in the
world outside and inside the child's mind.
Much of what we know about what infants
know comes from studies using the "looking" paradigms of the
sort employed to investigate the infant's concept of number
and understanding of other minds. Infants tend to look
longer at novel, unexpected events -- as if they're surprised
by them, and are trying to figure out what's going on.
So, by tracking what infants look at, we can figure out what
they already expect -- what they already know.
Other studies observe the infants' actions -- what they reach
for, what they crawl to, what they imitate.
Using these sorts of paradigms, we now
understand that even infants have some rudimentary
understanding of basic physical principles:
- the trajectory of movement;
- the pull of gravity on objects;
- how one object can contain or enclose another.
In much the same way, infants have
some understanding of basic biology:
- the distinction between animate and inanimate objects;
- that plants and animals have an essential core that does not
change, even when their outward appearances change;
- that living things grow and non-living things do not;
- that living things can inherit properties from their
parents;
- that living things can get sick.
They haven't been taught these things,
and they seem to know them before they've had any opportunity
to learn them. In fact, it seems that this innate
knowledge about the physical and social world makes it
possible for them to learn new things.
Probability Learning
According to the "theory theory",
infants then build on this innate theoretical knowledge to
develop a more refined understanding of themselves and the
world around them -- all without benefit of language or much
deliberate teaching on the part of their parents.
As discussed in the lectures on Language, infants quickly
begin dividing up the sound world to which they are
exposed. They learn to expect which syllables are likely
to follow other syllables, and which musical tones are likely
to follow other tones (e.g., Saffran, Aslin, & Newport,
1996). Of course, as discussed in the lectures on Learning, nonhuman
animals do much the same thing -- picking up on conditional
probabilities in the course of classical and instrumental
conditioning. So it's not a particular surprise that
human infants can do this, too. But the point is that
infants already "know" something about probabilities.
They already know something about
sampling, too. UCB's Fei Xu (2008) showed 8-month-olds a
container full of colored ping-pong balls -- e.g., 80% white,
and 20% red. The experimenter then dipped into the
container, and removed five balls. The infants showed
signs of surprise if the color distribution of her selection
departed markedly from the color distribution in the container
as a whole -- showing that they already had some idea that a
sample should resemble its parent population.
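To see why such a selection is genuinely surprising, here is a minimal sketch of the relevant arithmetic, treating the five draws as independent; this is a simplification of Xu's actual procedure, and the numbers are illustrative.

```python
from math import comb

def prob_at_least_k_red(k, n=5, p_red=0.2):
    # Binomial probability of drawing at least k red balls in n independent draws
    # from a container that is 20% red and 80% white.
    return sum(comb(n, i) * p_red**i * (1 - p_red)**(n - i) for i in range(k, n + 1))

print(round(prob_at_least_k_red(4), 4))  # 0.0067 -- a mostly-red handful is very unlikely
print(round(prob_at_least_k_red(1), 4))  # 0.6723 -- one or two reds would be unremarkable
```

A handful that is mostly red, drawn from a mostly white container, is improbable enough that looking longer at it is a reasonable index of surprise.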
Given these rudimentary conceptions of
probability, infants can begin acquiring knowledge about their
world -- what goes with what, and what causes what to
happen.
And this can take a while. In
phylogenetic terms, those species with the most intelligent
and flexible adults also have the longest periods of
infancy.
- Precocial species, like chickens and many other birds,
mature quickly; and they are also highly dependent on
innate, instinctual routines.
- Altricial species, like most mammals, especially
humans and other primates, as well as crows, mature
slowly; and their behavior is much more dependent on
learning, including social learning.
For more on this, see "How Babies Think" by UCB's own
Alison Gopnik (Scientific American, July 2010), from
which these examples are drawn.
See also Gopnik's books intended for a popular audience,
including
- The Scientist in the Crib: Minds, Brains, and How
Children Learn (1999; reviewed by Jerome Bruner,
who was Gopnik's mentor in graduate school, in "Tot
Thought", New York Review of Books,
03/09/2000); and
- The Philosophical Baby: What Children's Minds Tell
Us About Truth, Love, and the Meaning of Life
(2009).
Ethics and Morality
What an innate theory might
look like is illustrated by research concerning infants'
conceptions of ethics and morality. Long before they've gone
to Sunday School, and long before they've received any serious
rewards or punishments, infants apparently have some
rudimentary sense of good and bad, right and wrong. Early on,
of course, they have some of the prerequisites for making
moral judgments. For example, they understand the
difference between animate and inanimate objects -- a logical
prerequisite to understanding the concept of agency.
An experiment by Kuhlmeier, Wynn, and
Bloom (2003) shows that even infants have a rudimentary
concept of agency. The infants watched an animated film of a
ball climbing a hill: one figure, a triangle, pushed the
ball up, while another figure, a square, pushed it back down.
Later, they saw a test film in which the ball approached
either the square or the triangle on level ground.
Five-month-old infants didn't discriminate between the two
films, but 12-month-old infants did. Apparently, the
older infants expected the ball to approach the "good"
figure that had "helped" it up the hill, but not the "bad" figure
that had hindered its progress.
- Later experiments by Hamlin in the same lab acted out
the same scenario with actual objects, and then gave the
infants a choice. Both 6- and 10-month-old infants
actively preferred the "helper". Even 3-month-olds,
who were too young to reach for anything, preferred to
look at the "helper" rather than the "hinderer".
In another series of experiments, Felix
Warneken and his colleagues arranged accidents, like
dropping a pen. Infants as young as 18 months of age
will actually help the experimenter -- but not if the
experimenter intentionally throws the pen on the floor.
- Similar human-helping behavior has been observed in both
lab-reared and wild-reared chimpanzees.
The age of the infants in these
studies, and the behavior of the chimpanzees, suggests to some
theorists that children are born with an innate, if
rudimentary, sense of right and wrong -- an innate moral
knowledge that is further amplified through learning.
Others deny this, claiming instead that infants are very fast
learners in this domain. Perhaps humans and other
primates are "prepared" to learn about right and wrong, even
if they're not actually born with this knowledge.
The Pendulum of Developmental Theory
In some sense, theories of development
cycle back and forth between continuity and change, and
between qualitative and quantitative changes. Developmental
psychologists aren't just on a swinging pendulum, however.
Every turn in the cycle represents an increasingly
sophisticated shift in our understanding of the nature of
mental development -- and, for that matter, in our
understanding of the mind.
Cognitive Aging
Erikson was clear that social development
continued throughout the lifecycle: roughly half of
his "Eight Ages of Man" occur after, in Piaget's view,
the child acquires formal operations -- and certainly
after children acquire a theory of mind. The
implication of Piaget's theory, however, is that cognitive
development stops with the acquisition of formal
operations. After that, intelligence assumes a
kind of steady state.
Or, worse, it's often said that the story after early
adolescence is one of steady cognitive decline (never
mind the effects of age-related dementias such as
Alzheimer's disease). But it turns out that even
that story is complicated.
- Recall, for example, from the lectures on Thinking, that
-- at least in Cattell's view -- fluid intelligence
may decline with age, but crystallized intelligence
stays steady, or increases, as the individual
acquires new knowledge.
- Similarly, recalling (sorry!) the lectures on Memory,
procedural knowledge may decline, especially if we
no longer practice a particular skill (I used to be
a pretty good French horn player, but that was more
than 50 years ago, and I don't think I could manage
more than the C-major scale now).
- One function that seems to consistently decline
with age is processing speed, as measured by
reaction times on various cognitive tasks; but even
in this case there are complications, depending on
what's being processed. And, in any event,
this would just disadvantage older individuals on
"speed" as opposed to "power" tasks. The
elderly may not win Jeopardy!, with its
signalling devices, but they will do just fine at Trivial
Pursuit.
A review of performance on various
standardized tests of intelligence and memory (such as
the WAIS) by Joshua Hartshorne and Laura Germine (Psychological
Science, 2015) shows a clear decline in
performance with age. What's even more
interesting, however, is that the peak of performance
varies, depending on the function being tested. Of
course, as with all studies such as this, there is
considerable variability around each of these data
points, meaning that there are even some very old
individuals who continue to perform reasonably
well.
One important principle in cognitive aging, articulated
most forcefully by Nancy Denney at the University of
Wisconsin (Developmental Psychology, 1984), is "use
it or lose it". That is, individuals who
continue to practice a cognitive skill show less decline
in that skill than those who do not. So, if you
can, keep playing that French horn.
A related point is that it is possible for middle-aged
and elderly individuals to acquire new skills. A
social trend toward lifelong learning is documented in
several recent books, such as Beginners: The Joy and
Transformative Power of Lifelong Learning by Tom
Vanderbilt (reviewed in "The Joys of Approaching Life
san Amateur" by Cal Newport, New York Times,
01/31/2021), or Late Bloomers: The Hidden Strengths
of Learnng and Succeeding at Your Own Pace by Rich
Karalgaard (both reviewed in "Starting Fresh" by
Margaret Talbot, New Yorker, 01/18/2021).
A spectacular example is that of Nell Irvin Painter who,
after a distinguished career at Princeton as a historian
of the 19th-century American South, decided to become a
painter, completing both BFA and MFA programs in
retirement. It's not that Painter turned to a
talent or interest that she already possessed, but had
set aside during her academic career. She just decided
that she was going to become a real painter, not
a dilettante like Winston Churchill or George W. Bush
'43, good as these amateurs were and are (see her
memoir, Old in Art School: A Memoir of Starting Over).
Of course, it may be easier to regain a skill that had
once been laid down. That's what savings in
relearning is all about. From time to time,
I think of going on the market for a used French
horn. But I don't think my family would like it.
Whether it's learning something for the first time or
relearning something you used to know how to do, Rachel
Wu of UC Riverside and her colleagues have identified a
number of factors that promote success (Human
Development, 2016):
- Open-minded input-driven learning, which
relies on new observations rather than prior
knowledge.
- Individualized scaffolding, such that the
steps in skill-acquisition are arranged in
increasing order of difficulty.
- A growth mindset (the term comes from
Carol Dweck, as discussed in the lectures on Motivation)
that abilities are not fixed and innate, but can
improve with practice and effort;
- A serious commitment to learning.
- A forgiving environment that supports the
learning activity, even through initial failure;
- A practice of learning several skills
simultaneously, so that what you learn in one
domain may help you learn in another.
Wu et al. note that these six factors are present
when children learn, and decline in the environments
in which adults learn. The implication is that
lifelong learning requires not just what the Buddhists
would call a "beginner's mind" (in which
habitual modes of thought disappear, making the world
seem new and unfamiliar), but also a "beginner's
environment".
The Phylogenetic View of Development
One important perspective on the
development of mind is provided by evolution. The phylogenetic
point of view on development traces the evolution of mind in
the human species as a whole, often by comparing mental
processes in subjects of different species. This is the field
known as comparative psychology. It is an interesting
challenge to develop tests of perception, memory, learning,
categorization, problem-solving, and even language that can be
reasonably applied to nonhuman animals, and comparative
psychologists often exercise great ingenuity in their work.
Three Cheers for Evolution!
Darwin's theory of evolution by natural
selection is based on four principles (summarized by Richard
Lewontin in "Not So Natural Selection",New York Review of
Books, 05/27/2010):
- Variation: "Among individuals in a population
there is variation in form, physiology, and behavior."
- Heredity: "Offspring resemble their parents more
than they resemble unrelated individuals", by virtue of
some biological characteristic that they have inherited
from their parents.
- Differential Reproduction: "In a given
environment, some forms are more likely to survive and
produce more offspring than other forms."
"Evolutionary change is then the mechanical
consequence of variation in heritable differences between
individuals whenever those differences are accompanied by
differences in survival and reproduction. The evolution
that can occur [in this manner] is limited by the
available genetic variation, so in order to explain
long-term continued evolution of quite new forms we must
also add a fourth principle."
The usual view of natural selection is that
some inherited traits are passed on to the next generation
because they facilitate the species' adaptation to their
environmental niche -- "a preexistent way of making a living
into which organisms must fit or die". But Lewontin points
out that adaptation is actually a two-way street. "Organisms
do not 'fit into' niches, they construct them...." The
organism affects its environment, even at the level of
physics and biology, at the same time that the organism is
affected by its environment. And then he offers a great
example:
The most remarkable feature of terrestrial
organisms is that each one of them manufactures the
immediate atmosphere in which it lives.... By use of a
special kind of optical arrangement (Schlieren optics) on
a motion picture camera it is possible to see that
individual organisms are surrounded by a moving layer of
warm moist air. Even trees are surrounded by such a layer.
It is produced by the metabolism of the individual tree,
creating heat and water, and this production is a feature
of all living creatures. In humans the layer is constantly
moving upward over the body and off the top of the head.
Thus, organisms do not live directly in the general
atmosphere but in a shell produced by their own life
activity. It is, for example, the explanation of
wind-chill factor. The wind is not colder than the still
air, but blows away the metabolically produced layer
around our bodies, exposing us to the real world out
there.
So even at the level of physics and biology,
not just the building of structures and the like, organisms
change the environment that they live in. They're not
passive recipients of environmental stimulation.
Somewhere Stephen Jay Gould, the late
paleontologist and evolutionary biologist (and close
colleague of Lewontin's), cited three classes of evidence
for evolution:
- Evolution Around Us: Although Darwinian evolution
transpires over millions of years, we can see similar
sorts of changes, on the smaller scale of micro-evolution,
occurring over the course of just a few generations or
even a single lifetime. Examples include the domestication
of dogs and of crop plants, and the emergence of
DDT-resistance in agricultural pests, and of
antibiotic-resistance in human pathogens.
- Intermediate Forms: Although there are definite
gaps, the fossil record contains ample evidence of extinct
species that mark the transition between one species and
another. Examples include the shift from reptiles to
mammals, the origins of whales in cow-like land creatures,
and "Neanderthals" and other species of hominoids (see
below).
- Oddities and Imperfections: Various physical
traits that serve no current adaptive purpose, or that
reveal the "attempts" of evolution to solve some problem
of adaptation. Gould's favorite example was the panda's
thumb, which also provided the title for one of his best
popular-science books.
The classic view of evolution, known as the Modern
Synthesis, combined natural selection with genetics
(Mendel published his work on inheritance in peas in 1866,
after Darwin published the Origin of Species, and
his work on the laws of inheritance was not really
recognized and appreciated until the early 20th
century). The Modern Synthesis basically argued that
randomly varying genes cause phenotypic characteristics
which are then selected for (or against) by the
environment. And the Modern Synthesis reached its apex
in the discovery of the structure of DNA by
Watson and Crick, and the development of techniques for
mapping the human genome (and the genomes of other
organisms).
This story remains valid, but some theorists (e.g.,
Jablonka and Lamb, in Evolution in Four Dimensions,
2005) have argued that the genome itself is responsive to
environmental pressures, and that there are other
transmissible differences besides genetic ones. They have
suggested that there are, in fact, four different types of
influence on evolution:
- Genetic, basically following the lines of the
Modern Synthesis.
- Epigenetic, meaning the transmission of
phenotypic variations that do not, themselves, depend on
differences in DNA.
- Behavioral, meaning inadvertent transmission by
observational learning.
- Symbolic, meaning the deliberate transmission by
means of language.
Ordinarily,
evolution
occurs over extremely long periods of time, but the
intervals involved can be much shorter. Consider, for
example, the case of the peppered moth, an insect
whose white wings are "peppered" with black spots.
Before the Industrial Revolution, most peppered moths
were of the light-colored form, with individual variation
in the density of the "peppering". But by the end of
the 19th century, the dark-colored variant
vastly outnumbered the light-colored variant.
The evolutionary explanation is that, in the heavily
polluted environment of industrial England, the denser
peppering was highly adaptive, because the darker-colored
variant was harder for birds to see against soot-darkened
trees, and therefore less likely to be caught and
eaten. This evolutionary change, occurring over less
than 100 years, is probably the clearest evidence of
Darwinian evolution by natural selection.
Another, dramatic (and tragic) case of rapid evolution was
observed among elephants in Mozambique. Most African
elephants have tusks, but some females do not (and some
females have only one tusk). During the Mozambican
civil war (1977-1992), both sides slaughtered elephants to
sell their tusks in order to finance the conflict -- what's
called conflict ivory. But this slaughter was
not indiscriminate: it targeted only those individuals with
tusks. A study led by Shane Campbell-Staton, an
evolutionary biologist at Princeton, documented the genetic
consequences (Science, 10/22/2021; see commentary by Chris
Darimont and Fanie Pelletier in the same issue; also "Tuskless
Elephants Escape Poachers, but May Evolve New Problems" by
Elizabeth Preston, New York Times, 10/26/2021). Since the civil
war, ecologists and evolutionary biologists have documented
a severe decline in the number of Mozambican elephants with
tusks. Now, I can hear you say it: Of course,
you idiot! They've all been killed for their ivory!
True, but that's not the whole story, because the relative
decline in tusked elephants has persisted since the civil
war ended, and controls on elephant-poaching have been
strengthened. It turns out that "tusklessness" is
controlled by a single dominant gene on the X chromosome,
known as AMELX, that is implicated in the production of
malformed human teeth (remember, tusks are teeth).
And, for good measure, that gene sits next to another one
that is lethal to male offspring, and the two of them
tend to get passed along together. So, remembering
your high-school genetics (a small sketch of the arithmetic
follows the list):
- On average, a tuskless female (X+X-), who has one copy
of the mutation (X-), will produce daughters half of whom
have tusks (X+X+) and half of whom do not (X+X-); half her
sons (X+Y) will have tusks, and the other half (X-Y) will die.
- Two-tusked females (X+X+) can only mate with tusked
males (X+Y), because X-Y males die before they can
mate. Therefore, two-tusked females will produce
only tusked daughters (X+X+) and sons (X+Y).
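Here is a minimal sketch of that bookkeeping in Python. The allele
labels and the assumption that X-Y males do not survive come from the
description above; everything else (the function names, the enumeration
itself) is just illustration, not code or data from the Campbell-Staton study.

    # Minimal sketch (illustration only) of the X-linked inheritance
    # described above: "X+" is the normal allele, "X-" the dominant
    # tuskless allele, and "X-" on a male's single X is assumed lethal.
    from itertools import product

    def cross(mother, father):
        """Pair each maternal X with each paternal chromosome, equally likely."""
        return [tuple(sorted(pair)) for pair in product(mother, father)]

    tuskless_mother = ("X+", "X-")
    tusked_father = ("X+", "Y")   # X-Y males die, so surviving fathers are X+Y

    counts = {}
    for genotype in cross(tuskless_mother, tusked_father):
        counts[genotype] = counts.get(genotype, 0) + 1

    for genotype, n in sorted(counts.items()):
        print(genotype, n / 4)
    # Expected: 1/4 tusked daughters (X+, X+), 1/4 tuskless daughters (X+, X-),
    # 1/4 tusked sons (X+, Y), and 1/4 sons (X-, Y) that do not survive.

With no surviving X-Y sons, each generation loses a quarter of the
tuskless females' male offspring -- which is the sex-ratio cost described
next.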
So the result has been an increase in tuskless females, but a
decrease in males. To make things worse, elephants use
their tusks to strip bark from trees, dig holes for water, and
defend themselves. So tusklessness is not an
unadulterated good thing, and it's not clear how the increased
population of tuskless females will adapt to this new
situation. So, in the final analysis, over a period of
just 15 years, the elephants' civil-war environment introduced
a new selection pressure, which made tusklessness adaptive
(and, perforce, "tuskedness" maladaptive). Maybe, in the
new post-civil war environment, with decreased poaching, tusks
will bounce back. But for a while, at least,
tusklessness will remain at relatively high levels.
But evolution doesn't only affect body morphology. It
also affects behavior, as discussed in the lectures on Learning. And
moths provide evidence of this, too. Moths are famous for
their "flight to light" behavior a taxis (also discussed in
the lectures on Learning)
in which moths fly toward sources of light ("as they say,
"like a moth to a flame"), with frequently fatal
consequences. Florian Altermatt and Dieter
Ebert, two Swiss evolutionary biologists, (Biology
Letters, 2016) found that ermine moths collected from
light-polluted environments were significantly less
attracted to light than those collected from dark-sky
environments. The researchers speculate that this
behavioral change increases reproductive fitness, in terms
of both survival rate and reproductive capacity. Maybe
there's hope for those sea turtles after all!
It's worth pointing out how powerful natural selection
is. Consider a calculation by J.B.S. Haldane, a British
evolutionary theorist, which assumes that one gene (A) is
found in 99.9% of the population, and another gene, B, is
found in only the remaining 0.1%. If B has merely a 1%
reproductive advantage, producing 101 offspring for every
100 offspring produced by A, after only 4,000 generations
the numbers will be reversed: B will be found in 99.9% of
the population, and A in only 0.1%!
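The arithmetic behind that claim can be sketched in a few lines of
Python. This is a simple single-locus selection model of my own, not
Haldane's actual derivation, so the generation count it produces depends
on the assumptions made (dominance, mating system) and need not match
the 4,000 figure exactly; the point is only how quickly a small
advantage compounds.

    # Simple haploid-selection sketch: gene B starts rare but enjoys a 1%
    # reproductive advantage over gene A. (Illustrative assumptions only.)
    p = 0.001          # initial frequency of B
    s = 0.01           # 1% advantage
    generations = 0
    while p < 0.999:
        # standard single-locus selection update
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        generations += 1
    print(generations)  # on the order of a thousand generations under these assumptions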
For more on evolution, see UCB's "Understanding
Evolution" website at http://evolution.berkeley.edu/.
Also see the January 2009 issue of Scientific American,
which celebrated the 150th anniversary of the publication of
Darwin's Origin of Species (1859).
Similarly, for the 150th anniversary of the publication of
Darwin's The Descent of Man (1871), Science
published an extensive review of advances in our
understanding of human evolution since Darwin's book ("Modern
Theories of Human Evolution Foreshadowed by Darwin's Descent
of Man" by Peter J. Richerson, Sergey Gavrilets, and
Frans B.M. de Waal, 05/21/2021). Given the status of
both the authors and the publication venue, this overview
may be considered authoritative. Herewith, some
quotations from the "Review Summary" (images also from the
main article).
Modern research shows that we share many
developmental, physiological, morphological, cognitive,
and psychological characteristics as well as about 96% of
our DNA with the anthropoid apes. We now know that since
our last common ancestor with the other apes 6 million to
8 million years ago, human evolution followed the path
common for other species with diversification into closely
related species and some subsequent hybridization between
them. Since Darwin, a long series of unbridgeable gaps
have been proposed between humans and other animals. They
focused on tool-making, cultural learning and imitation,
empathy, prosociality and cooperation, planning and
foresight, episodic memory, metacognition, and theory of
mind. However, new insights from neurobiology, genetics,
primatology, and behavioral biology only reinforce
Darwin’s view that most differences between humans and
higher animals are “of degree and not of kind.” What makes
us different is that our ancestors evolved greatly
enhanced abilities for (and reliance on) cooperation,
social learning, and cumulative culture—traits emphasized
already by Darwin. Cooperation allowed for environmental
risk buffering, cost reduction, and the access to new
resources and benefits through the “economy of scale.”
Learning and cumulative culture allowed for the
accumulation and rapid spread of beneficial innovations
between individuals and groups. The enhanced abilities to
learn from and cooperate with others became a universal
tool, removing the need to evolve specific biological
organs for specific environmental challenges. These human
traits likely evolved as a response to increasing
high-frequency climate changes on the millennial and
submillennial scales during the Pleistocene. Once the
abilities for cumulative culture and extended cooperation
were in place, a suite of subsequent evolutionary changes
became possible and likely unavoidable. In particular,
human social systems evolved to support mothers through
the recruitment of males and nonreproductive females. The
most distinctive feature of our species, language,
appeared arguably driven by selection for simplifying
cooperation. Reliance on social learning and conformity
led to the emergence of new factors constraining and
driving human behavior, such as morality, social norms,
and social institutions. These forces often act against
the immediate biological or material interests of
individuals, promoting instead the interests of the
society as a whole or of its powerful segments. Continuous
engagement in cooperation has led to the evolution of
strong coalitionary psychology, which can bring us
together whenever we perceive that our identity group
faces outside threats. Coalitionary psychology also has an
undesirable byproduct: often negative or even hostile
reaction to others who differ from us in their looks,
behaviors, beliefs, caste, or class (p. 806).
In Descent, Darwin remarked
in a few passages on the origin and antiquity of humans,
but he and his contemporaries had almost no relevant
fossils to work with and very underdeveloped
archaeological and paleoecological records. We now know
that the human lineage has undergone a rather dramatic
series of changes since our last common ancestor with the
other apes 6 million to 8 million years ago. Human
evolution followed the path common for other species, with
diversification into closely related species and some
subsequent hybridization between them. DNA and fossil
remains suggest that our ancestors diverged from
Neanderthals and Denisovans more than half a million years
ago. Anatomically modern humans were present in Africa
200,000 years ago. Around 70,000 years ago, up to six
highly distinctive subspecies of humans coexisted. Since
then, we have been a single species that emerged
from Africa about 50,000 years ago. Some of our derived
features, especially bipedal locomotion, are fairly
ancient; others, especially stone tool knapping, evolved a
little before the first fossils attributable to our genus
Homo appears in the fossil record around 2
million years ago; and still others appeared after 250,000
years ago. Human behavior was substantially modern by
30,000 years ago, but both biological and especially
cultural changes have been dramatic right up to the
present. In the Holocene, cultures evolved a whole series
of new ecological niches based on cultural adaptations and
symbolic markers of tribes and tribe-like social units
that partially isolate ecologically different populations
(p. 808).
One
thing that Richerson et al. make clear is that some of the
traits that evolved through the Darwinian principle of
natural selection -- namely, general intelligence, language,
and consciousness -- set the stage for cultural evolution
through discovery and social learning. And cultural
evolution is quite different from organic evolution -- not
least because it is much faster. In their "Review
Summary", they illustrate the differences with this image,
derived from Anthropology: Cultural Patterns and
Processes (1923) by UCB's pioneering anthropologist,
Alfred Kroeber. "Biological inheritance is rigid from
parents to offspring..., and [different] species mostly do
not exchange genes." The result is a tree with very
distinctly separated branches. By contrast, cultural
traits "are potentially acquired from anyone in
person's social network, and ideas spread rather readily
from one culture to another". This results in a much
more tangled tree.
In a lead editorial in the same issue ("The Descent of
Man, 150 Years On"), Agustin Fuentes, an anthropologist
at Princeton, reflects on the racism and sexism that
permeate Darwin's book (he was not immune to the attitudes
of his time). The bottom line, says Fuentes, is that "Descent
is a text from which to learn, but not to venerate".
As the "father" of evolutionary theory, Darwin got a lot
right. But, Fuentes writes, students "should also be
taught Darwin as an English man with injurious and unfounded
prejudices that warped his view of data and experience".
Another expression of the phylogenetic
point of view is evolutionary psychology, an offshoot
of the sociobiology proposed by E.O. Wilson in his
book by that title. Sociobiology assumed that patterns of
social behavior evolved in the service of adaptation. Put
another way, large segments of social behavior are instinctual
in nature. Similarly, evolutionary psychology assumes that
mental functions also evolved to serve adaptive purposes. Put
another way, these modes of experience, thought, and action
are also instinctual in nature -- they are part of our innate
biological endowment, a product of evolution.
Kinds of Selection
Most discussions of biological
evolution focus on Darwin's first principle of natural
selection -- the idea that traits resulting from natural,
random variation, if they enhance the organism's ability to
survive in a particular environment, are more likely to be
passed on to its offspring. In the case of Darwin's finches,
for example, variations in beak shape suited different birds
to different food sources on different islands.
Darwin's concept of natural selection was
deliberately modeled on the "artificial" selection by which
farmers and ranchers have "improved" their livestock since
time immemorial. Thus, consider Carl Sagan's example (in Cosmos:
A Personal Voyage) of the samurai crab (Heikea
japonica). The Japanese "Tale of the Heike" tells the
story of Heike warriors in a naval battle in the 12th century,
who committed suicide by jumping overboard rather than face
defeat. Later, fishermen working in the vicinity of the battle
caught some crabs whose shells resembled human faces.
Believing that these crabs were the reincarnations of the
drowned warriors, they returned them to the sea. So, the
face-like appearance of the shell became an adaptive trait,
which was passed on to subsequent generations.
Detail from
illustration: The ghost of Taira Tomomori and heikegani with
faces of fallen soldiers, ukiyo-e print by Utagawa
Kuniyoshi (1797-1861).
Photograph
of a demon-faced crab, found in the waters surrounding Japan
(Smithsonian Magazine).
But it turns out that natural selection
is not the only possible form of selection. There are certain
traits that do not necessarily promote the survival of the
individual organism, but which are adaptive in other ways, and
thus also likely to be passed from one generation to another.
In
sexual selection, certain traits enhance the likelihood
that the organism possessing them will be able to mate -- even
if that trait is maladaptive in other respects. The classic
examples are strength and size (in males), elaborate plumage in
birds (again, males), and various kinds of courtship displays.
The idea actually originated with Darwin's grandfather,
Erasmus Darwin, but Charles Darwin elaborated on the concept
in The Descent of Man and Selection in Relation to Sex
(1871). Sexual selection is theoretically important, because
it makes clear that the basis for evolution is not merely
"survival of the fittest", as the stereotype goes, but rather
reproductive success. Fitness doesn't mean mere survival. It
means the ability to pass on one's genes. The classic example
of a trait subject to sexual selection, one noted by Darwin
himself, is the male peacock's display of tail-feathers.
Another, also noted by Darwin, is the bower bird of Australia
and New Guinea, which adorns its nest with all sorts of
colorful objects, including fruits and flowers but also found
objects like pieces of glass, in an attempt to attract a mate.
Richard Milner, in the Encyclopedia of Evolution:
Humanity's Search for Its Origins (1990) characterizes
sexual selection as "survival of the flamboyant".
A further principle of group
selection is often invoked to show how certain group
behaviors, like cooperation, might be adaptive. The classic
examples are found in social insects: beehives often function
as if they were a single organism. Individuals contribute to
the reproductive success of the group, and so pass on certain
genes that are common to their group -- even if they (and
their immediate family members) do not reproduce. But it's not
the group's survival that's at issue. Rather, the
selection favors genes that the group members have in common.
Milner (1990) calls group selection the "survival of the
social unit". Evolutionary psychologists often invoke group
selection as the biological basis of religion.
In kin selection, organisms
pass on traits that do not necessarily promote their own
survival or reproduction. This
concept has its origins in the work of William Hamilton, who
was interested in problems of altruism. For example, why do
worker ants and bees, themselves sterile, sacrifice themselves
to serve their queen? Hamilton's answer is that such
activities incidentally increase the chances that one's own
genes will be passed on. Even if the individual dies, the
survival of his genetic relatives ensures that the family's
genes are passed on. Consider that, by virtue of sexual
reproduction, each individual organism shares about 50% of its
genes with first-degree relatives. We share an average of 50%
of our genes with our parents, siblings, and children; an
average of 25% with our aunts, uncles, nieces, and nephews; and an
average of 12.5% with our first cousins. If an
organism has a trait, like working instead of reproducing,
that increases the likelihood that its (many) relatives will
reproduce successfully, then it thereby increases the
likelihood that its own genes will be passed on as well.
Instead of the individual's reproductive fitness, kin
selection enhances inclusive fitness. J.B.S. Haldane,
a British geneticist, once remarked that he would be willing
to sacrifice himself if he could be assured that his genes
would live on: when pressed further, he announced his
willingness to die for "two brothers, four uncles, or eight
cousins" (as quoted by Milner, 1990; see also Lehrer, 2012).
Kin selection is the basis of Richard Dawkins's theory of "the
selfish gene" -- that it's not species that are struggling to
reproduce themselves, nor individuals, nor group members --
but, rather, genes themselves.
According to William Hamilton, a
British mathematical biologist who undertook to express
Haldane's insight precisely, a gene for altruism can evolve
when rB > C -- that is, when the benefit (B) of the
genetically controlled behavior, weighted by the genetic
relatedness (r) between actor and recipient, exceeds its
cost (C) to the actor. This formula
became the basis of inclusive fitness theory which
gets its name because the reproductive value of the trait
includes the individual's close genetic relatives as well as
the individual itself. Hamilton's theory, in turn, was
popularized by E.O. Wilson, the Harvard entomologist who, in
such treatises as Sociobiology: The New Synthesis
(1975) and On Human Nature (1979), was among the first
to apply evolutionary theory to human social behavior -- a
foreshadowing of today's evolutionary psychology.
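As a purely illustrative worked example (the numbers are mine, not
Hamilton's): for full siblings the coefficient of relatedness is r = 1/2,
so the rule becomes

    rB > C, \qquad r = \tfrac{1}{2} \;\Longrightarrow\; B > 2C

that is, an act of self-sacrifice toward a full sibling can be favored
only if it yields more than twice as much reproductive benefit to the
sibling as it costs the altruist.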
Inclusive fitness theory, in turn, has
been challenged by other theorists, including now Wilson
himself, who have proposed a principle of group fitness
instead. According to this theory, the reproductive value of a
trait includes other members of the individual's social group
(think: tribe), regardless of how genetically related the
group members are. In some sense, group fitness reverses
inclusive fitness. In inclusive fitness, the effect of the
gene determines the composition of the group, because it favors
the survival of genetic relatives. But in group fitness, the
group comes first, and the gene favors the survival of group
members.
In fact, both principles, kin selection
and group selection, operate, but at different levels: we have
genes for both selfishness (which favors the survival of the
individual and his genetic relatives) and cooperation (which
favors the survival of unrelated group members), which tend to
oppose each other. As Wilson (2007) has put it: "Selfishness
beats altruism within groups. Altruistic groups beat selfish
groups. Everything else is commentary."
For the record, there's at least one
other form of selection, known as balancing selection.
Consider the distribution of the "Big Five" personality traits
discussed in the lectures on "Personality
and Social Interaction". We know that each of
these traits is, to some degree, under genetic control.
And we can assume, as well, that there is an optimal
distribution for each of these traits -- let's say, for
purposes of argument, a fair amount of extraversion,
agreeableness, and openness to experience; lots of
conscientiousness; and not too much neuroticism. How
come natural selection hasn't operated to give us all the
optimum levels of these traits, just as it's given every
normal human the capacity for language? How come
there are any neurotics, or psychopaths, at all? Why
don't we all just get along? According to the principle
of balancing selection, different combinations of traits are
optimum for different environments, both physical and
social. If everyone were low on neuroticism, it would be
adaptive for "neurotic" people to be aware of, and responsive
to, threats that non-neurotic people are oblivious to.
If everyone were high on openness to experience, it would be
adaptive for "closed" people not to go jumping out of planes
without a parachute. So, over time, evolution
establishes a sort of equilibrium. And because the
genetic contribution to neuroticism or openness isn't a single
gene, but rather a large number of genes, as more-neurotic and
less-neurotic people mate and produce offspring, you tend to
get something that looks like a normal distribution.
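A small simulation (my own illustration, not taken from any of the
sources cited here) shows why a trait influenced by many genes of small
effect ends up looking roughly bell-shaped in a population:

    # Illustration: sum many small, random "gene" contributions to a trait
    # and look at how the totals are distributed across a population.
    import random

    def polygenic_score(n_genes=100):
        # each hypothetical gene adds 0 or 1 unit to the trait
        return sum(random.randint(0, 1) for _ in range(n_genes))

    population = [polygenic_score() for _ in range(10_000)]

    # crude text histogram: most people pile up near the middle of the range,
    # with few at either extreme -- roughly a normal distribution
    for low in range(35, 66, 5):
        count = sum(1 for score in population if low <= score < low + 5)
        print(f"{low:2d}-{low + 4:2d} {'#' * (count // 200)}")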
Genes, Culture, and Altruism
Evolutionary psychologists get twisted up in knots
trying to explain altruism: How could it possibly be
adaptive for an organism to sacrifice itself for
others? How could such a tendency be built into
the genes? It's to solve this problem that ideas
like kin selection and group fitness were put
forth. But these principles have an ad-hoc
quality to them. In scientific terms, they're
not very parsimonious, and appear (to me) to have been
proposed in acts of desperation to make sure that
there is a genetic, biological, Darwinian
explanation for everything.
The first thing to ask is: how often does true
altruism occur? How often do individuals
sacrifice themselves for others who are not their
genetic offspring? I bet a statistically valid
survey would answer: not very often.
One place where it does occur is on the
battlefield, where soldiers often (but not
always -- which is not a criticism) sacrifice
themselves to save others. Consider, to take a
nonrandom example, Humayun Khan, son of Khizr and
Ghazala Khan, who famously reprimanded
soon-to-be-President Donald Trump for his attitudes
toward Islam and Muslims. In 2004, while serving
as an Army captain in Iraq, he ordered his
company to stay sheltered while he confronted a
suspected car-bomber alone. His suspicions
proved valid, and the car bomb went off: Khan is now
buried at Arlington National Cemetery, but the rest of
his company lived. This happens with some
frequency in the military, and among first responders at home
(consider the firefighters who went up the
Twin Towers on 9/11, when everyone else was coming
down). These examples suggest that altruism isn't
built into the genes. It's built into the
culture -- more precisely, the particular
sub-culture that is the military, or the fire
department, or similar organization.
So altruism does occur, and it has to be
explained.
One way of studying altruism has been through the Prisoner's
Dilemma, a game which I discussed briefly in the
lectures on "Personality
and Social Interaction". In the game, two
players take the roles of criminal suspects, A and B,
who face a prison sentence for committing a crime
(Poundstone, 1992). The prosecutor offers each
of them the following deal:
- If A and B both confess, saving the prosecution
the expense of a trial, they will each be sentenced
to 2 years in prison.
- If A confesses and implicates B, A will go free
while B will serve 3 years.
- If B confesses and implicates A, B will
go free while A will serve 3 years.
- If both stay silent, each will serve 1 year in
prison on a lesser charge.
Here's a depiction of the payoff matrix in the
classic Prisoner's Dilemma:

                                      Prisoner B
                             Cooperate                   Defect
    Prisoner A  Cooperate    A -- 1 year, B -- 1 year    A -- 3 years, B -- 0 years
                Defect       A -- 0 years, B -- 3 years  A -- 2 years, B -- 2 years
The two prisoners cannot communicate with each other,
so each has a choice of whether to cooperate
by remaining silent or to defect (compete)
by confessing. Obviously, defection is the
rational choice for the one who defects; but
cooperation is the altruistic outcome, because it
minimizes the loss to the other person as well as to
oneself. But cooperation requires each prisoner
to trust that the other will not defect.
There are two basic versions of PD. In the
standard PD, there is just one round of the
game. In iterated PD, there are several
rounds, and subjects' behavior changes over
time. On the first round, as in the standard
game, many, perhaps most, subjects compete by
defecting. On the second round, an initially
cooperative subject may retaliate by competing in
turn. In this "tit for tat" strategy,
each player mirrors the other's behavior.
Eventually, however, both players come to understand
that they can maximize their joint outcomes by
cooperating.
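Here is a minimal sketch of the iterated game in Python, using the
prison-sentence payoffs from the matrix above (lower totals are better).
The strategy functions are illustrations of "tit for tat" and pure
defection, not code from Poundstone or any other source cited here.

    # Iterated Prisoner's Dilemma sketch: payoffs are years in prison (lower is better).
    YEARS = {               # (my_move, other_move) -> years I serve
        ("C", "C"): 1,
        ("C", "D"): 3,
        ("D", "C"): 0,
        ("D", "D"): 2,
    }

    def tit_for_tat(history):
        """Cooperate on the first round, then mirror the opponent's last move."""
        return "C" if not history else history[-1]

    def always_defect(history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        history_a, history_b = [], []      # each player's record of the other's moves
        total_a = total_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_a)
            move_b = strategy_b(history_b)
            total_a += YEARS[(move_a, move_b)]
            total_b += YEARS[(move_b, move_a)]
            history_a.append(move_b)
            history_b.append(move_a)
        return total_a, total_b

    print(play(tit_for_tat, tit_for_tat))      # (10, 10): mutual cooperation, 1 year per round
    print(play(tit_for_tat, always_defect))    # (21, 18): exploitation stops after round 1

Against another tit-for-tat player, mutual cooperation yields the best
joint outcome; against a pure defector, tit for tat loses only on the
first round and then stops being exploited.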
In addition to this process, known as direct
reciprocity, there are other processes involved
in the emergence of cooperative behavior (for details,
see "Five Rules for the Evolution of Cooperation" by
Martin A. Nowak, Science, 2006; also "Why We
Help" by Nowak, Scientific American, 07/2012):
- Spatial selection: neighbors tend to
cooperate with each other, and the network of
cooperators gradually expands.
- Kin selection, as discussed earlier.
- Indirect reciprocity, in which people help
others who have already established a reputation for
helping.
- Group selection, also as discussed earlier,
in which individuals help others for the "greater
good".
Nowak argues that evidence for all these mechanisms
can be found in many different species of animals,
suggesting that they have in common some basis in
Darwinian evolution by natural selection. But,
as he also points out, some of these mechanisms aren't
carried on the genes. In particular, direct and
indirect reciprocity depend on experience, and the
sharing of information (e.g., about reputation) via
language. This is especially the case for direct
reciprocity, which depends on players' ability to
learn from experience; and indirect reciprocity, which
depends on players' ability to learn from others
(i.e., social learning). The mechanism for
Darwinian evolution by natural selection is based on
genetic variability in innate traits. But we
also know that there's no inheritance of acquired
characteristics. So, helping behavior isn't all
encoded in the genes. It's also encoded in
knowledge, beliefs, and attitudes -- it's encoded in
the mind.
Our Phylogenetic Heritage
Humans have a particular place in the
phylogenetic scheme of things. The earliest analyses of the
human place in nature were based on morphological
similarity -- the similarity in form and structure
between humans and other animals. From this perspective,
humans are warm-blooded vertebrates, primates related to the hominoid
apes: orangutans, chimpanzees, gorillas, and gibbons.
One popular morphological analysis holds that we are most
closely related to orangutans.
The Cosmological Backdrop
The earth, and the rest of the physical
universe, evolved too -- just not through such processes as
natural selection. Like our understanding of human
evolution, our knowledge of the evolution of the universe
changes constantly, with new scientific discoveries.
However, the rough outlines can be drawn below, beginning
with the "Big Bang", about 14 billion ya.
The point is that the universe has not always
existed, in a sort of steady state. As far as we can tell,
the universe had its origins in the (extremely brief) era
known as "Quantum Gravity", shortly before the Big Bang --
very shortly before, about 10^-43 second into the
existence of the universe (that's a zero, a decimal point,
and a 1, with 42 zeroes in between). The entire universe was
essentially a point in space, measuring 10^-33
centimeters in diameter (so it wasn't exactly a point, but
that's close enough for government work). Space and time
were discontinuous, and all the physical forces in the
universe were unified. Time began at this point, but there
was not yet any space -- that is, there weren't any
meaningful dimensions of length, width, and depth.
The "Era of Unification"
- At 10^-39 seconds, the "strong force" split
from the "weak force" and from electromagnetism, beginning
the "era of unification".
- At 10^-34 seconds, the Big Bang caused the
cosmos to swell.
- At 10^-11 seconds, the "weak force" split from
electromagnetism.
- The period before the "Big Bang" was defined by
physicist Alex Vilenkin as "a closed spherical spacetime
of zero radius".
"Quark Soup"
- At 10^-5 seconds, quarks combined into protons
and neutrons, beginning the era of "quark soup". All this
so far, and we're still less than a second into the age of
the universe!
The "Primordial Fireball"
- From 10^-2 seconds to 3 minutes,
nucleosynthesis occurred. Protons and neutrons formed the
nuclei of atoms, yielding the light elements -- helium,
lithium, and deuterium.
- At about 400 thousand years, atomic nuclei began
capturing electrons. The universe became transparent, and
cosmic radiation was released.
The "Dark Ages"
- After about 1 million years, the cosmic background
radiation faded, leaving the universe empty and dark.
"First Structures"
- At about 500 million years of age, the dark ages ended
with the formation of the first stars. These stars then
exploded, filling the universe with heavy elements. At
this point, the era of "first structures" began.
- Beginning at about 1 billion years, the first galaxies
formed, with black holes at their centers. These were the
quasars, the farthest objects that can be seen from the
earth today.
- From 2 to 6 billion years of age, other galaxies formed,
including our own Milky Way.
- At 7 billion years, "dark energy" began to accelerate
the expansion of the universe.
- At 9.5 billion years, or about 4.5 billion ya, our solar
system, including the Sun and the Earth, was born,
essentially completing the universe as we know
it -- a universe built from about an ounce of primordial
stuff that exploded in the Big Bang.
- Interestingly, though, most of the
universe appears to be composed of unseeable "dark matter"
of subatomic particles left over from the Big Bang.
The Modern Universe
- Some 3.5 billion ya, life began on earth. The universe
was about 10.5 billion years old.
- About 3 billion ya, dark energy outweighed matter in the
universe.
- And here we are today, about 14 billion years after the
birth of the universe, about 4 billion years after the
origin of the Earth, and about 3.5 billion years after the
first emergence of life on Earth.
Source: "In the Beginning", by Dennis
Overbye (New York Times, 07/23/2002)
The Future Universe
- Cosmologists tell us that the evolution of the universe
isn't over yet. In about 2 billion years, the warming Sun
will make Earth uninhabitable. About 3 billion years after
that, the Sun will swell into a red giant, burning the
Earth to a crisp. As if that weren't enough, the Milky Way
will collide with our nearest galactic neighbor,
Andromeda.
- About 131 billion years later, if the universe
keeps expanding, the galaxies will be moving away from
each other at such a high speed that they will outpace the
speed of light, and stars will no longer be visible (if
there were anyone here to see them).
- Forever, that is, unless the expansion accelerates, in
which case the universe might literally shred itself in
what some cosmologists call the "Big Tear" or "Big Rip",
leaving no particles remaining, and thus no space either.
This possibility was satirized by the cartoonist Roz Chast
in the New Yorker magazine (12/07/2020).
- Or a flaw in the structure of the Universe might create
a "Quantum Bubble of Death" that moves through the
universe, destroying everything in its path. The universe
might continue expanding this way forever.
- If the Universe just keeps expanding, it may eventually
suffer a "Heat Death", as all its energy dissipates into
cold darkness (or dark coldness).
- Alternatively, the expansion of the universe might come
to a stop, in what is known as a "Flat State", and just
stay that way forever.
- Of course, there's also the possibility that the
universe will fall back on itself, in what is called the
"Big Crunch". If so, this will likely be followed by
another "Big Bang" (or "Big Bounce", if you will), and the
evolution of the universe will start all over again. Given
the contingent nature of evolution, with details depending
on accidents of circumstance, it's not clear that the
Milky Way (and other galaxies), or our solar system (and
other solar systems), would appear again in the form(s) we
know them. Sort of like the cosmological equivalent
of Joni Mitchell's "Circle Game"
And the seasons they go round and
round
And the painted ponies go up and
down
We're captive on the carousel of
time
We can't return we can only look
behind
From where we came
And go round and round and round
In the circle game
- Or, as proposed by several prominent cosmologists, it
may be that there are already several alternative
universes, each produced by the same process that ignited
the Big Bang that created our own universe, and each
reflecting the operation of different contingencies in the
process just described -- and each outside the boundaries
of the only universe we can know. Of course, because
this is the only universe we can know, there's no way of
proving (or disproving) the multiverse hypothesis.
The fact that distinguished physicists can seriously
entertain a hypothesis that is completely untestable
should put an end to "physics envy" among psychologists,
once and for all.
See The End of Everything (Astrophysically Speaking)
by Katie Mack (2020), which reviews various cosmological
scenarios for the end of the Universe; also her article,
"Tearing Apart the Universe, American Scientist,
11-12/2020. Reviewing the book in the New York Times
("This Is How It All Ends", 09/06/2020), James Gleick,
quotes Robert Frost ("Some say the world will end in
fire,/Some say in ice") and T.S. Eliot ("This is the way the
world ends/Not with a bang but a whimper"), but ends with
this:
...I found it helpful -- not reassuring, but
mind-expanding -- to be reminded of our place in a vast
cosmos. Mack puts it this way: "When we ask the
question, 'Can this all really go on forever?', we are
implicitly validating our own existence, extending it
indefinitely into the future, taking stock and examining
our legacy."
It seems safe to say, though, that any meaning and
purpose will have to be found in ourselves, not in the
stars. The cosmic end times will bring no day of
judgment, no redemption. All we can expect is the
total obliteration of whatever universe remains and any
intelligence that still abides there.
Or, as Priyamvada Natarajan writes in another review of
Mack's book ("All Things Great and Small", New York
Review of Books, 07/01/2021, which also contains a
lovely update of both cosmological and subatomic theory),
As Mack points out, only one thing is certain: the
universe will end. It simply cannot persist
unchanged forever.
Mammals, Primates, Hominoids, Hominins,
Hominids, Humans
The fossil evidence suggests a gradual
divergence among the hominoids. Life on earth began about 3
billion ya, during the Precambrian era of geologic
time, in a prebiotic soup of organic molecules (i.e.,
molecules containing the element carbon). For the next 2
billion years, only very simple life forms -- bacteria and
algae -- existed. About 500 million ya, during the Paleozoic
era, complex invertebrates began to evolve. Then,
especially in the Mesozoic era, about 250 million ya,
came the vertebrate species -- fish, then amphibians, then
reptiles. Finally, in the Cretaceous period of the
Mesozoic era, beginning about 145 million ya, and
especially in the Tertiary period of the Cenozoic
era, beginning about 65 million ya, came birds and mammals.
This is what our
earliest mammal ancestor, the morganucodon (Morganucodon
oehleri, to be exact) looked like, as depicted in a
bronze sculpture at the Smithsonian Museum of Natural History
in Washington, D.C.
Among mammals, primates
are a relatively recent development. All primates have a set
of morphological features in common, that tend to distinguish
them from other mammals:
- grasping hands and feet, with opposable thumbs and big
toes;
- nails (rather than claws) on the digits;
- converging eye sockets (i.e., eyes that face forward);
- postorbital bars (bony rings around the orbits);
- other physical characteristics that enable the animal to
leap from branch to branch and tree to tree;
- large brains.
The earliest primates, emerging more
than 60 million ya, were tree-dwelling ancestors of
present-day tree-shrews and lemurs. About 40 million ya, the
"higher primates" -- or, more correctly, the anthropoid
primates -- began to emerge. These came in two groups,
evolving in distinct areas of the globe: the "New World"
monkeys, first appearing in North America, but then colonizing
Central and South America; and the "Old World" monkeys and
great apes, in Eurasia, diversifying into Africa.
Hominoids and Hominids
The great apes -- present-day gibbons,
gorillas, chimpanzees, and orangutans -- are also known as hominid
primates, and they share ancestors in common with modern-day
humans.
The fossil evidence further
indicates that the gibbons split off from the rest of the
primates about 25 million ya. Then, about 19 million ya there
was a big split, dividing chimps and gorillas from humans and
orangutans. Finally, about 18 million ya, hominins --
the ancestors of modern humans -- split from orangutans. Thus,
by the morphological and fossil evidence, humans are most
closely related to orangutans.
For many
years this was the standard view in the field. More recently,
however, this view has been revised by genetic evidence.
The human genetic endowment consists of 23 pairs of chromosomes,
which are contained in the nucleus of each cell in the
human body. (There is one exception to this rule: the sperm
cells of men, and the egg cells in women, contain only one
randomly selected member of each pair. When the egg is
fertilized by the sperm, the chromosomes in one cell are
matched with their counterparts in the other, yielding a
one-celled embryo that contains all 23 of the required
pairs, one chromosome in each pair contributed by each parent.)
Each chromosome consists of thousands
of genes. These are the basic units of heredity, and
affect the organism's physical features, and the course of its
development. The genes themselves are composed of DNA
(deoxyribonucleic acid), a chain of four chemical bases
(adenine, guanine, thymine, and cytosine). Every gene is
located at a particular place on a specific chromosome.
According to the theory of evolution, closely related species
have closely similar sequences of bases. Thus, examining the
similarity in DNA molecules between modern humans and modern
nonhuman hominoids indicates the evolutionary relations among
these species. And with knowledge of the rate of DNA change,
we can determine how early, or how recently, these species
diverged.
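The logic of that "molecular clock" calculation can be sketched with
placeholder numbers. Both figures below are hypothetical, chosen only to
show the arithmetic, not measured values for humans and chimpanzees.

    # Molecular-clock sketch with hypothetical numbers.
    rate_per_site_per_year = 1e-9     # assumed neutral substitution rate
    observed_divergence = 0.013       # assumed fraction of sites that differ

    # Differences accumulate along BOTH lineages after the split, hence the 2.
    split_years_ago = observed_divergence / (2 * rate_per_site_per_year)
    print(f"{split_years_ago / 1e6:.1f} million years ago")   # 6.5 with these numbers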
It turns out, somewhat paradoxically,
that morphological change is not necessarily related to
genetic change. Comparisons of the structure of DNA in the
blood (see material on genetics, below) indicate that humans
are most closely related to chimpanzees -- in fact, we have
about 98.5% of our genetic material in common with that
species. Further, the genetic evidence indicates that the
split between chimpanzees and hominids occurred only about 6
to 8 million ya.
So, our evolutionary
history is encoded in our genes, and our genes tell us that we
are most closely related to chimpanzees. At the same time,
those 1.5% of genes that are not held in common by the two
species can make quite a difference. Our characteristically
hominid features include:
- The ability to walk upright on two legs, freeing our
hands to manipulate objects.
- Opposing thumbs on both hands, giving us a unique manual
dexterity, and ability to grasp. All mammalian species
tend toward pentadactylism, or having five digits per limb
(the hooves of horses and deer evolved from the five-digit
limbs of earlier species, not the other way around). The
digits of primates have nails rather than claws, sensitive
tactile pads on the tips, and the ability to grasp. The
opposable thumb (meaning that it can face the other
fingers) permits both a precision grip and a power grip.
- Binocular vision, in which two eyes focus on objects,
giving a different image in combination than either eye
could give alone;
- A uniquely structured vocal tract, tongue, and mouth
cavity, that permits highly flexible vocal communication;
- An extremely large cerebral cortex, much larger than
would be expected given our body mass, and much larger
than would be expected given the size of the rest of our
brains.
The last two features, vocal
apparatus and brain mass, are the most
characteristically human. And, of course, the brain provides
the biological substrate of cognition -- it supports our
learning, thinking, and problem-solving. It also contains
specific cortical structures specialized for language,
permitting symbolic representation of objects and events, and
flexible, creative communication with other humans. Thus, the
most important legacy of evolution is the human mind -- a
particularly powerful cognitive apparatus coupled with a
capacity for linguistic representation and communication. This
feature, the human mind, sharply divides the dullest human
from the smartest chimpanzee -- it is the difference that 1.5%
makes.
The special features of the human brain
don't mean that other animals don't have minds. For example,
pigeons have a high capacity for abstracting concepts. And
chimpanzees and dolphins have some limited linguistic
abilities (symbolic representation, some degree of flexibility
and creativity) -- though no capacity for speech because of
the different configuration of their vocal tracts. The mental
abilities of nonhuman animals are interesting in their own
right, and we can learn a great deal about ourselves from studying
other species.
But this does mean that the human mind
is something quite special, and that we should focus on its
development in the life of the individual -- that is, move
from the phylogenetic perspective on development to the
ontogenetic perspective.
"The March of Progress"
(TMOP), perhaps the most famous (and most frequently
parodied) scientific illustration of all time,
appeared in Early Man by F. Clark Howell, an
anthropologist at UC Berkeley and generally regarded
as the father of paleo-anthropology. Published
in 1965 by Time-Life books, Early Man was
intended as a non-scholarly introduction to the field
(there's a copy in the UCB Education-Psychology
Library). TMOP itself was drawn by Rudolph
Zallinger, an important scientific illustrator.
Reading from left to right, TMOP traces 15 milestones
in human evolution, as it was understood at the time:
- Pliopithecus,
a gibbon-like primate.
- Proconsul,
possibly an early ape.
- Dryopithecus,
an early ape.
- Oreopithecus.
- Ramapithecus,
an early orangutan.
- Australopithecus,
the earliest hominid
- Paranthropus.
- Advanced
Australopithecus.
- Homo
erectus, an early member of the genus Homo.
- Early Homo sapiens.
- Solo
Man, a sub-species of H. erectus.
- Rhodesian Man,
an early H. sapiens.
- Neanderthal Man.
- Cro-Magnon Man.
- Modern
Man.
Of course, the illustration -- wonderful as it is
-- is an oversimplification:
- In the first place, there's been a lot of
progress in paleo-anthropology since 1965, and we
know that human evolution was more complicated
than that. For example, there may have been
at least six different kinds of early H.
sapiens, and they may have even lived in
close proximity to each other, fought, and
interbred.
- More important, there was no linear "march" from
early primates to modern man (and Howell didn't
think there had been -- he knew better). As
Stephen Jay Gould tirelessly pointed out, and even
Darwin agreed, evolution is better depicted as a
bush, with lots of branches, than as a ladder, or
even as a tree.
- As Gould also was at pains to point out, even
the notion of "progress" is suspect.
Evolution isn't just a progression from simple to
complex, or from less good to better. Even
very simple organisms are perfectly adapted to the
environment in which they evolved. Humans
are in no sense an ideal form, towards which
evolution has been progressing for millions of
years. We're animals with a high degree of
general intelligence, consciousness, and language,
and that gives us a special place in nature --
which is why we call the era of modern humans the
Anthropocene.
- But from a biological point of view, we're just
another species of animal.
For more on TMOP, see:
- From an evolutionary point of view, "The
Iconography of an Expectation" by Stephen Jay
Gould, in Wonderful Life (1989).
- From a design standpoint, see There's
Nothing Funny About Design by David
Barringer (2009).
The Evolution of Hominins
The precise manner in which modern
humans evolved from ancient hominids is not known, and there
is considerable controversy over this matter among physical
anthropologists. To make life even more interesting, every so
often a new discovery will shake up the field of paleontology.
Still, the basic outlines are known. The account that follows
is based on my understanding of the theory of Donald Johanson,
discoverer of the skeleton of Lucy, one of our earliest
ancestors (the rival theory is by Louis and Mary Leakey).
According to Johanson's view, the earliest hominids split off
from the ancestors of African apes (gibbons, gorillas,
orangutans, and chimpanzees) about 6-8 million ya.
About 4.4 million ya, in the midst of
the Ice Age, the genus Australopithecus ("Southern
ape") emerged. Australopiths have human-like teeth, but in
other respects they resembled terrestrial apes: short body,
long arms, small brain. Perhaps their most important physical
feature was that they walked upright on two feet, thus freeing
their arms and hands to make and use tools -- something which
began about 2.5 million ya, and coincided with a period of
increasing brain size.
The earliest known example of
Australopithecus is, in fact, Lucy -- discovered in the Afar
Triangle of northern Ethiopia, and thus named Australopithecus
afarensis. ("Selam", an infant A. afarensis also
dubbed "Lucy's baby" although the fossil was about 100,000
years older than Lucy, was also discovered nearby.) A. afarensis
lived from about 4 million to about 2.5 million ya. Another
ancestor, Australopithecus africanus, lived from 3 to 1
million ya: specimens have been found in eastern and southern
Africa; the most famous sites are in Kenya and Tanzania. Yet
another ancestor, Australopithecus robustus, bigger and
more powerful than the others of its kind (hence the name),
lived from 2.5 to 1.5 million ya: specimens have been found in
southern Africa. A fourth ancestor, Australopithecus boisei
(named for Charles Boise, a benefactor of many fossil hunts)
lived from 2.5 to 1 million ya.
Another hominid line, Paranthropus,
lived from about 2.8 million to about 1.4 million ya. There's
also a Paranthropus boisei, with the same
namesake. There is evidence that P. boisei lived
in East Africa alongside early Homo
species, which is how it got its name: para from the
Greek for "beside" and anthropus, of course, from the
Greek for "human being".
For a summary of what we know about P.
boisei, see "Meet Your Exotic, Extinct Close Relative"
by Bernard Wood and Alexis Williams, American Scientist,
11-12/2020.
Actually, according to Johanson,
neither Australopithecus nor Paranthropus is a direct
ancestor of humans. That honor belongs to another hominid
entirely: Homo habilis ("handy Man", the initial capital
indicating that it refers to both males and females of the
species), discovered in eastern Africa by the Leakeys. H.
habilis had a much bigger brain than any Australopith. It made
and used tools, while the Australopiths probably did not. It
lived as a community, building shelters and surrounding its
camps with fences or windbreaks. H. habilis emerged in
eastern, southeastern, and southern Africa about 2 million ya,
and lived alongside several genera of Australopiths for about
500 thousand years. Homo habilis apparently gave direct rise
to another ancestor, Homo erectus ("upright Man"), which
lived from about 1.6 million ya to about 200 thousand ya. H.
erectus has been found everywhere in the Old World, including
Europe, Asia ("Peking Man"), and Southeast Asia ("Java Man").
It had an even bigger brain, a better toolkit and building
materials, hunted, and used fire.
The Origins of Fire
Anthropologists and paleontologists generally
date the control of fire to about 250,000 years BCE.
However, deposits of burned wood and flint discovered in the
Gesher Benot Ya'aqov site in northern Israel strongly
suggest that early humans may have controlled fire (as
opposed to simply using it) as long as 800,000 ya. If
validated, the discovery would help explain how early humans
were able to migrate out of Africa and into the colder
climates of Europe and Asia -- a migration that began at
about this time (N. Goren-Inbar et al., Science,
04/30/04).
About
300 thousand ya, yet another species, Homo sapiens
("Wise Man"), emerged. To put it bluntly, this is us. The
archaic form of H. sapiens, popularly known as
Neanderthal Man (from fossils found in the Neander Valley near
Dusseldorf, Germany), controlled fire and made clothes; thus
they were the first hominids to be able to survive in cold
climates (naked humans cannot survive outside the tropics).
They cared for the sick and buried their dead. They produced
art. But they didn't last. They competed unsuccessfully with
another subspecies, Homo sapiens sapiens ("very
wise Man", I guess), popularly known as Cro-Magnon Man (from
fossils found at Cro-Magnon, in southwest France).
Neanderthals went extinct approximately 30,000 ya. Recently,
Neanderthal Man has been renamed H. neanderthalensis,
and modern man, simply, H. sapiens. The
illustration at left shows a reconstruction of a Neanderthal
skeleton (foreground), with a skeleton of Homo sapiens
in the background (from the American Museum of Natural
History).
Neanderthals get a bad rap, because
they were replaced by modern humans. Then again, they
lasted for about 350,000 years, and never once came even
close to blowing themselves up with nuclear weapons, or
threatening the planet with global warming. And, for
the record, the first modern humans to migrate out of Africa
didn't last too long, either. A more positive
appreciation of Neanderthals is found in Kindred:
Neanderthal Life, Love, Death, and Art (2020) by
Rebecca Wragg Sykes. Reviewing the book in Science
(10/30/2020), Emma Pomeroy, an anthropologist at Cambridge
University, wrote that "Wragg Sykes evaluates the available
evidence on Neanderthals with empathy and even-handedness,
revealing the group to be less 'them' and more 'us'".
(For another, more extensive review, see "Why Did They
Vanish?" by Tim Flannery, New York Review of Books,
05/13/2021.)
The
ancestors of Neanderthals emigrated from Africa roughly 500-250
thousand ya, turned north and west, and ended up in
Europe. Another archaic form of
Homo sapiens,
known as the Denisovans, turned east, and ended up in Asia (the
first Denisovan bones were discovered in Denisova Cave, in the
Altay Mountains of southern Siberia). Unfortunately, the
only fossils definitively known to be Denisovan consist of a
single finger bone and a couple of teeth from a single individual
("Denisova Girl") found in Siberia, and a jawbone found in
China. However, Carmel, Gokhman, and their colleagues were
able to use advanced DNA analyses (don't ask) to generate a
guess at what Denisova Girl might have looked like. The
image, published in
Cell (2019), turns out to be
consistent with the independently discovered jawbone -- but the
researchers caution that this is only a reconstruction of a
single individual who may or may not be representative of her
species as a whole.
About 50,000 ya, the ancestors of modern humans also left
Africa, encountered and interbred with both Neanderthals and
Denisovans (depending on which turn they took), competed with
them, and became dominant, after which the Neanderthal line died
out. As a result of this cross-breeding, however, modern
humans still carry some Neanderthal genetic material --
amounting to as much as 5% of the human genome.
Here's one version of the story, based
mostly on DNA evidence (Akey et al., Science, 2016):
- The first encounter with Neanderthals occurred soon after
modern humans left Africa, somewhere in West Asia, leaving
traces of Neanderthal DNA among the ancestors of modern
Europeans, East Asians, and Melanesians. (At this point,
the ancestors of modern Melanesians split off from the
Europeans and Asians.)
- A second encounter with Neanderthals, somewhere in the
Middle East, resulted in further interbreeding with the
ancestors of modern Europeans, East Asians, and South
Asians.
- An encounter with Denisovans resulted in interbreeding
with the ancestors of modern Melanesians.
- And a third encounter involved only Neanderthals and the
ancestors of modern East Asians.
- It was long thought that the ancestors of Neanderthals
migrated out of Africa, turned left or right, but never
turned around. In 2020, researchers from the Max
Planck Institute for the Science of Human History, found
that some Neanderthal DNA can be found in the genomes of
modern Africans, providing evidence of "back-migration" from
Eurasia into Africa.
- Recent DNA
modeling by Alan Rogers, a population geneticist at the
University of Utah, suggests that there may have been
interbreeding among Neanderthals, Denisovans, and "archaic"
or "ghost" lineages, such as H. erectus, which
also migrated out of Africa (Science, 02/21/2020,
from which the image at the right is taken). Some of
this interbreeding may also have involved the ancestors of
modern humans. If so, we don't just carry a little
Neanderthal DNA with us -- the "ghosts" in the human family
tree go back way further than that.
- Actually, recent DNA sequencing studies (e.g., Petr et
al., Science, 09/25/2020) indicate that there may
have been two encounters between Neanderthals and h.
sapiens. The first migration of h. sapiens
out of Africa and into Europe may have occurred as long as 200-300,000
years ago. These first modern humans in Europe died
out, but not before transferring some of their genes (ahem)
to Neanderthals who were already living there. Then
there was a second migration about 40-80,000 years ago,
after which the Neanderthals went extinct, and the modern
humans thrived (this is the standard story).
There may have been other, isolated encounters, some of which
did not result in the passing of DNA to descendants. But the
four interminglings documented in the DNA evidence mean that
the ancestors of modern Melanesians got only one "pulse" of
Neanderthal DNA -- most of their archaic DNA comes from
Denisovans. Europeans and South Asians got two "pulses" of
Neanderthal DNA, while the ancestors of modern East Asians got
three. And the ancestors of modern Africans, who never
migrated out of Africa and so never encountered Neanderthals
or Denisovans directly, got essentially none at all (though, as
noted above, "back-migration" from Eurasia later introduced
traces of Neanderthal DNA into African genomes).
This whole story is told by Svante Paabo, the Swedish
biologist who first sequenced the Neanderthal genome, in Neanderthal
Man: In Search of Lost Genomes (2014), a book which
has been compared with James Watson's The Double Helix
(1968).
For a shorter version of the story, see "Neanderthal
Minds" by Kate Wong, Scientific American, 02/2015.
Neanderthals and Cro-Magnons were
closely related in genetic terms. In 2006, two different teams
of researchers reported the first steps in reconstructing the
Neanderthal genome, based on samples from preserved bone
tissue: preliminary results indicate that the two genomes are
about 99.5% identical. So, that's 1.5% genetic
difference between us and chimpanzees, and 0.5% difference
between us and Neanderthals. It was also determined that
Neanderthals had the FOXP2 gene that is considered critical
(though probably not sufficient) for speech, if not language,
suggesting that they may have had the ability to use language
as well. However, it's not really possible to make
inferences about the mental and behavioral capabilities of
Neanderthals from knowledge of their genomes. For that,
we have to rely on the physical evidence they left behind, as
studied by archeologists.
For a summary of what we know about Neanderthal
psychology, see How to Think Like a Neanderthal by
Thomas Wynn and Frederick L. Coolidge (2011).
The traditional theory is that modern
humans invaded Neanderthal territory in Europe and eliminated
them -- and that similar scenarios played out between modern
humans and other bands of "archaic" humans. However, the
close similarity in DNA, and other evidence, now suggests that
modern humans actually interbred with Neanderthals, and
perhaps other archaic forms with whom they shared close
genetic resemblance. (See "Human Hybrids" by Michael F.
Hammer, Scientific American, May 2013.)
These are the broad outlines. It's
clear where we came from, but the precise path is unclear, and
it's certainly not a straight line from Pan troglodytes
to Homo sapiens.
For a recent overview of the
complexities of human evolution, see "Shattered: New Fossil
Discoveries Complicate the Already Devilish Task of
Identifying Our Most Ancient Progenitors" by Katherine
Harmon, Scientific American, February 2013.
The Geological "Ages"
The geological timescale is a
chronology of the history of the earth, divided into
- eons, which in turn are divided into
- eras, which in turn are divided into
- periods, which in turn are divided into
- epochs, which in turn are divided into ages.
The origin of life on earth can be
traced to at least 3.5 billion years ago, which is
the date for the earliest known fossils. Most
biologists push back the actual origins to the
formation of the oceans, about 4.41 billion years
ago -- not too long after the origin of the earth
itself, roughly 4.54 billion years ago (give or
take). According to theory, life emerged from
a primordial soup of organic
compounds. We needn't tarry over the details,
much less argue about precise dates. The
general idea can be illustrated by the 1952 Miller-Urey
Experiment in which a vessel of water
(representing the Earth's oceans) and a vessel of
gases (representing the Earth's atmosphere), when
stimulated electrically (lightning!), yielded a
substance containing glycine, the simplest amino
acid. More recently, it's been suggested that
life began in hydrothermal vents in the ocean, where
chemical reactions could create amino acids.
For an excellent account of the evolution of
theories concerning the origins of life, see The
Genesis Quest: The Geniuses and Eccentrics on a
Journey to Uncover the Origin of Life on Earth
(2020) by Michael Marshall (reviewed by Tim
Flannery in "In the Soup", New York Review of
Books, 12/03/2020).
In the Cretaceous period, animal life
was dominated by reptiles. According to Luis
Alvarez, about 65 million ya an asteroid or comet
struck the Earth, spreading dust into the atmosphere
and suppressing photosynthesis, leading to the death
of the dinosaurs and the emergence of mammals as the
dominant animal species. The "Cretaceous-Tertiary
Boundary" (the K-T boundary; the "K" comes from Kreide, the
German word for Cretaceous) is marked by a layer of iridium in
the earth's crust, apparently the remains of the impact.
Early humans began to emerge during
the Miocene Epoch of the Cenozoic Era, and H.
sapiens emerged during the Pleistocene Epoch
-- the "Environment of Evolutionary Adaptedness"
touted by evolutionary psychologists (the term comes
from John Bowlby; it is also referred to as the
"Environment of Early Adaptation", and other
variants, and abbreviated EEA).
The last "Ice Age" (there have been
others) occurs at this time as well, with a sheet of
Arctic ice covering North America as far south as
the Ohio and Missouri rivers, Europe as far south as
the British Isles and northern Germany, and Asia as
far south as the Himalayas; it permitted early
emigrants to walk across the Bering Sea from Siberia
to Alaska, beginning the population of the Americas
(an Antarctic ice sheet covered South America as far
north as Patagonia and the southern Andes).
| Era | Period | Epoch | Age | Ended ya |
| --- | --- | --- | --- | --- |
| Precambrian | | | | 600 million |
| Paleozoic | Cambrian | | | 500 million |
| | Ordovician | | | 425 million |
| | Silurian | | | 405 million |
| | Devonian | | | 345 million |
| | Carboniferous | | | 280 million |
| | Permian | | | 225 million |
| Mesozoic | Triassic | | | 190 million |
| | Jurassic | | | 136 million |
| | Cretaceous | Lower Cretaceous, Upper Cretaceous | | 65 million |
| Cenozoic | Tertiary | Paleocene | | 58 million |
| | | Eocene | | 36 million |
| | | Oligocene | | 25 million |
| | | Miocene | | 13 million |
| | | Pliocene | | 2 million |
| | Quaternary | Pleistocene | | 10 thousand |
| | | Holocene ("Entirely Recent") | | The Present |
The Anthropocene
Recently, earth scientists have begun
to refer to the Anthropocene, defined as
the epoch in which human beings began to have a
palpable and permanent impact on the environment --
such as mass extinctions and global warming. The
term was coined by Paul Crutzen (2000), an
atmospheric chemist who won the Nobel Prize for his
work documenting the hole in the ozone layer (in the
19th century, Antonio Stoppani proposed the term anthropozoic,
but it didn't catch on). Similarly, Andrew Revkin, a
journalist, coined the term Anthrocene in
1992. But while the label is recent, the idea
is very old. As long ago as 1960, Theodosius
Dobzhansky wrote (in "The Present Evolution of Man", Scientific
American, Sept. 1960):
Mutation, sexual recombination and
natural selection led to the emergence of Homo
sapiens. The creatures that preceded him had
already developed the rudiments of tool-using,
tool-making and cultural transmission. But the
next evolutionary step was so great as to
constitute a difference in kind from those before
it. There now appeared an organism whose mastery
of technology and of symbolic communication
enabled it to create a supra-organic culture.
Other organisms adapt to their environments by
changing their genes in accordance with the
demands of the surroundings. Man and man alone can
also adapt by changing his environments to fit his
genes. His genes enable him to invent new tools,
to alter his opinions, his aims and his conduct,
to acquire new knowledge and new wisdom.
Some geologists, however, object to adding a new
era, because the evidence for it can't be found "in
the rocks", as it were. They want a definite
date -- like the one about 12,000 ya, when the last
Ice Age ended -- to identify the beginnings of any
new era. Proponents, though, say that the
critics shouldn't be "sticks in the mud"
(sorry). They say that the Anthropocene can be
dated at least to the beginnings of the Atomic Age,
which left traces of radiation in the soil; and
maybe as far back as the beginnings of agriculture,
at least in Europe. Crutzen himself dates the
Anthropocene to the industrial revolution of the
19th century. All organisms affect the planet.
We may be the first to do so permanently; we are
certainly the first to do so deliberately.
By one account, the
Anthropocene began about 8,000 ya, with the
rise of the first cities, and consequent
deforestation, which led to an increase of carbon
dioxide in the atmosphere and prevented a new Ice
Age from occurring. It's also been argued that the
Anthropocene began much earlier, more than 100,000
ya and before the rise of cities, when prehistoric
hunter-gatherers first used fire to prepare land for
cultivation -- thereby artificially changing the
landscape for the first time. Then too,
control of fire enabled early humans to keep warm at
night and in the winter -- permitting them to expand
over, and thus have an effect on, more of the
earth's surface -- not to mention burning forests to
clear land for agriculture. A recent worldwide
collaborative study known as the ArchaeoGLOBE
Project (Stephens et al., Science,
08/30/2019), based on a worldwide sample of
archeological evidence, indicates that prehistoric
human activity, such as deforestation and rice
farming, which led to an increase in atmospheric
methane and carbon dioxide, had already
substantially altered the landscape worldwide by
about 3,000 ya.
Crutzen himself suggests that the
Anthropocene began in the late 18th century, with
the beginnings of the Industrial Revolution, when
atmospheric CO2 began to rise
consistently. Gradually, theorists have fixed
on 1950 as a good starting date, because a lot of
environmental trends began to change dramatically
about then (graphic from "Nine Key Questions About
the Future: 1. What Mark Will We Leave On the
Planet?" by Jan Zalasiewicz, Scientific American,
September 2016):
- plastics and other polymers began to appear in
soil deposits, and to contaminate rivers and
oceans;
- concrete, invented by the ancient Romans, began
to be used in abundance as a building material;
- the concentration of carbon and other byproducts of
fossil fuels took a sharp turn upward;
- plutonium isotopes 239 and 240 began to appear
in soil samples, the product of above-ground
nuclear testing;
- carbon dioxide and other greenhouse gases,
rising since the beginning of the Industrial
Revolution, increased markedly after World War II;
- the concentration of methane emitted by
livestock increased in the atmosphere;
- as did the concentration of nitrous oxide, a
product of fossil fuels and chemical fertilizers.
But the era really picked up steam
with the scientific revolution of the 17th and 18th
centuries. At least, that's the implication of David
Deutsch's argument in The Beginning of Infinity:
Explanations that Transform the World (2011).
Before the scientific revolution, humans changed the
environment through things like the invention of
agriculture and the domestication of animals. After
the scientific revolution, we acquired a whole new
level of ability -- almost limitless, in Deutsch's
view -- to understand the world around us, and so to
shape it according to our inclinations.
A good example of the difference
between the Anthropocene and earlier geological
periods is provided by a famous wager between Paul
Ehrlich, a biologist, and Julian Simon, an economist, about the
effects of overpopulation. In his book, The
Population Bomb (1968), Ehrlich drew on the
theories of Thomas Malthus to predict that, with
uncontrolled population growth, the earth would
exhaust its natural resources, precipitating
widespread famine. Simon, for his part,
believed that human rationality would triumph by
creating innovative efficiencies and other
conservation efforts that would permit a larger
population to thrive on diminishing resources.
In 1980, Simon proposed the following bet: Ehrlich
would select a "market basket" of five commodities
to be purchased for $1,000; if, after 10 years, the
price went up, reflecting increasing demand and
decreasing supply, Simon would pay Ehrlich the
difference between the 1980 and 1990 prices; if they
went down, meaning that the commodities were not
quite so precious any longer, Ehrlich would pay
Simon. As it happened, although the population
of the world increased by about 800 million over
that decade, the price of the "market basket" went
down by more than 50%. So, Ehrlich cut Simon a
check for $576.07. Now, you can argue that
Ehrlich's "market basket", full of metal, wasn't
quite right, and that something different would have
happened with commodities like wheat and corn; but
still, Ehrlich could have selected any commodities
he wished. The point, here, is that the
biological environment (population, arable land,
etc.) doesn't change independently of human
activity: human ingenuity can find ways to cope with
increasing population, climate cycles, and the
like. For example, the "green revolution"
initiated by Norman Borlaug introduced high-yield
crops to underdeveloped countries such as Mexico and
India, greatly increasing food security in those
countries (Borlaug won the 1970 Nobel Peace Prize
for his efforts). We're not at the mercy of
the environment: rather, the environment is at the
mercy of us.
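The payoff rule of the wager is simple enough to sketch in a few lines of code. In the toy sketch below, the only figures taken from the account above are the $1,000 purchase price and the $576.07 payment; the 1990 "price" is simply the value those two numbers imply, not actual commodity data.

```python
# Illustrative sketch of the Ehrlich-Simon bet's payoff rule (not actual data).
# The loser pays the winner the change in price of a $1,000 "market basket".

def settle_bet(start_price: float, end_price: float) -> str:
    """Return who pays whom, under the rule described above."""
    difference = end_price - start_price
    if difference > 0:
        # Prices rose: resources scarcer, so Simon pays Ehrlich.
        return f"Simon pays Ehrlich ${difference:.2f}"
    elif difference < 0:
        # Prices fell: resources effectively more abundant, so Ehrlich pays Simon.
        return f"Ehrlich pays Simon ${-difference:.2f}"
    return "No payment -- prices unchanged"

# The 1990 value implied by the figures in the text ($1,000 - $576.07).
print(settle_bet(start_price=1000.00, end_price=423.93))
# -> Ehrlich pays Simon $576.07
```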
The impact of man on the physical environment is
suggested by a little piece of pseudo-mathematics:
I = P x A x T,
where I = impact; P = population; A
= affluence; and T = technology. It's also
been suggested that a fourth factor be added, E,
for education, which can -- we can hope -- mitigate
the others.
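Treated as nothing more than a back-of-the-envelope heuristic, the formula is easy to play with in code. The sketch below uses invented numbers purely for illustration; the optional "education" divisor is just one arbitrary way of expressing the suggested mitigating factor E, not a standard formulation.

```python
# Toy illustration of the I = P x A x T heuristic (all numbers invented).

def impact(population, affluence, technology, education=1.0):
    """Environmental impact as the product of P, A, and T.

    'education' is an optional, purely illustrative mitigating divisor
    (education > 1 reduces the estimated impact)."""
    return (population * affluence * technology) / education

# Doubling affluence doubles the estimated impact, other things equal...
low = impact(population=1_000_000, affluence=1.0, technology=1.0)
high = impact(population=1_000_000, affluence=2.0, technology=1.0)
print(high / low)  # -> 2.0

# ...while a hypothetical "education" factor of 2 cuts it back in half.
print(impact(1_000_000, 2.0, 1.0, education=2.0) / low)  # -> 1.0
```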
On the other hand, there are some writers who object
to the term "Anthropocene" as well, as too
human-centered. The idea is that we humans
think we're at the center of everything, so we even
name a geologic era after ourselves. On the
other hand, the changes in question were undoubtedly
caused by us, rather than by autonomous geological
processes like tectonic shifts, or random physical
events like the asteroid that wiped out the
dinosaurs. Another critique is that the term
"Anthropocene" blames humans in general for problems
that are caused by only some of us -- namely, those of
us who live, more or less comfortably, in more-or-less
capitalist, colonialist, industrial or post-industrial
societies. And it's true that hunter-gatherers
weren't responsible for wrecking the environment, and
that if we still lived like that we wouldn't be
talking about the Anthropocene. But a lot of us
do live like that, and even the last remaining
hunter-gatherer tribes-people are going to suffer the
consequences if we don't fix it and soon.
Something to think about when tech-addled billionaires
(or is it addled tech-billionaires?) wax rhapsodic
about terraforming Mars. Does anyone think that
we won't wreck that planet, too?
The Anthropocene is a human creation, as is the
existential threat of global warming. So some
philosophers and others have actually argued that the
Earth, and the rest of the species that live on it,
would be better off without us. The most radical
thinkers along these lines have focused on two quite
different solutions to the crisis: (1) Anthropocene
antihumanism, which holds that the human era is
coming to an end, and that's a good thing for
everything else on earth; and (2) transhumanist
futurism, in which we somehow merge with
artificial intelligence -- what the futurist Ray
Kurzweil calls "the Singularity" -- to become a new,
more earth-friendly form of life. The first
alternative is hard to contemplate; the second
alternative is hard to take seriously as long as AI
gobbles energy based on fossil fuels (of course, we
could pave the earth's surface with solar panels, and
cloud the sky with wind turbines; explain that to the
land animals and migrating birds). For more, see
The Revolt Against Humanity: Imagining a Future
Without Us (2023) by Adam Kirsch, reviewed in
"Hastening the End" by Mark O'Connell, New York
Review of Books, 04/20/2023.
For more on the debate over the
Anthropocene, see a series of articles by
Elizabeth Kolbert:
- "Enter the Anthropocene Age of Man" by
Elizabeth Kolbert, National Geographic,
March 2011.
- "The Lost World", Parts 1 and 2, New
Yorker, 12/16 and 12/23-30/2013.
- "Nine Key Questions About the Future: 1. What
Mark Will We Leave On the Planet?" by Jan
Zalasiewicz, Scientific American,
September 2016.
- And Kolbert's book, The Sixth Extinction
(2014).
Also, for other full-length treatments:
- The God Species: Saving the Planet in the
Age of Humans by Mark Lynas (2011)
- Adventures in the Anthropocene: A Journey
to the Heart of the Planet We Made by Gaia
Vince (2014).
For the story of the Ehrlich-Simon wager, see:
- The Bet: Paul Ehrlich, Julian
Simon, and Our Gamble Over Earth's Future
by Paul Sabin (2013).
- For a synopsis, see "The Battle of Two
Hedgehogs" by Cass R. Sunstein, New York
Review of Books, 12/03/2013).
Whether the Neanderthals were killed by
Cro-Magnon, or interbred with them, the line died out about 30
thousand ya. H. sapiens sapiens thrived, through the Old
(Paleolithic), Middle (Mesolithic) and New (Neolithic) Stone
Age.
Why the Cro-Magnons thrived, while the
Neanderthals did not, will likely remain an anthropological
mystery. The most likely hypothesis is that the Cro-Magnons
had a capacity for language, and for symbolic thought, that
gave them an advantage (as indeed it would). So far as we can
tell from the scant evidence, Neanderthal culture was pretty
primitive even by the standards of Early Man -- much more so
than Cro-Magnon culture. But it's all pretty speculative --
pretty close to reasoning backward from the fact that we
have a sophisticated capacity for language and symbolic
thought.
Out of Africa... Into the Americas
For an overview of research on human
migration to and through the Americas, see The First
Americans: In Pursuit of Archaeology's Greatest
Mystery by J.M. Adovasio (Random House, 2002).
For an overview of fossil and genetic
evidence concerning the migration of early modern
humans, see "The Greatest Journey" by James Shreeve, National
Geographic, March 2006.
For an up-to-date account, based on the
latest discoveries in population genetics, see Who
We Are and How We Got Here: Ancient DNA and the New
Science of the Human Past (2018) by David Reich, a
leading figure in the application of population genetics
to cultural anthropology. In his
chapter-by-chapter histories of Africans, Neanderthals,
modern Europeans, South Asian Indians, East Asians,
Polynesians, and Native Americans, Reich demonstrates
how population genetics expands, and sometimes corrects,
theories of early human migration based solely on the
fossil evidence.
In 2013 Paul Salopek, a journalist, began to
retrace the path of human migration out of Africa, on foot,
beginning at Herto Bouri, in Ethiopia, where some of the
earliest human remains have been found, and ending in Tierra
del Fuego, Chile, where human migration ended. The
journey is traced in his forthcoming book, A Walk
Through Time (scheduled for publication in 2016), as
well as in a series of articles in National Geographic
(December 2013, July 2014, December 2014, March 2015, April
2016, December 2017).
The path out of Africa looks something like
this:
- H. sapiens emerges in Africa about 160-200,000
ya.
- About 55-70,000 ya, H. sapiens expands from
Africa into what is now the Middle East.
- From there they expand in two directions, arriving in
both western Europe and Australia about 45,000 ya.
- And in a third direction, arriving in Arctic
Siberia about 35,000 ya.
- About 14,000 ya, they cross into the
Americas, reaching South America about 13,500 ya.
Traditional
theory, based on the discovery of an early human site in
Clovis, New Mexico with distinctively shaped spear-points,
suggests that the first humans crossed the Bering Strait
from Asia into North America about 13,500 ya, and had
settled as far south as Tierra del Fuego, at the tip of
South America, within about 500 years. In this view,
these first hunters and foragers were the ancestors of the
ancient Hohokam and Anasazi cultures, who built villages
and towns (e.g., Montezuma Castle and Casa Grande by the
Hohokam; Canyon de Chelly by the Anasazi), and gave rise,
respectively to present-day Tohono O'Odham and Pueblo
cultures in Arizona and New Mexico.
However, evidence from other sites, including
Adovasio's own excavations near Meadowcroft, Pennsylvania,
suggests that there were even earlier migrations.
And other evidence indicates that humans were making tools
(though not Clovis points), and killing mammoths, in Florida
at least 14,500 ya.
According to the coastal migration
hypothesis, the first Americans didn't come across a
land bridge from Siberia, but rather by boat along the
Aleutian Islands, 15-16,000 ya. There's even a theory
that the Solutrean people migrated from Europe to the east
coast of North America more than 20,000 ya.
In these revisionist views, Clovis people were
not the pioneers, but rather later immigrants, whose
culture, exemplified by Clovis points, spread widely
throughout North America.
In 2015, two groups of investigators,
examining a large set of genomic data, offered competing
conclusions about the population of the Americas.
- One group, including researchers from UCB, found
evidence for the traditional hypothesis, that (for the
most part) the Americas were populated in a single wave,
crossing a land bridge over the Bering Strait somewhere
between 15,000 and 23,000 ya.
- Another group, including researchers from (wouldn't you
know it?) Harvard, examining the same genomic evidence,
saw signs of two separate waves -- one originating
in Siberia (that's the traditional origin), and another
originating in Australasia.
- Both groups agree that the Inuit people of Alaska
represent yet another, independent wave of migration.
So that's two or three waves of prehistoric migration,
depending on how you count. But notice that these
researchers are looking at the same set of data, with
the most powerful tools available to modern physical
anthropology and paleontology. So this issue isn't
likely to be settled, for good, anytime soon.
Especially because there's yet another data set --
one that is especially provocative, even though it doesn't
contain any human remains. This is the "Cerutti
Mastodon Site" near San Diego, named for the staff member at
the San Diego Natural History Museum who discovered
it. The site contains a mastodon skeleton, obviously,
but it also contains some cobblestone-like rocks whose
markings suggest that they may have been used as hammers and
anvils. The thing is, this site has been dated to
about 130,000 BCE -- long before Clovis
people. And while the Clovis people were homo
sapiens, stone tools similar to those found at
the Cerutti Site have been found in homo erectus
sites in Africa. We know that there were migrations
out of Africa by species other than h. sapiens --
the Neanderthals, to take a clear example. And, in a
neat coincidence, there may well have been a land bridge
between Asia and North America during a period of global
warming which also occurred about 130,000 ya. So, it's
possible that h. erectus also got as far as North
America. But again, no erectus fossils have
ever been found in the Western Hemisphere.
Whatever the details, it's clear, from the high degree of
genetic similarity between ancient remains found in Canada
and the Northern United States, and other remains found in
Chile and Brazil, and even between remains found in South
America and remains found in Australia and New Guinea, that
the migration to and through the Americas happened
remarkably quickly (in historical terms, that is).
And, for that matter, anthropologists keep pushing back the
date for the earliest migration out of Africa. H.
sapiens skulls found at the Skhul and Qafzeh
sites in Israel have been dated to 90-120,000 ya, and
a 2018 report describes fossil evidence of H. sapiens,
including stone tools, found in Israel's Misliya
Cave and dating to 180,000 ya.
Then again, in 2019 a group of paleoanthropologists led by
Katerina Harvati reported finding two hominid skull
fragments in a Greek cave known as Apidima -- one with
Neanderthal features, the other more closely resembling
modern humans (Nature, 2019). Uranium dating
suggested that the H. sapiens skull was 210,000
years old, older than the skull found at Misliya Cave.
The find suggests that modern humans at least put their toes
into Europe long before generally believed. If so,
they may have died out quickly, or they may have gone back
where they came from, only to return later. Now, the
classification of the skull isn't completely solid, nor is
the dating. But it's not beyond the realm of
possibility that modern humans ventured from the Middle East
into Southeastern Europe at an early point in prehistory --
it's not that far.
Here's
our
best understanding of early human migration, based on the
paleological, archeological, and paleogenomic findings as of
2018 (for details, see "The Origin of Us" by Natalie O'Shea
and Eric Delson, and "The Paleogenomic Revolution" by Robert
DeSalle, both published in Natural History, 09/2018
-- an entire issue dedicated to our understanding of human
origins).
- There's still agreement that human origins were in East
Africa, but now those origins have been pushed back to
about 300,000 ya.
- A group of UC Berkeley biochemists, analyzing
mitochondrial DNA (mtDNA), which is inherited
maternally, suggested in 1987 that all known mtDNA could
be traced to a single female, whom they dubbed
"mitochondrial Eve", who lived about 143,000 ya.
- Not to be outdone, in 1999 a group of Stanford
geneticists identified a group of genes that are
inherited paternally, and proposed the existence of a "Y
Chromosomal Adam", who lived about 59,000 ya.
- How mitochondrial Eve and Y-chromosomal Adam ever got
together is anyone's guess, but more recent analyses
have put them much closer in time, if not in place (see
below).
- Humans dispersed throughout Africa, but also moved out
of Africa beginning about 180,000 ya.
- There may have been earlier migrations, but these
appear not to have been successful.
- There were likely multiple migrations, some following
a "northern" route through modern Egypt, and others
following a "southern" route through modern Saudi
Arabia.
- Either way, migration reached Indonesia about 70,000
ya, and Australia about 65,000 ya.
- At some point, some migrants "turned left", ending up
in modern Europe about 30,000 ya.
- Others crossed a land bridge across the modern Bering
Strait from modern Siberia into North America, and then
into South America.
- And there may well have been a migration back into
Africa.
- Analyses of maternally inherited mtDNA, the paternally
inherited Y chromosome, and also recombinant DNA have shed
further light on population ancestry and dynamics,
especially within the five habitable continents
(This is mostly the work of a group at the Max Planck
Institute in Leipzig, who focused on DNA sampled from
specimens collected in the pre-colonial period, before
Europeans really sent genetic diversity into high gear by,
if you'll pardon the expression, contaminating DNA all over
the place). In the map on the right, color coding
indicates genetically related groups of pre-colonial
peoples.
- The greatest genomic diversity is found in Africa,
with three major lineages: hunter-gatherers living in
the rain forest of central Africa; Khoesan-speaking
peoples of southern Africa; and the Hadza peoples of
Tanzania.
- "Y-Chromosomal Adam" seems to have lived in West
Africa, near modern Cameroon.
- The Tanzanian lineage was the source of much of the
migration into Europe, but there were three additional
sources: Paleolithic hunter-gatherers, Neolithic farmers
from the Fertile Crescent, and late Neolithic and early
Bronze Age Yamnaya people from the Caucasus.
- There may have been two waves of migration into Asia,
one occurring about 90-120,000 ya, and the other about
40-60,000 ya. The first of these seems to have
been the source of Aboriginal Australians, while the
second was the source of East Asians.
- There were at least two migrations across the Bering
Strait, resulting in separate lineages for Athabaskan-
and Amerindian-speaking populations of Native Americans.
- And it wasn't just h. sapiens who migrated out
of Africa. Although their split with Neanderthals
occurred 360,000-470,000 ya, the two populations of
hominins interbred enough, as recently as 100,000 ya, to
leave about 1-2% Neanderthal DNA in the genomes of
Europeans and Asians.
For a timeline of the prehistoric population of
the Americas, see "First Americans" by Glenn Hodges, National
Geographic, January 2015. See also "Journey into
the Americas" by Jennifer Raff, Scientific American,
05/2021.
For
an
authoritative update, see "Late Pleistocene Exploration and
Settlement of the Americas by Modern Humans" by M.R. Waters
(Science 2019, from which these images are
taken). Waters concludes that the best evidence
indicates that migration south of modern Canada began as
long ago as 17,500 ya -- but no earlier. This group
then split into two branches of Native Americans, one
remaining in the north (NNA) and one moving towards the
south (SNA), and splitting into a number of related
sub-branches. This review is a must-read for anyone
interested in this subject.
About the only place we didn't get by
walking was Antarctica, and now we're there, too, with
permanent settlements since 1959.
Pushing Back Our Origins
The
story of human evolution from Australopithecus through
Paranthropus to Homo is fairly well accepted,
though there are disagreements about the details. However, as
noted earlier, occasionally a major discovery will occur that
calls for major revisions in the story. The major consequence
of these new discoveries is to push the origins of hominids
a little (or, in some cases, a lot) further back in
time, or to add a branch to the main lines of human descent.
(Image by Viktor Deak from "The Human Pedigree" by Kate Wong,
Scientific American 01/2009).
For
example, until recently it was generally thought that the
earliest hominids, known as Australopithecus, dated
back some 4.5 million ya, to an area in eastern Africa. But in
just two short weeks in 2002, different teams of
paleontologists reported discoveries that pushed the emergence
of hominids back a lot in time and over a bit in space.
Sahelanthropus tchadensis. The
first of these, and the most surprising, was the discovery by
Dr. Michel Brunet of the University of Poitiers, in France, of
the "Chad skull" in Chad (hence its name), in the region of
the southern Sahara desert known as the Sahel (actually, the
skull was discovered by Ahounta Djimdoumalbaye, an
undergraduate at Chad's University of N'Djamena, who was a
member of Brunet's research group). This skull apparently
belongs to a new hominid species known as Sahelanthropus
tchadensis ("Sahel man of Chad") -- or Toumai ("Hope of
Life") in Goran, the language of the people in the area where
it was discovered. Toumai lived as long as 6 or 7 million ya,
about 1 million years earlier than the earliest hominid
previously known, and more than 3 million years before the
famous Australopithecus known as "Lucy" (see below).
Moreover, previous hominid skulls were found in the eastern
and southern portions of Africa.
Before Toumai, the earliest known
hominid, Orrorin tugenensis, dating from about 6 million
ya, was discovered in Kenya.
The fact that the Toumai skull was
found in western Africa suggests that our ape and hominid
ancestors were more widely dispersed on the continent than
previously believed. The Toumai skull mixes features of apes,
such as a small brain-case, with features characteristic of
hominids, such as a flat rather than protruding face. For this
reason, it has been suggested that it may be an ancestor of both
chimpanzees and hominids, living before the two lines
diverged. Another theory is that it represents another primate
branch that has simply gone extinct.
The "Georgia Skull".The other
discovery, known as the "Georgia skull" because it was found
in Georgia, a country that was formerly part of the Soviet
Union, in an area near Tbilisi between the Black and Caspian
Seas, is about 1.75 million years old and belongs to one or
another of three hominid species already known -- Homo
habilis, H. ergaster, or H. erectus. But
it and similar skulls were found with stone tools, such as
choppers and scrapers, that seem much more primitive than
those typically associated with H. erectus, leading some
theorists to question exactly which species it represents. So,
unlike the Toumai skull, it doesn't suggest
any sort of "missing link". Rather, the surprise here is that
the skull was found outside of Africa. Previously, it was
believed that the Homo species evolved in eastern
Africa, and that H. erectus moved out of Africa
sometime more recently than 1.6 million ya. But here's a homo
skull well outside Africa, and more than 1.6 million years
old. So, the migration out of Africa might have begun earlier
than we previously thought.
Homo
floresiensis. In October 2004, a team of
Australian anthropologists discovered a "downsized" or
"hobbit" version of homo erectus, with adults standing
only 3-31/2 feet high, in a cave on Flores, an island in the
Indonesian archipelago near Bali. Individual members of this
species, named homo floresiensis, have been dated
between 95,000 and 13,000 ya -- meaning that they overlapped
with h. sapiens, which arrived in Australia about
40,000 ya. Stone tools also found there have been attributed to h.
floresiensis rather than to later-arriving modern humans -- which is
interesting, because these humans had brains hardly larger
than those of adult chimpanzees. So, it's not brain size
that's related to intelligence so much as brain structure --
including, perhaps, the sulci and gyri that feature so
prominently in human brains, and that permit humans to pack a
large amount of cortex into a relatively small space. The
precise placement of h. floresiensis in the hominid
family tree remains a matter of controversy. In some respects,
they appear more primitive than either h. erectus or h.
sapiens -- yet they made and used tools. Some
anthropologists speculated that they might be genetically
deformed h. sapiens dwarfs. As of 2009, the consensus
was that they were, indeed, a distinct species of hominids,
which split off from the main hominid line 1-2 million ya,
more closely related to h. habilis than to h.
sapiens.
Ardipithecus ramidus. In 2009,
paleontologists from UC Berkeley and their colleagues from
Ethiopia discovered a new fossil hominid,ardipithecus
ramidus (nicknamed "Ardi") that is a million years older
than Lucy, dating to about 4.4 million years before the
present. Ardipithecus was discovered in the Awash River
valley of Ethiopia, not very far from where Lucy had been
found -- but in an area that was, at the time, forest rather
than savanna. What's really interesting about Ardi is that it
walked upright (if not particularly gracefully, with short
legs and lacking an arched foot), but still had the capacity
-- like long arms and opposable thumbs on the feet as well as
the hands -- to move about easily in trees. It had previously
been thought that bipedalism had emerged on the Savannah, as
an adaptation useful for living in an area that had no trees.
But Ardi suggests that bipedalism first emerged in an arboreal
area. Apparently, by the time of Lucy, a million years later,
bipedalism had advanced to the point that the new species, Australopithecus,
could walk rapidly over long distances.
As of its discovery, Ardi was the
oldest known hominid skeleton. It is different enough from
chimpanzees to suggest that the common ancestor of chimpanzees
and humans lived about 6-7 million ya.
Australopithecus sediba. In 2011, a
group of anthropologists at the University of Witwatersrand,
in South Africa, led by Lee Berger, announced the discovery of
a new pre-human species, named A. sediba, which
appeared to comprise a "patchwork" of ape and human features.
There is a hand that is able to grip tools as well as hold on
to tree branches, and a foot that can support upright walking
as well as climbing trees. The pelvis was human-like, able to
accommodate big-brained fetuses; but the skull itself was
small -- suggesting that the pelvis evolved, not to
accommodate big-brained fetuses, but rather as a byproduct of
bipedalism. The researchers dated the skeleton to about 1.977
million ya, and suggested that it is now the oldest "missing
link" -- not between ape and man, exactly, but between the
most recent known Australopith species, A. africanus,
and the earliest known human species, H. habilis. This
conclusion is, of course, vigorously disputed by other
anthropologists. Perhaps more important, at least so far as
the theory of evolution is concerned, is what it suggests
about the emergence of complex human characteristics, such as
the hand. Rather than evolving by tiny increments (or, for that
matter, rather than being created instantaneously out of whole
cloth), complex features like the hand might have evolved in a
sort of modular fashion. There might be several different
configurations of thumb, wrist, finger length, etc., all
occurring more or less randomly, until one particular
combination -- short, opposing thumb, long fingers, and a
particular twist of the wrist, perhaps -- happened to appear,
proved to be particularly adaptive, and so was passed down to
subsequent generations while other combinations, perhaps less
adaptive, simply (and literally) died out. [For more on A.
sediba, see "First of Our Kind" by Kate Wong,Scientific
American, 04/2012.]
Lucy's
Grandfather(?) In 2019, a group of Ethiopian
researchers published a reconstruction of a skull of A.
anamensis, the direct predecessor to "Lucy".
Previously, anamensis was known only from teeth and
jaws. But in 2016, Ali Bereino, an Ethiopian goat herder
who had a side interest in fossil hunting, uncovered a
complete anamensis skull while preparing a shelter for
his animals. The fossil, known as MRD for Mira Dora, the
place where it was found, has been dated to 3.8 million
ya. The current theory is that Lucy's species, A.
afarensis, branched off from A. anamensis about
3.7 to 3 million ya. The researchers also promptly hired
Bereino.
The
Dmanisi Skulls. In an interesting twist on the
"Georgia skull" described earlier, a set of skulls uncovered
at Dmanisi, Georgia, and reported in 2013, cast doubts on the
traditional classification of early hominids into separate
species -- Homo erectus, Homo ergaster, Homo
habilis, and Homo rudolfensis. The
traditional story, remember, is that only H. erectus
flourished, eventually evolving into H. sapiens.
The distinctions among the various species are based largely
on features of the skull. The Dmanisi skulls, which
include the "Georgia skull", are all dated to roughly 1.8
million ya, but they show amazing variability -- as much as
the variability found in skulls from Africa, which led to the
naming of several different species. The implication is
that there may have been just one early species of Homo,
namely erectus, characterized by a high degree of
within-species variability with respect to skull
morphology. Rather than there having been several
different Homo species, all competing, as it were, for
survival, there was just one -- H. erectus, which
eventually evolved into H. sapiens. Another
implication is that, rather than waiting, H. erectus
began to move out of Africa almost as soon as it appeared on
the East African savanna.
In 2015, Lee
Berger, an American anthropologist teaching at the University
of Witwatersrand, in South Africa, reported the discovery of
hominid skeletons in the Rising Star Cave near Pretoria in
South Africa (the bones were accidentally discovered by
spelunkers; Berger was also the discoverer of A. sediba,
so he's one lucky paleontologist!). Analysis strongly
suggests that these bones come from an entirely new hominin
species, Homo naledi ("naledi" means "star" in a local
indigenous language). Based on the geology of the cave
and naledi's primitive anatomy, Berger has suggested
that the skeletons date from as much as 2.5-2.8 million ya --
that is, close to the dawn of our genus, Homo. As
such, naledi might be close to the "missing link"
between australopiths and homo. A photo of a
substantial portion of a naledi skeleton was published
in Scientific American, 08/2017 ("Our Cousin Neo", by
Kate Wong). See also "Return to the Cave of Bones" by
Lee Berger and John Hawks, National Geographic,
07/2023.
Link to a
NOVA/National Geographic documentary on the discovery of H.
naledi, broadcast in September 2015.
All Aboard for Marrakech?
Excavation at a mining site near Marrakech, Morocco, uncovered
human and animal bones and stone tools, originally dated to
40,000 BP and identified as Neanderthal remains. But
more recent investigation (Hublin et al., Nature,
2017) has reclassified the human remains as H. sapiens
and dated the site to 350,000-280,000 BPE -- which is
pretty surprising, given that standard theory dates the
emergence of H. sapiens in East Africa to "only" about 200,000
ya! So if correct, the Morocco H. sapiens discovery
would push back the origins of our particular species by
about 100,000 years. The remains also indicate that some H.
sapiens, instead of simply migrating out of East Africa,
populated the entire African continent as well. On the
other hand, there are enough morphological differences between
the Moroccan specimens and H. sapiens that they might
represent a distinct, as-yet-unnamed antecessor of our species.
From Fossils to DNA. Most
of the story of human evolution is told through fossils like
Lucy or the Dmanisi Skulls, but it's now possible to answer
some questions about our origins from analyses of DNA
extracted from fossilized remains. For example, Matthias
Meyer, Juan-Luis Arsuaga, and their colleagues (2013) sequenced
the DNA from a "Neanderthal" skeleton found at a site in Spain
known as Sima de los Huesos, dated to 400,000 before the
present era (BPE). Surprisingly, they found that it
closely resembled DNA extracted from a "Denisovan" skeleton
found in Siberia, and dated to 80,000 BPE. Based on the
fossil evidence alone, it had long been assumed that
Neanderthals and Denisovans represented distinct species of Homo,
which left Africa about 300,000 BPE. The Neanderthals
turned left into Europe, while the Denisovans turned right
into Asia. Later, about 200,000 BPE, H. sapiens
migrated out of Africa, and replaced both the Neanderthals and
the Denisovans. Possibly, the Spanish fossils represent
yet another Homo subspecies, yet to be named.
Or, the Denisovans migrated to Europe as well as Siberia, and
the two interbred. Or something.
A series of papers published in Nature
in 2016 may have settled the matter. Comparing
samples of DNA collected from all over the world, including
many populations of indigenous people, three separate teams of
researchers concluded that all non-Africans are descended from
a single group of H. sapiens that emigrated from
Africa 50-80,000 ya. This doesn't mean that there
weren't earlier waves of migration, such as the Neanderthals
and Denisovans. It's just that the descendants of any
earlier migrations didn't last. So far as modern
Europeans, Asians, Australian Aborigines, and Native
Americans are concerned, it seems that we're all descended from
that last migration out of Africa.
Ian Tattersall,
an anthropologist at the American Museum of Natural History,
has traced recent changes in our understanding of human
evolution with two excellent genealogical trees -- the one on
the left, drawn in 1993, and the one on the right, drawn in
2011 (from "Human Evolution in Perspective" by I. Tattersall,
Natural History, 06/2015). Note that h.
naledi isn't there -- it's that new a discovery.
Breaking News!
Just when you think
everyone's reached consensus: along comes an
anthropologist who tries to upend it all -- or most of
it. In 2021, Madeleine Bohme, an anthropologist
at the University of Tubingen, set out to upend most,
if not all, of the "Out of Africa" consensus (in Ancient
Bones: Unearthing the Astonishing New Story of How
We Became Human, co-authored with Rudiger Braun
and Florian Breier; reviewed by Tim Flannery in "Out
of Savannastan", New York Review of Books,
11/04/2021). Briefly, that consensus has three
elements:
- The hominin lineage, which ends with modern h.
sapiens, split from chimpanzees between 7 and
13 million years ago.
- Our specific genus, homo, arose in Africa
about 2.3 million years ago.
- Our specific (sorry) species, h. sapiens,
arose in Africa about 300,000 years ago.
First, a jawbone found in Greece during World War I
(German soldiers were digging a bunker), and
dated to more than 7 million years old, was
determined to belong to the hominin lineage.
That would mean that the oldest ancestor of modern humans
lived in Greece, not Ethiopia. Second, another
paleoanthropologist spotted a set of fossilized
human-like footprints in some rocks by a beach in
Crete (he was vacationing with his girlfriend); these were
dated to 6 million years ago. That challenges
Point #1.
Third, Bohme also cites Homo wushanensis, a
group of fossils found in China in 1991, which have
been dated to about 2.5 million years ago -- making
them older than the oldest Homo habilis
fossils found in Africa. That challenges Point
#2 -- except that the researcher who originally dated
the fossils now believes that he was mistaken.
Bohme, for her part, believes that our genus, Homo,
evolved in a once-grassy woodland known to
professionals as "Savannastan", which encompassed
parts of Africa, Asia, and Europe about 2.6 million
years ago -- hence the Georgia skull, hence the
Cretan footprints, hence the Greek jawbone. In
any event, Point #3, that Homo sapiens arose
in Africa, remains unchallenged. So far.
Stay tuned.
For an excellent survey of human evolution,
see Masters of the Planet (2012) and The Strange
Case of the Rickety Cossack: and Other Cautionary Tales
from Human Evolution (2015), both by Ian Tattersall.
"Races"
Still, as I noted earlier, the main
outlines of the story of human evolution remain intact. Here's
the story as of Fall 2009, as depicted in the Science
paper announcing Ardi.
Fossil Hominids
For a comprehensive, detailed, up-to-date
overview of human evolution, check out Fossil
Hominids: The Evidence for Human Evolution, a
website maintained by Jim Foley that features a timeline of
recent fossil finds: www.talkorigins.org/faqs/homs.
As H. sapiens sapiens
proliferated around the world, adaptation to different
climatic zones produced the physical differences associated
with the different "races" of humans (much as h.
floresiensis developed its small stature in an isolated
island environment). For example,
pale skin is an advantage to those living in cool, cloudy
lands because it facilitates the absorption of ultraviolet
radiation, promoting the manufacture of vitamin D, a substance
necessary for healthy bone growth. For people who live in the
tropics, skin darkened by melanin protects against the peeling,
blistering, and cancers caused by constant exposure to these same
ultraviolet rays. In the tropics, a tall, slim physique
radiates surplus heat, and keeps the body cool. In the cold
climates of the far north, a short, squat body with high fat
levels serves an insulating function. In areas with unreliable
or inadequate supplies of food, short stature is adaptive, as
are fatty buttocks. Improved diets increase stature and shrink
teeth, muscles, and bones.
Much ink (and blood) has been spilled
over the biological reality of various racial distinctions.
Perhaps the best perspective on this issue is presented in an
essay, "Human Equality is a Contingent Fact of History", by
Stephen Jay Gould, a paleontologist at Harvard, who
contributed a regular column, "This View of Life", to Natural
History magazine. The essay, written while Gould
was giving a series of lectures on racism in South Africa
(whose black majority was then suffering under the
white-imposed apartheid regime) was published in the November
1984 issue of the magazine, and contains Gould's reflections
on the evolution of our notions of race and racial
differences. In his article, Gould argues that "Human
equality is a contingent fact of history" (italics
original). By this he means that it could have
happened that different, and unequal, races
evolved through human history; it just didn't happen that
way. Initially, Western thought was thoroughly
imbued with the idea that the different races had separate
origins (polygeny), and that racial inequality was somehow a
"natural" consequence of this separate creation. But
with the gradual acceptance of Darwin's ideas about The
Origin of Species, suggesting evolutionary links between
humans and other animals, Western views of race have
themselves evolved.
- Early- to mid-19th-century anthropologists debated between
two genealogical theories of racial origins.
According to a monogenic theory, all human
lineages had their origins with Adam and Eve, but some races
degenerated from this Edenic state. According to a polygenic
theory, Adam and Eve were the ancestors only of
whites, and the other, "colored" races were the product of
separate creation. When Darwin came along, the
argument shifted: all races had a primeval ancestor in
common, but the various races quickly began to
diverge. This latter theory, of common origins but
ancient separation, was promulgated as recently as the 1960s
by Carleton Coon (The Origin of Races, 1962), who argued
that "caucasoids" and "mongoloids" were more advanced than
"australoids", "congoids", and "capeoids" because they
evolved further to adapt to the more challenging
environments of the northern climate. The modern
genealogical view is that the modern races are all "recent,
poorly differentiated sub-populations of... homo sapiens,
products of at most tens or hundreds of thousands of years,
and marked by remarkably small genetic variations".
- Early anthropologists also explained racial differences in
terms of geographical theories. Darwin had
speculated that h. sapiens had its origins in
Africa. But, in an echo of the revised genealogical
view, others argued for Asian origins, and that the
"tropical" races reflected a subsequent migration into
the "less challenging" zones of Africa, Australasia, and the
Americas -- and, therefore, a less advanced form of
human. When the evidence for African origins became
incontrovertible, the argument shifted to a more
psychological one: that human consciousness and intelligence
evolved in Asia. As Coon wrote: "If Africa was the
cradle of mankind, it was only an indifferent
kindergarten. Europe and Asia were our principal
schools". The modern geographical view is that both h.
erectus and h. sapiens arose in Africa and
then spread outward.
In addition to debunking the genealogical and geographical
arguments for racial inequality, Gould also presented what he
called "positive" arguments for the equality of the races.
- First: technically, biologists recognize only one division
within a species -- that of subspecies, which
inhabit a particular geographical region and display
recognizable traits that make them distinctive. But he
points out that subspecies function only as "categories of
convenience" representing geographic variation, and that
designation of a subspecies makes sense only when the groups
are geographically distinct. To be blunt about it, the
human races aren't geographically distinct. We
intermingle and interbreed, resulting in "fluidity and
gradation", not distinct subcategories, so there's not
really much point in naming the races as if they really
constituted anything like subspecies.
- Obvious "racial" differences may exist, such as skin
color, but these are of relatively recent origin -- too
recent to provide the foundation for any sort of inequality
among the races.
- Empirically, there just aren't enough genetic differences
between the races to permit them to be meaningful
subcategories. As Gould wrote in 1984, "Intense
studies for more than a decade have detected not a single
'race gene' -- that is, a gene present in all members of
one group and none of another. Frequencies vary, often
considerably, among groups, but all human races are much of a
muchness". Echoing Lewontin's summary of within-group
and between-group differences, Gould goes on to note that
"Variation among individuals within any race is so
great that we encounter very little new variation by adding
another race to the sample. In other words, the great
preponderance of human variation occurs within groups, not
in the differences between them.... The recent origin
of races... squares well with the minor genetic differences
now measured. Human groups do vary strikingly in a few
highly visible characters (skin color, hair form) -- and
this may fool us into thinking that overall differences must
be great. But we now know that our usual metaphor of
superficiality -- skin deep -- is literally accurate."
(A toy numerical sketch of this within- vs. between-group
point follows this list.)
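To make the within-group vs. between-group contrast concrete, here is a minimal sketch with entirely invented numbers: three hypothetical groups whose members vary a great deal individually, while the group averages barely differ.

```python
# A minimal sketch (toy numbers, not real genetic data) illustrating
# Lewontin's point about within-group vs. between-group variation.
import statistics

# Hypothetical measurements of some trait for three made-up groups.
groups = {
    "group_A": [10, 14, 9, 15, 12],
    "group_B": [11, 13, 10, 16, 12],
    "group_C": [9, 15, 11, 14, 13],
}

all_values = [x for values in groups.values() for x in values]
grand_mean = statistics.mean(all_values)

# Between-group variation: how far each group's mean sits from the grand mean.
between = statistics.mean(
    (statistics.mean(v) - grand_mean) ** 2 for v in groups.values()
)

# Within-group variation: average spread of individuals around their own group mean.
within = statistics.mean(
    statistics.pvariance(v) for v in groups.values()
)

print(f"between-group variance: {between:.2f}")   # small
print(f"within-group variance:  {within:.2f}")    # much larger
```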
More recent research by Marc Feldman and
associates, based at Stanford University, suggests that there
are, after all, small genetic differences among the races.
Surveying the DNA sequences of about 1,000 people sampled from
52 populations, they found that DNA differences between the
groups fell into five clusters, or groups, roughly
corresponding to their continents of origin: Africa, Eurasia
(including Europe, the Middle East, and South Asia), East
Asia, Oceania (including Australia), and North and South
America. These, of course, correspond to the five geographical
"races" of folk taxonomy: Negroid, Indo-European, Mongolian,
Pacific Islander, and (American) Indian.
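How can "much of a muchness" nonetheless fall into clusters? Roughly, because tiny average differences at each of thousands of loci add up when you look at many loci at once. The actual study used a sophisticated model-based clustering program; the sketch below is only a toy version of the idea, using scikit-learn's generic k-means on wholly synthetic "genotypes" for three (not five) made-up populations.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_loci, n_per_group = 1_000, 200

# Three made-up populations whose allele frequencies at each locus differ
# only slightly (shifts of +/- 0.05 around a shared baseline).
base = rng.uniform(0.2, 0.8, size=n_loci)
genotypes = []
for shift in (-0.05, 0.0, +0.05):
    freqs = np.clip(base + shift, 0.01, 0.99)
    # Each individual is a vector of 0/1/2 allele counts, one per locus.
    genotypes.append(rng.binomial(2, freqs, size=(n_per_group, n_loci)))

X = np.vstack(genotypes)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# With enough loci, the tiny per-locus differences add up, and the clusters
# largely line up with the three synthetic populations (cluster numbering
# is arbitrary, so read the output row by row).
for name, block in zip(("pop_1", "pop_2", "pop_3"), np.split(labels, 3)):
    print(name, np.bincount(block, minlength=3))
```

The point of the toy is simply that clustering by ancestry and Lewontin's "most variation is within groups" can both be true at the same time.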
For more details of the genetic
differences between the races, see A Troublesome
Inheritance by Nicholas Wade (2014). But pay
attention only to the first half of the book, which sets out
the genomic differences among the geographical races.
There are some, but you should also bear in mind that these
racial differences are minuscule when compared to the entire
human genome, with its 22,500-some genes. As Richard
Lewontin, a prominent population geneticist, has pointed
out, the differences between racial or ethnic groups are far
smaller than the similarities. Put another way -- and
this is a general rule: the differences between groups are
smaller than those within groups. In the second half,
Wade tries to argue that racial differences in social
outcomes -- such as the "achievement gap" between Asian-,
European-, and African-Americans -- are due to these
differences -- and here, his science is on very shaky ground
indeed. Earthquake-sized shaky. Magnitude 8
shaky. Not least because the differences in social
outcomes within the races are far greater than the
differences between them. Moreover, Wade offers no
evidence that any of the genetic differences between the
races, small as they are (and they are tiny), have
anything to do with the behavioral differences that he is
concerned with (or much of anything else, except skin color
and lactose tolerance).
For a counterargument, that race is
essentially a social construction, see Fatal Invention:
How Science, Politics, and Big Business Re-Create Race in
the 21st Century by Dorothy Roberts (2014).
A more balanced view is offered by
Adam Rutherford, a science journalist with a PhD in
genetics, in A Brief History of Everyone Who Ever Lived
(2017). Rutherford argues that genetic studies can
tell us a lot about where we came from -- it turns out, for
example, that the British peoples generally known as "Celts"
(Welsh, Irish, Scots, etc.) do not form a single, genetically
distinct group: "Celt" really is a social construction!
He also notes that Neanderthals carried the FOXP2 gene that is
crucial for speech. And it's now been demonstrated convincingly
that every person now living on Earth, no matter what their
"race", is descended from a very small group of H.
sapiens who lived just a few thousand ya.
Outward appearances to the contrary notwithstanding, our
genomes don't divide us into the usual racial categories.
See also Black
and White, a special issue of National Geographic
(04/2018) devoted to the biological and sociocultural
aspects of race. The two girls on the cover are Marcia
and Millie Biggs, fraternal twin daughters of Michael Biggs,
a native of Jamaica, and Amanda Wanklin, an Englishwoman
(you've seen them before, in my discussion of behavior
genetics). One of the articles in the special issue,
"A Color Wheel of Humanity" by Nina Strochlic,
illustrates the amazing variety of human skin tones by
selections from Humanae, a project by portrait
photographer Angelica Dass, who matched the skin tones of her
subjects to the standard color palette maintained by Pantone
(discussed in the lectures on Sensation).
The important thing to remember is that
despite superficial physical differences (and even more
superficial differences in biochemistry), all "races" of H.
sapiens sapiens, Black, White, Mongoloid, or whatever, are
variations within the same species of animal. Moreover, there are
major individual differences among members of any single
"race", and there are people with superficially similar
features -- Africans and Australian Aborigines, for example --
who are not members of the same "race". The bottom line is
that while "race" has some degree of biological reality, it is
largely a mythical concept that is better discarded. All of us
have a single common ancestor. In fact, there is a theory,
based on analyses of mitochondrial DNA (which is passed on
only through the maternal line) that the entire race of modern humans
-- black, white, and yellow -- are the descendants of a single
female who lived in East Africa 200 thousand ya. This theory
has been called into question, but the essential point
remains: we are all very closely related to each other, so we
might as well treat each other with respect and affection.
The best guess is that any two randomly
selected individuals are more than 99% identical in their
genes -- if not 99.9% identical, then maybe 99.5% identical.
And remember, there is about 98% similarity between the human
genome and that of chimpanzees, our closest primate relatives.
So, there may be some minor genetic differences between people
of different continental ancestries (African, Eurasian, and
East Asian), which is a nice way of saying different "races",
but "race" is still largely a social construct -- a way of
classifying people that has no serious biological
justification.
The Beginning (and Perhaps the End) of History
Since the Ice Age ended,
about 10 thousand ya, H. sapiens sapiens has continued to
thrive. During the New (Neolithic) Stone Age, hunters and
foragers became farmers and herders who cultivated plants and
domesticated animals.
- Rye was first grown as a crop cereal in about 11,000
BCE, and wheat about 8700 BCE, both in the Near East;
- Rice was cultivated in China around 7000 BCE;
- Cattle were domesticated in Africa by 5900 BCE;
- Maize (corn) was cultivated in Central America about
3500 BCE; and
- Pearl millet was cultivated in sub-Saharan Africa about
2000 BCE.
Neolithic people built villages and
towns that gave rise to cities, the development of economies
that were not devoted solely to the production of food, and
the rise of hierarchical social structures. Ceramics were
developed to store food. Reliance on stone tools gave way to
copper and then bronze -- first by hammering metal, later by
forging and casting it. The earliest wooden plows date from 6
thousand ya; the first wheels for transportation, 5.5 thousand
ya; the first sailing ships, and the first writing, 5 thousand
ya.
The Origins of Writing
Here's the story, according to "Visible
Language: Inventions of Writing in the Ancient Middle East
and Beyond", an exhibit at the Oriental Institute at
the University of Chicago, 2010-2011 (from "Hunting for the
Dawn of Writing, When Prehistory Became History" by
Geraldine Fabrikant, New York Times, 10/20/2010).
- A clay tablet written in "proto-cuneiform", a language
of ancient Sumer in Mesopotamia, has been dated to 3200
BCE. The earliest Sumerian writing appears to have been
confined to a sort of shipping receipt for trade goods.
Narrative writing appears about 700 years later, around
2500 BCE, in the earliest copies of the Gilgamesh epic.
- It was once thought that Egyptian writing, in the form
of hieroglyphs, was influenced by Sumerian cuneiform, but
it is now believed that Egyptian writing systems emerged
independently. An Egyptian alphabetic script has been
dated to 1800 BCE.
- Examples of writing have been found in Chinese
archeological sites, dating from 1200 BCE.
- Mayan culture in Mesoamerica had a written script before
500 CE. In the New World, systems for writing are
generally thought to have emerged first in Zapotec culture
near present-day Oaxaca, Mexico, about 300 BCE, and in
Mayan culture in southern Mexico and Central America about
200 CE. However, new archeological findings, reported in
2002 by Mary E.D. Pohl and her colleagues, suggest that
some form of symbolic writing, known as glyphs,
may have been available to the Olmec civilization in what
is now Tabasco, Mexico, as early as 650 BCE. The issue is
not settled among archeologists: what appear to be glyphs
may really be pictures, and the artifacts may not be as
old as their discoverers originally thought.
See also Writing: Theory and History of
the Technology of Civilization (2009) by Barry B.
Powell.
Not all of these developments occurred
simultaneously in every geographical area; and in some areas,
some developments did not occur at all.
When writing begins, where it begins,
history begins too, and science and culture begin to develop
and proliferate extremely rapidly. At this point, about 5000
ya, biological evolution essentially ends: in genetic terms,
and speaking metaphorically, we are the same species as Adam
and Eve (created, or so Bishop Ussher calculated, on the night
before October 23, 4004 BCE). The development of tools,
clothes, medicine, and social structure means that we are
protected -- or, more correctly, we protect ourselves --
against the biological pressures that formerly killed those
who were weak or stupid. The evolution of a new species
requires biological isolation, and migration and inbreeding,
within and between "racial" groups, effectively precludes
that. It also requires a hostile environment, which ensures
the survival only of the fittest. So, biologically speaking,
we are pretty much at the end of our line. Now the only
threats to our existence come from ourselves --
overpopulation, ecological disaster, and nuclear holocaust.
To our knowledge, few other species
have lasted even three million years before their inevitable
extinction. But unlike nonhuman animals, we know what the
threats to our existence are, we understand that they are
largely of our own making, and we have the intelligence and
technology to do something about them. We can save ourselves
from extinction, but only if we think, and try. That's where
human intelligence comes in. The ultimate gift of evolution,
the human mind, has been and will remain the key to our
survival as a species.
Actually, Though, It Ain't Over 'Till It's Over
It's common to think that the
biological evolution of the human species has pretty much
reached its end point. That's pretty much the point of view
taken here: once people start changing the environment, the
environment has less chance to change them through
natural selection. And, more or less, that's also the point of
view taken by evolutionary psychologists (see below), who
argue that patterns of human thought and behavior that evolved
in the late Pleistocene era have remained pretty much
unchanged up to the present.
These are good arguments, but they're
apparently not quite true. Obviously, there's still
opportunity for natural selection to operate on the human
genome. In fact, in 2006, Jonathan Pritchard and his
colleagues identified a number of segments of the human genome
that have been subject to change via natural selection as
recently as the last 5-10,000 years -- roughly since the
beginning of agriculture (for the details, see A
Troublesome Inheritance [2014] by Nicholas Wade).
- Some of these genes code for differences in skin color
-- for example, between Europeans and Africans.
- Asians apparently acquired their light skins earlier,
and through a different genetic route.
- Another gene facilitates the digestion of lactose -- a
gene that was particularly useful to early European
farmers who domesticated cattle and drank their milk (the
mutation first occurred in what is now central Europe
7-12,000 ya).
- Yet another genetic mutation protects against altitude
sickness; it occurs in 90% of Tibetans but only 10% of Han
Chinese, and may have appeared as recently as 3,000 ya.
- A mutation in a single gene, known as EDAR, may have
given rise to a host of physical traits characteristic of
East Asians (Han Chinese, Japanese, Thais) and American
Indians (who are descended from East Asians), including
thicker hair, extra sweat glands, and distinctively shaped
teeth. This variant first appeared about 35,000 ya
in central China.
In 2011, a study of parish records from
the Canadian Isle aux Coudres, which date back to 1799,
revealed a steep decline in the age at which women had their
first child. Other data confirms a decline in age at first
reproduction, and an increased age at menopause -- both of
which have been plausibly attributed to natural selection
(although, frankly, it seems to me that both changes could
just as easily have occurred as a result of improvements in
nutrition and other aspects of health). If the evolutionary
biologists are right, evolutionary change in these traits has
been both very recent and very rapid.
So, evolution continues at the genetic
level, and at the level of body morphology, if not at the level
of mind and behavior as well. The fact that human evolution continued
even after we moved out of the EEA, and into other
environments that made other demands on survival, has led some
evolutionary psychologists to espouse a concept of "fast
evolution" -- essentially, that evolution can proceed at a
much faster pace than previously believed -- possibly in
response to much more recent environmental changes. And to
focus on the Holocene epoch rather than the Pleistocene --
i.e., the most recent 12,000 years, since the beginnings of
agriculture.
OK, point well taken, but
let's not go overboard.
- In the first place, the humans who moved out of Africa
weren't just selected by their environment. Early
protohumans originally came out of the trees in the
African rain forest; when the rain forest began to
disappear, they moved into the African savanna; and when
the savanna proved inhospitable, some of them moved out of
Africa entirely. So early humans also selected
their environment, and they were able to do so precisely
because they had a general problem-solving capacity, not
just a set of modules that evolved in the Pleistocene EEA
lasting roughly 2 million years.
- Even "fast" evolution isn't fast enough to help
individual species members, or even their immediate
offspring, adapt to environmental changes. We still need a
general capacity for learning and problem-solving.
- We're not at all dependent on these evolutionary-genetic
changes. Han Chinese can live in Tibet (ask any Tibetan
about this) -- they just have to get there slowly, and
they may need occasional doses of oxygen. Japanese can
live in milk-drinking Europe and America -- they just
can't drink much milk or eat much cheese.
- For all the claims about fast evolution (which isn't
very fast), the changes it's produced are pretty trivial
compared to the big achievements of evolution -- language,
consciousness, and the like.
- Moreover, cultural evolution can affect biological
evolution. Achieving control of fire, which is the first
big step toward civilization, enabled early humans to move
out of the Tropical Zone in the first place (you can live
naked year-round at sea level in the Tropics, but once you get
outside that zone, or find yourself in a higher elevation,
you had better have some way to keep warm at night and in
the winter). It also allowed them to cook their food. And
cooking promoted the evolution of smaller mouths and
teeth, and shorter intestines. The increase in caloric
intake may even have promoted an increase in brain size.
How Did We Get to Be Human?
Evolution did not draw a
straight line from early hominins to modern humans. At
one point, we shared the planet with a number of
near-relatives.
Note:
On 11/19/2018, the New York Times
commemorated 40 years of its "Science Times"
section by looking at "11 Things We'd Really
Like to Know -- And A Few We'd Rather Not
Discuss". One of the essays, by Carl
Zimmer, traces progress in understanding human
origins -- and our deepening understanding of
our complex evolutionary history.
I spoke recently to a
scientist who was writing up a summary of what we know
about human evolution. He should have had a head
start, having written a similar article five ya.
But when he looked
at what he had written then, he realized that
little of it was relevant. “I can’t use much of
any of it,” he told me.
As a journalist, I
can sympathize.
In recent years,
scientists have offered a flood of insights into
how we became human. Fairly often, the new
evidence doesn’t square with what we thought we
knew.
Instead, many of
these findings demand that researchers ask new
questions about the human past, and envision a
more complex prehistory.
When Science Times
debuted 40 ya, scientists knew far less about
how our ancestors branched off from other apes
and evolved into new species, known as hominins.
Back then, the
oldest known hominin fossil was a diminutive,
small-brained female unearthed in Ethiopia named
Lucy. Her species, now known as Australopithecus
afarensis, existed from about 3.85 million ya to
about 2.95 million ya.
Lucy and her kin
still had apelike features, like long arms and
curved hands. They could walk on the ground, but
inefficiently. Running was out of the question.
Hominin evolution
appeared to have taken a relatively direct path
from her to modern humans. The earliest known
members of our genus, Homo, were taller and had
long legs for walking and running, as well as
much larger brains. Eventually, early Homo gave
rise to our own exceptional species, Homo
sapiens.
Now, it’s clear
that Lucy’s species wasn’t the beginning of our
evolution; it was a branch that sprouted midway
along the trunk of our family tree. Researchers
have found fossils of hominins dating back over
six million years. Those vestiges — a leg bone
here, a crushed skull there — hint at even more
apelike ancestors.
But even the earliest known hominins were like us in
one important regard. They appear to have been able to
walk on the ground, at least for short distances.
Paleoanthropologists
have uncovered a wealth of new fossils from all
points on the spectrum of hominin evolution. Some
clearly belonged to known species, such as
Australopithecus afarensis. Some were so distinct
that they deserved a new designation.
But others have
fallen somewhere in between. Often they look like
mosaics of other species, carrying remarkable
combinations of traits. Some of these mosaics may
have been the result of interbreeding between
species.
But it may be, too,
that hominins independently evolved many traits
many times, along separate lines of evolution.
All this mixing and experimentation produced as many
as 30 different sorts of hominins — that we know of.
And one kind did not simply succeed another through
history: For millions of years, several sorts of
hominins coexisted.
Indeed, our own
species shared this planet with near-relatives
until just recently.
In 2017,
researchers found the oldest known fossils of
our species in Morocco, bones dating back about
300,000 years. At that time, Neanderthals also
existed. They continued to live across Europe
and Asia until 40,000 ya.
At that time, too,
Homo erectus, one of the oldest members of our
genus, still clung to existence in what is now
Indonesia. The species did not go extinct until
at least 143,000 ya.
Homo erectus and
Neanderthals are hardly new to
paleoanthropologists. Neanderthals came to light
in 1851, and Homo erectus fossils were
discovered in the 1890s. But still other
hominins, recent research has shown, shared the
planet with our own species.
In 2015,
researchers unearthed 250,000-year-old fossils
in a South African cave. Known as Homo naledi,
this new species had a Lucy-sized brain, but one
whose structure was complex in ways that
resembled our own.
The wrist and
other hand bones of Homo naledi were human-like,
while its long, curved fingers seemed more like
an ape’s.
While Homo naledi
thrived in Africa, another mysterious species
could be found on an island now called Flores,
in Indonesia. Known as Homo floresiensis, these
hominins stood only three feet high and had
brains even smaller than that of Homo naledi.
The species may
have arrived on Flores as early as 700,000 ya,
and these hominins endured until at least 60,000
ya. Homo floresiensis appears to have made stone
tools, perhaps to hunt and butcher the dwarf
elephants that once lived on the island.
Paleoanthropologists
today are no longer limited to just examining
the size and shape of fossils. Over the past 20
years, geneticists have learned how to extract
DNA from bones dating back tens of thousands of
years.
In one remarkable
discovery in Siberia, researchers examining a
nondescript pinkie bone discovered the genome of
a separate line of hominins, now known as
Denisovans.
As it turns out,
we have had the planet to ourselves only in the
past 40,000 years — a small fraction of Homo
sapiens’ existence. Perhaps we out-competed
other species. Maybe they just had bad luck in
evolution’s lottery.
But in one way, we
are still living with them. Both Neanderthals
and Denisovans interbred with our ancestors some
60,000 ya, and billions of people today carry
their DNA. Still mosaics, after all this time.
Evolutionary Psychology
Evolution doesn't just leave its mark
on body morphology, giving fish scales and birds feathers, and
humans opposable thumbs. It also leaves its mark on behavior,
as seen in the "instincts", or fixed action patterns,
discussed in the lectures on learning. In the 1970s, the
evolutionary biologist E.O. Wilson coined the term sociobiology
to represent the idea that patterns of social behavior evolved
under the pressure of natural selection, just as physical
traits did. In other words, a number of human social behaviors
are instinctive, part of our innate behavioral endowment.
We've seen instincts before, in the context of innate
stimulus-response connections (remember reflex, taxis,
instinct?). In the last chapter of his book, Wilson argued
that instinctual social behavior might not be restricted to
nonhuman animals like ants (the species Wilson studied) or the
species studied by ethologists like Tinbergen, Lorenz, and von
Frisch, but might extend to humans as well.
Taking a leaf from Wilson's book, some
psychologists -- led by Leda Cosmides, John Tooby, and David
Buss, among others -- have argued that mental traits evolved
in the same way: that human beings, no less than other
animals, have evolved specific patterns of thought, feeling,
and desire through natural selection (in fact, Buss's first
book was entitled The Evolution of Desire). The reason
some of our behaviors and thought processes seem maladaptive,
or at least inappropriate, in today's world is that they
evolved to foster adaptation to a particular environment,
known as the environment of evolutionary adaptedness
(also known as the "environment of early adaptation",
in either case abbreviated EEA) -- roughly the African
savanna of the Pleistocene epoch (modern Ethiopia, Kenya, and
Tanzania), where homo sapiens first emerged about
300,000 ya -- and have changed little since then.
Although these assertions are
debatable, to say the least, the literature on instincts makes
it clear that evolution shapes behavior as well as body
morphology. Many species possess innate behavior patterns that
were shaped by evolution, permitting them to adapt to a
particular environmental niche. Given the basic principle of
the continuity of species, it is a mistake to think that
humans are entirely immune from such influences -- although
humans have other characteristics that largely free us from
evolutionary constraints. Since the emergence of humans, the
cultural environment has changed a great deal, but there has
not been enough time for biological evolution to produce new,
more adaptive traits.
Certainly there are good reasons for
believing that the uniquely human capacity for language is a
product of evolution. So, arguably, are the kinds of
mechanisms envisioned by Gibson's idea of direct perception.
But there are reasons for thinking that the theory of
biological evolution is not the answer to psychology's
problems. This is because there are at least four
characteristics of mind that appear to distinguish humans from
all other animals. As outlined by Marc Hauser (himself a
distinguished evolutionary psychologist) in "Origin of the
Mind" (Scientific American, 09/2009), these are:
- Generative computation: through recursive and
combinatorial thinking, humans are able to "create a
virtually limitless variety of words, concepts, and
things". Hauser, Noam Chomsky, and W. Tecumseh Fitch
have argued that recursion is the key to human linguistic
ability (Science, 2002). See also "The
Uniqueness of Human Recursive Thinking" by Michael C.
Corballis (American Scientist, 05-06/2007). A toy sketch of
recursive generation appears just after this list.
- Promiscuous Combination of Ideas: intermingling
knowledge across different domains "thereby generating new
laws, social relationships and technologies".
- Mental Symbols: representing both real and imagined
experiences, which can be expressed to others through
language.
- Abstract Thought: the ability to deal with objects
and events that we cannot physically sense.
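To make "generative computation" a bit more concrete, here is a toy sketch (my own illustration, not Hauser's or Chomsky's) of how a handful of recursive rewrite rules can generate an unbounded variety of sentences; the grammar and vocabulary are invented for the example.

```python
import random

# A tiny toy grammar. Recursion enters through the rule S -> S "and" S,
# and through the optional relative clause on NP, so in principle the
# rules can generate sentences of any length from a handful of symbols.
GRAMMAR = {
    "S":  [["NP", "VP"], ["S", "and", "S"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["child"], ["stone"]],
    "V":  [["sees"], ["chases"], ["sleeps"]],
}

def generate(symbol="S", depth=0, max_depth=6):
    """Recursively expand a symbol into a list of words."""
    if symbol not in GRAMMAR:          # terminal word
        return [symbol]
    # Past max_depth, stick to the first expansion of each rule so the
    # recursion is guaranteed to terminate.
    options = GRAMMAR[symbol] if depth < max_depth else GRAMMAR[symbol][:1]
    expansion = random.choice(options)
    words = []
    for sym in expansion:
        words.extend(generate(sym, depth + 1, max_depth))
    return words

random.seed(3)
for _ in range(3):
    # prints three randomly generated sentences of varying length and structure
    print(" ".join(generate()))
```

A finite set of rules, applied recursively, yields a practically limitless set of outputs, which is the sense in which human thought is "generative".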
Evolutionary psychologists who study
mating frequently place a great deal of stress on a particular
dimorphism in which older males prefer younger females. This
pattern makes evolutionary sense, given that young females (at
least, young women, as opposed to young girls)
have greater childbearing capacity than older ones (at least,
women who have reached menopause). In turn, evolution is
evoked to explain a wide variety of mating phenomena, from
President Bill Clinton's entanglement with Monica Lewinsky to
the common practice of rich men abandoning their first wives
(or their second, or their third...) to take younger,
ostensibly more attractive "trophy wives" (a term coined by Fortune
magazine in the 1980s).
But...
- It should be remembered that Clinton and Lewinsky did
not actually engage in sexual intercourse, but rather
stuck with oral sex (and if you believe the stories,
Clinton also preferred oral sex with his other
extramarital partners). Therefore, whatever the causes of
Clinton's behavior, it appears to have had nothing to do
with any desire on his part to propagate his genes by
impregnating multiple, youthful, partners.
- While it's certainly a trend for older, successful men
to divorce and marry trophy wives (someone once referred
to it as "changing a 40 for two 20s"), it is not at all
clear that they go on to have children by these women --
unless, perhaps, the women make it a condition of hooking
up with them in the first place. In fact, an article in
the New York Times notes the increasing trend to
write a prohibition on children into prenuptial agreements
-- although it is not clear that such provisos will hold
up if challenged in court ("A Promise to Love, Honor, and
Bear No Children" by Jill Brooke, 10/13/02). Some young
women may make children a condition for marrying an older
man, or may desire children of their own to inflate their
claims for support in the event of divorce -- or just
because they want to bear children; but that doesn't mean
that the older man is motivated, in taking a new, younger
wife, by the desire on his part to further propagate his
genes.
Similar problems attached to other
evolutionary explanations of mating behavior, which are
commonly explained by some variant of Robert Trivers's (1972)
parental investment theory. According to the
theory, men evolved to produce lots of offspring, but women
evolved to be more selective about whom they'll mate with,
because they have to invest more in caring for their offspring
than men do. Therefore:
- Men are more distressed by sexual infidelity; women are
more distressed by "emotional" infidelity (e.g., Buss et
al., 1992).
- Men are less selective than women about whom they'll
mate with.
- Men like casual (i.e., non-reproductive or at least
non-committed) sex more than women do.
- Over a lifetime, men have more sexual partners than
women.
It all sounds good, especially when you
consider the conditions of child-rearing in hunter-gatherer
societies. On the other hand, there is dispute about the
facts to be explained.
- For example, the claimed gender difference in sexual
jealousy is almost wholly an artifact of method. The
original studies forced subjects to choose which would
cause them more distress. But DeSteno et al. (2002)
offered an alternative explanation in terms of their double-shot
hypothesis. In their view, men assume that
their unfaithful mates are also in love with their
adulterous partners, while women assume that their
unfaithful mates were just in the affair for sex.
So, men are wounded twice, women only once. Further,
Harris (2003) showed that when men and women are asked to
rate their distress on continuous scales, the
difference is trivial. So, even the "double shot"
doesn't make much of a difference in jealousy (for a
review of the voluminous literature on this topic, see
Carpenter, Psychology of Women Quarterly, 2012,
and a response by Bendixen et al., Personality &
Individual Differences, 2015). A toy simulation of this
forced-choice artifact appears just after this list.
- Many studies ask subjects how many sexual
partners they'd like to have, and they generally
find that men want more partners than women do. But
Alexander and Fisher (2003) asked subjects how many sexual
partners they had actually had. When they just asked
the question, men reported more partners than women.
But when they used a fake lie-detector (what's known in
social psychology as a "bogus pipeline" technique), women
actually reported having had more partners than men
(though the difference was not statistically significant).
- Studies of "speed dating" show that men don't
discriminate very much among the women they meet --
they're attracted to all (or, at least, most) of them,
while the women are more choosy. But in the
conventional speed-dating situation, men rotate from table
to table, while women sit. Finkel and Eastwick
(2009) reversed the procedure, and found that the behavior
reversed as well.
- In an amazing study, Clark and Hatfield (1989) had
confederates approach male and female college students
(previously strangers to them) on campus, and ask them one
of three questions: Would you go out with me
tonight? Would you come over to my apartment
tonight? Would you go to bed with me tonight?
Men and women agreed to the date in about equal numbers,
but women were much less likely to assent to either of the
other propositions (in case you're wondering, in the first study
of this type, 70% of the men said "yes" to the third
question, while none of the women did). But Conley
(2011) criticized the methodology of this paper: When was
the last time a complete stranger walked up to you on
campus and offered sex? In her study, she asked
people to imagine this scenario, and found that when you
added context, the gender difference virtually
disappeared. So, there's nothing innately automatic
about either masculine promiscuity or feminine chastity.
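The forced-choice artifact described above, in the discussion of DeSteno and Harris, is easy to see in a simulation. The numbers below are invented purely for illustration and are not taken from any of the studies cited; the point is only that when two ratings are highly correlated within a person, forcing a choice between them turns a trivial average difference into what looks like a large group difference.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

def simulate(mean_shift):
    """Simulate two distress ratings (roughly a 1-9 scale) for n people.

    The two ratings share each person's overall "how upset do I get"
    baseline, so they are highly correlated; the average gap between
    the two kinds of infidelity is only `mean_shift` points.
    """
    baseline = rng.normal(6.0, 1.2, n)
    sexual = baseline + mean_shift / 2 + rng.normal(0, 0.3, n)
    emotional = baseline - mean_shift / 2 + rng.normal(0, 0.3, n)
    return sexual, emotional

# Invented effect sizes: "men" rate sexual infidelity 0.2 points worse on
# average, "women" rate emotional infidelity 0.2 points worse.
men_sex, men_emo = simulate(+0.2)
wom_sex, wom_emo = simulate(-0.2)

print("continuous mean gap, men:  ", round(float((men_sex - men_emo).mean()), 2))
print("continuous mean gap, women:", round(float((wom_sex - wom_emo).mean()), 2))

# Forced choice throws away the size of each person's difference and keeps
# only its sign, so the same trivial gap now looks like a large gender
# difference (roughly 68% of "men" vs. 32% of "women" pick sexual infidelity).
print("men choosing sexual as worse:  ", round(float((men_sex > men_emo).mean()), 2))
print("women choosing sexual as worse:", round(float((wom_sex > wom_emo).mean()), 2))
```

On the continuous scale the two simulated genders differ by two tenths of a point; the forced-choice question manufactures a 36-percentage-point "gap" out of that same data.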
On the distaff side, a prominent claim of evolutionary
psychology is that women prefer more "masculine" faces when
they are ovulating, and thus more likely to become pregnant as
a result of sexual intercourse (Penton-Voak et al.,
Nature,
1999). Again, this makes evolutionary sense.
According to the
ovulatory shift hypothesis, women are
hard-wired by evolution to prefer to become pregnant by
"masculine" men, but prefer to share child-rearing
responsibilities with men who are more "feminine", more
cooperative, more care-giving. Put another way: they
want to conceive children with more "masculine" men who
presumably have better genes, but they want to marry more
"feminine" men who make for better long-term partners.
And again, the question is whether it's actually true. A
fairly large literature has seemed to confirm this finding,
but there are enough methodological problems in these studies
to cast it in doubt (Harris,
Sex Roles, 2012). A
fairly definitive study by Jones et al. (
Psychological
Science, 2018) suggests it's not. These
investigators used computer morphing software to emphasize or
de-emphasize the "masculine" features of men by sharpening or
rounding features in their facial photographs.
Then they tested women's preference for the more- or
less-masculine faces at different phases of their menstrual
cycle, as confirmed by hormonal assays (as opposed to
self-reports, which are, amazingly, unreliable -- and one of
the reasons that the "rhythm method" is not a reliable method
of birth control). In a within-subject design, in which
the same 584 heterosexual women were tested at different
points in their cycles, Jones et al. found no relationship --
not even when they took into account the potential effects of
their subjects' use of hormonal contraceptives.
For comprehensive analyses of gender differences in mating
and other behavior, see:
- Peterson, J.L., Hyde, J.S., "A meta-analytic
review of research on gender differences in sexuality,
1993-2007" (2010), which concludes that "most gender
differences in sexual attitudes and behaviors are small",
especially in "nations and ethnic groups with greater
gender equity", they also note that "Gender differences
decreased with age".
- Conley, T.D., et al. "Women, Men, and the Bedroom:
Methodological and Conceptual Insights that Narrow,
Reframe, and Eliminate Gender Differences in Sexuality"
(2011), whose subtitle says pretty much all there is to
say.
- Carpenter, C.J., "Meta-Analyses of Sex Differences in
Responses to Sexual Versus Emotional Infidelity: Men and
Women Are More Similar than Different" (2012).
- Gildersleeve et al., "Do women’s mate preferences
change across the ovulatory cycle? A meta-analytic review"
(Psychological Bulletin, 2014).
- Wood et al., "Meta-analysis of menstrual cycle effects
on women’s mate preferences" (Emotion Review,
2014).
So far, evolutionary
psychology has gone through three stages:
- The first found evolutionary roots, and thus
evolutionary explanations (in terms of adaptiveness), for
all sorts of nasty human behaviors, like war, polygamy,
rape, and child murder. It's exemplified by books such as
Desmond Morris's The Naked Ape and A Natural
History of Rape by Randy Thornhill and Craig T.
Palmer.
- The second stage has found evolutionary roots, and
thus evolutionary explanations, for all sorts of good
things that people do, such as religion and our sense of
morality. It's exemplified by books such as Marc Hauser's
Moral Minds: How Nature Designed Our Universal Sense of
Right and Wrong, Dacher Keltner's Born to be
Good, and Frans de Waal's The Age of Empathy:
Nature's Lessons for a Kinder Society.
- There are also claims that certain aspects of cognition,
like the alleged confirmatory bias in hypothesis-testing,
also represent cognitive styles that evolved because they
served us well in the EEA.
Evolutionary psychology
tends toward the reductive -- that is, it seeks an explanation
for everything about mind and behavior in terms of its
role in enhancing reproductive fitness. You can see this
especially in the debates over homosexuality and altruism.
- Why is it adaptive to have homosexuals, people who won't
reproduce? Because they serve as caretakers for other
people's children, particularly those of other family
members, thus enhancing the reproductive fitness of their
relatives -- and thus, indirectly, passing on their genes.
- Why is it adaptive to have altruists, people who risk
their lives to save others? Because they do this mostly
for family members -- thus, indirectly passing on their
genes.
These folks have even offered an
evolutionary explanation for grandmothers. Why should humans
have females who, almost alone in the animal kingdom, continue
to live beyond their childbearing years? Because they remain
available to take care of their children's children -- thus,
indirectly, enhancing the reproductive fitness of their
children and grandchildren, and, indirectly, passing on their
genes. Honestly, I swear this is true, you can't make this
stuff up.
I've argued that biological evolution is
outpaced by cultural evolution, mediated by social learning,
but some evolutionary psychologists have gone so far as to
claim that learning itself is a biological adaptation
-- that learning itself is a product of natural
selection. In this way, they try to have it all.
But they can't, for the simple reason that, by claiming
learning for biological evolution, they've distorted the
meaning of both evolution and learning. Yes, there's a sense
in which learning occurs via something that looks like natural
selection: according to Skinner, for example, behaviors that
are reinforced are maintained, while those that are not
reinforced disappear. But this is nothing more than an
analogy. Darwinian theory, and the evolutionary
psychology that is based on it, assumes genetic
variation, which is operated on by the environment. But
learning doesn't depend on genetic variation. It
depends on cognitive variation -- that is, variation
in the thoughts and behaviors of the individual
organism. This is not within the scope of the Darwinian
paradigm. Learning is, in fact, more Lamarckian, as it
involves the inheritance of acquired characteristics -- but
not through genetic inheritance. The inheritance
that occurs in learning is cognitive and cultural
inheritance, passed through social learning, not through
genetic mechanisms.
Steven Pinker has written (2008): "To
understand human nature, first understand the conditions that
prevailed during most of human evolution, before the
appearance of agriculture, cities, and government". Perhaps.
Then again, it was by virtue of human nature that we invented
agriculture, cities, and government in the first place. Human
nature is not restricted to biological givens: it also extends
to sociocultural constructions.
Top 10 Questions to Ask
Your Local Evolutionary Psychologist
(with Apologies to David Letterman)
I know this is more than 10, but you get the idea:
- Do you eat meat? If so, do you eat it raw? Do you
confine your diet to uncooked foods? If not, why not?
- Does your wife have a 5:7 waist-to-hip ratio? How big
are her breasts? And are they symmetrical? If not, why are
you still with her?
- Has she reached menopause? If yes, why haven't you
divorced her?
- If your wife has not yet reached menopause, do you
practice any form of birth control? If so, why?
- Other than your wife, how many attractive younger women
of childbearing age are you currently having sex with?
Does she mind that you're sleeping with all these other
young women?
- When having sex with these individuals, do you practice
birth control? If yes, why?
- If you have divorced and remarried, have you killed your
new wife's children from her earlier marriage -- or at
least kicked them out of the house with no financial
support? Has your ex-wife's new husband done the same to
the children you had by her? If not, why not?
- Do you ever have sex when your partner isn't ovulating?
If yes, why?
- Do you engage in any form of sexual activity other than
vaginal intercourse? If yes, why?
- Do you always mount your sexual partners from behind,
while they're standing up? If not, why not?
A more scholarly critique of
evolutionary psychology has been provided by David J. Buller,
a philosopher, who lists "Four Fallacies of Pop Evolutionary
Psychology" (Scientific American, 01/2009) -- where
"pop evolutionary psychology" (PopEP) "refers to a branch of
theoretical psychology that employs evolutionary principles to
support claims about human nature for popular
consumption". Here's his list of fallacies:
- "Analysis of Pleistocene Adaptive Problems Yields Clues to
the Mind's Design". PoPEP-ists generally trace the
mind's design to the problem of mate selection, but Buller
points out that the paleontological record "is largely
silent regarding the social interactions that would have
been of principal importance in human psychological
evolution".
- "We Know, or Can Discover, Why distinctively Human Traits
Evolved": The comparative method on which evolutionary
biology relies involves studying species who share a common
ancestor, but which developed different adaptations to deal
with different environments -- as in the case of Darwin's
finches. That works well for finches, which come in
great variety, but it doesn't work for humans, who diverged
from our closest relative, the chimpanzee, about 6 million
ya. The relatives that would allow the comparative
method to work, like australopiths and other hominins, just
aren't around for us to compare ourselves to.
- "Our Modern Skulls House a Stone Age Mind". Some
human traits, like our basic emotions, arose long before the
Pleistocene era; and environmental change since the
Pleistocene has arguably altered human thought patterns as
well.
- "The Psychological Data Provide clear Evidence for Pop
EP". Here Buller criticizes PopEP-ists' failure to
consider alternative explanations. The fact that some
human mental or behavioral characteristic fits an
evolutionary explanation doesn't matter if there is an
alternative explanation that fits better. Consider,
for example, the PopEP claim that men are more upset by sexual
infidelity, while women are more upset by emotional
infidelity. First, this isn't exactly true. This
finding emerges only from forced-choice questionnaires that
require subjects to choose which kind of infidelity upsets
them more. When subjects are asked to rate how
much they would be upset by each kind of infidelity,
men and women come out about even. Moreover, any such
difference may have nothing to do with the environmental
pressures on Stone age hunter-gatherers. The same sex
difference could reflect men's belief that sexual infidelity
in women is generally accompanied by emotional infidelity,
and women's belief that male sexual infidelity is generally
not accompanied by emotional infidelity.
Evolutionary psychologists have been
extremely creative in conjuring up plausible accounts of how
this or that characteristic of human mental life is adaptive
-- or was adaptive in the EEA. But are these anything
more than "Just So" stories, a la Rudyard Kipling? To quote
Richard Lewontin again ("Not So Natural Selection", New York
Review of Books, 05/27/2010):
"The success of evolutionary biology as an
explanatory scheme for its proper subject matter has led, in
more recent times, to an attempt to transfer that scheme to
a variety of other intellectual fields that cry out for
systematic explanatory structure....
"One answer has been to transfer the formal
elements of variation and natural selection to other aspects
of human activity.... We have evolutionary schemes for
history, psychology, culture, economics, political
structures, and languages. The result has been that the
telling of a plausible evolutionary story without any
possibility of critical and empirical verification has
become an accepted mode of intellectual work even in natural
science....
"Even biologists who have made fundamental
contributions to our understanding of what the actual
genetic changes are in the evolution of species cannot
resist the temptation to defend evolution against its
known-nothing enemies by appealing to the fact that
biologists are always able to provide plausible scenarios
for evolution by natural selection. But plausibility is not
science. True and sufficient explanations of particular
examples of evolution are extremely hard to arrive at
because we do not have world enough and time. The
cytogeneticist Jakov Krivshenko used to dismiss merely
plausible explanations, in a strong Russian accent that lent
it greater derisive force, as 'idel specoolations'.
"Even at the expense of having to say 'I
don't know how it evolved', most of the time biologists
should not engage in idle speculations."
For another critical
view of evolutionary psychology, see "It Ain't Necessarily So"
by Robert Gottlieb, New Yorker, 09/17/2012.
Here's an extract:
There are plenty of factions in this newish
science of the mind. The most influential... focuses
on the challenges our ancestors faced when they were
hunter-gatherers on the African savanna in the Pleistocene
era..., and it has a snappy slogan: "Our modern skulls house
a Stone Age mind." This mind is regarded as a set of
software modules that were written by natural selection and
now constitute a universal human nature. We are, in
short, all running apps from Fred Flintstone's
not-very-smartphone. Work out what those apps are --
so the theory goes -- and you will see what the mind was
designed to do.
Evolutionary Change and Cultural Change: The
Case of Violence
Some inkling of the comparative speed
of biological and cultural evolution, with respect to human
experience, thought, and action, is afforded by an analysis of
historical changes in violence by Steven Pinker -- himself a
prominent proponent of evolutionary psychology. In The
Better Angels of Our Nature: Why Violence Has Declined
(2011), Pinker argues from archival data that the rate of
violence among humans has declined radically from the Stone
Age until now. For example, criminal records from the 14th
century indicate that London's homicide rate was about 55
deaths per 100,000 population, compared to 2 per
100,000 today. And London is far from a unique case. Outside
of the United States, for example, capital punishment has
virtually vanished from the western world -- and, despite the
large number of inmates on death row in states like Texas and
California, actual executions are quite rare (OK, maybe not in
Texas). Despite the horrors of World Wars I and II, and
headline-grabbing terrorist bombings, campus shootings,
domestic violence, and gang killings in our inner cities,
Pinker asserts that "The decline of violence may be the most
significant and least appreciated development in the history
of our species".
Pinker's data has been controversial --
the two World Wars were pretty bad; there are persistently
high levels of violence in Africa, Asia, and South America;
and the homicide rate in 21st-century American cities like
Detroit and New Orleans (though not New York City) rivals that
of 14th-century London. But if he's right, this radical and
rapid decline in violence cannot be accounted for by
evolutionary change. After all, as noted earlier, when it
comes to mind and behavior there's been no change in the human
genome since the time of Adam and Eve. So it has to be a
product of cultural change.
Actually, it doesn't have to be a
product of cultural change alone. Pinker notes that, in
addition to innate tendencies toward competition and violence
-- our inner demons -- we also have countervailing innate
tendencies toward cooperation and empathy -- our better angels
(the phrase comes from Abraham Lincoln's first inaugural
address). But Pinker is an evolutionary psychologist, and if
our better angels were innately stronger than our inner demons
there wouldn't ever have been high levels of violence, and
there would have been no historical decline, either -- because
there would have been no high level of violence to decline from.
So, even from an evolutionary-psychological point of view, the
key to the decline in violence has to lie in cultural change.
Compared to 50,000 (or 5,000, or even 500) ya, the cultural
environment has changed to favor our innate goodness rather
than our innate badness. Following the sociologist Norbert
Elias, Pinker calls this "the civilizing process".
So what are the elements
of the civilizing process? Elias thought that it consisted of
the elevation of state power over feudal loyalty, and also the
development of commerce. Pinker's view is more elaborate. In
addition to four innate better angels (like cooperation and
empathy), and five innate inner demons (like competition and
violence), he identifies six trends and five historical forces
that have fostered the decline of violence. For example:
- The emergence of some form of centralized ruling
authority, culminating in modern state power, put an end
to the "war of all against all".
- In Leviathan (1651), the British political
philosopher Thomas Hobbes pointed out that in the
absence of a state, life is "nasty, brutish, and short".
- The rise of cities demanded that people adhere to
stricter codes of conduct -- which, once internalized,
literally changed individuals' psychology.
- The spread of literacy expanded the "circle of empathy".
- The rights movements of the 20th century -- the
expansion of male suffrage, women's rights, civil rights,
gay rights, even animal rights -- redefined much violent
behavior -- beating your wife or your dog -- as
antisocial behavior.
- Trade transformed potential enemies into paying
customers.
- Democracy, and especially the concept of minority
rights, incorporated the peaceful resolution of conflict
through compromise.
- Individual thinking generalized into collective
rationality, and the recognition of others as rational
agents who deserved to be treated the way we would wish
ourselves to be treated (i.e., the Golden Rule).
Ever the evolutionary psychologist,
Pinker asserts that this changing environment selected for
our better angels over our inner demons. But he neglects the
simple fact that this changing environment was, itself, the
product of human cognitive activity -- the collective
rationality through which we built a world that would
reinforce these innate tendencies. Evolutionary psychology
has to embrace a Darwinian notion of the organism as,
essentially, passive in the face of the environment --
traits evolve precisely because they were selected by an
environment that changed autonomously. The dinosaurs didn't
make the asteroid whose collision with Earth rendered them
extinct, and gave mammals a chance. But we made the cultural
environment that increased the power of our better angels
over our inner demons.
This is the Doctrine of Interaction
on a large scale: We are not creatures of our environments.
We make the environments in which we live.
There is actually a
third view of development, the cultural point of view, which
is concerned with the effect of social development on the
development of the individual's mind. In general, psychology
has tended to ignore sociocultural differences in mental
life. Psychology is universalistic, in that it assumes that
the basic principles of mental functioning are found in all
normal adult humans; and it is particularistic, in that it
assumes that the course of individual lives reflect
individual differences in knowledge and skills. But
cognitive anthropology and sociology take on the task of
understanding how social and cultural processes affect what
we know and how we know it. Cognitive anthropology (also
known as anthropological psychology or cultural psychology)
arose in the late 19th century, with an interest in
characteristic patterns of thought associated with people in
so-called "primitive" and "advanced cultures. In the context
of late 19th-century European imperialism (and its American
counterpart in the drive westward towards "Manifest
Destiny"), cognitive anthropology essentially studied the
differences between conquerors (e.g., British, American) and
the conquered (e.g., Arabs, Africans, and Native Americans).
But setting politics aside, there are other aspects of
cultural development that might also affect individuals'
mental processes:
- literacy, or the proliferation of written language
(not to mention, more recently, the proliferation of
electronic media such as radio, television, and the
Internet);
- economic development, as in the progression of
societies from hunter-gatherer through agricultural and
industrial to "post-industrial" forms of social
organization (this definition of development was
especially pursued by psychologists in the Soviet Union,
such as Lev Vygotsky, but it has since outgrown its
Marxist overtones).
- modernization, typically defined in terms of two
dimensions: traditional beliefs versus secular
rationalism; and a concern with survival and physical
security versus a concern with self-expression
(Inglehart & Baker, American Sociological Review,
2000).
The Three- (Maybe Four-) Age System
In cultural terms, H.
(sapiens) sapiens are "Stone Age" humans:
- they made needles, handles, fishhooks;
- they hunted with harpoons;
- they used mechanical devices to throw spears.
- Perhaps as long as 77,000 ya, they left engraved pieces
of ochre in Blombos Cave, in what is now South Africa, that
display abstract, symmetrical, geometric designs that may
represent the earliest known art.
- About 40,000 ya, they left the wall-art in the caves
at Altamira (Spain) and Lascaux (France).
The
term "Stone Age" refers to a "three-age" system for
organizing human prehistory, based on the use of tools,
introduced by C. J. Thomsen, a Danish archeologist, in the
early 19th century. The three canonical ages are the Stone
Age, Bronze Age, and Iron Age. The Stone Age is further
subdivided into early, middle, and late periods. Sometimes a
fourth age, the Copper Age, is interpolated between the
Stone Age and the Bronze Age (see "Complex Behavior
Arose at Dawn of Humans" by Ann Gibbons, Science,
03/16/2018).
As their names imply,
the prehistoric ages are determined largely by the kinds of
tools in use, but in fact the ages also carry broad
implications for social organization.
- The Stone Age. These humans were
hunter-gatherers, living a mobile lifestyle close by
sources of water. They made tools by hand from sharpened
stones, bones, reeds, branches, and other objects found
in nature.
- Paleolithic (Early Stone Age) peoples lived in small
bands of up to 100 people.
- Mesolithic (Epipaleolithic, or Middle Stone Age)
peoples divided into tribes and bands as their
population increased.
- Neolithic (Late Stone Age) peoples domesticated
animals and otherwise began the transition to
agriculture on stable farmsteads, and a hierarchical
social organization based on the tribal chief.
- The Bronze Age. These humans fashioned tools
from copper and then bronze alloys, and used a potter's
wheel (as opposed to their hands) to make pottery.
Bronze age agriculture involved the deliberate breeding
of livestock, as well as the beginnings of trade.
- The Iron Age. These humans used -- well, duh!
-- iron. Social development included the emergence of
cities and city-states.
- One of the earliest of these cities is Catalhoyuk,
on the Konya Plain in central Turkey, near the
present-day city of Konya, established in roughly
7,000 BCE. For an overview of this Neolithic
archeological dig, see "The Origin of Home" by Annalee
Newitz, Scientific American, 03/2021.
See also "Women and Men at Catalhoyuk" by Ian Hodder,
Scientific American 01/2004.
The three-age system was developed to
organize our understanding of European history, though a
similar progression, with some glitches, can be found
outside Europe as well. Although we know little about the
mental lives of prehistoric people, the general thrust of
evolutionary psychology is that the heuristics, errors, and
biases that litter modern thought processes are, in fact,
evolutionary holdovers from prehistoric times. In other
words, these patterns of thought evolved precisely because
they aided survival in the EEA.
Paleolithic Cave Art
The meaning of paleolithic cave art remains
a mystery. The most common interpretation is that it has a
spiritual or supernatural nature. Other authorities
suggest that much of it, especially the paintings of
genitalia and other sexual anatomy, is the work of
adolescent boys with too much time on their hands.
Two recent books that review the
controversy are:
- The Cave Painters: Probing the Mysteries of the
World's First Artists by Gregory Curtis (Knopf,
2006), which tends toward the conventional view;
- The Nature of Paleolithic Art by R. Dale
Guthrie (Chicago, 2006), which argues for the
revisionist view.
For succinct coverage of the controversy, see "Secrets of
the Cave Paintings" by William H. McNeill (
New York
Review of Books, October 19, 2006).
One thing is
for certain: at least by 40,000 ya, something similar to
the modern mind had emerged. This is indicated not
just by the "representational" cave paintings such as
found at Altamira, Lascaux, and other sites, but also by
works of imagination. A prime example is "The Lion
Man", uncovered in southwest Germany and now in the
British Museum, in which a mammoth tusk has been carved
with a lion's head and a human body. The Altamira
painters might have seen the bison, horses, and
deer that they painted on the cave walls; but they never
saw anything like a man with a lion's head. They had
to have imagined it. At the time of its
discovery, the "Lion-Man" was the earliest work of the
imagination that has been preserved.
Meanwhile, new archeological techniques have pushed the
dates for the earliest cave art somewhat backwards in time.
Employing a "radium-thorium" dating technique that is, by
virtue of the chemistry involved, more reliable and precise
than the traditional radiocarbon technique for the dating of
older antiquities, a group led by Alistair Pike has dated
some cave art in El Castillo, in Spain, to 40,800 ya, and the
cave art at Altamira to at least 35,600 ya. Bone
flutes discovered in a cave in Germany have been dated
to as early as 43 thousand ya; and figurative sculptures
discovered in Germany have been dated to as old as 40,000
ya.
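For those curious about the arithmetic behind decay-based dating: the age follows from the measured proportion of a radioactive isotope and its half-life. The sketch below uses radiocarbon (half-life about 5,730 years) because it is the simplest case; uranium-thorium dating rests on the same exponential law but tracks the build-up of thorium-230 in the calcite crust that forms over a painting, which is why it can reach much further back in time. The 1% figure in the example is arbitrary, chosen only to show a date near the limit of the radiocarbon method.

```python
import math

# Radiocarbon dating as the simplest illustration of decay-based dating:
# the age follows from the fraction of the parent isotope remaining.
C14_HALF_LIFE = 5_730.0                       # years (conventional value)
decay_constant = math.log(2) / C14_HALF_LIFE  # lambda, per year

def age_from_remaining_fraction(fraction_remaining):
    """Age in years, given the fraction of carbon-14 still present."""
    return -math.log(fraction_remaining) / decay_constant

# A sample retaining about 1% of its original carbon-14 is near the
# practical limit of the method, and comes out around 38,000 years old.
print(round(age_from_remaining_fraction(0.01)))   # ~ 38,069
```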
The conventional wisdom is that these
discoveries indicate that the first flowering of human
creativity -- the emergence, if you will, of the
characteristically human mind -- occurred about 40,000 ya,
and took off from there (and then).
And,
importantly, not just in Europe. In 2018, a team of
archeologists discovered a set of cave paintings in Borneo
-- hand prints and depictions of a kind of cattle, among
other things -- that are more than 40,000 years old,
as determined by an alternative form of dating based on
the decay of uranium into thorium. One painting,
part of a panel that is about 5 yards wide, depicts
a local buffalo being hunted by creatures who share some
human and some animal features (much like the "Lion-Man"
above). The point isn't that imaginative, narrative
art emerged in Borneo before it emerged in Europe (though
that may have been the case, as some of the images may be
as old as 52,000 years). The point is that art
emerged at roughly the same time on opposite sides of the
globe (for more detail, see "The First Story" by Kate
Wong, Scientific American, 03/2020).
On the other hand, recent discoveries from
Africa appear to push back the date for the emergence of
human creative thinking. European cave art may go
back about 41,000 years, but there is other evidence of
human technological and artistic creativity dating from
much further back in time -- in Africa. Flaked-stone
tools have been uncovered in Ethiopia dating back 2.6
million years, and at Turkana, in Kenya, 1.76 million ya,
and in South Africa to 500,000 ya; there is evidence
that cave dwellers in South Africa controlled fire 1
million ya. And a block of red ocher marked with
completely non-functional parallel lines and
cross-hatchings, dating to 75,000-65,000 ya, was discovered
in the Blombos Cave in South Africa in 2000.
Admittedly, the evidence from before 40,000
ya is scattered and spotty, but it's there. The
revisionist interpretation is that technological and
artistic creativity first emerged in Africa more than a
million ya, but didn't really begin to bloom until much
more recently, when human population density reached a
critical point -- which happened to occur in Europe, not
Africa (see "the Origins of Creativity" by Heather
Pringle, Scientific American, March 2013).
Except,
except.... In 2017 Basran Burhan, an Indonesian
amateur archeologist working with a team of professionals
searching for evidence of Paleolithic settlement, took the
road not traveled into a hidden valley inhabited by a
tribe which claimed never to have seen a Westerner before
(sounds like a movie). There he entered a cave and came upon
a wall painting of a warty pig (a boar, kind of like a
javelina, only bigger), along with some hand silhouettes;
the painting was eventually dated to 45,500 years ago,
making it the oldest known example of figurative cave art.
Exploring other caves yielded an abundance of such images
(Burhan is now a graduate student in Australia).
Paleoarcheology is pretty Eurocentric, and
initially several professional journals refused to
publish the research. But the evidence is now clear
to everyone. While there's no point in debating who
did what first, as there are always new caves to explore,
and new discoveries to be made, figurative art did not
arise exclusively in Europe, as had previously been
thought, but seems to have arisen independently in a
number of locations -- with apologies to Mao
Tse-tung, a "great leap forward" in human culture that
occurred somewhere in the interval between 30,000 and
60,000 years ago, and probably closer to the latter than
the former.
Paleoindians first crossed the Bering
Strait to the Americas at least 12,000 ya (the date given
to the "Clovis" spear points found in New Mexico), quickly
migrating all the way down to the tip of South
America. And they left rock art along the way.
The oldest known rock art in the Americas has been found
in the Chiribiquete area of the Amazon -- more
than 75,000 paintings, some dating to 20,000 ya (see
"The Amazon's First Storytellers" by Thomas Peschak,
National Geographic, 07/2023).
Stay tuned for further revision!
For an illustrated timeline of paleolithic
art, see "First Artists" by Chip Walter, National
Geographic, January 2015.
Epochs in Human History
Just as it seems likely that human
mental life changed with progress from the Stone Age to the
Bronze Age, it is also a reasonable hypothesis that patterns
of thought continued to change with further economic,
political, and social development.
Some of the milestones of historical
development have already been listed -- literacy, industrial
capitalism, and modernism.
Here are some other
possibilities:
- The Ancient Era (roughly up to the sack of Rome in the
5th century CE)
- The Middle Ages (5th-15th c.)
- Early Modern Period (14th-18th c.)
- Modern Era (18th-20th c.)
- Post-Modern Era (since World War II)
Again, these epochs were developed
with reference to Europe, but analogs can be found outside
of European culture. It is entirely possible that, in
significant ways, people who lived in these eras thought
differently than we do.
The cultural view of development,
which holds that societies and cultures develop much like
species evolve and individuals grow, has long been popular
in other social sciences, such as economics and political
science. To a great extent, the origins of the
cultural view of development can be traced to the writings
of Karl Marx. Originally, Marx argued that all
societies went through four stages of economic
development. Later, working with Friedrich Engels, he
added two other stages:
- tribal,
- ancient,
- feudal,
- bourgeois [capitalist],
- socialism, and
- communism.
But long before Marx, the 18th-century
philosopher Giambattista Vico (1668-1744) argued that
history proceeded in repeating cycles of three stages
(actually, the pattern is more like a spiral, because
history does not repeat itself exactly, although the general
theme does):
- The age of gods, characterized by the emergence of a
"family state" governed by a patriarch who holds absolute
power (in recent times, think of the early Christian
Church).
- The age of heroes, characterized by aristocratic
commonwealths (think of the heroic warriors of medieval
Christian Europe).
- The age of man, characterized by the rise of democratic
republics, which eventually generates unrest and disorder,
leading to a new barbarism that starts the cycle all over
again (think of the Enlightenment in Europe).
In 1960, the American economic
historian W.W. Rostow offered a non-Marxist alternative
conception of "The Stages of Growth" (the title of his
book):
- the traditional society
- the preconditions for take-off,
- the take-off,
- the drive to maturity; and
- the age of high mass-consumption.
Along the same lines, in 1965 A.F.K.
Organski, a comparative political scientist, proposed four
stages of political development:
- the politics of primitive unification,
- the politics of industrialization,
- the politics of national welfare, and
- the politics of abundance.
Most recently, Francis Fukuyama traced
political development through a series of stages in The
Origins of Political Order: From Prehuman Times to the
French Revolution (2011); a second volume,
Political Order and Political Decay: From the Industrial
Revolution to the Present Day (2014), tracks political
development since the French Revolution of 1789.
According to this view:
- Primitive hunter-gatherers inherited the violent
tendencies of their primate forebears, which forced them
to gather together into small, protective social groups.
- From these beginnings emerged tribes (and religion,
first in the form of ancestor worship).
- Then came organized states -- first led by warlords, later by
hereditary kings; first city-states, and later nation-states.
- The rule of law emerged when monarchies were made
accountable to elected bodies (as in England's Magna
Carta).
- Democratization, as well, proceeded along a series of
stages. These, in turn, were traced in a book by
Fukuyama's mentor, Samuel P. Huntington (who famously
predicted a "clash of civilizations" between Christianity
and Islam).
- The phases of political development are, in Fukuyama's
view, independent of corresponding stages of economic and
cultural development. When liberal democracy is
coupled with a market-oriented economy, you get what
Fukuyama described, in an earlier book, as "The End of
History". That is to say, no further development is
possible, because there's nothing left to develop
toward. In this respect, Fukuyama departs from Marx,
who believed that political and economic evolution would
end with communism.
Stage theories of political and
economic development are about as popular in social science
as stage theories of cognitive or socio-emotional
development have been in psychology! Note, however,
the implications of the term development, which
suggests that some societies are more "developed" -- hence,
in some sense better than others. Hence, the
familiar distinction between developed and undeveloped
or underdeveloped societies. This implication
is somewhat unsavory, just as is the suggestion, based on a
misreading of evolutionary theory, that some species (e.g.,
"lower animals") are less developed than others (e.g.,
humans). For this reason, contemporary political and
social thinkers often prefer to talk of social or cultural diversity
rather than social or cultural development, thereby
embracing the notion that all social and cultural
arrangements are equally good. This emphasis on diversity
is also characteristic of modern social and cultural
psychology.
From Cultural Development to Cultural
Psychology
In addition to the new evolutionary
psychology, a new cultural psychology is emerging that
addresses cultural differences in thought processes without
necessarily implying that one culture is more or less
"developed" than another.
- In some ways, cultural psychology has its origins in
19th century imperialism, as researchers from the
countries of Europe tried to understand how the thought
patterns of those they colonized, in Africa, Asia, and
the Americas, might differ from their own.
Naturally, this quest for understanding the "primitive"
or "savage" mind often had more than a little tinge of
racism in it -- at the very least, the investigators
seem to have been under the sway of cultural stereotypes
of the people they studied. One popular
hypothesis, for example, was that the "primitive mind"
might be more susceptible to visual illusions.
- A
famous example is the 1898 Torres Straits Expedition,
to islands between Australia and New Guinea then under
British control, in which W.H.R. Rivers, a pioneering British
psychologist, took part. In fact, Rivers and his colleagues
found only minimal differences in sensory
acuity between the Torres Strait Islanders and European
control subjects. Of course, you wouldn't expect to
find much cultural variation in mental functions like
perception, which lie so close to the physiology that
we all share in common. And it turned out that
while Torres Strait Islanders were indeed more susceptible to
the horizontal-vertical illusion, they were less
susceptible to the Muller-Lyer illusion.
- In the 1890s, psychologists really didn't have the
methods or equipment to study the "higher" mental
functions, where such differences might be
observed. Nor, for that matter, did they have
any philosophical warrant to do so. Recall from
the Introduction
that Wilhelm Wundt, the leading psychologist of the
time, denied that anything other than sensation and
perception were amenable to controlled experimental
investigation.
- The first
cross-cultural investigation of "higher" mental
functions was undertaken by F.C. Bartlett, he of the
"War of the Ghosts" and the "Reconstruction Principle"
of Memory, who
studied herdsmen in Swaziland, a British colony in
southern Africa (Bartlett had been a student of Rivers).
Bartlett found that these herdsmen had extraordinarily
good memory where their cattle were concerned -- an
outcome he interpreted in line with his reconstructive
theory of memory. In his view, this superior
memory reflected the "strong sentiments" that people
develop around institutionalized, culturally valued
activities. Cattle are more important for Swazi
herdsmen than for Cambridge undergraduates, and this
interest led to superior memory. But notice that
culture didn't change the reconstruction principle
itself -- it only determined what the subjects were interested
in.
- Cultural psychology was also stimulated by the rise of
Marxism, and the view that political and economic
changes in society would alter how individuals
thought. Chief among these theorists were
Aleksandr Luria and Lev Vygotsky, much of whose work was
published in Russian in the 1930s, and translated into
English only much later. Especially important in
this respect are Vygotsky's essays on Mind and
Society, translated and edited by Michael Cole in
1978.
- Vygotsky's general law of cultural development
states that "Any function in children's development
appears twice or on two planes. First it appears
on the social plane and then on the psychological
plane. First it appears between people as an
interpsychological category and then within the
individual child as an intrapsychological
category.... Social relations or relations among
people genetically underlie all higher functions and
their relationships ("The Genesis of Higher Mental
Functions", 1981, p. 163).
- Vygotsky emphasized the importance of social
interaction in learning. He defined the child's
zone of proximal development as "the distance
between the actual developmental level as determined
by independent problem solving and the level of
potential development as determined through problem
solving under adult guidance, or in collaboration with
more capable peers" (Mind in Society, p.
86). In contrast to Piaget, who believed that
children must move from one stage of development to
another on their own, through their own discovery
learning, Vygotsky insisted that cognitive development
proceeded best when adults actively supported and
promoted the child's learning. But they shouldn't push
the child so far as to go beyond the boundaries
of the child's zone of proximal development.
- If none of this sounds particularly Marxist, you'd
be right. But if you were a Russian psychologist
living in Stalin's Soviet Union, you'd give your
theory of culture and cognition a Marxist twist too.
- One of the principal hypotheses of the early cultural
psychology was that there would be differences in
patterns of thought between literate and non-literate
cultures. As noted in the lecture on Language, words are
a powerful medium for representing knowledge, and syntax
is a powerful tool for thinking. But, as far back
as Plato, philosophers had speculated that the
availability of a written language might change how
people thought (in particular, Plato thought that
writing would wreck memory, because people would no
longer have any need for it).
- Scribner
and Cole (1981) studied the Vai people of Liberia, a
tribal group which had developed its own idiosyncratic
written language, otherwise rare in the tribal
cultures of Africa. The majority of Vai men
(and almost all women) are illiterate, but some know
written Vai, others also know Arabic (e.g., from
Koranic schooling, in which students are taught only
to memorize the Koran), and still others learned
English via formal schooling. Literacy in Vai
improved performance on a number of cognitive tasks,
but English literacy, as a product of formal
schooling, had even stronger effects. Koranic or
Arabic literacy had few positive effects, which means
that formal schooling, not literacy per se, was really
responsible for most of the apparent cognitive effects
of literacy.
- Scribner and Cole summarize their findings as
follows: "Literacy makes some difference to some
skills in some contexts" (1982, p. 234).
- And, of course, some evidence of cross-cultural
differences comes from studies of the Sapir-Whorf
hypothesis, also discussed in the lecture on Language. On
the assumption that language is a reflection of culture,
evidence for the Whorfian hypothesis becomes evidence
of cross-cultural differences.
Notice, however, that there's a
difference between studying the effects of language on
cognition and studying the effects of literacy. While
not everybody's literate, everybody's got language -- it's
part of being human. So while it's easy to
characterize literacy as an aspect of cultural development,
it's not so easy, or even appropriate, to imply that, for
example, speakers of English are any more "developed", just
by virtue of knowing English, than speakers of the Vai
language. So with the Sapir-Whorf hypothesis,
interest shifts from cultural development to
culture per se.
Partly as a result of increasing
cultural diversity in America and Europe, and increasing
appreciation of cultural differences, recent years have seen
a great increase in interest in cross-cultural psychological
research -- but without the implication that one culture is
more "developed" than another. Cultures, in this view,
are just different, and these differences are
psychological as well as behavioral, affecting how people
think. For a review of the early literature in this
area, see Triandis & Brislin (1984). This
literature has focused mostly on cultural differences in
social interaction, rather than in "pure" cognition or
emotion.
Certainly the most popular cultural
difference studied today has been characterized on a
dimension of individualism vs. collectivism (e.g.,
Triandis et al., 1988; Triandis, 1996).
- In individualistic cultures, such as Western Europe,
North America, and Australia, people view themselves as
independent individuals whose behavior is guided by
their own attitudes, beliefs, and interests.
- They foster an "independent" sense of self (Markus
& Kitayama, 1991).
- And they promote "linear", "analytic", and "categorical"
modes of thinking (Nisbett, Peng, Choi, &
Norenzayan, 2001).
- In collectivist cultures, such as China, Japan, and
South Asia, as well as many parts of Africa and Latin
America, people view themselves as intimately connected
to their communities, with their behavior highly
responsive to the expectations of others and other
situational demands.
- They foster an "interdependent" sense of self.
- And they promote "holistic" or "dialectical"
thinking.
There is some evidence to support
these propositions, chiefly from studies comparing Chinese
or Japanese subjects with Americans. However, these
conclusions should be qualified. There is plenty of
variability within cultures, especially given the
opportunity for cultural contact. It is not clear that
a third-generation Japanese-American undergraduate at
Berkeley, for example, thinks any differently than her
Polish-American roommate. Sometimes, you get the feeling
that some cultural psychologists have a stereotyped vision
of the "exotic" cultures that interest them. If early
cultural psychology was sometimes motivated by racism, the
later version sometimes smacks of what Edward Said called Orientalism.
Not to put too fine a point on it: the
notion of "dialectical" thinking, in which the confrontation
between a thesis and its antithesis is resolved by a
synthesis, has its origins in European philosophy,
particularly Hegel and Marx. And perhaps the most
linear, categorical thinker of all time was Mao Tse-tung, a
Chinese leader who read both Marx and Hegel and -- again, not to
put too fine a point on it -- imprisoned or killed everyone
who disagreed with him.
Cultural differences are most
frequently cast in terms of developed vs. underdeveloped
cultures, or Eastern vs. Western cultures, but such
differences can be seen even within one of these categories.
- Within American culture, for example, Cohen and
Nisbett (1995) described a "culture of honor"
characteristic of the Old South (i.e., the states of the
former Confederacy) which differed in significant ways
from the culture of the North.
- Along the same lines, Hazareesingh (2015) has listed
five features that distinguish French thinking from that
characteristic of other countries:
- The use of history to structure reasoning.
- Importance of the nation and collective identity as
"French".
- Intense public debate about ideas.
- The role of the public intellectual in society.
- Give-and-take between rationality and creative
imagination.
From its beginning, scientific
psychology has been based on empirical findings obtained
from subjects of Western European heritage -- the
19th-century psychophysicists were Germans, after all, as
was Ebbinghaus; and as scientific psychology gained strength
in the 20th century, most of its subjects were American
college students -- who, themselves, were mostly white,
mostly of European heritage, and mostly relatively wealthy
(at least, from the middle class or above). In a
provocative article, Joseph Henrich, Steven Heine, and Ara
Norenzayan labeled these subjects
WEIRDos -- for
Western, Educated, Industrialized, Rich, and Democratic
(Behavioral & Brain Sciences, 2010). They suggested that
psychological theory had been distorted by excessive
reliance on them, and that psychologists should expand their
subject populations to include non-WEIRDos -- and, perhaps,
to repeat classic experiments, on which so much theory has
already been established, on them as well. This idea
seems particularly relevant to personality, social, and
developmental psychology (we'll discuss culture-specific
syndromes of mental illness in the next lectures, on
Psychopathology and
Psychotherapy) -- though even such basic phenomena as
the Ebbinghaus illusion, discussed in the lectures on
Sensation and Perception,
appear to vary across cultures.
For introductions to cultural psychology, see the
following books by Prof. Michael Cole of UCSD, who --
along with Harry Triandis, at the University of Illinois
-- is the doyen of cultural psychology:
- Culture and Thought: A Psychological
Introduction by Michael Cole & Sylvia
Scribner (1974)
- Cultural Psychology: A Once and Future
Discipline by Michael Cole (1996).
This page last
revised
11/07/2024.