The
cognitive perspective on social interaction begins with the
assumption -- actually, more like an axiom -- that humans are
intelligent creatures. We do not behave merely by reflex,
taxis, instinct, and conditioned response. Rather, our
behavior is a response to the meaning of the stimulus,
and reflects active cognitive processes of perceiving, learning,
remembering, thinking, and communicating through
language. But humans are also social creatures. Our
experiences, thoughts, and actions take place in an explicitly social context of cooperation, competition, and exchange; of family and group memberships; and of institutional, social, and cultural structures. For that last reason, psychologists need
to understand the relations between psychological processes
within the individual and social processes that take place in
the world outside.
This cognitive perspective is
echoed in a famous
line from Harper Lee's To Kill a Mockingbird (1960). The novel
involves a widowed Southern lawyer, Atticus Finch, and his two young children, Scout
and Jem. After a particularly difficult first day of school, Atticus tells Scout: "You never really
understand a person until you consider things from
his point of view -- until you climb into his skin
and walk around in it".
We'll
see a little later how understanding the subject's
point of view can shed new light on a certain classic
psychological experiment.
While much of cognitive psychology is concerned with how the individual acquires, represents, and uses knowledge in perception, memory, thought, and language, social psychology is concerned with the role that cognitive (as well as emotional and motivational) processes play in social interactions -- between individuals, and between individuals and groups. As such, social psychology is not just the study of mind in social relations -- it is also very much the study of mind in action.
Social
psychology is part of psychology, and the history of psychology
begins with declarations of its impossibility as a
science. In the 17th century, Descartes had drawn a sharp
distinction between humans and animals -- humans had minds,
whereas animals operated solely by physical (physiological)
mechanisms like the reflex. But according to Descartes'
doctrine of substance dualism, mind was made of an
immaterial substance. In the later 18th century, Kant drew
the inescapable conclusion:
Kant may
have been the smartest man of his time, but his assertions about
psychology were quickly disproved, in the 19th century, by
studies of sensation and perception.
So, by
1867-1868, a full-fledged science of psychology was off and
running.
Wundt had
held that an experimental, quantitative psychology had to be
limited to "immediate" experience, by which he meant sensation
and perception. But very quickly, the domain of
experimental research in psychology expanded:
Many of
these experiments involved only a few subjects. However,
researchers began to recognize important individual differences
in mental function:
While Kant
had declared psychology an impossible science, Wundt doubted
that topics in social psychology could be studied with the new
science. Wundt distinguished between Naturwissenschaft, including studies of sensation and perception, and Geisteswissenschaft, including everything else. He also distinguished between experimental psychology, which could be accomplished in the laboratory, and Volkerpsychologie, which had to be based on uncontrolled field studies. Just as the
psychophysicists had proved Kant wrong, and just as Ebbinghaus
and Hull had proved Wundt wrong about higher cognitive
functions, Wundt was to be on the losing side again.
The arc begun by the psychophysicists' studies of sensation and perception was completed by Sherif's studies of social influence on the autokinetic effect (1935), and Asch's (1956) studies of conformity in perception.
As
experimental research in personality and social psychology
began, specialized societies were formed to encourage this work,
and specialized journals were established to publish it:
Throughout
all this history, we can identify several milestone crises and
challenges:
Over the
years, social psychology has been defined in various ways:
The definition I favor is broader than any of these:
Social
psychology is the study of the relation between
the individual's internal mental structures and processes and
structures and processes that exist in the social world
outside the individual.
Following
Gleitman (1980), we can further identify four distinct domains
of social interaction:
There are,
in fact, two rather different versions of social psychology --
one as practiced by psychologists, the other as practiced by
sociologists.
The cognitive perspective in social psychology has its origins in symbolic interactionism, a term coined by Herbert Blumer (1937, 1989), a student of George Herbert Mead; Blumer went on to found the sociology department at UC Berkeley. In Blumer's view, symbolic interactionism rests on three premises:
As Blumer
makes clear, symbolic interactionism itself is rooted in the
work of Blumer's mentor, George Herbert Mead, author of the
seminal treatise Mind, Self, and Society (1934). A
number of Mead's concepts, as described by Blumer (1989), will
illustrate the connection:
But the cognitive perspective has roots that go back even further than Blumer and Mead, to what R.K. Merton (1976) has dubbed The Thomas Theorem, which appeared in a book on adolescence by William Isaac Thomas and Dorothy Swain Thomas (1928, p. 529):
"If men define situations as real, they are real in their consequences."
Merton (1976, p. 174) has called this quote "Probably the single most consequential sentence ever put in print by an American sociologist".
Although The Thomas Theorem is properly attributed to Thomas and Thomas writing together, its essence had been articulated by W.I. Thomas, writing alone, some five years earlier (Thomas, 1923, pp. 42-43):
"Preliminary to any self-determined act of behavior there is always a stage of examination and deliberation which we may call the definition of the situation. And actually not only concrete acts are dependent on the definition of the situation, but gradually a whole life-policy and the personality of the individual himself follow from a series of such definitions."
Similarly, Theodore Newcomb, discussing the findings of his pioneering study of the consistency of social behavior across situations, which actually found precious little consistency, attributed the individual's behavior in a particular situation to his beliefs about that situation (1929):
"There are always slight differences in both internal and external stimuli which are important in determining behavior, yet are not recordable.... situations are necessarily so different that large measurable consistency is not to be expected" (pp. 77).
"To cite an obvious example, whether or not Johnny engages in a fight may depend on whether or not he thinks he can 'lick' his opponent" (p. 39, emphasis added).
The cognitive perspective in psychology was neatly summed up by a British psychologist, Sir Frederic C. Bartlett, in his critique of classical psychophysics and of Ebbinghaus' research on memory:
"The psychologist, of all people, must not stand in awe of the stimulus."
For Bartlett, perception was not merely the analysis of a stimulus object or event; rather, perception involved the construction of a mental representation of the stimulus. And memory was not merely the reproduction of some past event; rather, remembering involved the reconstruction of that event. Both construction and reconstruction involved "higher" cognitive activities such as reasoning, inference, judgment, and problem-solving -- what Bartlett called "effort after meaning".
But that was then. Just as social psychology was beginning to get going, the behaviorist revolution initiated by John B. Watson (1913, 1919) took hold in psychology in general, and in social psychology in particular. Whereas William James had defined psychology as the science of mental life, Watson saw a conflict between the private and subjective nature of mental life, and the requirement of science for objective, publicly observable facts. Accordingly, he redefined psychology as a science of behavior, and restricted a scientific analysis to publicly observable environmental stimuli and publicly observable behavioral responses to them.
For
Watson:
The behaviorist perspective on human behavior, including human social behavior, can be summarized by a Doctrine of Situationism expressed most vigorously by B.F. Skinner.
For Skinner, as for Watson:
The behaviorist perspective was quickly
embraced by social psychology, particularly in an important
textbook by Floyd Allport (1924).
Very quickly, and especially in the years after World War II, social psychology -- and especially American social psychology -- evolved as a variant on functional behaviorism. On this view, environmental control was equated with stimulus control. Cross-situational variability, not cross-situational consistency, was to be expected in behavior, depending on the individual's reinforcement history, and on the conditioned stimuli and discriminative stimuli present in the environment. Throughout this period, the emphasis of American social psychology was on the situational control of individual behavior -- by which social psychologists meant the objective situation, not the situation as defined, as cognitively constructed, by the individual.
This can be seen in the classical definition of social psychology, offered by Floyd Allport's younger brother Gordon (G. Allport, 1954, p. 5):
"With few exceptions, social psychologists regard their discipline as an attempt to understand and explain how the thought, feeling, and behavior of individuals are influenced by the actual, imaged, or implied presence of other human beings.... [S]ocial psychology wishes to know how any given member of a society is affected by all the social stimuli that surround him."
G. Allport was not a strict behaviorist: he was interested in thoughts and feelings as well as in behavior. But still, the behaviorist view can be seen in his emphasis on thoughts, feelings, and behaviors as responses to stimuli impinging on the individual from the external social environment.
Whereas
the behaviorist approach conceived of social behavior as a
more-or-less mechanical (conditioned or unconditioned) response
to stimuli in the social environment, the first glimmerings of a
cognitive approach began to emerge in the 1950s.
The first
reaction to the behaviorist viewpoint came by way of Gestalt
psychology, a movement led by Kurt Koffka, Wolfgang Kohler, and
Max Wertheimer, which arose in Europe as a reaction to the atomism of classical 19th-century Structuralism, with its emphasis on stimulus determination. But Gestalt psychology also had appeal in opposition to early 20th-century behaviorism, especially of the kind espoused by Watson, with its atomistic description of both stimulus and response. Gestalt,
of course, roughly translates as "whole configuration", and the
Gestalt theorists focused on the tendency of the mind to
organize individual stimuli into groups or sets -- in broader
terms, to fuse individual stimulus elements into a perceptual
whole. From a Gestalt point of view, we cannot analyze
perceptual experience into its elementary constituents (as the
Structuralists sought to do), because the individual elements
interact and combine with each other in such a way that "the
whole is different than the sum of its parts". The
Gestalt principles of perception, such as proximity,
similarity, and symmetry, made it clear that
perception was not determined solely by the stimulus, but also
by internal processes. Similar principles were applied to
memory, such as the von Restorff effect, that memory is
better for stimuli that stand out against their background; and
also to thinking, such as Kohler's own studies of insight
in problem-solving. Gestalt psychology, along with
(honest!) psychoanalytic object-relations theory, kept interest
in cognition alive during the dark days of behaviorist hegemony
-- and what it did for "experimental" psychology, it also did
for personality and social psychology.
In fact, by the mid-1950s cognitivism was visible enough that Martin Scheerer was commissioned to write a whole chapter on the approach for the first edition of Lindzey's Handbook of Social Psychology (1954). Scheerer is largely forgotten now, but he and Kurt Goldstein had published an important monograph on Abstract and Concrete Behavior (1941), in which they distinguished between two types of behavior, abstract and concrete, in turn dependent on corresponding abstract and concrete attitudes -- which, in turn, they construed as "capacity levels of the entire personality" (p. 1). People have abstract or concrete attitudes, in differing degrees, and these attitudes determine how they will behave.
The
Gestalt viewpoint was initially brought into social psychology
by Kurt Lewin, especially in the papers from the 1930s and 1940s
collected as Field Theory in Social Science
(1951). Lewin argued for a dynamic psychology in which
behavior was determined by various psychological forces (Lewin
was especially interested in conflict). For our purposes,
the most important idea is that of the Life Space, which
consists of the "Gestalt" of the person and the psychological
environment -- by which Lewin meant not the physical
environment, as it might be described by the behaviorists, but
rather the perceived environment -- or, better yet, the
meaning of the environment, which of course was grist for
the cognitive mill.
A more explicitly cognitive take on social interaction was supplied by Fritz Heider in his Psychology of Interpersonal Relations (1958). Heider agreed with Lewin about the interdependence of the person and the situation, and that what was important about the situation was how the person viewed it. He focused on "common-sense psychology", or what we would now call folk psychology -- the views about mind and behavior held by ordinary people on the street, as opposed to the scientific theories developed by professional psychologists. After all, he argued, it's folk psychology that determines our behavior toward other people; just as important, Heider argued that folk psychology is also often scientifically correct.
Heider agreed with Brunswik (1934), whom we shall discuss more fully in the lectures on Social Perception, that person perception was governed by the same principles as the "impersonal" perception of nonsocial objects. At the same time, he argued that persons have properties that are not possessed by impersonal objects such as tables and chairs: these include abilities, emotions, intentions, wishes, sentiments, purposes, and other aspects of mind. Person perception is the process by which we perceive these qualities in other people. Moreover, Heider acknowledged that, in the social case, the object of perception, who is a sentient being like the perceiver him- or herself, is perceiving the perceiver in turn.
The
Perceiver (P) is perceiving the Other (O), who is perceiving P
in turn.
The big concept with Heider is phenomenal causality (Heider, 1944), a topic which we'll discuss further in the lectures on Social Judgment. In social perception, we are trying to understand another person's social behavior. Our understanding will determine how we behave toward that person. It doesn't matter so much what really caused the behavior -- that is a topic for scientific psychology. What matters is what we perceive, or believe, caused the behavior. Therefore, it's important to understand causality from the perceiver's point of view.
In social perception, the object of perception is also a sentient being, and P is always aware that O's perception will have an effect on P (by determining O's behavior). O's perceptions, and the behavior that follows from them, will affect P in three different ways:
Heider is sometimes called the "father of situationism" in social psychology (e.g. Ross & Nisbett, 1991), but he's not. As I'll make clear later, Heider, like Lewin, argued that the person and the situation constituted an interdependent whole. If "the situation" means the psychological situation, as perceived by the individual (Ross & Nisbett acknowledge this), and if the perceived situation is not determined wholly by the stimulus input, then, at the very least, the person makes just as important a contribution to behavior as the situation does.
Heider also initiated an important theoretical tradition in social psychology known generically as balance theory (1946). All of the balance theories assume that people attempt to achieve consistency among their cognitions, defined broadly to include beliefs, knowledge, expectations, attitudes, and all sorts of other internal mental states and dispositions of the sort that were thoroughly rejected by the behaviorists. Similarly, any inconsistencies among cognitions are assumed to be affectively aversive, leading to various cognitive maneuvers intended to reduce the discrepancies and the consequent negative feelings.
Heider's
own theory was called the p-o-x theory, because it dealt
with three elements: the attitude of a person (p) toward another person (o) and an object (x) belonging to o or related to o in some way. Heider understood that we have lots of attitudes towards lots of people and things, but he argued that we try to impose some order on this vast network of attitudes. In line with
Gestalt theory, he argued that p, o, and x
constituted a unit or a Gestalt-like whole bound together either
by unit relations like family, nationality, or gender;
or by sentiment relations such as liking, admiration,
and approval. Both sorts of relations tend toward harmony.
More generally:
When a state of imbalance occurs, there are several things that p can do to set things right (remember, balance or imbalance is always defined with respect to p).
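Heider's balance principle is often given a simple formal summary (following Cartwright & Harary, 1956, who generalized Heider's idea): code each relation in the p-o-x triad as positive or negative; the triad is balanced when the product of the three signs is positive. Here is a minimal sketch of that sign-product rule in Python -- the function name and the example triads are purely illustrative, not anything from Heider himself:

```python
# Minimal sketch of the sign-product rule for Heider's p-o-x triad,
# as formalized by Cartwright & Harary (1956).
# Each relation is coded +1 (liking / a unit relation) or -1 (disliking / no unit relation).

def is_balanced(p_o: int, p_x: int, o_x: int) -> bool:
    """A p-o-x triad is balanced when the product of the three signs is positive."""
    return p_o * p_x * o_x > 0

# p likes o, o is linked to x, but p dislikes x: an imbalanced triad.
print(is_balanced(p_o=+1, o_x=+1, p_x=-1))   # False -> imbalance, pressure to change
# p likes o, o is linked to x, and p likes x: balanced.
print(is_balanced(p_o=+1, o_x=+1, p_x=+1))   # True
```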
Some of the most famous
evidence supporting dissonance theory comes from studies using
the forced compliance paradigm, in which subjects are
asked to do something that contradicts their personal attitudes
or beliefs. To take a classic example, Festinger and
Carlsmith (1959) engaged subjects in tasks that were extremely
boring, such as turning a series of knobs for precisely 1/4
turn. After completion of their task, the subjects were then asked to tell the next subject, newly arrived for the experiment, that the tasks involved were very interesting (which, decidedly, they were not). Some subjects were
given $20 to lie in this way, while others were given only
$1. The lie was intended to induce cognitive
dissonance.
Here's why this happened, according to cognitive dissonance theory.
These notions of discounting and insufficient justification played a big role in certain theories of causal attribution, as discussed in the lectures on Social Judgment.
Note: Alternative interpretations of the F&C experiment, and others like it, are always possible. For example, in a resurgence of behaviorism, Bem's (1967, 1972) self-perception theory denied that subjects in the low-payment condition changed their attitudes, because it denied that they had any attitudes to begin with. According to self-perception theory, people don't have attitudes stored in memory, as it were, ready to be retrieved when asked their opinion about some topic. Instead, they compute what their attitude must be, on line, based on their observations of their own behavior. In the large-payment condition, the behavior could be discounted as relevant to one's attitude, much as Festinger argued: the payment provides sufficient justification for the behavior. However, because the smaller payment was insufficient to justify the behavior, the subjects therefore inferred that they had the corresponding attitude -- i.e., that the tasks really were engaging and exciting.
A similar sort of argument was made by Lepper, Greene, and Nisbett (1973) in a seminal experiment on intrinsic motivation. This experiment is discussed more fully in the General Psychology lecture on Motivation.
The Festinger &
Carlsmith study was about belief (concerning the tasks), but it
could just as easily be taken as a study of attitude --
which many theorists, following Allport (1935), have argued is
the central concept in social psychology, distinguishing it from
other subfields (I don't think that's true, but I respect those
who do). That is, the subjects may have come to believe
that their tasks were engaging and interesting; or they could
have changed their attitude towards the tasks, from negative to
positive. It works either way. But as a matter of
historical interest, the balance theories, and especially
Festinger's theory of cognitive dissonance, led to an important
shift in the study of attitudes.
McGuire (1986) has traced three (actually, four) periods of social-psychological research on attitudes:
Attitudes are "cognitive" in the broad sense that they are internal mental states that dispose people to respond in particular ways to particular objects and events. But strictly speaking, attitudes are emotional, not cognitive constructs, because they are less about knowledge and more about positive and negative feelings toward the attitude objects. Still, feelings can be based on knowledge and beliefs, and feelings can change when knowledge and belief change. Accordingly, the process of persuasion and attitude change is relevant to social cognition: how do people come to think, and feel, differently about something?
Classic research on attitude formation and change, characteristic of the 1930s and 1940s, and thus focused on things like the Nazi and Soviet propaganda machines, focused on processes of persuasion and communication -- in the "Lasswellian" formulation of the Yale Communications Research Program, "Who says what to whom in what channel with what effect" (Lasswell, 1948, p. 117; Hovland et al., 1953, p. 12). Thus, whether an individual comes to hold an attitude, or changes his attitude, depends on a number of different elements:
The 1950s and 1960s saw the influence of balance and dissonance theories. Dissonance can arise between a pre-existing attitude and a persuasive communication, or between two attitudes, or between attitudes and behavior, and the individual is motivated to reduce it. One way to do this is to change the perception of the behavior. Another way is to engage in selective perception, learning, or memory that favors attitude-consistent information.
And finally, beginning in the 1970s, following the cognitive revolution in social psychology, theories of attitude change more expressly focused on information-processing and other aspects of cognition.
For an excellent exposition of Lewin's field theory, see the treatments by Hall and Lindzey in Theories of Personality (1e, 1957); for Heider and the balance theories, see the corresponding presentation by Shaw and Costanzo in Theories of Social Psychology (1e, 1970) -- an important textbook that was expressly modeled on the success of Hall and Lindzey. Hall and Lindzey, for their part, modeled their text on Hilgard's classic Theories of Learning (1e, 1948).
Beginning in the 1950s, and especially in the 1960s, psychology became disenchanted with the radical behaviorism of Watson and of its most devoted exponent, B.F. Skinner. In the mid-1960s, around the time that Neisser's seminal textbook, Cognitive Psychology (1967), was published, the cognitive revolution in experimental psychology washed over into social psychology. The thrust of the cognitive revolution in social psychology was to reassert The Thomas Theorem: what controls individual experience, thought, and action is not the situation, as it might be objectively described by a third party, but rather the individual's mental representation of the situation, which is in turn a product of his or her internal, cognitive processes -- though, frankly, social psychologists did not always attribute the idea to the Thomases themselves.
In some respects, the transition between a crypto-behavioristic approach to social psychology and a more cognitive point of view is illustrated by two lines of research separated by less than five years.
The first is Stanley Milgram's
classic studies of obedience to authority (1963,
1964). As is well known, Milgram contrived a situation in
which two subjects were brought into the laboratory for a study
ostensibly on the effect of punishment on learning. One
subject was ostensibly assigned to the role of teacher, the
other to the role of learner. However, only one of these
individuals was an actual subject: the person assigned to the
role of learner was, in fact, a confederate of the experimenter,
whose behavior in the experiment followed a pre-arranged
script. When the "learner" made mistakes, the "teacher"
was supposed to administer electric shocks as punishment.
While the level of shock began as "slight", the teacher was supposed to increase the shock level with each new error, eventually reaching a very intense, very dangerous level labeled "XXX". As the shocks increased, the "learner" (following the script) complained, and then went silent. If the "teacher" balked at increasing the shock level, the experimenter simply replied that "the experiment requires that you continue". In his original experiment, Milgram reported that roughly 65% of subjects followed orders, administering the very highest level of shock.
Subsequent
experiments
examined
the conditions of obedience to authority. For example,
Milgram found that the percentage of obedient "teachers" varied
depending on their proximity to the "learner", on the proximity
of the authority (i.e., the experimenter), and on the
institutional context in which the experiment took place.
Throughout the series of studies, Milgram's emphasis was on
features of the situation controlling the subject's obedient
behavior -- as he put it, "The sheer strength of obedient
tendencies manifested in this situation".
The Milgram experiment has been given the movie treatment not once but twice: first as The Tenth Level, a 1976 television movie, starring William Shatner (of Star Trek) as Milgram; then, in 2015, in Experimenter, starring Peter Sarsgaard as Milgram (and Kellan Lutz as Shatner).
A related adventure in psycho-Hollywood is The Stanford Prison Experiment, starring Billy Crudup as Philip Zimbardo, a psychologist at Stanford whose experiment was inspired by Milgram's.
Milgram's experiment aroused considerable controversy, partly on ethical grounds -- it was the immediate stimulus to the institutionalization of "institutional review boards" for the protection of human subjects in research. But it also aroused criticism on methodological grounds -- specifically, that Milgram's experimental situation contained cues that suggested that things were not really as they seemed on the surface. If his deception was so transparent, then what looks like unquestioning obedience to authority might not be obedience after all.
Chief among these critics was Martin T. Orne (Orne & Holland, 1968), who argued that the subject's perception of the experimental setting is a critical determinant of his behavior within that situation (full disclosure: Orne was my mentor in graduate school). From Orne's point of view, Milgram's experiment contained three critical cues. (1) With respect to the ostensible purpose of the experiment, to study the effect of punishment on learning: the "teacher" was not doing anything that the experimenter couldn't have done perfectly well himself. Put another way, the "teacher" might well have asked himself, "What am I doing here?". (2) Although the ostensible purpose of the experiment was to study the effect of punishment on learning, in fact the experimenter stayed in the room with the "teacher", and did not make any observations of the "learner". This might well have communicated that the "teacher", not the "learner", was the real subject of the experiment. (3) When the "learner" began to complain about the intensity of the shock, the impassive behavior of the experimenter is totally at odds with the situation apparently unfolding. For example, the experimenter did not even bother to check on the "learner" when he stopped responding. Taken together, this constellation of cues must clearly have communicated to the "teacher" that there was something "fishy" about the whole business.
Orne's critique of
Milgram was framed by his social-psychological analysis of
psychological research in general (see Orne, 1962, 1970,
1973). All too often, Orne argued, psychologists treat
experimental subjects as if they were passive recipients of
experimental manipulation -- beakers, if you will, filled with
chemicals to see what reaction will occur. Instead, Orne
argued, experimental subjects are sentient beings, actively
involved in the social interaction known as "taking part in an
experiment". For that reason, Orne argued, the
experimental setting has to be seen "from the subject's point of
view".
Orne's
emphasis on viewing the experiment "from the subject's point of
view" is clearly related to Thomas's emphasis on "the definition
of the situation", described below). As such, Orne's
analysis of the social psychology of the psychological
experiment constitutes an early example of the revival of the
cognitive perspective on social interaction.
To repeat:
The cognitive perspective was quickly translated into actual experimental research. Consider, for example, the classic research on bystander intervention, a form of altruism, published by Darley and Latane (1968). In these experiments, subjects were recruited for an experiment involving the completion of some personality questionnaires. Some subjects were run alone in a research cubicle, others were run in groups. During the experimental session, after the subject(s) were left alone to their work, the experimenters contrived an emergency -- smoke blowing into the room through ventilation ducts, or a research assistant falling in an adjacent room. The principal finding of the experiment was that subjects were more likely to seek or render assistance when they were alone, than when they were in a group -- in other words, that the presence of other people deterred helping behavior.
A result like this could admit of a purely situationist interpretation, a la Milgram -- that behavior was influenced by some objective feature of the situation, such as whether others were present or not. Instead, Darley and Latane offered a cognitive interpretation of the result, in terms of an analysis of the deterrents to helping:
The situation is, first and foremost, one that is ambiguous. Two people fighting in the park may be intent on injuring each other; or it may be horseplay among friends; or they may be rehearsing for a school play. Accordingly, we tend to look to other people for clarification. But in this instance, the other people are doing the same thing -- a condition that Darley and Latane called pluralistic ignorance. Everybody's looking around for clarification, but nobody's doing anything -- and that lack of action helps define the situation as a non-emergency.
Even if the situation has been defined as an emergency, other cognitive factors may come into play to create the bystander intervention effect. For example, the presence of other people may lead to inaction through diffusion of responsibility -- if each actor believes that someone else has already taken action, then nobody will think that any action is necessary. Finally, the individual's self-efficacy beliefs may preclude action (Darley and Latane did not use this precise term, which was coined by Albert Bandura, but this is what they meant): we may easily believe that someone else present in the situation has more skills than we do to render proper assistance.
Viewed objectively, the presence of other people deters helping behavior. But from a psychological point of view, the actual determinants of behavior are not situational, but rather cognitive in nature, because they lie in people's individual beliefs and expectations concerning themselves and the situation in which they find themselves.
Orne's
analysis of the experimental situation, and the bystander
intervention experiments of Darley and Latane, exemplify the
cognitive perspective on social interaction as it re-emerged in
the wake of the cognitive revolution. The cognitive
perspective, put bluntly, is that cognition mediates the
person's response to environmental events.
The classic framework for the
analysis of social behavior was provided by Kurt Lewin
(1890-1947). Lewin took his PhD from the University of
Berlin in 1914, trained in the tradition of Wundtian
structuralism, but soon shifted his allegiance to the Gestalt
school of psychology. He emigrated to the United States in
1933, a refugee from Hitler's Europe, at which time he
Americanized the pronunciation of his name to Loo-win
(though, I suppose the adjectival form is still pronounced Loo-vinian!).
Lewin initially taught at Iowa, then founded the Research Center for Group Dynamics at MIT (it subsequently moved to Michigan, where it remains as a component of the Institute for Social Research). Through his American students, particularly Leon Festinger (1919-1989), and Festinger's students (who include Stanley Schachter and Philip Zimbardo), Lewin became widely influential in American social psychology. His point of view is best represented in his early books, A Dynamic Theory of Personality (1935) and Principles of Topological Psychology (1936).
Employing the conventions of mathematics, Lewin asserted that

B = f(P, E),

where:

B = the individual's overt behavior: behaviors that are publicly observable. For Lewin, every behavior is a social behavior, in that the individual's behavior is always in some way directed toward another person.

P = personal determinants: mental (cognitive, emotional, and motivational) states and dispositions residing within the individual's mind, such as beliefs, feelings, motives, traits, and attitudes. For Lewin, P represents all the causal factors that reside within the individual.

E = environmental determinants: factors impinging on the individual from outside, including aspects of the physical ecology (temperature, humidity, altitude, etc.) and aspects of the sociocultural ecology (the presence and behavior of other people, constraints imposed by social structures, social roles, situational demands and expectations, social incentives, etc.). For Lewin, E represents all the causal factors that reside in the world outside the individual. But because every behavior is social behavior, every situation is really a social situation, whose dominant features are the behavior of other people, as well as wider social and cultural forces.

In other words, behavior is a function of both personal and environmental factors.
This bit
of pseudo-mathematics represents the idea that personal and
environmental determinants combine somehow to cause individuals
to do what they do. The comma (,) in the equation
indicated that Lewin was open as to precisely how these factors
combine -- which turns out to be a nontrivial detail!
Perhaps the easiest way to think about how personal and environmental determinants combine to produce individual behavior is to think of them as independent of each other. This is certainly the perspective adopted by traditional personality and social psychology.
As subfields within psychology, personality and social psychology have historically emphasized different aspects of Lewin's formula.
Traditional personality psychology assumes that behavior is primarily determined by features of the person such as his or her beliefs, attitudes, values, emotions, motives, and traits, and that situational factors are largely irrelevant.
B = f(P).
The canonical method of traditional personality psychology is to construct a "psychological test" to measure some personality trait, and then to use this information to predict individual behavior in some specific situation. The test might take the form of a self-report questionnaire, a rating scale (completed by the subjects themselves or by others who know them well), or even a sample of actual behavior.
So, for example individual differences in friendliness, assessed
by means of a self-report questionnaire, would be used to
predict whether a person would smile in some
situation. In this research, which often uses the
technique of multiple regression analysis (a variant on the
correlation coefficient), the trait measure (e.g., friendliness)
serves as the predictor variable, and the behavioral
measure (e.g., smiling) serves as the criterion variable.
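To make the canonical method concrete, here is a minimal sketch in Python; the variable names and numbers are entirely made up, and serve only to show how the trait measure functions as the predictor variable and the behavioral measure as the criterion:

```python
import numpy as np

# Hypothetical data: friendliness questionnaire scores (predictor)
# and number of smiles observed in a standard encounter (criterion).
friendliness = np.array([2.0, 3.5, 4.0, 5.5, 6.0, 7.5, 8.0, 9.0])
smiles       = np.array([1.0, 2.0, 2.0, 4.0, 3.0, 5.0, 6.0, 6.0])

# Correlation between the trait measure and the behavioral measure.
r = np.corrcoef(friendliness, smiles)[0, 1]

# Simple (one-predictor) regression: smiles = b0 + b1 * friendliness.
b1, b0 = np.polyfit(friendliness, smiles, deg=1)

print(f"r = {r:.2f}, variance explained = {r**2:.2f}")
print(f"predicted smiles = {b0:.2f} + {b1:.2f} * friendliness")
```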
A similar method is used to study the relationship between attitudes and behavior. For historical reasons, having largely to do with an interest in attitude change in response to persuasive communications, attitudes have primarily been studied by social psychologists. But the logic is the same: attitudes, which are internal dispositions to evaluate certain objects or ideas positively or negatively, are held to cause the individuals who hold these attitudes to behave in particular ways. Thus, in the 2000 election, a registered Democrat was (probably) more likely to vote for Al Gore than for George Bush -- though not enough of them did, from Gore's point of view.
These dispositions are commonly studied in the form of traits and attitudes. However, other dispositions are also relevant to behavior, such as moods, motives, values, and beliefs.
The canonical method of traditional personality psychology exemplifies the doctrine of traits, derived to a great extent from the work of Gordon Allport (1937):
Social behavior varies as a function of internal dispositions that render it coherent, stable, consistent, and predictable.
G. Allport (1937) defined a personality trait as:
"a generalized and focalized neuropsychic system... with the capacity to render many stimuli functionally equivalent, and to initiate and guide consistent (equivalent) forms of adaptive and expressive behavior."
For Allport, there is an analogy between personality traits and physical traits. Just as physical traits are stable dispositions to appear in a particular way, so personality traits are stable dispositions to behave in particular ways. Traits are internal to the person. Although not necessarily genetic in origin -- they could be acquired through a history of learning -- they are somehow represented in the nervous system. These personal characteristics, once established, then mediate between the environment and behavior. As Allport put it, traits "render situations functionally equivalent", in that they dispose the person to display similar sorts of behaviors in them.
Further,
Allport contrasted two views of traits:
Allport himself preferred the biophysical view: for him, personality traits were real in precisely the same way that physical traits were real, and were subject to measurement in precisely the same way that physical attributes were. This view is very popular, especially at Berkeley -- it's the premise of the Institute for Personality Assessment and Research, forerunner to the Institute for Personality and Social Research -- but it is also very controversial, for reasons that will become clearer later.
In this course, I take an agnostic position on the biophysical view: personally, I do not believe that traits are important determinants of behavior, and so I think it is a mistake to make them the center of personality research. But even if traits exist, in the biophysical sense that Allport believed they existed, they are also social constructions. We, all of us and every day, label and categorize people in terms of their traits. And it is this categorization process that is a topic for research in social cognition.
In fact, even from the biophysical view, personality assessment can be viewed as a process of social judgment -- in which the judge attributes traits to a person based on his or her scores on various personality tests. A great deal of research within traditional personality psychology has been devoted to the question of how accurate these attributions are -- a line of research that implicitly assumes that traits have an existence independent of the judge. But in this course, we will set aside the important and interesting question of accuracy, and focus on the cognitive processes by which trait attributions are made.
Traditional social psychology, by contrast, assumes that behavior is primarily determined by features of the environment, and especially features of the sociocultural ecology, such as interpersonal, organizational, and cultural factors, and that individual differences in personality are largely irrelevant.
B = f(E).
This viewpoint, which is congenial to the behaviorism espoused by John B. Watson and B.F. Skinner, is exemplified by traditional research on social influence, or the effects on behavior of the presence or behavior of other people.
The canonical method of traditional social psychology is to manipulate some aspect of the social environment (such as whether behavior is private or public, or whether the subject receives information about other people's attitudes and opinions), and observe the effects of this manipulation on behavior in some specific situation. All subjects might be exposed to all conditions (this is known as a within-subjects design), or different groups of subjects might be randomly assigned to each condition (this is known as a between-groups design).
So, for example, we might
arrange an encounter between a subject and an acquaintance or a
stranger, and see if smiling occurs more often in one situation
than the other. In this research, which often uses the
technique of analysis of variance (a variation on the t-test),
the manipulated variable (e.g., presence of acquaintances or
strangers) serves as the independent variable, and the
observed variable (e.g., smiling) serves as the dependent
variable.
Just to confuse things, sometimes in regression analyses, the predictors are labeled as independent variables, and the criteria are labeled as dependent variables. This is because, mathematically, multiple regression is formally equivalent to the analysis of variance.
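For comparison with the regression sketch given earlier, here is a minimal sketch of the experimental method, again in Python with purely hypothetical data: the situational variable (acquaintance vs. stranger) is manipulated between groups, and the group means are compared; with only two groups, the t-test is the simplest case of the analysis of variance (F = t squared).

```python
import numpy as np
from scipy import stats

# Hypothetical data: smiles counted during an encounter with an
# acquaintance vs. a stranger (between-groups design: different
# subjects randomly assigned to each condition).
acquaintance = np.array([6, 5, 7, 4, 6, 5, 8, 6])
stranger     = np.array([3, 2, 4, 3, 1, 2, 4, 3])

# Independent-groups t-test on the manipulated (independent) variable.
t, p = stats.ttest_ind(acquaintance, stranger)
print(f"t = {t:.2f}, p = {p:.4f}")
```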
The canonical method of experimental social psychology exemplifies the doctrine of situationism:
Social behavior varies as a function of features of the external environment, particularly the social situation, that elicit behavior directly, or that communicate social expectations, demands, and incentives.
These features of the situation may be found in the external physical environment. More likely, though, they are to be found in the external social environment, such as the presence and activities of other people, social demands, and social rewards.
The doctrine of situationism is sometimes attributed to Kurt Lewin himself -- see, for example, The Person and the Situation: Perspectives of Social Psychology (1991) by L. Ross & R.E. Nisbett. It is true that, as Ross and Nisbett note, "the main point of Lewin's situationism was that the social context creates potent forces producing or constraining behavior" (p. 9). But Lewin's field theory held that the person and the environment were part of a single "field", and so the idea of the environment acting on the person isn't really consistent with his views. Lewin's views are actually more compatible with another doctrine, concerning interactionism, discussed later.
A better source for situationism in social
psychology is found in the radical behaviorism of B.F. Skinner
-- a point made by Zimbardo ("Experimental social psychology:
Behaviorism with minds and matters" in Reflections on 100
Years of Experimental Social Psychology, ed. by A.
Rodrigues & R.V. Levine, 1999). Consider, for example,
the following quotation from Skinner's introductory psychology
text, Science and Human Behavior (1953):
The free inner man who is held responsible for the behavior of the external biological organism is only a prescientific substitute for the kinds of causes which are discovered in the course of a scientific analysis. All these alternative causes lie outside the individual (emphasis added).
Skinner mostly studied learning in nonhuman animals (chiefly rats and pigeons), but he had no difficulty generalizing from the nonhuman to the human case. No matter the organism, behavior is under the control of eliciting and discriminative stimuli in the environment, and subject to selection by the organism's history of reinforcement. Accordingly, there is no need to make reference to any mental states (such as belief, feeling, or desire), or for that matter any trait (such as neuroticism or extraversion) as either initiating behavior or as mediating between environmental stimulus and organismal response. To the extent that Skinner considered such matters, he viewed personality traits as habits established through learning.
Despite the cognitive revolution in psychology, which displaced Skinnerian behaviorism from its hegemonic position in the field, situationism remains powerful in social psychology today. In a tutorial on social psychology prepared for neuroscientists, Lieberman (2005) reasserted the power of the situation, as well as the related doctrine of situation blindness:
This
situationist viewpoint is exemplified by the classic topics in
social psychology -- especially the literature on social impact,
conformity, and compliance. But situationism also lies at the
core of what we might think of as the "Four As" of social
psychology:
For most of the 20th century, personality and social psychology proceeded largely independently of each other (in the 1930s, Gordon Allport wrote a seminal text on personality, while his brother Floyd did the same for social psychology). In many psychology departments, personality and social psychology were represented by different groups of faculty, the same way that cognitive and clinical psychology are. And, pretty much, each group treated the other with benign collegial neglect. But in the 1960s, partly as a late result of the hegemony of behaviorism in psychology, there arose a trait-situation controversy over which factors were more powerful predictors of behavior -- internal traits or external situations. This debate, which focused on statistical comparisons of the percentage of behavioral variance accounted for by traits and by situational factors, came to a head in the late 1970s and 1980s (long after behaviorism had passed from the scene) and devolved into a contest over whose "effect size" was bigger.
Mischel (1968), in a review of available research, concluded that the modal correlation between subjects' scores on a personality test and their actual behavior in some specific test situation was about r = .30 -- a figure indicating that traits account for about 10% of behavioral variance (the proportion of variance explained is the square of the correlation, and .30 x .30 = .09). Mischel famously (and derisively) dubbed this figure the personality coefficient. Mischel also suggested that the perceived situation would account for more behavioral variance than traits, but he did not actually test this proposal.
A counterattack by Funder and Ozer (1983) sampled from the classical social-psychological literature on situational influence, translated t values and F ratios into correlation coefficients, and determined that the effects of situational manipulations amounted to a correlation of about r = .45 -- a figure indicating that situations account for about 20% of behavioral variance (.45 x .45 = .20). So, most variance isn't accounted for by situations, either. (Note, however, that Mischel was talking about the effect of the perceived situation, while F&O analyzed the effect of the objective situation, as experimentally manipulated). Apparently, neither traits nor situations account for "most" behavioral variance.
So what began as a stereotypically masculine "Battle of the Correlation Coefficients", intended to determine whose was bigger, ended up looking more like a fight in an elementary schoolyard, with each side shouting "So's your mother" at the other one. In retrospect, the Battle of the Effect Sizes was essentially a pointless exercise, and generated much more heat than light. It's over now, and in most psychology departments personality and social psychologists work side by side, as indeed they do at Berkeley -- though they still keep their hands on their swords.
However, these traditional formulations are largely misleading. Nobody believes that one factor is exclusively responsible for behavior, and the other is wholly irrelevant. Dispositional and situational factors probably combine somehow to cause behavior to occur.
As noted earlier, one possibility is that P and E are independent -- that is, that each set of factors exerts its own separate influence on behavior, without affecting the other in any way. This notion lies at the heart of the traditional arrangement, in which personality and social psychology were situated as separate and independent subfields of psychology. On this view, behavior is partly predicted by personality traits, and partly affected by situational manipulations. In mathematical terms, personal and environmental factors are additive:
B
= f(P + E).
If P
and E are independent:
Thus, friendly
people may smile more than unfriendly people, and people may
smile more at acquaintances than at strangers, but the
difference between friendly and unfriendly subjects is constant
across the two situations (subtract the means), and the
difference between acquaintance and stranger targets is constant
across levels of friendliness (again, subtract the means).
Statistically speaking, there is no interaction between
these main effects.
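To put some purely hypothetical numbers on the additive case: suppose friendly people smile, on average, 6 times with an acquaintance and 4 times with a stranger, while unfriendly people smile 3 times and 1 time, respectively. The friendly-unfriendly difference is 3 smiles in both situations, and the acquaintance-stranger difference is 2 smiles at both levels of friendliness: the two effects simply add, and the interaction term is zero.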
But this was not Lewin's idea at all. Lewin sought to apply the principles of Gestalt psychology to the study of social behavior. The Gestalt school is known for its assertion that "the whole is greater than the sum of its parts". Applied to perception, this means that perception encompasses the entire stimulus field. Individual stimulus elements form a coherent, integrated whole, and cannot be isolated from each other. Similarly, Lewin argued that social behavior is responsive to the entire field of social stimuli -- not just the other person immediately present, but also the wider social context in which the interaction occurs. Lewin went even farther to assert that the social situation includes the person him- or herself: the person is part of the stimulus field to which he or she responds.
Lewin expressed this basic idea throughout his writings, in various ways, in statements published in 1933, 1939, 1940, 1943, and 1946.
Statements like these show why claims that Lewin is the godfather of situationism in social psychology are, simply, wrong. Lewin, influenced by Gestalt psychology, was a field theorist -- he believed that the person and the environment were interdependent elements constituting a unified psychological field. The challenge is how to understand this interdependence.
The trait-situation
controversy faded partly due to exhaustion of the participants,
but also because psychologists began to consider a more
interesting possibility -- that personal and environmental
determinants interacted with each other in a variety of
ways. This comes closer to Lewin's own
position. Remember that he was heavily influenced by
Gestalt psychology, and believed that the person and the
environment constituted an organized and integrated field
in which behavior takes place. In this behavioral field,
the person and the environment are inextricably intertwined.
The Doctrine of Interactionism, proposed by K.S. Bowers (1973), holds that people influence the situations that, in turn, influence their behavior. As he put it:
"Both behavior and reinforcement are subject to selection by biocognitive structures. These structures include the biological substrates of mental processes; and the cognitive system which organizes them. Interactionists agree that a person's behavior is determined by the situation in which it occurs. But they also assert that the situation itself is largely determined by the person.... An interactionist or biocognitive view denies the primacy of either traits or situations in the determination of behavior.... More specifically, interactionism argues that situations are as much a function of the person as the person's behavior is a function of the situation."
The doctrine of interactionism was originally intended to counter the doctrine of situationism:
Personal and environmental factors are interdependent -- in particular, people create the environments to which they respond.
Interactionism agrees that people's behavior is influenced by the situations in which they find themselves. But because it views people as part of the environment, it holds that personal factors of the sort envisioned in the doctrine of traits can still play an important role in behavior.
From an interactionist
perspective, different kinds of people show different patterns
of response across different situations. In mathematical
terms, personal and situational factors are multiplicative:
B
= f(P x E).
Thus, for example,
friendly people might smile more than unfriendly people, but
this difference would be bigger when they encounter a stranger
than when they encounter a friend. Or, put another way,
friendly people might discriminate less between the two
situations than unfriendly people would. Such a pattern is known statistically as the person-by-situation interaction.
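A minimal sketch in Python, using entirely hypothetical cell means, contrasts this with the additive example given earlier: in the additive case the difference-of-differences (the interaction contrast) is zero, whereas in the interactive case it is not.

```python
import numpy as np

# Hypothetical mean smile counts in a 2 x 2 (Person x Situation) layout.
#                      acquaintance  stranger
additive    = np.array([[6.0, 4.0],            # friendly
                        [3.0, 1.0]])           # unfriendly

interactive = np.array([[6.0, 5.0],            # friendly: discriminates little
                        [5.0, 1.0]])           # unfriendly: discriminates a lot

def interaction_contrast(cells):
    """Difference of differences: (friendly, acq - friendly, str) - (unfriendly, acq - unfriendly, str).
    Zero means purely additive effects; nonzero means a person-by-situation interaction."""
    (fa, fs), (ua, us) = cells
    return (fa - fs) - (ua - us)

print(interaction_contrast(additive))     # 0.0  -> no interaction
print(interaction_contrast(interactive))  # -3.0 -> interaction present
```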
The
person-by-situation interaction takes a number of forms.
In some interpretations, the Person by Situation interaction is modeled on the statistical model of the analysis of variance, where independent variables influence dependent variables individually as main effects, or combined in interactions. The attraction of this model is evident in early statistical analyses of the power of interactions, including some cited by Bowers himself, which are all based on the ANOVA model.
In the mid-1970s, Norman Endler (1973, 1975) introduced "S-R inventories" of personality, which assess the effects of situations and response modes, as well as of individual differences, on expressions of traits such as anxiety or hostility. These inventories asked subjects to report not only how likely it was that a particular situation would elicit an anxious or hostile response (for example), but also how likely they would be to display anxiety or hostility in a particular manner in each situation. When administered to a large group of subjects, the data generated by these inventories can be analyzed to yield estimates of the variance accounted for by various causal factors, including the main effect of persons, collapsed across situations (and response modes), the main effect of situations, collapsed across persons (and, again, response modes), and the interaction of the person and the situation (averaging across response modes), as well as individual differences in the pattern of behavior across situations.
For
example, Dworkin and Kihlstrom (1978) constructed an "S-R
Inventory of Dominance" that included a number of different
stimulus situations calling for dominant behavior:
For each
situation, the questionnaire also posed a number of possible
responses:
This is the general pattern of results from S-R Inventory studies that have been conducted in various domains, and collectively these studies have been taken as evidence that, indeed, the person-by-situation interaction is more powerful than either persons or situations taken in isolation -- or, for that matter, the sum of persons and situations taken independently.
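For concreteness, here is a minimal sketch in Python of the kind of variance partitioning such data permit; the scores are entirely made up, and the response-mode factor is omitted to keep the example to a simple persons-by-situations table:

```python
import numpy as np

# Hypothetical S-R inventory data (collapsed across response modes):
# rows = persons, columns = situations, entries = self-rated dominance.
scores = np.array([[4., 2., 5.],
                   [3., 1., 4.],
                   [2., 4., 1.],
                   [5., 3., 5.]])

grand = scores.mean()
person    = scores.mean(axis=1, keepdims=True) - grand   # persons main effect
situation = scores.mean(axis=0, keepdims=True) - grand   # situations main effect
residual  = scores - grand - person - situation          # person x situation interaction

ones = np.ones_like(scores)
ss = lambda x: float((x ** 2).sum())                     # sum of squares
ss_total = ss(scores - grand)
for name, component in [("persons", person * ones),
                        ("situations", situation * ones),
                        ("person x situation", residual)]:
    print(f"{name}: {100 * ss(component) / ss_total:.1f}% of variance")
```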
These results are interesting, and they helped to break through a seemingly endless person-vs.-situation debate in the 1970s. But they also miss the entire point of interactionism -- which, in the Lewinian point of view, is that persons are part of the situations to which they respond -- or, put another way, that persons and situations together constitute a unified field in which behavior takes place.
The ANOVA model is simply blind to the dynamic interplay of persons and situations: it has no way of revealing how persons create the situations to which they respond. Moreover, because the statistical model of ANOVA assumes that causality is unidirectional -- that is, that it proceeds from independent variable to dependent variable -- it misses the complexity of causal relations. These deficiencies are corrected by the doctrine of reciprocal determinism, as well as by an analysis of the dialectic between the person and the situation.
But Bowers wasn't really talking about statistical interactions (just as Lewin wasn't talking about P and E as independent variables). Bowers spent so much time discussing the results from the S-R inventory method because, at the time, those were the only data he had available. But, like Lewin, Bowers had something else in mind. Bowers' interactions refer to the dynamic interplay between the person and the situation, in which people help create the situations to which they in turn respond.
So how do people shape their environments?
David Buss (Journal of Personality & Social Psychology, 1987) has identified three ways in which people affect their own environments: evocation, selection, and manipulation.
The mere presence of a person in an environment alters that environment, independent of his or her traits, attitudes, or behaviors -- or even in the absence of any behavior at all. In evocation, the individual unintentionally (and even unconsciously) evokes behavior from others which, in turn, changes the situation for the evoking person. The environment is not changed by the person's deliberate, voluntary acts; the effects of evocation are probably related to the person's physical appearance. Because the environment consists of other people, evocation effects are mediated by others' cognitive structures and processes, such as their beliefs and expectations. As an example, the physical appearance of a newborn baby's external genitalia structures the environment around his or her parents' and culture's beliefs about gender roles.
People deliberately choose to enter one environment as opposed to another, perhaps out of a desire to match their environments with their individual personalities. The point of selection is that the match between the person and the environment is nonrandom. Individuals choose environments that are congruent with their own personalities, supporting and promoting their own preferences and tendencies. Each choice pre-empts alternatives (recall the character played by Gwyneth Paltrow in the film Sliding Doors). Thus, the individual has little or no opportunity to engage in new behaviors. In any event, through selection processes the environment is to some extent of the person's own making, because he or she actively chooses to be in one environment as opposed to others. This principle lies at the core of clinical behavior therapy: a patient who wants to change his or her behavior must put him- or herself in an environment that will support the change, and avoid environments that will oppose it. Note, too, that environmental selection is often more complex than a simple act of will. Sometimes choices simply aren't available, and sometimes the selection is made by external forces. Again, familiar examples are the constraints imposed by culture-specific gender roles. The effects of prejudice and discrimination against racial minorities and other outgroups are partly a matter of evocation (because racial outgroups differ in appearance from racial ingroups), and partly a matter of selection, by virtue of "choices" for the minority outgroup made by the majority ingroup.
People engage in overt behavioral activities that alter the objective environment -- that is, the environment as it is publicly experienced by everyone in it. Here we have deliberate, overt behavior that is intended to alter the environment. Manipulation goes beyond the choice among available environments, and has the effect of creating an environment that would not otherwise be available. Finding themselves in a particular environment, and unable to select a different one, people engage in behaviors that will modify the character of their environment, as it would be objectively described by an independent observer. Environmental manipulation underlies all acts of instrumental or operant behavior, where the organism's behavior operates on the environment, changing it in some way, so that it more closely conforms to the organism's goals and purposes.
Evocation, selection, and manipulation all change the environment through behavior: either the behavior of the person him- or herself or that of other people. In each case, someone does something overtly that changes the objective character of the environment -- that is, changes the environment for everyone in it, not just for the person him- or herself. But these three modes do not exhaust the effects of the person on the environment. There is a fourth mode, one that Buss himself does not explicitly recognize, perhaps because his evolutionary perspective effectively blinds him to it (Cantor & Kihlstrom, 1987).
People engage in covert mental activities that alter their mental representations of their subjective environment -- that is, the environment as they privately experience it. As opposed to behavioral manipulation, cognitive transformation does not act on the objective environment -- the environment as it would be described in the third person by an objective observer. Rather, transformation acts on the subjective environment. Through cognitive transformations, people can change their internal, mental representations of the external physical and social environment -- perceiving it differently, categorizing it differently, giving it a different meaning than before. In cognitive transformation, the objective features of the environment remain intact -- they have not been altered through evocation, selection, or manipulation. Rather, the person's covert mental activity has altered the environment for that person only; the environment is unchanged for everyone else -- unless and until the cognitive transformation leads the person to engage in (evocative, selective, and manipulative) behavior that will, in fact, change the environment for everyone.
The difference between the objective and the subjective environment, and between behavioral manipulation and cognitive transformation, is illustrated by some classic research on delay of gratification in young children. Delay of gratification has to do with people's ability to tolerate frustration and control their impulses. It's a pretty important aspect of socialization. Max Weber, a pioneering sociologist, thought that delay of gratification was the basis for the "Protestant ethic" of self-restraint and the negation of pleasure (obviously, he didn't know many Protestants!), which he thought lay at the foundation of capitalism. But every culture requires some ability for ego-control: to plan ahead and tolerate delays. This ability is generally acquired early in life, as a result of socialization. Of course, even within a culture individuals will differ in their ability to delay gratification: as traditional personality psychologists might put it, some people have it, some people don't.
In a study
by Funder, Block, & Block (1983), conducted at UCB,
nursery-school teachers administered a test of intelligence, and
also rated their pupils' personalities on an instrument (known
as the California Q Set) that provided measures of two
higher-order dimensions of personality:
A "Big-Five Contrarian"Why didn't Funder et al. assess the children's personalities in terms of The Big Five? At the time the study was done, consensus had not yet developed around The Big Five as the structure of personality. There were other competing systems, including Hans Eysenck's four-factor proposal (neuroticism, extraversion, psychoticism, and intelligence) and the two-factor system (ego-control and ego-resiliency) proposed by Jack and Jean Block, who after all were Funder's co-investigators. These two competing systems still undergird much research, and in fact Jack Block has on numerous occasions (e.g., 1995, 2001) pronounced himself a "Big Five Contrarian", and offered trenchant critiques of the research supporting The Big Five as a universally applicable structure for personality description. |
Around the
same time as the assessments were made, Funder et al. engaged
the children in an experimental assessment of their ability to
delay gratification employing two different situations:
Predictor Variable | r
IQ | .21
Ego Control | .25
Ego Resiliency | .23
"Is unable to delay gratification" | -.27
All four predictors -- IQ, ego control, ego resiliency, and the specific item "is unable to delay gratification" -- correlated in the range of .20 < r < .30 with actual delay behavior
(the correlation between delay and the specific item was, of
course, negative).
Note that the magnitude
of these correlations is in line with Mischel's conclusion about
the "personality coefficient": that a correlation of about r
= .30 is the upper limit on the relationship between personality
traits and behavior. Personality in general predicts
behavior in particular, but that prediction is relatively
weak. Put another way, there is a ceiling on the extent to
which we can predict behavior in a particular situation, knowing
the individual's personality traits. Put more bluntly,
there's more to behavior than personality traits.
Research on delay of gratification in children also illustrates the power of the situation. In a study by Mischel and Ebbesen (1970), children were asked which of two rewards, cookies or pretzels, they preferred. Then the children were told that the experimenter would go away for a while (actually, about 15 minutes). If they waited for the experimenter to return, they would receive their preferred reward. But if they could not wait, they would receive their nonpreferred reward. Then the experimenter left. In one condition of the experiment, he took the cookies and pretzels with him. In other conditions, he left one, or the other, or both behind. How long did the children wait before signaling the experimenter to return?
In fact, children who waited in
the absence of both rewards were able to wait a fairly long time
-- some of them even outlasted the experimenter! If either
the preferred or the nonpreferred reward was left with the
child, waiting time decreased. And if the child was left
to wait in the presence of both rewards, waiting time
dropped almost to zero. Young children cannot delay
gratification long in the presence of a reward. But this
experiment shows clearly the influence of the situation --
whether rewards are present or not -- on the child's
behavior.
In another counterattack in the trait-situation debate, Funder & Harris (1986) re-analyzed data collected by Mischel and his colleagues on the
"situational" determinants of delay of gratification, yielding a
weighted average effect size of about .45, again corresponding
to about 20% of variance explained. Funder and Harris
conceded that Mischel's research yielded dispositional effects
accounting for only about 8% of the variance (i.e., almost
exactly what would be expected on the basis of a personality
coefficient of .30), but suggested that stronger dispositional
effects would be obtained in studies that improved the
assessment of dispositional characteristics, and that assessed
behavior closer in time to the assessment of the
dispositions. However, the delay-of-gratification data
collected by Funder, Block, and Block (1983), which met those
requirements, yielded correlations in the range of .11 to .27 --
very similar to those obtained by Mischel (Kihlstrom, 1986).
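The arithmetic connecting these correlations to "percent of variance explained" is just the square of the correlation coefficient:

\[
r = .30 \;\Rightarrow\; r^{2} = .09 \approx 9\% \text{ of the variance},
\qquad
r = .45 \;\Rightarrow\; r^{2} \approx .20 = 20\% \text{ of the variance}.
\]

This is why a "personality coefficient" of about .30 translates into dispositional effects of roughly 8-9% of the variance, while a situational effect size of about .45 translates into roughly 20%.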
In the experiment just
described, the children themselves have no choice in their
environment: they are forced to wait in the presence of one or
the other reward, or both, or neither. But consider a
child who is given a choice. If he is asked whether he
would like to wait in the presence or absence of the reward, or
in the presence of the preferred or nonpreferred reward, his
choice will affect his behavior by determining whether he
will wait in a situation in which long delays are easy or hard
to achieve. If he chooses to wait in the presence of
both rewards, he may be unable to wait for very long. If
he chooses to wait in the absence of both rewards, he may very
well outlast the experimenter!
In other words, these children were behaving in such a way as to put the rewards out of sight -- to change the environment into one in which the reward, though physically present, is not really there.
A formal experiment
by Mischel, Ebbesen, and Zeiss (1972, Exp. 1) makes the same
point. Children were given the choice of marshmallows or
pretzels, and then asked to wait for the experimenter to return
before they could receive the preferred reward. The child
could also signal for the experimenter to return, in which case
he would get the nonpreferred reward. Children who were
given no distraction could not wait very long. But
children who were told to play with a "Slinky" toy were able to
wait much longer.
The distraction doesn't even have to involve overt behavior, like playing with a toy. In another condition of the "Slinky" experiment, children were simply instructed to spend their time thinking of "anything that's fun to think of". These children were able to delay gratification even longer than those who played with the "Slinky" toy.
By putting the reward out of sight, or by directing their
attention elsewhere (e.g., to a game, a song, or a Slinky toy,
or just "fun" thoughts), they have effectively altered their
environment from one in which they cannot wait very long to one
in which they can.
Even in
the absence of overt behavior, it is possible to transform the
subjective mental representation of the situation in
such a way as to promote delay of gratification. An
experiment on the role of ideation in delay of gratification by
Mischel and Baker (1975) illustrates the power of such
cognitive transformations. In this experiment the children
were given the usual choice of marshmallows vs. pretzels, and
during the waiting period the preferred reward remained in
sight.
Ideational Instructions

Consummatory (marshmallows): "Look at the marshmallows. They are sweet and chewy and soft. When you look at marshmallows, think about how sweet they are when you eat them.... When you look at marshmallows, think about how soft and sticky they are in your mouth when you eat them...."

Transformative (marshmallows): "When you look at marshmallows, think about how white and puffy they are. Clouds are white and puffy too -- when you look at marshmallows, think about clouds.... The moon is round and white. When you look at marshmallows, think about the moon...."

Consummatory (pretzels): "Look at the pretzels; they are crunchy and salty. When you look at pretzels, think about how crunchy they are. When you look at pretzels, think about how salty they taste when you lick them or chew them...."

Transformative (pretzels): "When you look at pretzels you can think about how long and brown they are. A log is long and brown. When you look at pretzels, think about logs and tree trunks. Or you can think about how round and tall they are. A pole is round and tall...."
The results of the experiment were very striking. Children who focused their thoughts on the consummatory aspects of their preferred reward were not able to delay gratification for very long. Those who transformed their preferred reward into something that did not taste sticky-sweet, or crunchy-salty, were able to delay for a long time. What caused their delay (or lack thereof) was not their personality traits, or the environment, or even their own behavior. The important factor was how they thought about the rewards.
Thus the notion of delay of gratification as a trait
that people have proves to be an
oversimplification. Ego-control is not just a personality
characteristic, it is also a product of strategic
activity. Delay of gratification is accomplished through a
combination of selection, manipulation, and transformation, all
oriented around the general principle of "out of sight, out of
mind".
Through the process of social learning, children acquire both knowledge of effective behavioral and cognitive strategies for delay of gratification, and the ability to deploy these strategies effectively.
Traditionally, the person-by-situation
interaction has been characterized as unidirectional. That
is to say, in Bowers' preferred formulation, people somehow
affect the situations they are in, and this interaction, along
with the main effects of the person and of the situation, causes
behavior to occur. But of course, it may also be that, in
addition to the person influencing the situation to which he or
she responds, the environment also shapes the internal states
and dispositions of the person in it. If people can
influence their environments, why shouldn't their environments
influence them?
Albert Bandura has
called this state of affairs reciprocal determinism.
In reciprocal determinism, causality is bidirectional:
The doctrine of
reciprocal determinism is essentially a more dynamic extension
of the doctrine of interactionism:
Where interactionism asserts that people are a part of their own environment, reciprocal determinism asserts that people, their environments, and the behavior that takes place within those environments form a complex, dynamic, interlocking system characterized by nonlinear, bidirectional, causal relations.
In a very real sense, reciprocal determinism is the state of affairs envisioned by complexity theory (also sometimes known as chaos or catastrophe theory).
But if the
causal relations between P and E can be
bidirectional, why can't the causal relations between P
and B, and between E and B, be
bidirectional as well?
In other words, each element in Lewin's formula -- not just P and E but B as well, is both cause and effect. Because the reciprocal causal relations involve three elements, Bandura also has labeled this expanded notion of reciprocal determinism triadic reciprocality.
Reciprocal
determinism in general, and triadic reciprocality in particular,
entails a very interesting situation in which everything is
simultaneously both the cause and the effect of everything
else.
Still, when everything in a system affects everything else, things are going to get awfully complicated awfully quickly. For this reason, it is rarely possible to study triadic reciprocality in all of its dynamic glory. Fortunately, these causal influences do not all operate at exactly the same moment, and this lack of strict simultaneity works to our advantage, permitting an analytic decomposition in which we break triadic reciprocality down into its bidirectional segments, or dialectics (from the Greek word meaning "dialog"). This yields The Three Dialectics in Social Behavior:
P <---------------> B | The dialectic between the person and his or her behavior.
E <---------------> B | The dialectic between the environment and the behavior that occurs in it.
P <---------------> E | The dialectic between the person and the environment in which his or her behavior takes place.
According to the doctrine of interactionism, people create their own situations through their thoughts and actions; and according to the doctrine of reciprocal determinism, a person's behavior feeds back to alter the person him- or herself. These dynamics are played out even in what the social psychologist Henri Tajfel has called the minimal group situation -- a simple dyadic relationship consisting of only two people interacting with each other.
The general social interaction cycle (Cantor & Kihlstrom, 1987) is a conceptual scheme for representing any dyadic social interaction, whether as mundane as buying a toothbrush or as monumental as proposing marriage. In the scheme, two participants are assigned the role of Actor and Target, respectively. This assignment is of course somewhat arbitrary, because each individual is both an actor and the target of the other's actions. For convenience, the Actor role is assigned to the individual who initiates the social interaction, as in the following illustration.
The General Social Interaction Cycle is derived from earlier descriptions of a General Social Interaction Sequence by Darley and Fazio (American Psychologist, 1980) and by Jones (Science, 1986). However, for reasons that will become clear presently, the "sequence" of transactions between Actor and Target is better conceptualized as a cycle -- even a set of embedded cycles.
First,
the Actor enters the situation
-- the immediate context in which he or she physically
encounters the target (from this point on, for simplicity in
exposition, we'll call the Actor "she" and the Target "he").
Some of these skills
are cognitive in nature, such as her ability to "read people";
others are motoric, such as a particular way of walking, or
using her hands. This fund of social knowledge may be
characterized as social intelligence (Cantor &
Kihlstrom, 1987, 1989; Kihlstrom & Cantor, 1989,
2000).
Social
intelligence is not to be confused with intelligence in
the sense of IQ tests. In referring to social
intelligence, Cantor and Kihlstrom are not interested in whether
people are smart or stupid in social situations. Rather,
they use the word intelligence in a manner closer to its
military sense of information that is used to guide
action.
Social
intelligence comes in two types:
The social intelligence view of personality begins with the proposition that personality is reflected in social behavior. Individual differences in social behavior do not reflect individual differences in personality traits (whatever they are), but rather reflect differences in the individual's cognitive resources -- the fund of declarative social knowledge that the individual brings into the social situation, and the repertoire of procedural social knowledge that the individual brings to bear on social interaction.
In this respect, social cognition is the study of social intelligence.
As she
begins the interaction, the Actor
forms an impression of the situation -- of the
target, and of the immediate environmental context: Does he
still seem interested? Is this a good time to ask?
This impression combines knowledge derived from two general
sources:
Assuming
that the Actor has asked him for a date, attention now shifts to the Target,
who now has to do something in response to the Actor's initial
salvo.
The Target knows he's free Friday night, but that's not decisive. Should he play hard to get? Should he wait to see if he gets a better offer from someone else?
On the basis of the impression he's formed, the Target responds: He decides to keep his options open for Friday night, but doesn't want to spurn the Actor entirely, so he says he can't see her Friday, but proposes that they go out on Saturday instead.
Now
attention shifts back to the Actor,
who has to interpret his response,
and revise her impression of the
situation accordingly.
On the
basis of her impression, the Actor
responds to the Target:
Now the ball is back in the Target's court: he has to interpret her response, revise his impression, and figure out what to do next.
In any event, each participant in this social interaction is behaving in accordance with his or her construal of him- or herself, and of the other, and of the situation in which they meet. Each of these construals is modified by the other's behavior, and his or her own. And it's these individual construals, in the end, that lead the participants to behave the way they do.
The way to understand an individual's behavior is to understand the individual's construal of the situation.
The foregoing analysis of the General Social Interaction Cycle shows how a person's beliefs, translated into behavior, can influence another person's behavior in such a way as to change the environment for both of them. Cognitive transformations of the environment are not entirely private, because they lead to overt, public behavior which can alter the objective situation for other people as well. This process is exemplified by what the American sociologist Robert K. Merton (1947) called the "self-fulfilling prophecy", and what social psychologists have come to call expectancy confirmation effects.
The self-fulfilling prophecy is crucial to the analysis of social interaction. Each participant brings conceptual baggage into the situation -- beliefs and expectations (or, in Merton's terms, "prophecies") -- which guides his or her subsequent behavior: behavior that, in various ways, tends to elicit behavior from others that is consonant with the actor's beliefs and expectations. In Merton's analysis, the self-fulfilling prophecy is unconscious: the actor may well be aware of his or her beliefs and expectations, and of the relation between these mental states and his or her own behavior, but quite unaware of the effects of his or her behavior on the behavior of the other person.
Merton's analysis was very provocative, but his evidence for self-fulfilling prophecies was, admittedly, somewhat impressionistic. Among the earliest (and also very provocative) experimental demonstrations of the self-fulfilling prophecy were those conducted by Robert Rosenthal (1963). Rosenthal had first raised the issue of experimenter bias in his doctoral dissertation on defense mechanisms, at UCLA (1956), and he began formal work on the problem in his first faculty job, at the University of North Dakota (1957-1962). But he had difficulty publishing his research until he took a job at Harvard -- at which point people began taking him seriously!
Rosenthal's first demonstration of experimenter bias involved 12 undergraduate experimenters who were running rats in a maze-learning experiment in the context of an undergraduate course in research methods (Rosenthal & Fode, 1963). All the rats were genetically identical (as identical as lab rats can be), and all were treated identically before the experiment. Nevertheless, the undergraduate researchers were told that the rats were a specially bred "Berkeley strain" of animals; half were told that their rats had been bred to be "maze bright", and half were told that their rats were "maze dull". As it turned out, rats labeled as "maze bright" learned the maze faster than those labeled as "maze dull" -- despite the fact that the rats were identical in every respect. Special controls ensured that this outcome was not an artifact of mere error or deliberate cheating. Rather, the experimenters seemed to treat the rats differently, in line with their expectations, and this differential treatment actually enhanced or retarded the animals' performance.
Similar results were obtained in a study of person perception (Rosenthal, 1963), in which subjects were asked to rate the people pictured in photographs in terms of the degree of success experienced by the person. Again the data were collected by undergraduate research assistants, and again Rosenthal manipulated the experimenters' expectancies: half were led to expect that "most" subjects would see the targets as failures, and half were led to expect that "most" would see them as successes. The result of the experiment was very dramatic: non-overlapping distributions of perceived success. Subjects run by experimenters who expected them to perceive failure did so, and subjects run by experimenters who expected them to perceive success did so as well.
In a very dramatic experiment on experimenter bias, Burnham (1966) had experimenters test rats whom they believed to have received surgical brain lesions that would impair their task performance (control experimenters tested rats whom they believed to be neurologically intact). In fact, half the rats were lesioned, and the other half were intact. Intact rats whom the experimenters believed to be lesioned performed worse than intact rats believed to be intact; and lesioned rats whom the experimenters believed to be intact performed better than lesioned rats believed to be lesioned. In fact, the experimenters' beliefs about their rats' neurological status had a somewhat larger effect on the rats' task performance than their actual neurological status did!
The experimenter bias experiments were highly controversial -- in fact, Rosenthal initially had a hard time getting them published -- and they stimulated a large number of conceptual and methodological critiques, as well as some empirical failures to replicate. Nevertheless, there have been a large number of successful replications of the basic "E-bias" effect in many different domains. Rosenthal and Rubin (1978) summarized the results of some 345 studies of experimenter bias. Using an early version of a statistical technique known as meta-analysis, which combines the results of a number of different studies, Rosenthal and Rubin determined that the mean size of the experimenter bias effect was approximately 0.70 standard deviations. This is a huge effect by the standards of social science. Rosenthal and Rubin acknowledged the failures to replicate, but they calculated that the overall effect across the 345 studies was so strong that it would take 65,122 additional failures to reduce the E-bias effect to nonsignificance! Some investigators may not like it, but there is no doubt that experimenter bias is real and nontrivial.
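The "additional failures" figure reflects what Rosenthal came to call the fail-safe N: an estimate of how many unpublished or unretrieved null studies would have to exist to overturn a meta-analytic result. A common form of the calculation (shown here as a sketch of the general logic, not necessarily the exact computation behind the 65,122 figure) combines the standard-normal scores Z of the k available studies and asks how many additional studies averaging Z = 0 would drag the combined result below conventional significance:

\[
N_{\text{fail-safe}} \;=\; \frac{\left(\sum_{i=1}^{k} Z_{i}\right)^{2}}{Z_{\alpha}^{2}} \;-\; k,
\qquad Z_{\alpha} = 1.645 \;\; (\text{one-tailed } p = .05).
\]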
How does
experimenter bias affect the actual outcomes of
experiments? Rosenthal (1963) discussed a number of
factors.
Rosenthal's next
shot in the E-bias debate was a study of "Pygmalion in the
Classroom" (Rosenthal & Jacobson, 1968), an experiment
involving children (grades 1-6) enrolled in a number of public
elementary schools. The schools involved a "tracking"
system in which children of low, medium, and high ability were
segregated into different classrooms. Early in the fall
semester of one school year, in addition to the usual standard
IQ test, the children were administered what was presented to
the teachers (and administrators) as a nonverbal IQ test that
would predict educational progress and identify "intellectual
late-bloomers" -- that is, children who might not be doing well
now, but who could be expected to catch up and even exceed the
achievement of their age-peers. In fact, the test had
nothing to do with IQ, or the prediction of late-blooming.
It was a fake. Nevertheless, the children's teachers were
given false feedback about the children's test scores,
identifying a random 20% of pupils in each class (that
is, regardless of ability level) as "late-bloomers".
Later, in the spring of that same academic year, the children were retested with the standard IQ test. On average, children in all classes and grade levels showed increases -- school really does make you smarter! But those children who had been falsely identified as intellectual late-bloomers showed greater gains than the other children. This effect occurred in all grades, and in all tracks within each grade, but was most pronounced in grades 1 and 2.
The "Jacobson" of Rosenthal & Jacobson was Lenore Jacobson, who was a principal of an elementary school in South San Francisco, and actually initiated the study. She had read about Rosenthal's work on experimenter bias, and had the idea that the same sort of thing might happen with teachers' expectations of students. She wrote to Rosenthal about her idea, and the rest is history.
Rosenthal and
Jacobson's study aroused immediate controversy in the teaching
profession, and some criticism -- mainly statistical -- from
psychologists and other social scientists -- particularly
Elashoff and Snow (1970, 1971). However, the essential
point of the Pygmalion experiment -- that expectations can
become self-fulfilling prophecies -- has been borne out many
times since. Rosenthal and Rubin (1978) counted 112
studies of "Pygmalion"-type expectancy effects in classrooms
(students who are expected to perform better do), in clinics
(patients who are expected to get better do), and workplaces
(employees who are expected to perform better do). These
are the studies of "Everyday Situations" included in their
review, which yielded a very substantial mean effect size of 0.88. So, again, there is no doubt that
Pygmalion effects occur, and that they are nontrivial.
Not all studies of experimenter bias and Pygmalion effects yield positive findings, raising questions about the reliability of the effects. Rosenthal and Rubin essentially invented meta-analysis precisely to demonstrate that, aggregated across all the available studies, expectancy confirmation effects were real and couldn't be ignored. A little later, Glass et al. (1980) put meta-analysis to the same purpose in studies of psychotherapy outcome. These days, meta-analysis is the chief means for constructing a "quantitative" (as opposed to a "qualitative", narrative) review of some segment of the scientific literature. And it all began with experimenter bias and Pygmalion effects.
Pygmalion in the Real Classroom
The original Pygmalion studies employed experimental designs, essentially turning classrooms into laboratory environments -- raising the question of whether, and how strongly, Pygmalion effects occurred under more natural classroom conditions. Madon, Jussim, and Eccles (1997) performed the largest such study to date, employing 1,539 6th-grade math students enrolled in the Michigan Study of Adolescent Life, a long-standing longitudinal study based at the University of Michigan. For this purpose, the investigators extracted a number of variables, among which were:
The study was complex, but the general idea was to determine whether teachers' expectations of student performance influenced actual student performance, taking into account baseline differences in math ability.
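The general logic of such a naturalistic test can be sketched as a regression of later math performance on both prior ability and the teacher's expectation (our notation and simplification, not necessarily Madon et al.'s exact model):

\[
\text{Performance}_{\text{later}} \;=\; b_{0} \;+\; b_{1}\,\text{PriorAbility} \;+\; b_{2}\,\text{TeacherExpectation} \;+\; e.
\]

To the extent that teachers' expectations are themselves well predicted by prior ability, the expectations are simply accurate; to the extent that the expectation term still predicts later performance with prior ability controlled, the expectations are operating as self-fulfilling prophecies.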
Despite the evidence for accuracy, the evidence for the self-fulfilling prophecy should not be dismissed. Madon et al. (1997) presented a conceptual model of how the Pygmalion effect occurs.
The important point of the Pygmalion study is that the children weren't really "late-bloomers" -- but their teachers believed that they were, and treated them accordingly, and the students themselves responded in kind. This is the three-stage model of the self-fulfilling prophecy, in which the effect is mediated by the teachers' behavior toward the students.
Thus, the
effect of the cognitive transformation (identifying some
students as late-bloomers) was mediated by behavioral
manipulations (treating students identified as late bloomers
differently than the other students). These behavioral
differences boiled down to four factors:
In other
words, teachers who believed that a child was a late bloomer
behaved as if that child were already doing well -- in
short, they treated late-bloomers as if they were already
smart. In so doing, they created a different
environment for the late bloomers than for the other children --
who after all were in the same physical classroom. And,
not surprisingly, the late-bloomers responded by -- well,
blooming.
Little or none of this is done verbally. As a result, Rosenthal became an early leader in the study of nonverbal communication through facial expressions, posture, and movement.
Subsequent analyses by Jussim (1986) and others suggest that the process is actually more complicated than this.
In the Pygmalion experiment, the teacher expectations were derived simply from information provided by the researchers, that some students in each class had been identified by a test as "intellectual late-bloomers". But in real life, these expectations have a variety of sources:
And once the teachers begin to actually interact with students, these expectations are themselves subject to processes that may maintain or change them:
- People (even researchers!) are subject to confirmatory biases in hypothesis testing, in which they seek out, and attend more to, evidence that confirms an initial hypothesis, rather than evidence that disconfirms it.
- Expectations vary in terms of their flexibility -- that is, the degree to which they are vulnerable to disconfirming evidence; some expectations are so rigid that it seems nothing will alter them. For example, teachers who believe that intelligence is a stable global entity, a single quality that is inherited and essentially fixed at birth, are unlikely to be persuaded that a child is, in fact, learning.
- And there is also variance in the strength of disconfirming evidence -- if the confirmatory evidence is strong, and the disconfirmatory evidence is weak, the teacher's expectations are unlikely to be revised.
Regardless of whether
they change, teachers' differential expectations concerning
students lead to differential treatment of these students.
These are, essentially, the same sorts of factors identified by
Rosenthal and Jacobson:
The pathway between expectations and differential treatment is itself mediated by a number of factors:
- Psychological Mediators
- Perceptions of Control: teachers are likely to believe that they have more control over high-ability students -- and, conversely, that nothing they try to do for low-ability students will help them.
- Perceptions of Similarity: teachers are likely to think that high-ability students are more like them than low-ability students -- more alike socioeconomically, more likely to go on to higher education, more likely to have intellectual and cultural interests, etc.
- Encountering expectancy-disconfirming evidence may induce an unpleasant state of cognitive dissonance -- which is likely to be resolved by discounting the evidence and retaining the expectancy; alternatively, the teacher may attach a negative emotional valence to students who induce the dissonance in the first place -- certainly to positive-expectancy students who fail, and maybe even to negative-expectancy students who succeed.
- Attributions: According to the "fundamental attribution error" (discussed further in the lectures on Social Judgment), teachers may be more likely to attribute a student's level of achievement to personal factors internal to the student, such as low ability or poor effort, rather than to external situational factors, such as poverty or a chaotic home life.
- Affect: If a teacher likes or dislikes a student, positive or negative expectations may be attached to that student by virtue of the halo effect.
- Situational Mediators
- Tracking -- in which students with (presumed) high abilities are taught in separate classrooms from those with (presumed) low ability, and often taught by the "best" and most experienced teachers -- only gives institutional weight to teachers' individual expectations. The expectations have been ratified, as it were, by policy decisions made at the level of the principal or the superintendent of schools.
- The same institutional ratification may result from ability grouping within a single classroom. If all the skilled readers are sharing books in one circle, while all the struggling readers are shunted off to one side, the struggling readers might not get the attention they need.
- Grade Level
And, of
course, the students will react to the way they are treated:
No matter how complicated, the self-fulfilling prophecy begins with a cognitive transformation -- in this case, putting children in one category rather than another. The teachers then behaved in accordance with this cognitive categorization, and the children responded by behaving in a manner consistent with how they were treated. And the pathway between treatment and reaction is also mediated by a number of factors:
- Skill Development -- differential treatment may prevent students from acquiring the skills necessary to perform at a high level.
- Perceptions of Control: what goes for teachers, goes for students in reverse: students who think they have no control over academic outcomes may simply stop trying.
- Values: some students may simply come to devalue academic achievement, as irrelevant to their lives; one example of this is the view, among some African-American students, that academic success is "acting white".
- Self-Schemas: the student may have a particular theory about himself -- that he's no good at math, or that his brain is just not built to "get" chemistry; and he may also hold the theory that this theory itself is not subject to change (we'll talk more about self-schemas in the lectures on The Self).
- Self-Esteem, or the affective (as opposed to the cognitive) aspect of the self, is also important: students who have a low opinion of themselves and their abilities are likely to be particularly affected if their teachers treat them as if they are unworthy.
Perceivers can engage
in expectancy-confirmation processes with respect to targets:
that's what the classic self-fulfilling prophecy is all
about. And targets can react to perceivers by engaging in
self-verification activity. But it also seems that people
can be both actors and targets of their own actions, and bring
self-fulfilling prophecies down on themselves. This is
illustrated by the phenomenon of stereotype threat -- a
concept introduced by Claude Steele, a social psychologist who,
in 2014, was appointed Vice Chancellor and Provost here at UCB.
According to Steele and Aronson (1995), stereotype threat begins with an individual's awareness of some group stereotype applied to a group of which he or she is a member -- that, for example, African-Americans generally have low intellectual ability, or that girls and women are biologically indisposed toward math, science, and engineering -- or, for that matter, that white men can't jump (interestingly, it's hard to find negative stereotypes about white males in general). When members of stereotyped groups take tests where their performance is diagnostic of their individual ability, the possibility of failure poses a double threat: (1) the shame of confirming the group stereotype; and (2) the humiliation attached to personal failure. This "double whammy" increases anxiety, which in turn impairs performance -- especially when the stereotyped category is highly salient, when the domain is highly self-relevant, and when the test is believed to be diagnostic of one's own personal abilities.
The first
experimental demonstration of stereotype threat relied on the
stereotype of African-Americans as relatively low in
intellectual ability -- as reflected, for example, in the
infamous black-white differential in SAT and GRE scores (Steele
& Aronson, 1995). In the study, black and white
college students were recruited for an experiment in which they
were administered a difficult test of verbal ability.
Another experiment
focused on gender rather than racial stereotypes -- one
component being the idea that girls and women simply lack the
talent for math and science that boys and men have. As in
the earlier experiment, male and female college students were
recruited for a study involving a difficult test of mathematical
ability.
You can think of stereotype threat as a sort of self-fulfilling self-prophecy: a person subject to a negative group stereotype will behave in such a manner as to confirm that the stereotype is true.
Social Construction and the Faith-Based Presidency of George W. Bush

Social cognition explores how people acquire, represent, and use social knowledge -- how they cognize the social environment in which they live. And it is an assumption of cognitive psychology in general that reality is independent of the perceiver -- that the goal of the perceiver is to construct a valid mental representation of reality. But one of the important insights of personality and social psychology is that, to some extent at least, people create the environment to which they respond. In other words, reality isn't so independent of the perceiver after all. In the run-up to the 2004 presidential election, Ron Suskind, a reporter for the Wall Street Journal, wrote an article about the 43rd president's "preternatural, faith-infused certainty in uncertain times". In addition to his political insights, Suskind also captured succinctly the difference between social cognition as an empirical process of acquiring knowledge about the world and social construction as a process of creating that world through action and belief. Suskind writes ("Without a Doubt", New York Times Magazine, 10/17/04).
There are
two fundamental aspects of social cognition:
Definitions of a situation... become an integral part of the situation and thus affect subsequent developments.... The self-fulfilling prophecy is, in the beginning, a false definition of the situation evoking a new behavior which makes the originally false conception come true. The specious validity of the self-fulfilling prophecy creates a reign of error. For the prophet will cite the actual course of events as proof that he was right from the very beginning. Such are the perversities of social logic.
Following Darley and Fazio (1980) and Jones (1986), we can distinguish between two different expectancy confirmation processes operating at the individual level:
These two modes can be related to the
person-situation interaction:
Through behavior, private beliefs can create a public reality.
Experimenter
bias and Pygmalion effects are part of a larger class of
phenomena called expectancy confirmation effects.
Expectancy confirmation effects come in two broad forms:
Perhaps the most analytic
studies of behavioral confirmation effects have been performed
by Mark Snyder, Bill Swann, and their colleagues (Snyder was
Swann's mentor in graduate school).
Many of these studies have employed the "getting acquainted" paradigm, in which subjects are assigned to interact with a previously unknown partner. The getting acquainted paradigm permits social psychologists to study a basic social process, by which one person gets acquainted with another, under controlled laboratory conditions.
Note: Just to keep things straight, Snyder & Swann published two different studies in 1978:
- one in the Journal of Personality & Social Psychology (JPSP), and
- the other in the Journal of Experimental Social Psychology (JESP).
In one
"getting acquainted" study, the subjects were given a
personality profile that described their female partner (who, in
this experiment, didn't actually exist) as somewhat
extraverted. Then the subjects were asked to select some
questions, from a larger set provided by the experimenter, that
they could use to find out what the person is really like -- to
fill in the details of the original personality profile.
The point of the study is that almost everyone is capable of giving a positive response to any of these questions. The most introverted person knows how to liven up a dull party (even if he wouldn't actually do it), and the most extraverted person sometimes has trouble opening up to others (if not very often). Because everyone can give a positive response to any of these questions, the answers do not really reveal anything in particular about the respondent's personality. Introverts can look extraverted, if they're asked "extraverted" questions, and extraverts can look introverted, if they're asked "introverted" questions. Put another way, anyone can appear either extraverted or introverted, depending on what they're asked. And what they're asked depends on the questioner's expectations.
In an extension of the getting-acquainted paradigm, Snyder and Swann again presented subjects with profiles depicting their partners as extraverted or introverted, and again allowed them to select questions in order to fill in the details and find out what their partner was really like. And again they observed a bias in question-selection: partners depicted as extraverted were asked questions congruent with extraversion, and partners depicted as introverted were asked questions congruent with introversion. But this time, there really was an interaction partner (randomly assigned to conditions, of course), and she really got to respond to the questions. The conversation took place over a telephone (again, simulating a real-life "getting acquainted" scenario), permitting the partner's responses to be recorded and evaluated by judges who were kept blind to the expectations provided to the subjects and to the subjects' questions (i.e., all they heard were the partners' answers). Again, partners whom the subjects believed to be extraverted received higher ratings on extraversion than partners whom the subjects believed to be introverted -- and vice-versa for introversion.
Here is the self-fulfilling prophecy in operation: actors who believed that targets were extraverted (or introverted) behaved in such a manner as to elicit extraverted (or introverted) behavior from the targets.
Similar findings were obtained in another study, by Snyder, Tanke, and Berscheid (1977), on social stereotypes associated with attractiveness (Berscheid is an expert on the social psychology of physical attraction and romantic love). In this study, male subjects were presented with a photograph of their interaction partner, instead of a narrative profile. The photograph depicted their female partners as either relatively attractive or relatively plain. Of course, the partners were randomly assigned to conditions, independent of their actual physical attractiveness, and because the entire interaction again took place over the telephone, the subjects were none the wiser.
Even before the conversation took place, the subjects were asked to describe their "anticipatory images" of their partners. Following the "halo effect", in which there is a tendency to see socially desirable qualities as correlated, the subjects expected attractive partners to be more sociable, poised, humorous, and socially adept than their plain partners.
Then they actually interacted with their partners over the telephone: as before, naive judges listened to the partners' side of the conversation, and then rated their personalities. Partners who had been labeled as physically attractive received higher ratings of extraversion (e.g., more sociable, poised, sexually warm, and outgoing) than those who had been labeled as plain.
Again, we can see the self-fulfilling prophecy at work: Actors who believed that targets were extraverted (simply because they were physically attractive) behaved in such a way as to elicit extraverted behaviors from the targets.
These two
experiments, taken together, clearly demonstrate the mechanisms
of the self-fulfilling prophecy:
Another
experiment (one of my personal favorites in the entire
social-psychological literature) provides a more detailed
sequential analysis of the unfolding of a self-fulfilling
prophecy. This experiment employed a noise-gun
paradigm, adapted from the experimental study of
aggression, and involved three subjects, run two at a
time. The subjects, who were initially unknown to each
other, were recruited for an experiment on reaction times.
Such experiments are often boring, and so in order to "keep it
interesting" the task was presented as a game with some special
features:
Phase 1
of the experiment involved two subjects selected at random (the
third waited to participate in Phase 2). Both subjects
filled out a hostility questionnaire, and then each was randomly
assigned a role.
On the next trial, the subjects exchanged places, with the Target now seated at the noise gun. Targets who had been labeled as hostile (and treated as such) selected a higher noise-gun intensity than those who had been labeled cooperative (and also treated as such).
During the remainder of Phase 1, the Target and the Labeling Perceiver took three more turns each, over which Snyder and Swann generally observed an escalation of the labeling effect.
At the conclusion of
Phase 1, Labeling Perceivers rated targets who had been labeled
as hostile as more hostile, in fact, than those who had been
labeled as cooperative. Thus, individuals who had been
labeled as hostile (or cooperative), and treated as such,
actually came to behave in a hostile (or cooperative) manner.
In Phase 2 of the experiment, the Labeling Perceiver was dismissed, and the Target continued the experiment with the third subject, who was assigned the role of Naive Perceiver, because he received no information about the Target at the outset of the experiment.
For the first trial of Phase 2, the Target was seated at the noise gun, and the Naive Perceiver at the RT apparatus. As in Phase 1, Ts and NPs exchanged places, for a total of 4 trials each.
Across the four trials of Phase
2, Targets who had been labeled as hostile at the outset of
Phase 1 chose higher noise-gun intensities than those who had
been labeled as cooperative, but the Target's label interacted
with the Target's attribution for his own behavior.
Targets who had been labeled as hostile in Phase 1 continued to
behave in a hostile manner in Phase 2, but only if they had been encouraged to view their own behavior as a product of their
personality dispositions. There was no effect of label
among subjects who had been encouraged to view their own
behavior as a product of the situation.
At the conclusion of Phase 2, the Naive Perceivers rated Targets who had been labeled hostile as more hostile compared to those who had been labeled as cooperative -- but again, this effect occurred only for Targets who had been encouraged to view their own behavior in dispositional terms.
Putting the two phases together, we can see the full spectrum of self-fulfilling prophecy effects:
There are
many other experimental analyses of the self-fulfilling
prophecy, and of the General Social Interaction Cycle,
including:
Many of these studies illustrate a general characteristic of expectancy confirmation, which is an amplification effect -- a vicious (or, in some cases, virtuous) cycle, in which initial expectations lead to a little change in reality, which reinforces the expectation, which leads to a bigger change in reality, and so on.
Whether amplification occurs over time or not, the important point is that people who have expectations concerning other people tend to treat them in such a way as to elicit from those people behavior that tends to confirm their initial expectations. The self-fulfilling prophecy can be quite powerful: perceivers themselves are generally not aware of the effect, and so attribute the target's behavior to the target rather than to their own actions. And targets, for their part, are generally unaware of the perceivers' expectancies, so they have no opportunity to correct the perceivers' misperceptions.
This last point raises the question of the target's role in the self-fulfilling prophecy. The prophet's beliefs and expectations are important, but it's not the case that the target is passive in all of this. The Snyder & Swann experiment shows that the target's attributions for his or her own behavior can make a difference to whether the self-fulfilling prophecy actually gets fulfilled. Additional experiments on self-verification, mostly performed by Swann and his colleagues, show what can happen when the target actively tries to counteract the self-fulfilling prophecy.
In Swann's
view, there are two aspects of the process by which beliefs
create reality:
Here's a simple example of the
type: a study by James Hilton and John Darley (1985) employing
the standard "getting-acquainted" paradigm, involving two
genuine subjects (no confederates).
After an uncontrolled interaction, the perceivers were asked to rate the targets. Naive targets generally confirmed the perceivers' expectations. Those who were expected to be cold were rated as colder. But the informed targets did not show this effect -- if anything, they reversed it. Apparently, during the uncontrolled interaction, they strategically behaved in such a manner as to counter the perceivers' initial expectations.
To learn more about
exactly what targets can do to counter perceivers' expectations,
we turn to a series of studies by Bill Swann and his colleagues
-- one of which is also on my "Faustian" list.
In this experiment, subjects were classified as "likeable" or "dislikable" based on a set of adjective self-ratings. After being placed in the getting-acquainted situation, they were led to suspect that their interaction partner liked or disliked them (there was also a no-expectation control). In fact, subjects classified as likeable elicited more positive reactions from their interaction partners -- especially when the partner's appraisal was incongruent with their self-conception.
This experiment made
use of Mastermind, a parlor game popular in the 1970s in
which one player (the "codemaker") sets a pattern of colored
pegs in a pegboard, and the other player (the "codebreaker") has
to determine what the pattern is. Players indicate their
guesses by placing colored pegs in holes, and receive feedback
from their opponents (it's a great game, and you can play it on
the Internet as well as at home). In this experiment,
female subjects first completed self-ratings of dominance and
submissiveness, and then were paired up as codebreakers to play
against the experimenter, serving as codemaker.
Unbeknownst to the subjects, their partners were really
confederates of the experimenter. During the first phase
of the experiment, the subject and confederate played against
the experimenter, alternating roles of leader (who decided what
guess to make) and assistant (who made suggestions).
After a break in the game, they were asked to decide for themselves who would be the leader, and who the assistant, for the next set of trials (Phase II). The confederate then suggested either that the subject serve as leader ("You seem to be the forceful, dominant type") or assistant ("You don't seem to be the forceful, dominant type"). Of course, this feedback was completely independent of the subjects' self-concepts. Some subjects who identified themselves as dominant received feedback that they were submissive, and vice-versa. Naturally, this "misperception" created some consternation on the part of the subjects.
At this point, half the subjects were given the opportunity to respond to the feedback -- through protestations of dominance (or, for that matter, submissiveness), statements, and queries to the confederate. Later, these interactions were rated by judges who were blind to the experimental condition in which the subjects were run.
Subjects who received discrepant feedback were more resistant to the feedback than those who received feedback that was congruent with their self-images -- regardless of whether their self-image was dominant or submissive. Submissive subjects were quite aggressive in defending their submissiveness!
Similarly, dominant subjects who received discrepant feedback behaved even more dominantly than submissive subjects in the same condition.
At the very end of the experiment, subjects were asked to rate their personalities again. Subjects who received feedback that was consistent with their self-concepts showed little change from the pretest to the post-test. Subjects who received discrepant feedback showed more change, but those who had been given the "interaction opportunity" to correct the discrepant feedback showed less change than those who had not received such an opportunity.
Thus, when given an opportunity to do so, subjects will behave in such a manner as to correct another's erroneous perceptions of them, and to conserve their own self-concepts. What emerges is what Swann has characterized as a battle of wills between perceptual and behavioral confirmation effects on the part of the perceiver, and self-verification effects on the part of the target.
This line of research came to a climax with a study that directly pitted expectancy confirmation against self-verification, and showed that targets' self-verification efforts can actually alter perceivers' impressions of targets. In this study, undergraduate women were recruited for a study of the interview process. The procedure is complicated, so follow carefully!
Swann and Ely also had blind judges rate the targets on the basis of their end of the conversation -- that is, the judges had no idea what questions the perceiver had asked, only what the targets had said in reply. This time, in contrast with the earlier experiment,
the targets generally behaved consistently with their
self-concepts -- especially when their "self-certainty" was
high. When the targets were low in self-certainty, their behavior was less determined by their self-concepts --
especially when the perceiver's level of certainty was
high. So, already we see a conflict between expectancy
confirmation processes and self-verification processes.
Targets can resist the perceiver's expectancies, and behave in
conformity with their own self-concepts -- especially when they
are more certain about their self-concepts.
Phase 2 of the experiment essentially repeated the procedure, with the perceiver selecting from a new batch of questions, and the judges blindly rating the target's responses.
On the perceivers' side, the results were
complex. Uncertain perceivers generally shifted to a
"disconfirmatory" strategy: even when they expected to be
interacting with an extravert, they tended to ask fewer
"extraverted" questions; this was especially the case when
target self-certainty was high. Highly certain
perceivers tended to continue with a "confirmatory" strategy,
asking "extraverted" questions of targets presumed to be
extraverts -- but only when the target's self-certainty was
low. It's as if these highly certain perceivers gave up on targets who had not confirmed their expectancies in the first phase, and persisted only with targets who had actually shown a tendency toward expectancy confirmation.
On the targets' side, the judges' ratings continued as in Phase 1. Whatever the perceivers were doing with their questions, targets who were more certain about their self-concepts continued to behave in line with those self-concepts. Uncertain targets interacting with a highly certain perceiver showed some reversal, with self-described extraverts actually behaving in a somewhat introverted manner.
The procedure was repeated once more in Phase 3, with perceivers selecting from yet a third set of questions, and targets' third set of responses rated by blind judges.
This time, the perceivers' behavior was essentially independent of their expectations -- especially with the less-certain perceivers. When perceiver certainty was high, it's as if they made one last, halfhearted attempt to elicit expectancy-confirming behavior from the targets.
And the judges' ratings of the targets continued as before. When target self-certainty was high, her behavior was congruent with her self-concept, regardless of the perceiver's initial level of certainty. When target self-certainty was low, there was no effect of her self-concept -- but then again, it's doubtful that these subjects had a concept of themselves in this domain in the first place! The important point is that there was no effect of the perceivers' expectancies, either.
Finally, the perceivers were asked to rate their final impressions of their targets. Remember, at the outset of the experiment the perceivers' expectancies were incongruent with the targets' self-concepts. Initially certain perceivers ended up more even-handed in their judgments, rating targets who considered themselves to be extraverts as no more extraverted than those who considered themselves to be introverts. And less-certain perceivers actually reversed their expectancies, so that their ratings fell more clearly in line with the targets' own self-concepts.
In the
final analysis, then, in the "battle of wills", the target will
eventually win out. Given an opportunity, targets will
tend to correct the perceiver's erroneous expectations. In
extreme cases, targets will revise these expectations
entirely. At least, they will dampen the usual
expectancy-confirmation processes. But this countercontrol
by the target requires that two conditions be in place:
A footnote: It's fairly clear that the experiment didn't turn out quite as intended. It's essentially a standard 2x2 design, with high or low levels of perceiver certainty crossed with high or low levels of target self-certainty. The experimenters almost certainly wanted to see expectancy confirmation in at least one cell -- where the perceiver is highly certain about the target, but the target is relatively uncertain about herself. But they didn't really get this. Instead, the perceivers' expectancies got revised under all conditions of the experiment. And, I suspect, they also wanted the effects to be symmetrical: that the performance of introverts would mirror that of extraverts. They didn't really get this effect either -- perhaps because their sample of targets, being drawn from the population of college students, was somewhat biased toward extraversion in the first place. No matter: it's still a beautiful experiment.
Expectancy confirmation effects -- whether in the form of experimenter bias or the Pygmalion effect -- have been very controversial. I've already referenced the critique by Elashoff and Snow (1970, 1971), who were very vigorous in their criticisms of the original Pygmalion experiment. As we might expect, most
critical analysis has focused on the classroom, and the
influence of teacher expectations on student performance.
As Madon et al. (2011)
note, social constructivism, including the self-fulfilling
prophecy and various kinds of expectancy-confirmation effects,
is almost axiomatic in social psychology. We don't simply
perceive the social situation, including the others we encounter
in it: we construct a mental representation of the situation by
virtue of our perceptual-cognitive processes; and this
perceptual-cognitive activity has consequences. Because we
behave in accordance with our mental representation of the
situation, our behavior can alter the situation itself, shaping
the situation along the lines of our perception of it.
Social constructivism has been controversial, however, for a couple of reasons.
We'll come up against
this issue again, when the question concerns the accuracy of Social Perception.
At a somewhat higher level of analysis, these studies reinforce the distinction between the physical environment -- temperature, humidity, elevation, pollution, noise level, and the like -- which is the province of ecological (or environmental) psychology, and the social environment of other people -- their presence and their activity, their expectations, demands, and rewards -- which is the province of social psychology. But while much of classic experimental social psychology is focused on the objective social environment, as it would be described by a third person, the cognitive perspective on social interaction focuses on the subjective social environment, as it is cognitively constructed by the individual actors in a social situation.
From a cognitive point of view, it is the subjective environment -- whether physical or social -- that really determines the individual's behavior.
But at an
even higher level of analysis, these studies, and many more like
them, underscore the bidirectional relationship between the
person and the environment.
Beliefs shape, and sometimes create, reality.
This page last revised 02/05/2016.