Assuming adequate consolidation (whatever that is), an encoded memory, made rich and distinctive by elaborative and organizational processing, remains permanently in memory storage until needed, at which time it must be retrieved from storage, subject to interference, and put to use.
The encoding and storage of a memory trace does not guarantee its later retrieval. Memories can be retained in storage even if they can't be retrieved on some particular attempt.
These observations underscore a distinction, first articulated by Tulving and Pearlstone (1966), between the availability and the accessibility of a memory. In these instances, retrograde amnesia (RA) has prevented access to memories that evidently remain available in storage -- because these memories can be recovered later on.
But you don't have to be amnesic to show the difference between availability and accessibility.
Consider, for example, what happens if an experimenter gives multiple memory tests after a single study trial. Typically, the number of items recalled will remain constant (unless there has been some distraction promoting forgetting over the retention interval). But, as Tulving (1964) observed, the fate of individual items varies considerably.
More than 50 years before Tulving, Ballard (1913) drew a similar contrast between two opposing memory processes:
In the usual case, as in Ebbinghaus's famous forgetting curve, oblivescence, or inter-trial forgetting, exceeds reminiscence, or inter-trial recovery. That does not always occur, but the fact that any reminiscence occurs at all illustrates the distinction between the availability of memory in storage and its accessibility on any particular retrieval attempt.
Ballard's distinction between oblivescence and reminiscence is illustrated by an experiment by Waldfogel (1948), who asked his college students simply to write down everything they could remember from the first 8 years of their lives -- giving them an hour to do so. The distribution of early childhood memories was markedly skewed, with most memories from age 6 and later -- a clear example of what is called infantile and childhood amnesia (discussed in the lectures on Memory Development).
But that's not all. One week later, Waldfogel repeated the task with the same subjects. This time, he got a slight increase in recall, meaning that some events were remembered on Test 2 that had not been remembered on Test 1. In fact, only about half of the events remembered on Test 2 had been recorded on Test 1 as well -- a dramatic illustration that the accessibility of available memories can fluctuate from trial to trial.
In some cases, as in Tulving (1964), inter-trial forgetting and inter-trial recovery cancel each other out; but in other circumstances, inter-trial recovery can exceed inter-trial forgetting, resulting in a net increase in recall -- exactly the opposite of Ebbinghaus's forgetting curve. Matthew Erdelyi (1978) has named this phenomenon hypermnesia, to contrast with amnesia. Erdelyi initially claimed that hypermnesia occurred with pictorial as opposed to verbal stimuli, but it now seems that the important variables have to do with elaboration and (perhaps) organization: anything that enhances encoding will reduce inter-trial forgetting and foster inter-trial recovery -- facilitating access to information that has been available in memory all along.
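To make the arithmetic of oblivescence and reminiscence concrete, here is a minimal simulation (the parameters are entirely hypothetical, not fitted to any data set): each studied item remains available in storage, but on any given test it is either accessible or not, and items flip between those states from test to test. Whenever the recovery rate exceeds the forgetting rate, net recall rises across tests -- hypermnesia.

```python
# Minimal sketch of inter-trial forgetting vs. recovery (hypothetical rates).
import random

random.seed(1)

N_ITEMS, N_TESTS = 48, 4
P_INITIAL = 0.50   # probability an item is accessible on Test 1
P_RECOVER = 0.20   # inter-trial recovery: inaccessible -> accessible
P_FORGET  = 0.05   # inter-trial forgetting: accessible -> inaccessible

accessible = [random.random() < P_INITIAL for _ in range(N_ITEMS)]
for test in range(1, N_TESTS + 1):
    print(f"Test {test}: {sum(accessible)} of {N_ITEMS} items recalled")
    # Every item stays available; only its accessibility fluctuates.
    accessible = [
        (random.random() > P_FORGET) if a else (random.random() < P_RECOVER)
        for a in accessible
    ]
```

With recovery set higher than forgetting, recall climbs from test to test; reverse the two rates and the same code produces the familiar Ebbinghaus-style decline.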
The distinction between availability and accessibility can also be observed in the feeling of knowing -- the familiar experience where you cannot recall a name, place, or word, but predict that you will recognize it when you see or hear it. These predictions are often correct. The fact that a memory was inaccessible on a recall test, but accessible on a recognition test, shows again that the memory was available all the time. (The related tip-of-the-tongue (TOT) state, in which people who can't recall a word can accurately describe some of its features even though they cannot recall the word itself, is another interesting phenomenon of metamemory.)
The superiority of recognition to recall has been known at least since McDougall (1924), leading to the general view that recognition is an "easier" test of memory than recall, just as multiple-choice tests are usually easier than short essays. But it is unclear just why this is the case -- what makes recognition easier than recall? This was not a problem that researchers studied during the heyday of interference theory, whose experiments were dominated by paired-associate learning (or, if you will, cued recall). But it came to be of interest as a result of the cognitive revolution in memory.
In an early experiment by Tulving and Pearlstone (1966), subjects heard lists of 12, 24, or 48 words representing 1, 2, or 4 conceptual categories. One group might hear 12 words, all from the same category, while another group might hear 48 words, 12 from each of 4 categories. You get the idea. In any case, the relevant category name was announced before the items themselves, and the items were also blocked by category. The subjects were then divided into two groups, one given a test of free recall, the other a test of cued recall, with the category names serving as cues.
It is important to understand that the subjects in this experiment had been treated identically up to the moment of the first memory test -- there were no differences in encoding and storage conditions between the free-recall and category-cued-recall groups. Thus, the two groups were equivalent in terms of what was available in memory by virtue of encoding and storage processes. The differences in performance on the memory test were due to differences in retrieval conditions -- namely, that a cued recall test increases the accessibility of information stored in memory.
Note, too, that cued recall is not just "easier" than free recall, because the relationship between test performance and the number of items per category differs between the two testing conditions.
It was on the basis of these results that Tulving and Pearlstone (1966) drew their formal distinction between availability and accessibility. In their view, encoding and storage processes make information available in memory, while retrieval processes make information accessible in memory.
Any particular test of memory measures only that information which is accessible under the conditions prevailing at the time of retrieval.
Accessibility is clearly affected by encoding factors: elaboration and organization make information highly accessible; when encoding is impoverished, information is typically inaccessible.
But accessibility is also affected by retrieval factors, particularly the cues provided at the time of retrieval. In free recall, the cues are relatively impoverished, while in cued recall, the cues are somewhat richer.
Similar observations were made in an experiment on retroactive inhibition by Tulving and Psotka (1971). In their experiment, subjects studied lists of 24 words, 4 items from each of 6 conceptual categories, and were given 3 study trials per list. Different groups of subjects memorized 1 to 6 lists. At the end of each study trial, the subjects were given a free recall test, providing a measure of learning during the study phase. And at the end of the last list, subjects were asked to recall all items from all lists. This free recall test was followed by a cued recall test for the same items, in which subjects were presented with the category labels and asked to recall the associated list items. Initial learning was pretty good, with subjects typically recalling about 75% of list items. But the final free recall test showed retroactive interference: recall of items from a particular list decreased with the number of interpolated lists presented between the final study trial and the final test -- this, of course, is RI. But there was no evidence of RI on the cued recall test: the levels of cued recall were constant, regardless of the number of interpolated lists.
Thus, again, free recall (FR) and cued recall (CR) appear to differ qualitatively, not just quantitatively.
In the previous studies, the list items were accompanied by conceptual category labels at the time of study, but this isn't necessary to see the effects of retrieval cues on memory.
In an experiment by Watkins and Tulving (1975), subjects studied lists of paired associates where the association between elements was either semantic (e.g., bark-dog) or phonemic (e.g., worse-nurse). As is usually the case in paired-associate learning, the first element in each pair served as an explicit cue for the retrieval of the second element as a target.
Similar findings were obtained when recognition was added to the comparison. Tulving and Watkins (1975) asked subjects to study lists of 28 five-letter words, followed by tests that varied in the amount of cue information provided.
The result was that cued recall was, as usual, better than free recall: recall improved as the number of cues increased. And recognition was best of all, presumably because a recognition test provides a copy cue -- a cue that is actually a copy of a studied item.
Light (1972) also performed a comparison of the effects of different types of retrieval cues. Her subjects studied lists of words presented either alone or as part of meaningful sentences. Otherwise, no nominal cues were provided at the time of encoding. The result was that free recall yielded the worst performance, and recognition the best.
Thus, no matter how you slice it, Free Recall < Cued Recall < Recognition. One way to explain this outcome is in terms of a dual-process theory of retrieval, such as that offered by Anderson & Bower. In their theory, recall requires two stages -- generating candidate items, and then recognizing the candidates as having appeared on the study list -- while recognition requires only the second stage, because the test item itself does the work of generation.
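Here is a toy sketch of that generate-recognize idea (the word pools and candidate sets below are made up for illustration; this is not Anderson & Bower's actual model): recall fails whenever a studied item is never generated as a candidate, while a recognition test hands the item directly to the recognition stage.

```python
# Generate-recognize sketch: recall = generation + recognition;
# recognition tests skip the error-prone generation stage.

STUDIED = {"chair", "nurse", "apple", "tiger"}   # hypothetical study list

def recognize(item: str) -> bool:
    # Stage 2: familiarity check against the stored trace.
    return item in STUDIED

def recall(candidates: set) -> set:
    # Stage 1 (generate candidates) feeds Stage 2 (recognize them).
    return {c for c in candidates if recognize(c)}

# Free recall: generation is poorly guided, so some studied items
# are never even produced as candidates.
free_candidates = {"chair", "apple", "table"}            # misses nurse, tiger
# Cued recall: category/associate cues guide generation to more targets.
cued_candidates = {"chair", "apple", "nurse", "tiger", "table"}

print("Free recall:", recall(free_candidates))           # 2 of 4 recovered
print("Cued recall:", recall(cued_candidates))           # 4 of 4 recovered
print("Recognition of 'tiger':", recognize("tiger"))     # copy cue: True
```

The ordering Free Recall < Cued Recall < Recognition falls out directly: richer cues rescue the generation stage, and a copy cue bypasses it entirely.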
The Cue-Dependency Principle: The accessibility of a trace available in memory is a function of the richness and informativeness of the cues used to retrieve that trace from storage.
The more information in the cue, the more likely retrieval will be successful. Elaboration and organization enhance memory, apparently, by increasing the likelihood that the cue will contact relevant information stored in memory.
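The principle can be caricatured in a few lines of code. In this sketch, a trace and a cue are just feature sets, and accessibility grows with the amount of trace information contained in the cue; the features and the scoring rule are hypothetical illustrations, not a serious model.

```python
# Cue-dependency sketch: accessibility as the proportion of trace
# features contacted by the retrieval cue (all features hypothetical).

def accessibility(trace: set, cue: set) -> float:
    """Proportion of the trace's features contained in the cue."""
    return len(trace & cue) / len(trace)

trace = {"word:CHAIR", "category:furniture", "list:1", "voice:female"}

free_cue = {"list:1"}                                       # "recall the list"
cued_cue = {"list:1", "category:furniture"}                 # category label
copy_cue = {"list:1", "category:furniture", "word:CHAIR"}   # the item itself

for name, cue in [("free", free_cue), ("cued", cued_cue), ("copy", copy_cue)]:
    print(f"{name:>4} cue -> accessibility {accessibility(trace, cue):.2f}")
```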
The cue-dependency principle implies that retrieval always occurs (when it occurs) in response to environmental cues. But is there any truly spontaneous remembering, in which memories are retrieved even when there are no retrieval cues present in the environment?
Here we need to distinguish between at least three different sources of retrieval cues:
There's more to be said about recall and recognition, particularly about the subjective experience of remembering, but I'm deferring discussion of these topics until after the lectures on Implicit Memory -- which, I hope, will make that discussion more sensible.
We usually consider encoding, storage, and retrieval to be separate phases of memory processing, with two separate factors governing accessibility: processing at the time of encoding and cues at the time of retrieval. But in fact, it is difficult to separate encoding and retrieval -- even when they are separated by the longest of retention intervals.
The classic mnemonic devices enhance both encoding and retrieval processes. Consider a famous demonstration by Tulving and Thomson (1973) of the recognition failure of recallable words. In this paradigm, subjects studied target words (e.g., chair) paired with weak associative cues (e.g., glue), and were later given a recognition test for the targets.
The finding was that subjects missed many targets on this recognition test. But when they were presented with the original weak cues, such as glue, the subjects recalled many items that they had just failed to recognize.
The recognition failure of recallable words reverses the usual relation between cued recall and recognition. Usually, cued (and free) recall is worse than recognition. But in this case, cued recall is better than recognition. This was true even in an experiment involving 2-alternative forced choice recognition.
The recognition failure of recallable words underscores the interaction between encoding and retrieval processes. How an item is processed at the time of encoding determines not just whether it will be subject to elaborative and organizational processing. It also determines what cues will be effective at the time of retrieval.
Hence, the encoding specificity principle (ESP; Tulving & Thomson, 1973).
The Encoding-Specificity Principle: The accessibility of an event in memory is a function of the overlap between cue information processed at the time of encoding and cue information processed at the time of retrieval.
Put another way, cue information processed at the time of encoding determines what cues will be effective in gaining access to memory at the time of retrieval. All encoded memories remain permanently available in storage. Available memories are accessible to the extent that information supplied by the retrieval cue matches information encoded in the memory trace.
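A hedged sketch of the idea, using made-up feature sets: what matters is not how "rich" the cue is in the abstract, but how well it overlaps with the trace as encoded. Encoded in the context of the weak cue glue, the target chair is stored with its glue-related features, so the original weak cue can outperform a copy cue that contacts the wrong sense of the word -- recognition failure of a recallable word.

```python
# Encoding-specificity sketch: retrieval depends on the overlap between
# features processed at encoding and features in the cue at test.
# Feature sets are hypothetical illustrations.

def match(encoded: set, cue: set) -> float:
    """Jaccard overlap between encoded features and cue features."""
    return len(encoded & cue) / len(encoded | cue)

# Study: "glue - CHAIR" biases encoding toward a glue-related reading.
encoded = {"chair", "glue", "sticky", "repair"}

copy_cue = {"chair", "furniture", "sit", "table"}   # recognition test, wrong sense
weak_cue = {"glue", "sticky", "repair"}             # the original weak cue

print(f"recognition (copy cue) match: {match(encoded, copy_cue):.2f}")  # low
print(f"cued recall (weak cue) match: {match(encoded, weak_cue):.2f}")  # high
```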
Encoding specificity is related to another principle known as transfer-appropriate processing, or TAP (Morris, Bransford, & Franks, 1977):
The accessibility of an event in memory is a function of the overlap between the processes deployed at the time of encoding and those deployed at the time of retrieval.
In TAP, the focus is on the overlap in processes engaged at the time of encoding and retrieval, whereas the ESP focuses on the overlap in the cues being processed.
TAP was famously demonstrated by Morris, Bransford, and Franks (1977) in a variation on the LoP experiment. At study, subjects made either semantic judgments or rhyme (phonemic) judgments about each word; at test, they received either a standard recognition test or a rhyming recognition test. Semantic processing produced the better performance on the standard test, but phonemic processing produced the better performance on the rhyming test.
This result is not produced by an overlap in cues between study and test, because the cues actually changed, from hail to pail and from hail to sleet. Rather, the effect occurs because the processes -- phonemic or semantic -- remain the same.
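The TAP prediction is a crossover interaction, which the following sketch displays with purely hypothetical hit rates: a "deep" semantic study task wins on a standard recognition test, but a "shallow" phonemic study task wins on a rhyming test.

```python
# TAP sketch: performance depends on the match between the process engaged
# at study and the process demanded by the test, not on "depth" alone.
# All hit rates below are invented for illustration.

performance = {
    ("semantic", "standard"): 0.84,   # meaning-based study, meaning-based test
    ("semantic", "rhyming"):  0.33,
    ("phonemic", "standard"): 0.63,
    ("phonemic", "rhyming"):  0.49,   # "shallow" study wins on a rhyme test
}

for (study, test), hits in sorted(performance.items()):
    matched = (study == "semantic") == (test == "standard")
    tag = "process match" if matched else "process mismatch"
    print(f"study={study:8} test={test:8} hits={hits:.2f} ({tag})")
```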
A somewhat similar demonstration was produced by Tversky (1973), who asked subjects to study a list of words in anticipation of either a recall or a recognition test. Later, half the subjects in each group were surprised to receive the test of the other kind. Overall, recognition was better than recall. But again, there was a significant interaction: subjects performed better when they received the test they had been led to expect.
So, as with Tulving and Thomson (1973), recognition is not always superior to recall. Apparently, subjects' expectations about how they were to be tested affected their processing at the time of encoding (there are implications of this experiment for short-answer versus multiple-choice testing in academic contexts, but we need not go into these here!).
We'll return to TAP later, when we discuss the differences between explicit and implicit memory. But for now, we'll continue to discuss some ramifications of the encoding specificity principle.
Encoding specificity appears to underlie the phenomenon of state-dependent memory (SDM), where retrieval of a memory depends on the match between the organism's physiological state at the time of encoding, and its physiological state at the time of retrieval.
SDM, in turn, was first observed in an experiment on animal learning by Overton (1964).
In Overton's experiment, rats were trained to run a T-maze, turning right or left at the choice point in order to escape shock. Before learning trials, the rats were drugged with a high dose of barbiturate, and then half the rats were reinforced for turning left, the other half reinforced for turning right. Over 10 training sessions, the animals learned to respond perfectly. But when the rats were later tested without the drug, the learned response disappeared -- only to return when the drug was administered again.
Thus, it seemed that the rats' learning was state-dependent, in that correct performance depended on the congruence between the state in which the learning occurred and the state in which the learning was tested.
Because this experiment was conducted before the cognitive revolution took hold in psychology, and because Overton himself had behaviorist leanings, the phenomenon was initially called state-dependent learning -- and, in some quarters, drug-discrimination learning (because, ostensibly, the drug state served as a discriminative stimulus for the organism's response). But these days, when psychologists do not hesitate to talk of memory in both humans and animals, and when learning is defined as the acquisition of knowledge, the preferred label is state-dependent memory.
SDM has also been observed in humans. In the typical SDM design, words are presented for study, and then tested for free recall, while manipulating the subject's internal physiological state by administering psychotropic drugs (like barbiturate, amphetamines, marijuana, alcohol, nicotine, and even caffeine) that act directly on the central nervous system. The experimenter then varies the congruence of the drug state between encoding and retrieval.
Put another way, if we plot the probability of recall against encoding-retrieval congruence, recall is best when it occurs in the same state that was present at the time of encoding, as opposed to a different one. That's the basic SDM phenomenon.
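The canonical SDM experiment is thus a 2x2 design, crossing state at encoding with state at retrieval. The sketch below lays out such a design with invented recall proportions, chosen only to display the congruence pattern (the two same-state cells beat the two different-state cells).

```python
# State-dependent memory sketch: a 2x2 crossing of encoding state and
# retrieval state. Recall values are hypothetical, not real data.

recall = {
    ("drug",    "drug"):    0.62,
    ("drug",    "placebo"): 0.41,
    ("placebo", "drug"):    0.44,
    ("placebo", "placebo"): 0.65,
}

print("encode     retrieve   recall")
for (enc, ret), p in recall.items():
    tag = "congruent" if enc == ret else "incongruent"
    print(f"{enc:10} {ret:10} {p:.2f}  ({tag})")
```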
Here's an actual example, from a study by Swanson and Kinsbourne (1976) on the effects of Ritalin on learning and memory. Ritalin is an amphetamine-like stimulant which usually impairs learning. But it has well-known "paradoxical" effects on subjects with attention deficit-hyperactivity disorder (ADHD), improving their performance (apparently, their ability to pay attention) in many domains.
In their experiment, S&K asked hyperactive and control children to perform a variant on paired-associate learning called a zoo-location task, which paired 48 animal names with one of 4 familiar cities, as in elephant-Vancouver (the subjects were Canadian).
On Day 2, the hyperactive children who learned fastest were those whose drug states matched their states on Day 1: the congruence between encoding and retrieval states apparently enhanced memory. There were similar, though weaker, results for the normal subjects.
By now, SDM has been shown, to varying degrees, with a lot of centrally acting drugs -- not just anesthetics like barbiturate, but also anti-anxiety and antidepressant agents, narcotics, hallucinogens, and even nicotine (see the review by Eric Eich). The SDM effects of caffeine are dubious -- perhaps because, between coffee, Coke, and Pepsi, not to mention Mountain Dew, Red Bull, and other "energy drinks", the caffeine administered in the laboratory is like a drop in the ocean. There are no effects, apparently, of aspirin or lithium (a common treatment for bipolar disorder).
SDM is not limited to drug states.
Eich and Metcalfe (1989) induced happy or sad emotional states by having their subjects listen to music, and then had them study a list of words. In a variant on the LoP paradigm, the subjects read half the words from a list (e.g., vanilla); the remaining words were generated from cues (e.g., Milkshake flavors: Chocolate and _____). Typically, words that are generated by the subjects themselves are remembered better than those that are merely read -- a phenomenon called the generation effect, which is generally interpreted in terms of the elaborative processing induced by the task of generating the target words.
Anyway, on a later recall test, Eich and Metcalfe induced the same or a different mood in their subjects, again by having them listen to music. Recall was best when the mood at test matched the mood at study -- and this mood-dependent effect was especially pronounced for the generated words.
Drug states and emotional states are both aspects of internal context -- the subject's internal physiological state, or the subject's internal mental state. But we can get similar effects by manipulating external context.
For example, Abernethy (1940) noted that college examination scores were higher when the exam was given in the student's usual classroom, with the usual instructor serving as proctor.
Some studies employ a more radical manipulation of the environment at encoding and retrieval. For example, Godden and Baddeley (1975) found evidence of environment-dependent memory (EDM) in SCUBA divers who studied a list either on shore or 15 feet under water, and who received a free recall test either on shore or under water. Memory was best when study and test took place in the same environment.
So far as we know, EDM follows the same rules as drug state-dependent memory (see reviews by Smith, Glenberg, and Bjork, and papers by Fernandez and Glenberg).
State-dependency, emotion-dependency, and environment-dependency are all aspects of a general effect of context-dependent memory. Apparently, information about the context in which an event occurred is encoded as part of the memory trace.
The encoding-specificity principle (and the related principle of transfer-appropriate processing) illustrates the general point that encoding and retrieval factors work together to determine the accessibility of memory.
As noted in the lectures on Representation, the schema concept in psychology has its origins in the work of F.C. Bartlett and his research on memory for stories.
The schema concept was introduced into cognitive psychology by Bartlett (1932), who adapted it from earlier theoretical work of Sir Henry Head on bodily posture (Head & Holmes, 1911). Head noted that an organism receives a considerable amount of sensory information about the position and activity of its own body -- Sherrington (1904) had referred to these sensations as proprioception. This has to be integrated with some ongoing sense of what the body's posture is currently -- a sense that has to be continuously modified in light of incoming proprioceptive feedback. This evolving mental image is the body schema. Bartlett took this basic concept of the body schema and applied it to cognition. In his terms, a schema is the cognitive background against which perception is constructed, and memory is reconstructed, and is itself modified by those percepts and memories. A schema is an organized knowledge structure containing generic knowledge, beliefs, and expectations. It organizes perception and memory -- by which Bartlett means perceiving and remembering. As such, schemata provide the cognitive basis for the "effort after meaning" that is central to Bartlett's view of mental life.
Of course, a very similar concept of schema is central to Piaget's theory of cognitive development, as introduced in The Child's Conception of the World (1926). For Piaget, a schema is a cognitive background against which assimilation and accommodation take place.
Bartlett's schema concept was roundly criticized by fellow British psychologists (Oldfield & Zangwill, 1942), some of whom found Bartlett's use of the term incomprehensible. And in the United States, his work was largely ignored by the functional behaviorists who dominated American academic psychology. The idea of the schema was revived in experimental psychology by Neisser (1967), who made the concept central to his textbook on cognitive psychology, and in clinical psychology by Beck (1967), who made the concept central to his cognitive theory of depression and its treatment.
The relationship between Bartlett's and Piaget's use of the schema concept is problematic, and is a problem for the historians of psychology to resolve. Piaget published before Bartlett, in 1926, but Bartlett does not cite him -- his exposition is derived exclusively from Head. In somewhat the same way, Neisser and Beck do not acknowledge each other, even though they were writing their books at roughly the same time (the mid-1960s) and in the same place (the University of Pennsylvania, where Neisser was on sabbatical from Cornell in Martin Orne's Unit for Experimental Psychiatry). However, both Neisser and Beck were aware of Schachtel's paper which applied Piaget's theory to infantile and childhood amnesia, and Neisser cites Schachtel in his book.
There ensued, especially in the 1970s and 1980s, a full-scale "Bartlett revival" in the study of memory, as exemplified by the work of Gordon Bower and his students on story memory (for a review, see Brewer, 2000). Moreover, the schema concept was embraced by the emerging cognitive perspective in social psychology.
According to Taylor and Crocker (1981), schemata had a number of cognitive functions:
Bartlett argued that schemata determine what will be encoded in, and retrieved from, memory. For Bartlett, encoding favored schema-congruent information -- information that fit with the person's expectations, and that was easily assimilated into currently active structures. And he suggested that schema-incongruent information might be ignored, and thus not encoded at all -- or else distorted so as to be assimilated into prevailing schemata. Bartlett also argued that schemata provided the basis for retrieval (or, as he preferred to call it, reconstruction). Initially, he argued, the person remembers the event in very general terms, and then invokes a schema which guides the rest of the retrieval/reconstruction process.
When the schema concept was revived, it was naturally revived in the context of memory -- in particular, examining the relations between particular events and generic schemata, viewed as a kind of semantic knowledge, and their effect on subsequent episodic memory. The result was a bunch of conflicting findings: pretty much everyone found that schema-congruent events were well remembered, but the fate of other kinds of items was unclear. And, in fact, Bartlett's own evidence was very ambiguous on this point -- partly because he was unclear about what a schema was, and partly because of his aversion to quantitative analysis.
The situation was clarified by a series of papers by Hastie (1980, 1981), who began by noting that there were three different relationships that could obtain between general schematic knowledge and particular events: an event can be congruent with the schema, incongruent with it, or simply irrelevant to it.
Hastie and Kumar (1979) studied the schematic effects on memory in the context of person memory, a topic in social cognition that is concerned with memory for the attributes and behaviors of other people. In this context, knowledge of a person's general attributes comprises the schema for that person; and knowledge of his specific behaviors constitutes episodic memory.
Of course, it would be unusual to have equal numbers of schema-congruent and schema-incongruent events. Almost by definition, schema-incongruent events have to be relatively rare -- otherwise, you'd have a very different schema for that person. Precisely because they violate our expectations, schema-incongruent items should be relatively infrequent. Accordingly, in a subsequent experiment H&K constructed alternative lists of 12 behaviors, varying the proportions of congruent and incongruent items.
Hastie (1984) conducted a further experiment in which the trait ensemble was followed by a list of schema-congruent and schema-incongruent items. Recall testing yielded the schematic processing effect, as expected. However, in a second experiment Hastie asked subjects to perform a sentence-continuation task: after each item, they were supposed to continue it with an explanation of the event, an elaboration of the event, or a sequel to the event. On a later recall test, items (whether schema-congruent or schema-incongruent) in the explanation condition were recalled better than those in the elaboration or sequel conditions. So, it's not schema-incongruency per se that yields better memory: it's the explanatory activity that schema-incongruency instigates.
These experiments illustrate the schematic processing principle:
The Schematic Processing Principle: Memory for an event is a function of the relationship between that event and pre-existing schematic knowledge, expectations, and beliefs.
Schematic processing actually reflects two different processes affecting schema-congruent and schema-incongruent information.
Srull (1981) offered a somewhat different explanation from Hastie's for schema-dependency, within the framework of a generic associative-network model of memory. He proposed that nodes representing individual episodes are linked to a node representing the person, in the usual way. Then, further connections among nodes are produced by virtue of processing at the time of encoding -- such as explaining schema-incongruent items in light of the schema. As a result, nodes representing schema-incongruent items are associatively linked both to each other and to nodes representing schema-congruent items as well.
Testing recall, Srull obtained the usual schema-dependency effect. Schema-relevant items were recalled better than schema-irrelevant items, and schema-incongruent items were recalled better than schema-congruent items.
Then, Srull employed a sentence-verification procedure, not unlike that which had been used by Anderson & Hastie (1974), to examine priming effects on recognition memory. Srull compared response latencies to verify schema-congruent, schema-incongruent, and schema-irrelevant items, depending on the immediately preceding item. Compared to a baseline provided by schema-irrelevant items, schema-congruent items primed responses to schema-incongruent items, while schema-incongruent items primed both schema-congruent and schema-incongruent items; schema-irrelevant items didn't prime anything. These results are consistent with Srull's hypothesis that schema-incongruent items are linked to each other and to schema-congruent items, but that schema-congruent items are not directly linked to each other.
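Srull's proposal is easy to express as a toy associative network (the items and links below are hypothetical): incongruent episodes are linked to each other and to congruent episodes, congruent episodes are not directly linked to one another, and priming simply follows the links.

```python
# Toy associative network in the spirit of Srull's account (made-up items).
# Explanatory processing at encoding links incongruent episodes to each
# other and to congruent episodes; congruent episodes get no direct links.

links = {
    "incongruent_1": {"incongruent_2", "congruent_1", "congruent_2"},
    "incongruent_2": {"incongruent_1", "congruent_1", "congruent_2"},
    "congruent_1":   {"incongruent_1", "incongruent_2"},
    "congruent_2":   {"incongruent_1", "incongruent_2"},
    "irrelevant_1":  set(),   # stored under the person node, but isolated
}

def primes(prime_item: str, target: str) -> bool:
    """A prime speeds verification of any target it is directly linked to."""
    return target in links.get(prime_item, set())

print(primes("congruent_1", "incongruent_2"))   # True: congruent primes incongruent
print(primes("incongruent_1", "congruent_1"))   # True: incongruent primes congruent
print(primes("congruent_1", "congruent_2"))     # False: no congruent-congruent link
print(primes("irrelevant_1", "congruent_1"))    # False: irrelevant primes nothing
```

The four print statements reproduce the qualitative pattern of Srull's latency data described above.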
So far, the methods used to study memory, and the basic principles derived from those methods, wouldn't have surprised Ebbinghaus. If he had lived to 1980, he would have been impressed by the progress made beyond the Law of Repetition and the Principle of Time-Dependency, and he certainly would have been impressed by our advances in understanding the biological bases of memory, but he would have viewed this progress as natural, not as embodying anything like a paradigm shift. This is because most of the work so far has been based on a more or less sophisticated version of the library metaphor, with memory traces, representing events, being encoded, stored, and retrieved much like books on a library shelf.
Lying behind the library metaphor is a particular view of the memory trace, as something that has an existence independent of the person doing the remembering. The trace exists "in memory", and must be found in order for remembering to proceed.
But an entirely different approach to memory was introduced by Frederic C. Bartlett (1932), a pioneering British psychologist (British, Canadian, and Australian psychologists like to trace their heritage to Bartlett, much as American psychologists like to trace theirs to William James).
Some hint of the difference can be seen in the titles of Ebbinghaus's and Bartlett's books: Ebbinghaus wrote about memory as a thing (Uber das Gedachtnis), while Bartlett wrote about remembering as an activity (Remembering).
But in Bartlett's view, Ebbinghaus's procedures, and findings, were misleading, and did not represent memory as it operated in the real world. In the real world, Bartlett argued, memory is reconstructive, not reproductive. People don't retrieve memories of past experiences; rather, they reconstruct memories.
Bartlett began his treatise on Remembering: A Study in Experimental and Social Psychology with a critique of the verbal-learning paradigm invented by Ebbinghaus for Uber das Gedachtnis. In fact, he began it with a critique of 19th-century psychophysics, which he then extended to Ebbinghaus. For Bartlett, the whole enterprise was too sterile, because the stimulus materials, and what subjects were asked to do with them, were devoid of meaning -- effectively denying subjects the effort after meaning which he considered essential to understanding mental function. Ebbinghaus, following Fechner and the other 19th-century psychophysicists, invented the nonsense syllable precisely to maintain tight control of stimulus conditions; and the nonsense syllable, by its very nature, was intended to frustrate subjects' efforts after meaning, and force them to form merely rote associations between one meaningless CVC and another. The whole thing was wrongheaded. As Bartlett famously put it -- and every psychologist should keep this inscribed on a wallet-sized card:
In a foreshadowing of Martin Orne's critique of ecological validity, Bartlett argued that remembering was not adequately represented by the rote memorization of unrelated meaningless items -- what the psychologists of the Bartlett Revival disparaged as "grocery lists". Remembering was less like rote recitation and more like the telling of stories.
Accordingly, in his experiments Bartlett told his subjects unfamiliar stories -- a favorite was "The War of the Ghosts", a folktale collected from Native Americans in the Northwest by the pioneering anthropologist Franz Boas, which begins "One night two young men from Egulac...". He read the story out loud twice. Then, after a suitable retention interval, he asked his subjects to tell the story themselves. For this purpose, he employed two somewhat different methods: repeated reproduction, in which the same subject retold the story again and again, and serial reproduction, in which one subject's retelling became the story studied by the next subject.
Based on results such as these, Bartlett argued that remembering was reconstructive, not reproductive, in nature.
The Reconstruction Principle: Remembering is reconstructive, not reproductive: memory for an event is constructed at the time of retrieval, from fragmentary trace information plus knowledge, expectations, and beliefs.
The Reconstruction Principle qualifies the Library Metaphor so frequently invoked (including by myself) as a framework for understanding memory.
Bartlett published his book in 1932, and then pretty much left further research to others. He wasn't a great methodologist to begin with, and reading through the "experimental" portions of Remembering, you get the sense that, like James, he understood that research was crucial but his heart just wasn't in it. (Though he did produce a string of distinguished memory researchers, including Graham Reed, author of The Psychology of Anomalous Experience, which has extended treatments of such phenomena as deja vu.) And, as noted, his work was ignored for a long time: his British colleagues didn't understand it, and American psychologists were too infatuated with behaviorism. But beginning with Neisser's references to schema theory in Cognitive Psychology (1967), cognitive psychology, and especially memory research, underwent a sort of "Bartlett Revival", in which investigators began to study memory for things other than lists of nonsense syllables, words, and pictures, and in which theorists began to employ the schema concept without embarrassment.
There is an interesting story here. Bartlett himself rejected the verbal-learning procedure initiated by Ebbinghaus as too devoid of meaning. And in 1978, Neisser himself castigated those who employed the verbal-learning paradigm for sacrificing ecological validity to the appearance of methodological rigor: "If X is an interesting or socially significant aspect of memory, then psychologists have hardly ever studied X."
Both Bartlett and Neisser had a point, as anyone who has gotten through the lectures on Associationism and Interference Theory can attest. But it also turned out that they were wrong. Once researchers grasped what Bartlett had been up to, and once the Cognitive Revolution was thoroughly ingrained in psychologists' minds and behavior, it was pretty easy to come up with ways to explore reconstructive processes within the constraints of the verbal-learning paradigm.
Among the earliest such efforts was research on the semantic integration effect by Bransford and Franks (1971, 1972). Their subjects studied sentences that each expressed one or a few parts of a complex idea (e.g., "The ants ate the sweet jelly which was on the table"); on a later recognition test, subjects confidently "recognized" new sentences that combined those parts into the complete idea, and confidence increased with the number of idea units a test sentence contained.
This is reconstruction in almost pure form. The subjects have "constructed" a memory based on fragmentary material, woven into a sort of story.
An even more explicitly "Bartlettian" approach is exemplified by research on story memory -- that is, memory for narratives, instead of the usual word lists.
One line of research focused on point-of-view effects -- that is, the effect on memory of taking a particular point of view when reading a story. For example, Bower and his associates asked subjects to read a story about two boys playing hooky from school, which described their activities around the house. The subjects were asked to read the story from the perspective of either a home-buyer or a house-burglar. After recalling the story, they were asked to shift to the other perspective. In this condition, the subjects recalled new details that were important to the second perspective, but not the first.
A famous example of reconstructive memory was provided by Loftus and her colleagues, in a series of studies of eyewitness memory. These studies employed a laboratory model of eyewitness memory, and thus have all the appearances of ecological validity, but deep down they're just a variant on the verbal-learning paradigm. The stimulus materials are pictures or film clips, not words, and the stimuli are ordered into a narrative instead of randomly, but it's a long way from "The War of the Ghosts" (this is not a criticism of Loftus, but rather an illustration of the point I made earlier: reconstructive processes can be studied using standard verbal-learning paradigms). In one classic study, subjects viewed a slide sequence in which a car stopped at a traffic sign; a later question presupposed either the sign they had actually seen (say, a Stop sign) or the other one (a Yield sign); on a final test, many misled subjects remembered seeing the sign implied by the question.
Again, this is a good example of reconstruction in memory. The subjects saw what they saw, but they might not even have noticed the traffic sign. But the later interrogation assumed the existence of a Stop or Yield sign, and this knowledge was incorporated into their memories. Some of the memory came from trace information, some from the query itself -- and then it was all put together.
These experiments illustrate the post-event misinformation effect. Memory is not "pure", and leading questions can influence eyewitness reports. Apparently, misinformation gleaned from leading questions can be incorporated into an observer's memory for an event.
There are lots more experiments along these lines.
Of course, there are other alternatives. One, known as the source hypothesis, states that both memories are available in storage -- the original product of perception and the later product of leading questions exist side by side. The post-event misinformation effect occurs because the subject is confused as to the source of the memories. The source hypothesis is inspired by findings of source amnesia in amnesic patients and normal subjects, which show that subjects can remember information that they have acquired through learning, but forget the details of the learning experience.
A variant on the source hypothesis, and also consistent with the construction hypothesis, is the bias hypothesis proposed by McCloskey and Zaragoza (1985). They propose that the post-event misinformation effect occurs when memory for the original event is relatively poor. In their view, the post-event misinformation has no effect on the original memory. Rather, the misinformation creates a separate memory. In the absence of a strong original trace, retrieval and reconstruction are biased toward the trace representing the more recent misinformation.
To understand the bias hypothesis better, consider the following hypothetical classification of subjects, worked through in the sketch below:
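Here is one such classification, worked through as a small simulation (all the retention probabilities are invented): subjects who retain the original detail answer correctly; subjects who retain only the misinformation are biased toward it; subjects who retain neither simply guess. The misinformation effect then shows up only among subjects whose original trace is weak, just as McCloskey and Zaragoza propose.

```python
# Bias-hypothesis sketch (hypothetical proportions, not real data).
# Original detail: a Stop sign; post-event misinformation: a Yield sign.
import random

random.seed(2)
P_ORIGINAL, P_MISINFO = 0.4, 0.6   # retention probabilities (made up)
N = 10_000

correct = 0
for _ in range(N):
    has_original = random.random() < P_ORIGINAL
    has_misinfo  = random.random() < P_MISINFO
    if has_original:
        correct += 1                        # original trace wins when present
    elif has_misinfo:
        pass                                # reconstruction biased toward misinformation
    else:
        correct += random.random() < 0.5    # no trace at all: 2AFC guessing

print(f"Proportion correct, misled group:  {correct / N:.2f}")
# Control condition: no misinformation, so non-rememberers just guess.
control = P_ORIGINAL + (1 - P_ORIGINAL) * 0.5
print(f"Expected proportion correct, control: {control:.2f}")
```

Note that the misled group falls below the control group even though, on this hypothesis, the original memory itself is never altered -- the bias operates entirely at retrieval.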
So, at least for now, the post-event misinformation effect remains an example of reconstructive processes in memory. Memory does not involve just the retrieval of a trace from storage. Memory involves judgment and decision-making. Remembering is a judgmental process that takes account of all available information, not just trace information. The misinformation effect shows how, in remembering as in perceiving, the person "goes beyond the information given".
The Bartlett Revival in memory culminated in a large number of studies of what is now known as the Associative Memory Illusion -- also known as the "Deese-Roediger-McDermott Effect", or simply the "DRM Effect", because it was initially discovered by Deese (1959) and rediscovered and studied thoroughly by Roediger and McDermott (1995). The vast literature on the AMI is reviewed by Gallo (2000, 2006) -- who, because he was a student of Roediger's, calls it the "DRM effect".
The Associative Memory Illusion is literally an illusion.
An important corollary to the Reconstruction Principle is that memories are not properly viewed as traces of past events, encoded in the brain. Rather, they are better viewed as beliefs about the past.
The Reconstruction Principle also has important implications for the Library Metaphor of Memory, which has guided research and theory on memory ever since Ebbinghaus (if not Aristotle). It's convenient, and not completely incorrect, to think of memory as analogous to a book, which is written, cataloged and stored in a library, located, taken off the shelf, and read. But remembering is more like writing a book, based on fragmentary notes, than it is like reading one.
Remembering, like perceiving, is what Bartlett called effort after meaning. In both cases, the person is trying to make sense of his or her experience. Both perception and memory involve problem-solving activity -- the problem being to determine "What is going on now?" and "What was going on then?".
The analysis of memory yields evidence of seven broad principles of memory, which may be organized according to the stage of memory processing to which they apply:
There are other plausible candidates, and with all due respect to George Miller's famous paper, it's important not to make a fetish over the number 7.
As a reflection of the Bartlett Revival in the contemporary psychology of memory, Daniel Schacter has described "Seven Sins of Memory" that reflect the errors and biases entailed in reconstructive activity. These "sins" (Schacter himself admits that this might be too strong a word) can themselves be understood in terms of the "Seven Principles" discussed in these lectures.
In fact, there is one overarching principle that runs through all of these, and underlies all of our understanding of how memory works (there, I think I've used all three possible spatial metaphors):
Availability vs. Accessibility.
It should be understood, however, that all of these principles were discovered in the context of standard tests of recall and recognition -- tests which require subjects to consciously remember past events. That raises the next question:
Is there more to memory than can be consciously remembered?
In one sense, that question has already been answered by the distinction between availability and accessibility. There is clearly more information available in memory than can be accessed at any particular moment, via any particular type of test.
But up until now, we have always defined access in terms of conscious recollection. Which raises the real question:
Can traces of past experience be encoded and stored, and thus available in memory, yet inaccessible to conscious recollection -- but nonetheless still capable of influencing our ongoing experience, thought, and action?
The answer, it turns out, is "yes".