Note: the material on the neural underpinnings of working memory and the hippocampus has not been seriously updated since 1999. Much has happened since then, but my course tended to focus on the psychology, not the neuroscience, of memory, so it hasn't been updated all that much.
Psychology is the science of mental life, and its first task is to understand the mental structures and processes that underlie human experience, thought, and action. This includes the mental structures underlying learning and memory.
The principal methods of psychology are behavioral in nature -- basically, they're all variants on self-report and response latency. The psychologist creates an experimental situation to which the subject must make some behavioral response, and then infers the nature of the underlying mental structures and processes from the subject's observed behavior.
The fact that psychology relies on behavioral methodology does not mean that psychology is a science of behavior. This was the definition of psychology put forth by John B. Watson, and carried forward by behaviorists from Watson to B.F. Skinner (and beyond). But, as Noam Chomsky put it, somewhere, "Psychology is no more the science of behavior than physics is the science of meter-reading". Behavior is our window onto the mind, just as meters are the physicist's window onto the physical world. Psychology has little interest in behavior, per se, except as it tells us something about the mind.
Actually, that's not quite true. Psychology is a science of mental life, but it does offer an explanation of behavior. A psychological explanation of behavior is always in terms of the individual's mental states -- i.e., cognitive, emotional, and motivational states of thought, belief, feeling, and desire. The reason psychology is interested in mental life, then, is that psychologists explain behavior in terms of mental states.
In any event, behavioral experiments have yielded a number of principles governing learning and memory.
For learning, we know that:
For memory, we know:
Psychology could stop here and be a pretty good science, dotting the occasional i and crossing the occasional t, relying on empirical observations made under controlled circumstances to infer a set of theoretical principles which predict future observations, and developing theories expressed formally in mathematical and computational terms.
But some, many, psychologists want to go beyond this point, to uncover the neural structures and processes underlying mental life. This is a perfectly reasonable desire. The brain is the physical basis of mind, and what goes on in the mind must relate to what is going on in the brain and the rest of the nervous system. Arguably, a complete theory of the mind would also have something to say about "how the brain does it". But, as someone once said, physiology is a tool for psychology, not an obligation. It's not necessary to understand how the brain works in order to understand the principles of mental life revealed by experiments performed at the psychological level of analysis.
Speaking of which, this is a good time to remind ourselves that, so far as behavioral science is concerned, there are a number of different levels at which we can explain the behavior of the individual.
In these lectures, we'll focus on the links between psychology and biology.
Even within the biological domain, there are several levels of explanation, corresponding to the hierarchical organization of biological structures. So, the biology of learning and memory can be addressed at a number of different levels:
For much of the 20th century, interest in the biological substrates of learning and memory was confined to what Hebb (1955), following Skinner (1938), called the conceptual nervous system (sometimes abbreviated cNS, as opposed to the real CNS -- the central nervous system). Everybody understood that learning and memory had something to do with neurons, but theory didn't really go much farther than that. Still, neurobiological work on learning and memory was not entirely neglected. UCB's Mark Rosenzweig (2007), himself a pioneer in this area, wrote a summary of this early work, from which this section draws heavily.
The signal event in the 19th century, of course, was the publication of Theodule Ribot's Diseases of Memory (1881), part of a trilogy of works, which attempted to induce general principles of memory from clinical cases of brain damage or brain disease (the other books were devoted to Diseases of the Will and Diseases of the Personality; he also wrote an early treatise on the Psychology of Attention). In his book, Ribot looked forward to a future time, when brain-damaged patients could be studied with experimental methods, not just clinical observation. Then came Ebbinghaus (1885), who developed the experimental methods, laying the foundation for the neuropsychological study of memory, as it began to develop in the 1950s with the case of H.M.
The next important event was the announcement by Muller and Pilzecker (1900) of their perseveration-consolidation hypothesis, which argued that memory traces are stabilized and made permanent by neural activity which continues for some period of time after the stimulus event has terminated. Consolidation failure, in their view, accounted for the temporal dynamics of retrograde amnesia, as captured by Ribot's Law. Employing Ebbinghaus's nonsense syllables and the paired-associate technique recently invented by Mary Whiton Calkins, they performed some 40 experiments. M&P's monograph has never been translated into English, but Lechner, Squire, & Byrne (1999) provided a comprehensive summary, from which the following is drawn. In a typical experiment (a variant on cued recall), subjects studied a list composed of nonsense syllables, A-b-C-d-E-f; then they were presented with the cue term A, and asked to produce the paired associate b, etc. In this way, their new experimental paradigm more closely paralleled the Stimulus-Response paradigm then beginning to emerge in the study of associative learning.
Muller and Pilzecker didn't do any neurophysiological work themselves, but their consolidation hypothesis raised the question of what a memory trace looked like at the neural level of analysis. This set off what Karl Lashley later called "the search for the engram", or the neural representation of the memory trace.
The search for the cellular basis of memory began with Ramon y Cajal's (1888) discovery that the nervous system, like the rest of the body, is made up of discrete cells, which Waldeyer-Hartz (1891) named neurons. For this discovery, Cajal shared the 1906 Nobel Prize for Physiology or Medicine with Camillo Golgi, who developed the staining technique which Cajal had used. Cajal also gets credit for the neuron doctrine -- anticipated by Alexander Bain (1872), but formally pronounced by Wilhelm von Waldeyer-Hartz (1891) -- that nerves are not continuous structures, but are separated by gaps -- which Sherrington subsequently named synapses (Foster & Sherrington, 1897). As early as 1894, Cajal had proposed that learning somehow modified the junctions between neurons.
These are the elements of the neuron doctrine, as originally set out by Waldeyer-Hartz, and subsequently elaborated by others (for details, see Finger, 1994).
The 20th Century
And there things stood for a half-century. In 1955, Hans-Lukas Teuber (he who gave us the vocabulary of single and double dissociations, initially trained as a social psychologist, and taught a pioneering cognitive neuropsychology course at MIT) wrote that "the absence of any convincing physiological correlate of learning is the greatest gap in physiological psychology" (p. 267). But beginning in the 1960s, research in this general area took off rapidly.
Here are the major milestones (for details, see Rosenzweig, 2007):
One of the earliest attempts to relate learning and memory to biological processes at the cellular level was by James V. McConnell (1962), who succeeded in demonstrating classical conditioning with planaria, flatworms with a very simple nervous system. Flatworms also have the capacity to regenerate lost body parts, so in one of his experiments McConnell conditioned flatworms to respond to a bright light paired with electric shock. After acquisition, McConnell ground up the worms, and fed them to other planaria. When these new worms were put through the same conditioning procedure, they acquired the conditioned response faster than worms in a control condition. McConnell suggested that the conditioned response was encoded in messenger RNA, and so passed from the body of one flatworm to the next.
McConnell's research proved difficult to replicate (it didn't help that he published most of this research in his own journal, the Journal of Biological Psychology, which, while refereed and reputable, was bound with a humor journal titled the Worm Runner's Digest). With the discovery of LTP, the biochemistry of memory has now gone in other directions, but Michael Levin and his associates (2013) recently reported that flatworms could regenerate memories as well as body parts. These investigators exposed flatworms to food in a distinctive environment -- a petri dish with a textured surface. After allowing the worms to become familiar with their environment, they then decapitated the worms, waited for two weeks for their heads (and brains, such as they've got) to grow back, and then put them back in the familiar petri dish. Planaria who had previously been familiarized with the textured dish began to feed more quickly than control worms who had never been exposed to it. This suggested, at the very least, that memories can be stored in neural locations outside the brain.
For more on this history, see Chapters 23 and 24 of Stanley Finger's Origins of Neuroscience: A History of Explorations into Brain Function (1994).
At the cellular level of analysis, learning must be represented by changes in neural connections. Through experience, some neurons become disposed to fire together: as D.O. Hebb (1949) put it, "neurons that fire together wire together". Or, as James had speculated half a century earlier, "When two elementary brain processes have been active together or in immediate succession, one of them, on recurring, tends to propagate its excitement into the other". And, as Eric Kandel and his associates found in his Nobel-prizewinning research, those neural changes are preserved more or less permanently through a process known as long-term potentiation (LTP).
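Hebb's principle can be expressed as a simple learning rule: the strength of a connection grows in proportion to the joint activity of the two neurons it links. The following is a minimal illustrative sketch (not drawn from the lectures; the function name, learning rate, and toy activity patterns are all invented for illustration):

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.1):
    """One Hebbian learning step: strengthen the connection between
    every pair of units in proportion to their joint activity
    (delta_w = eta * post * pre, as an outer product)."""
    return w + eta * np.outer(post, pre)

# Toy "experience": units 0 and 1 repeatedly fire together; unit 2 stays silent.
pre = np.array([1.0, 1.0, 0.0])
post = np.array([1.0, 1.0, 0.0])

w = np.zeros((3, 3))
for _ in range(10):  # repeated co-activation
    w = hebbian_update(w, pre, post)

print(w.round(1))
# Connections among the co-active units (0 and 1) have strengthened;
# connections involving the silent unit 2 remain at zero.
```

Units that fire together end up wired together; units that never participate in the joint activity acquire no connection at all, which is the sense in which James's "propagation of excitement" is captured by the rule.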
In contemporary neuroscience, research on the molecular and cellular basis of learning and memory focuses on the synapse, which mediates the connection between one neuron and another. Logically, in order for learning to occur, the connection between two neurons, or more likely two assemblies of neurons, has to be modifiable. Neural plasticity, or the ability of neural function to change with experience, must be possible -- or learning couldn't occur at all.
Consider a simple laboratory model of learning and memory, such as classical conditioning. It seems that the neural representation of the stimulus -- for simplicity, think of it as a single afferent neuron, A -- has to acquire the ability to increase the likelihood of firing in the neural representation of the response -- again, for simplicity, think of this as a single efferent neuron, B.
Eric Kandel and his colleagues were able to identify the mechanisms of neural plasticity in research on the sea mollusk Aplysia, which has only about 20,000 neurons in its entire nervous system (compared to 85 billion in the human nervous system). For his work, Kandel received the Nobel Prize in Physiology or Medicine for 2000.
These mechanisms for neural plasticity come in two broad forms:
You get the idea.
With respect to higher levels of neural organization, most of this research has involved a search for localization of learning and memory.
Here, the question is whether particular memories are located in particular places in the brain. Perhaps there is a large brain structure, like Brodmann's area 12 (to take a Brodmann area at random), which is invariably activated whenever a memory is encoded and retrieved, and which might, plausibly, be identified as the mental storehouse of memories. Alternatively, perhaps there is a single neuron, or more likely a cluster of adjacent neurons, which are invariably activated whenever a particular event is remembered, and which might, plausibly, be identified as the neural representation of that memory.
The question of localization of content is epitomized by Karl Lashley (1890-1958) and his "Search for the Engram". Lashley taught rats to run a maze until they had reached a criterion for learning. After ablating various portions of the cerebral cortex, he retested them in the maze. The idea was that, across different sites of damage, he could triangulate on a particular area of the cortex that held the rat's memory for the maze. In this, he failed utterly. Performance varied as a function of the amount of cerebral cortex destroyed, but was unaffected by the particular site of the lesion. On the basis of these results, Lashley formulated a Law of Mass Action:
[T]he maze habit, when formed, is not localized in any single area of the cerebrum.... [I]ts performance is somehow conditioned by the quantity of tissue which is intact.
Now, there are problems with Lashley's Law. The maze-learning task is complex, and performance on it could be mediated by many different cortical structures. Moreover, Lashley focused exclusively on cerebral cortex, and ignored the potential role of subcortical structures. Still, Lashley "laid down the law" that governed the conventional wisdom about the neural representation of particular pieces of knowledge. This conventional wisdom remained unchallenged until recently -- a point to which I will return later.
Still, the failure of Lashley's program of research to localize particular bits of knowledge in the brain led D.O. Hebb (1904-1985) and others to suggest models of neural networks that would implement the Law of Mass Action. Of particular significance was Hebb's notion that information was represented by cell assemblies distributed widely over the cortex. This wide distribution created a redundancy which survives localized lesions. Hebb's ideas foreshadowed the connectionist approaches to knowledge representation popular since Rumelhart and McClelland introduced the parallel distributed processing model of cognition.
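The redundancy of distributed cell assemblies can be demonstrated with a toy simulation in the spirit of Hebb (and of later connectionist models): store a memory as a pattern of activity across many pairwise Hebbian connections, then "lesion" a large random fraction of those connections and show that the pattern can still be completed from a partial cue. This is an invented illustration, not a model from the lectures; the network size and lesion fraction are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A memory, represented as a pattern of activity (+1/-1) over 50 units.
n = 50
pattern = rng.choice([-1.0, 1.0], size=n)

# Hebbian storage: each pairwise connection records the correlation
# between its two units, so the memory is spread over ~n^2 connections.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

def recall(weights, cue):
    """One step of pattern completion from a (possibly partial) cue."""
    return np.sign(weights @ cue)

# Partial cue: only the first half of the pattern, the rest silent.
cue = pattern.copy()
cue[n // 2:] = 0.0

# "Lesion" the network: destroy 30% of the connections at random.
lesion = rng.random(W.shape) < 0.30
W_lesioned = W * ~lesion

intact = np.array_equal(recall(W, cue), pattern)
damaged = np.array_equal(recall(W_lesioned, cue), pattern)
print(intact, damaged)
```

Because every surviving connection "votes" in the same direction, recall from the partial cue typically succeeds even after the lesion: no single connection, or localized cluster of connections, is indispensable. That is the logic by which widely distributed storage reproduces something like Lashley's mass-action result.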
But if knowledge is represented by assemblies of cells distributed widely over the cortex, what binds those cell assemblies together?
The question here is whether memory processes, as opposed to the contents of specific memories, are localized -- that is, served by particular nervous-system structures. For a long time, the answer was no: despite evidence of functional specialization in the sensory and motor domains, and in language (e.g., Broca's and Wernicke's aphasias, beginning with Broca's patient "Tan"), memory itself seemed not to be localized. That view changed with Patient H.M.
The question of localization of function has been the focus of most work in cognitive neuropsychology and cognitive neuroscience related to memory, so that's where we'll focus our attention.
The biology of memory pretty much starts with Patient H.M. As is well known, H.M. received surgical resection of the medial portion of his temporal lobes, including the hippocampus, as a radical and desperate treatment for intractable epilepsy. The treatment worked, in that the severity of his seizures was greatly reduced; but it left him with a dense, permanent, anterograde amnesia (AA). This outcome indicated that the hippocampus and surrounding structures, whose function had been heretofore unknown, played a critical role in memory. There were other cases of resection of the temporal lobes (sparing the primary auditory cortex, naturally) which also resulted in a dense AA.
The conclusions to be drawn from H.M. and similar cases were clear from the start.
Brenda Milner, the neuropsychologist who did the original research with H.M., interpreted these results as follows:
But before we can answer that question, we need to understand that the medial temporal lobe (MTL) is itself a large and complex region, containing many structures including, but not limited to, the temporal lobe itself, the hippocampus, and the amygdala (don't forget the amygdala!).
Viewing this complex from the underside:
It's not at all clear which structures are actually responsible for memory, because all human cases of the amnesic syndrome involve damage to the hippocampus and amygdala as well as the temporal lobes.
Still, attention quickly focused on the hippocampus and the amygdala, subcortical structures embedded in the posterior portion of the medial temporal lobes. There appeared to be no memory impairment unless the brain damage included this area.
Zola-Morgan, Squire, and Amaral (1986) studied a new patient, known as R.B., whose amnesia was caused by a global ischemia, or restriction of the blood supply, and thus the flow of oxygen, to the brain. Post-mortem examination showed a bilateral lesion confined to the hippocampus, and for that matter a specific region of the hippocampus known as CA1. But R.B. was not as amnesic as H.M., suggesting that severe amnesia required damage to other sites as well.
At that point, before the advent of modern brain-imaging techniques such as PET and MRI, the general view was that the specific locus of memory was not decidable based on human evidence, which relies on accidental brain damage that can't be rigorously controlled. Investigators needed a way to produce discrete lesions to specific areas, to find out precisely which areas were responsible for memory. This can be done with lesions in nonhuman animals, but rats and monkeys can't talk to us. Thus, it was first necessary to develop an animal model for episodic memory -- memory for discrete episodes of experience.
Interest quickly settled on a paradigm known as delayed non-matching to sample. In this procedure, we present a single object to the animal. After a delay, we give the animal a choice between the old object and a new one, and the animal must choose the new object (that is, it must not match the original sample) in order to get a reward. Successful performance requires that the animal recognize the old object as familiar, in order to reject it in favor of the new object. It is known that performance on this task declines markedly as the retention interval increases.
Mishkin (1986) applied this animal model in a series of studies with monkeys in which he created specific patterns of brain damage, resulting in different levels of amnesia.
The H+A+ lesion in monkeys produces an excellent animal model of human anterograde amnesia, as indicated by the comparison of monkeys and humans on the identical delayed non-matching-to-sample task. As in the human case, memory can be manipulated experimentally by increasing the retention interval, increasing cognitive load, or creating a distraction task. The resulting memory deficit is generalized, not limited to a single modality. And it is enduring -- we get the same pattern of performance when monkeys are retested after 1-2 years. The H+A+ lesion even produces some RA for memories formed immediately prior to the lesion. And, most important, the memory deficit is selective: it spares short-term memory and memory for skills.
Based on results such as these, Squire and Zola-Morgan (1991) delineated a medial temporal lobe memory system, involving the hippocampus and surrounding subcortical structures. The MTL is not involved in perception, nor short-term retention. But long-term memory requires that perceptual representations in neocortex make contact with intact structures in the MTL. Squire and Zola-Morgan speculated that the MTL serves a binding function, connecting disparate features of an event that are themselves processed by separate cortical sites. The MTL may also perform an indexing function, allowing an organism to retrieve a whole memory from partial cues. But it's not just the binding of individual features together. If that were the case, MTL lesions wouldn't affect recognition memory, where all features of the original event are represented in what Tulving called "copy cues". Rather, the MTL seems to perform a specific function of binding of the features of an event to its episodic context.
What Was It Like To Be HM?
The amygdala is damaged in most cases of human amnesia, but it appears to make little contribution to the patients' memory deficits. So what does it do?
The answer comes from studies of fear conditioning in animals. In a typical fear conditioning experiment, a conditioned stimulus (CS) such as a tone would be paired with an unconditioned stimulus (US) such as foot-shock. Over just a few trials, animals will show a conditioned fear response (CR) to the tone: freezing, piloerection, elevated blood pressure, and increased heart rate.
Studies by LeDoux and others have explored the effects of brain lesions on conditioned fear.
But there's more to it than that, because apparently amygdala activity adds emotional valence to a memory (at least, the negative valence that comes with fear), and this emotional valence serves to make memories more distinctive. This idea may help explain why, although A lesions alone have no effect on memory, H+A+ lesions are especially deleterious.
Larry Cahill and James McGaugh have explored the effects of the amygdala on memory in both animal and human experiments.
Again, the general idea is that the amygdala is not directly involved in memory. But it is critical for generating the high levels of emotional involvement that strengthens memory.
Other researchers have focused on the frontal lobes. It has long been known that the frontal lobe contains the primary motor cortex, as well as important premotor areas. But it has become clear that the prefrontal cortex (PFC), which has an extensive network of cortical and subcortical connections, and is smaller in nonhuman animals than in humans, plays an important role in executive functions. And, of course, executive functions are critical for elaborative and organizational processing.
Note, first, that the prefrontal cortex is not a single, monolithic piece of cortical tissue. Instead, it can be divided up into at least three components.
Research by Edward Smith and John Jonides has identified two different forms of working memory:
Smith and Jonides propose that the PFC is the biological substrate of working memory, but they infer from this evidence that there are many different working memories, each with a somewhat different localization.
One more brain structure apparently critical for memory is the fusiform gyrus, located on the ventral surface of the temporal lobe, near where it meets the occipital lobe. Patients with damage to this area suffer a deficit in recognizing familiar faces. They can perceive the faces perfectly well, and can describe their physical features. They just cannot connect the face to a name. At the same time, these patients seem to have no problem recognizing other kinds of objects, such as houses. This specific deficit has suggested to some investigators (e.g., Nancy Kanwisher) that this region is specific to face memory -- so specific, that they have renamed it the fusiform face area (FFA).
It's a good idea, and it makes evolutionary sense that there be a brain module dedicated to the face, that most social of stimuli. But there are some problems with the proposal, most of which were brought to light by Isabel Gauthier and Michael Tarr. They note that recognition of faces takes place at a subordinate level of categorization: the patient must recognize a particular face as his wife's, or Ronald Reagan's, or whatever. By contrast, recognition of objects takes place at the basic level of categorization: all the patient is required to say is that an object is a house, or a car, or whatever. The proper control, they imply, would be to ask patients to recognize a specific house, like the White House or their own house.
Along these lines, there are case studies of two prosopagnosic sheep farmers (what's the chance of that?). In one, the farmer couldn't recognize faces, but could recognize his own sheep. In the other, the farmer couldn't recognize either type of object. Go figure (e.g., McNeil & Warrington, 1993).
Of course, level of categorization depends on expertise. When I see a bird, I just see a bird. But when my wife sees a bird, she sees a double-crested red-shinned grosbeak, or some such thing. So Gauthier and Tarr proposed a different function for the fusiform area: it mediates object recognition at subordinate levels of categorization. And a subject's preferred level of categorization depends on his or her level of expertise in a particular domain. Face recognition occurs at a subordinate level of categorization, because we're recognizing particular faces as belonging to a particular person. And we've all had a lot of experience recognizing faces, so it's not surprising that face recognition involves the fusiform gyrus. If we're novices in a domain, recognition occurs only at the basic level. But as we acquire expertise, recognition occurs at the subordinate level, and it, too, will involve the fusiform gyrus.
To test this hypothesis, Gauthier and Tarr trained subjects to categorize unfamiliar objects like snowflakes, and novel objects like "greebles". As they acquired expertise, they showed reductions in response latencies, sensitivity under speeded task requirements, and the like. But most important, as they acquired expertise in snowflake- or greeble recognition, they also came to activate the fusiform area during task performance. Accordingly, Gauthier and Tarr suggested that the proper label for the fusiform gyrus was the flexible fusiform area (not coincidentally, also abbreviated FFA). They conclude that the FFA is not specialized for faces, but rather is specialized for the recognition of objects, including faces, at subordinate levels of categorization.
I tell more of the story in my Social Cognition course, in the lectures on Social-Cognitive Neuroscience.
As with certain other areas of psychology, like perception, we're now in a position where we know enough about how memory works at the psychological level that we can now sensibly start using brain-imaging and other neuroscientific techniques to find out how the brain does it.
This is a great advance in psychology (and in neuroscience), but it comes with a danger. Some enthusiasts have suggested that neuroscientific evidence will enable us to test psychological theories of mental structure and process. The idea is that, in some way, knowing how the brain works will tell us how the mind works. Given the axiom that the mind is what the brain does, this makes sense -- until you think about it, at which point you realize that the truth is actually the reverse: understanding mental structure and function at the psychological level of analysis is critical for neuroscientific understanding of brain function.
Consider just two examples.
In the controversy over the fusiform area, Gauthier and Tarr drew on psychological theories of categorization, which indicated the correct interpretation of fusiform function. The earliest studies of prosopagnosia, which seemed to show that prosopagnosics failed to recognize faces but could recognize other objects OK, confounded face recognition with level of categorization, and failed to take account of differences in expertise across domains. When faces and non-faces were presented to subjects who were equally expert in both domains, face-specificity disappeared and the true function of the fusiform cortex was apparent.
A similar story can be told about the hippocampus. It is sometimes claimed that studies of amnesic patients revealed a basic distinction between explicit and implicit memory -- an example of data about brain function contributing to psychological theory. But in fact, as Schacter (1987) has shown, the explicit-implicit distinction was already apparent long before H.M. had his surgery. And besides, so far as H.M. is concerned, it wasn't data about neural function that led to the distinction. Rather, it was behavioral data about how H.M., and others like him, performed on psychological tasks. It didn't matter where H.M.'s lesion was. All that mattered was that patients who couldn't recall or recognize things showed normal performance on priming tasks.
And, in fact, all we really know from patients like H.M., and from brain-imaging studies of the hippocampus, is that the hippocampus and other structures in the medial temporal lobe are critical for memory. Far from telling us how memory is structured, interpretation of hippocampal function has followed theoretical developments in memory research:
And the literature on the hippocampus is entirely representative of other research in cognitive neuroscience. Psychology reveals functions, which are then assigned to brain parts. If the psychology is wrong, the neuroscience will also be wrong.
Or, as I like to put it (Kihlstrom, 2010):
This page last modified 10/25/2014.