The Origins of Consciousness
Consciousness is something we have, but how do we get it? And does any other species have it? Could non-organic machines have it?
These questions are, essentially, questions
concerning the development of consciousness, and psychology
offers two, perhaps three, principal views of development:
In the 17th century, the question of animal consciousness was foreclosed by Descartes' strict dualistic separation of mind and body, which entailed a strict separation between humans (who have minds and free will) and animals (which have only bodies, and operate as reflex machines). Descartes, following Catholic doctrine (he was aware of what had happened to "heretics" such as Giordano Bruno and Galileo), identified humans as the highest stage of development, except of course for God and the angels. The doctrine of human free will, in turn, legitimized such concepts as sin and crime -- actions for which individuals can be held accountable. But in Descartes' view, animals had nothing like consciousness or free will. They operated solely by reflex -- a word that Descartes coined to refer to his theory that energy from the stimulus was reflected back into the environment in the form of behavior. In any event, Descartes held that "lower" animals simply did not have the sorts of conscious experiences that were crucial to his "Cogito" insight. For example, he asserted that they did not feel pain -- the yelp of a dog being beaten is simply a reflex.
In some respects, the Cartesian viewpoint is exemplified by Pierre Flourens (1794-1867), a French neuroanatomist, who characterized the decorticate pigeon as a reflex machine. Flourens agreed with the phrenologists Gall and Spurzheim that the brain was the organ of the mind, but he argued against their tendencies toward radical localizationism. Rather, he sided with those who argued instead for cortical equipotentiality.
While the phrenologists had attempted to relate mind to body by examining the lumps and depressions of the skull, Flourens pioneered the method of surgical ablation -- destroying parts of the brain, and then observing the behavioral consequences.
With his ablation method, Flourens identified the medulla oblongata as a motor center, and established that the cerebellum played an important role in maintaining stability and coordinating motor activities. When it came to the cortex, he agreed that some "lower" sensory and motor functions were differentiated and localized. At the same time, he argued that the "higher" functions of perception, volition, and intellect were distributed throughout the cerebral cortex. For Flourens and other proponents of equipotentiality, the brain -- the cerebral cortex, anyway -- was the organ of a unified mind.
In his famous experiments on
reflexes in the decorticate pigeon, for example, Flourens
(1824, 1842) found that a number of "reflex" behaviors were
preserved:
The debate between localization and equipotentiality continued to rage throughout the rest of the 19th century, and into the 20th. Research by Broca and Wernicke identified areas of the cerebral cortex that were crucial for language and speech, but outside of the sensory and motor projection areas, the cerebral cortex was largely held to consist of undifferentiated "association cortex". In 1929, Karl Lashley announced his Law of Mass Action, based on ablation experiments with rats running mazes -- experiments not all that different from those pioneered by Flourens a century earlier.
The Law of Mass Action was the triumph of the equipotentialists. More recently, however, modern behavioral and cognitive neuroscience has embraced a doctrine of modularity, which holds that various mental and behavioral functions are performed by specialized cognitive modules or systems, which in turn are associated with dedicated brain modules or systems. Studies of implicit memory in amnesia and of implicit perception in blindsight, for example, have been interpreted as suggesting that there are separate modules for conscious and unconscious memory, perception, and the like. It has even been suggested (by D.L. Schacter, who has since revised his thinking) that conscious awareness in general is mediated by a "conscious awareness system" (CAS), which takes inputs from modules responsible for perception, memory, and other functions. If the outputs of these processing modules reach the CAS, we are aware of them; if they do not, we perform these functions unconsciously.
Descartes had argued for a strict separation of man from "lower" animals. But Darwin's On the Origin of Species by Means of Natural Selection (1859) changed all that, by arguing convincingly for a continuity between humans and other, nonhuman, animals. In Darwin's theory of evolution by natural selection, adaptations that confer reproductive advantages are passed down to one's offspring, eventuating in the creation of new species. Thus, the theory held, different species are descended from common ancestors.
Darwin's doctrine of evolution was
spelled out most clearly with respect to the morphological
similarities and differences between species -- physical
traits. But at the same time, the theory implied that
evolution applied to mental similarities and
differences as well.
By the same token, even rabid Darwinists seemed to acknowledge the possibility of discontinuity. You will recall, from the lectures on Mind and Body, the assertion of epiphenomenalism by T.H. Huxley, Darwin's friend and "bulldog" defender of evolution. Note, however, that Huxley refers only to
"the consciousness of brutes" (emphasis added) --
which seems to leave open the possibility that the
consciousness of men (and women) wasn't
epiphenomenal. If Huxley meant to imply that human
consciousness was not epiphenomenal, this would
count as a qualitative discontinuity between humans and
other, nonhuman animals.
Evolution had been much discussed before Darwin came along
-- what was really new in Darwin's theory was the idea of
natural selection as the means by which evolution
occurred. In fact, the philosopher Herbert Spencer
had already argued in his Principles of Psychology
(1855) that the processes of mind were continuous with the
processes of life, and that an evolution of mind could be
traced in parallel with the evolution of life. Spencer
argued that the mind evolved through a series of stages:
As Darwin himself put it in The Descent of Man (1871):

There can be no doubt that the difference between the mind of the lowest man and that of the highest animal is immense. Nevertheless, the difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind. We have seen that the senses and intuitions, the various emotions and faculties, such as love, memory, attention, curiosity, imitation, reason, etc. of which man boasts, may be found in an incipient, or even sometimes in a well-developed condition, in the lower animals....
If it could be proved that certain high mental powers, such as the formation of general concepts, self-consciousness, etc., were absolutely peculiar to man, which seems extremely doubtful, it is not improbable that these qualities are merely the incidental results of other highly-advanced intellectual faculties; and these again mainly the result of the continued use of a perfect language....
That such evolution is at least possible, ought not to be denied, for we daily see these faculties developing in every infant; and we may trace a perfect gradation from the mind of an utter idiot, lower than that of an animal low in the scale, to the mind of a Newton.
The evolutionary perspective (the "modern synthesis") which defines modern biology was quickly imported into psychology, with assertions of continuity at the level of mind and behavior, not just anatomy and physiology. In recent history, what began as E.O. Wilson's Sociobiology has itself evolved into the contemporary field of evolutionary psychology.
Interestingly, the stage for this importation was set by Darwin himself, who argued in The Descent of Man (1871) and The Expression of the Emotions in Man and Animals (1872) that there were similarities between humans and other animals in their facial and other bodily expressions of emotions.
In contemporary psychology, Darwin's position has been embraced most vigorously by Paul Ekman, in his work on innate "basic emotions" such as happiness, sadness, fear, anger, surprise, and disgust.
In the 19th century, Darwin's position
was embraced within psychology by his friend George John
Romanes (1848-1894), whose Animal Intelligence
(1882) attempted to marshal the evidence for intelligence
and consciousness in nonhuman animals, especially dogs,
horses, and primates. Romanes' method was essentially
to collect anecdotes that, to him, illustrated these traits
in animal behavior -- for example, a dog bringing its food
dish to its master when it wanted to be fed, or what seemed
to be a coordinated attack on humans by a band of
baboons. To quote Romanes at length:
One day, watching a small column of these ants (Eciton hamata), I placed a little stone on one of them to secure it. The next that approached, as soon as it discovered its situation, ran backwards in an agitated manner, and soon communicated the intelligence to the others. They rushed to the rescue; some bit at the stone and tried to move it, others seized the prisoner by the legs and tugged with such force that I thought the legs would be pulled off, but they persevered until they got the captive free. I next covered one up with a piece of clay, leaving only the ends of its antennae projecting. It was soon discovered by its fellows, which set to work immediately, and by biting off pieces of the clay soon liberated it. Another time I found a very few of them passing along at intervals. I confined one of these under a piece of clay at a little distance from the line, with his head projecting. Several ants passed it, but at last one discovered it and tried to pull it out, but could not. It immediately set off at a great rate, and I thought it had deserted its comrade, but it had only gone for assistance, for in a short time about a dozen ants came hurrying up, evidently fully informed of the circumstances of the case, for they made directly for their imprisoned comrade and soon set him free. I do not see how this action could be instinctive. It was sympathetic help, such as man only among the higher mammalia shows. The excitement and ardour with which they carried on their unflagging exertions for the rescue of their comrade could not have been greater if they had been human beings.
This observation seems unequivocal as proving fellow-feeling and sympathy, so far as we can trace any analogy between the emotions of the higher animals and those of insects.
By means of such anecdotes, Romanes accumulated evidence for "higher" mental functions, including memory, emotion, and intelligence, in nonhuman animals. Stories such as those Romanes collected have a powerful hold on our imagination. In our time, similar collections of anecdotes have appeared from time to time -- as in Jeffrey Masson's When Elephants Weep (1996), which claimed that many animal species have complex emotional lives.
In Mental Evolution in Animals (1883), Romanes distinguished between reflexes and instincts, which are fixed by heredity, and intelligence, which involves learning and consciousness. As evidence of the evolution of mental abilities, he gathered evidence of memory in sea urchins, learning in mollusks, tool use in monkeys, and conscience in dogs and apes.
In Mental Evolution in Man (1888), Romanes
distinguished among three categories of mental phenomena:
"[There is a] very strong prima facie case in favour of the view that there has been no interruption of the developmental process in the course of psychological history; but that the mind of man, like the mind of animals... has evolved."
Romanes is considered to be the founder of the field of
comparative psychology, but it is important to understand
that there were serious problems with his methodology:
Romanes' point about continuity was
well taken, but his methods were criticized by another
English psychologist, C. Lloyd Morgan (1852-1936) --
who in fact had been Romanes' student! Morgan
complained that Romanes leaped to the hypothesis of animal
intelligence, while overlooking the simpler hypothesis that
apparently intelligent animal behavior had its origins in
instincts and trial-and-error learning. Ever since, the field of comparative
psychology has been dominated by Morgan's "canon":
Always interpret behavior in terms of the lowest psychological process that could produce it.
Nevertheless, Morgan elaborated on Romanes' distinctions
among reflex, instinct, and intelligence:
If Romanes was perhaps too liberal in attributing consciousness to nonhuman animals, perhaps Morgan was too conservative. A sort of "third way" into the problem was offered by Margaret Floy Washburn (1871-1939), who was the first woman to earn a PhD in psychology (from Cornell, under Titchener, in 1894) and the second woman to serve as president of the American Psychological Association (the first was Mary Whiton Calkins, who invented paired-associate learning). In The Animal Mind (1908), Washburn argued that the question of animal consciousness was really no different from the familiar and ancient philosophical problem of other minds.
"The mind of each human being forms a region inaccessible to all save its possessor.... If my neighbor's mind is a mystery to me, how great is the mystery which looks out of the eyes of a dog, and how insoluble the problem presented by the mind of... an ant or a spider?"
Still, Washburn noted, on the reasonable assumption that all human minds are "built on the same pattern", we make inferences about others' mental states from their words and actions. And the same holds for animals' mental states:
"[A]ll psychic interpretation of animal behavior must be on the analogy of human experience... Our acquaintance with the mind of animals rests upon the same basis as our acquaintance with the mind of our fellow-man: both are derived by inference from observed behavior."
Washburn found a number of traditional criteria for
animal consciousness unsatisfactory, because they
could all occur unconsciously:
"We know not where consciousness begins in the animal world. We know where it surely exists -- in ourselves; we know where it exists beyond a reasonable doubt -- in those animals of structure resembling ours which readily adapt themselves to the lessons of experience. Beyond this point, for all we know, it may exist in simpler and simpler forms until we reach the very lowest of living beings."
Consciousness in Plants?

Much the same kind of anecdotal evidence has also been deployed to make the argument that plants, as well as animals, are conscious. The idea that plants might be conscious has its origins in The Secret Life of Plants, by Peter Tompkins and Christopher Bird, which found its way to the New York Times best-seller list in 1973.
mostly a piece of pseudoscience, more recently some
serious plant biologists have argued that plants can show
behavior that, if displayed by animals, would be taken as
evidence of intelligence and consciousness -- even though
plants lack anything remotely resembling a nervous
system. In a "manifesto" for "plant neurobiology",
Eric Brenner and his
colleagues point out that plants respond to changes in
their environment, and possess signalling systems that
resemble the electrical and chemical activity of an
animal's nervous system (Trends
in Plant Science, 2006, 11:413-419). These
authors, and like-minded individuals, founded the
Society for Plant Neurobiology in 2005, along with its
flagship journal, Plant Signaling & Behavior. Not surprisingly, this development has been opposed by a large number of other plant scientists, who see this new research as a kind of throwback to the bad old days of The Secret Life of Plants. See, for example, "Plant neurobiology: no brain, no gain?" by A. Alpi et al., Trends in Plant Science, 2007;12:135-136. The proponents argue
that plant "behavior" goes beyond mere photo- and
hydrotropisms. It sometimes really looks like
plants are responding flexibly to changed
environmental circumstances (which is one definition
of intelligence), responding to injuries (like being
cut or harvested), and communicating their internal
physiological states to other plants. The evidence is
largely anecdotal, but the question of "plant
neurobiology" is really, no different from the one
raised by Romanes and Washburn: We know that we
ourselves are conscious, and we can infer that other
humans are conscious, but how would we know whether
another kind of creature is conscious? Romanes
and Washburn asked this question about apes, dogs, and
ants. But
think about the question of consciousness in
computers: computers don't have the same
physiology as we do, but that doesn't prevent serious
people from asking whether computers could be
conscious, and how we could know. And the same question
crops up in the search for extraterrestrial
intelligence: how would we know whether a
non-carbon-based life form is conscious? The question of
consciousness in plants is no different from the
question of consciousness in nonhuman animals,
machines, and alien lifeforms. It can't be
rejected out of hand -- by saying, for example, that
something that doesn't have a human-like nervous
system can't be conscious. Rather, it forces us
to be clear about what we mean by consciousness, so
that we could develop convincing tests for it.
For Morgan, the essential question of the evolution of mind was:
How is congenital variation [i.e., in instincts] related to acquired modification [i.e., learning]?
Of course, this is an old question. Jean Baptiste Lamarck (1744-1829), a French naturalist, had proposed that acquired habits were passed on to the organism's offspring through heredity -- a doctrine which became known as the inheritance of acquired characteristics. As Lamarck put it:
"[C]ongenital variation will gradually render hereditary... that [which] was provisionally attained by plastic modification."
Darwin and his followers, including Romanes and Morgan, were adamantly opposed to the Lamarckian doctrine. For them, behavior was passed from generation to generation by means of "organic selection". In response to a sudden environmental change, certain acquired habits might have favored adaptation to the new environment; but what was inherited by the next generation was only a congenital variation favoring the acquired modification, rather than the modification itself. What the last generation learned cannot be inherited by the next generation -- each new generation has to learn it anew.
For better or for worse, it turned out that Morgan's evidence was also anecdotal, and so it fell to Edward L. Thorndike (1874-1949) to bring the study of animal intelligence into the laboratory. Working at roughly the same time as Pavlov, Thorndike conducted classic studies of instrumental conditioning (e.g., cats in "puzzle boxes") that led to the formulation of a set of "connectionist" (his term) laws of stimulus-response learning:
And if Thorndike didn't do so in so many words, Watson and Skinner were certainly prepared to. In fact, Washburn's 1908 book on The Animal Mind was the direct target of Watson's twin manifestos, "Psychology as the Behaviorist Views It" (1913) and Psychology from the Standpoint of a Behaviorist (1919).
Beginning with J.B. Watson, and culminating in the work of B.F. Skinner, the radical behaviorists turned the continuity assumption on its head. Whereas Romanes wanted to attribute consciousness and intelligence to nonhuman animals, the behaviorists wanted to deny it (or at least its functional importance) to humans as well.
Despite the hegemony of behaviorist thought in the period 1920-1950 or so, interest in animal consciousness and intelligence persisted -- just as interest in human consciousness persisted in those who studied the span of apprehension and Gestalt perception.
Among the most famous examples of consciousness in animals were Wolfgang Kohler's experiments on insight learning in chimpanzees, as described in The Mentality of Apes (1925). At the time that World War I broke out, Kohler (1887-1967) was working at a primate research facility operated by the Prussian Academy of Sciences in the Canary Islands. The war essentially marooned him off the Atlantic coast of Africa, leaving him with lots of time on his hands -- time he devoted to the studies that made him famous.
In
his experiments, Kohler constructed a kind of
playground, with a scattering of objects. Then
he would present the chimpanzees with a problem of
obtaining food that was not directly accessible to
them -- for example, a bunch of bananas lying
outside the enclosure, or hung out of reach.
In order to solve these problems, the chimps would
have to use their playthings as tools.
Kohler believed that, in solving these problems, the chimpanzees engaged in "cognitive trial and error":
Of course, there are a number of problems with this
inference, including:
While Thorndike's Law of Effect had asserted that reinforcement is necessary for learning to occur, Edward C. Tolman concluded that animals acquire knowledge simply through experience, without need of reward or punishment. In the mazes, for example, they acquired a "cognitive map" of the territory. Later, they could use this knowledge for their own purposes, as circumstances warranted -- for example, quickly finding their way to the goal box once they learned that there was food to be eaten there.
In fact, although the behaviorists argued that reinforcement shaped behavior that began as random, E.R. Hilgard went so far as to argue that even the animal's response on the very first trial of learning is not random. Rather, Hilgard argued, it has the character of a hypothesis -- it is as if the animal thinks to itself, "I wonder what will happen if I turn left?".
Of course, the radical behaviorists did not take kindly to this kind of mentalizing, and they sometimes caricatured Tolman's rats as "lost in thought at the choice point".
Caricatures aside, Tolman had it right about the role of reward in learning, and it became increasingly obvious that animal behavior wasn't motivated solely by rewards and punishments, but rather by goals of prediction and control. For example, at Wisconsin, Harry F. Harlow (1950a, 1959b), he of the famous studies of monkey love (which came later), performed classic studies of curiosity and intrinsic motivation. In his research, Harlow presented rhesus monkeys with wooden puzzle blocks. In one condition, the animals were rewarded with food for making correct moves; in the control condition there was no reward. In general, there were no differences in performance between the two conditions. And contrary to Thorndike's Law of Effect, hunger actually interfered with performance (when the monkeys were not hungry, but received food as a reward, they stored it for later). Harlow concluded that the monkeys were intrinsically motivated to solve the puzzles -- they did so even without promise or prospect of reward. The study was one of a number of early studies of learning and reinforcement that, like Tolman's work, undermined the behaviorist Law of Effect. But in the present context, Harlow's findings suggest that the monkeys were curious about the puzzles.
Harlow was fond of telling the story of one evening when, leaving his laboratory at 600 N. Park Street (sometimes known as "Goon Park"), on the edge of the University of Wisconsin campus, he was startled to look back and see the lights flashing on and off. Fearing that there was a short-circuit that could cause a fire, he returned to find one of his rhesus monkeys flipping the light on and off -- apparently just for the sheer enjoyment of it. Another example of intrinsic motivation.
Interest in human consciousness was revived by the cognitive revolution of the 1950s and 1960s, with its renewed interest in mental states and processes; and by virtue of the continuity assumption, the question was once again raised whether the behavior of nonhuman animals was mediated by conscious mental states, and intelligent processes, as that of humans was (or seemed to be).
The late Donald R. Griffin, of Rockefeller University, argued in the affirmative in a series of books: The Question of Animal Awareness (1976), Animal Thinking (1984), and Animal Minds (1992).
Griffin asked "What is it about some kinds of
behavior that leads us to feel that it is
accompanied by conscious thinking?" As
criteria for consciousness, Griffin rejected mere
complexity, or adaptability to changing
circumstances, and based his arguments for animal
awareness on three considerations:
[The bug has] camouflaged itself chemically and tactile by gluing bits of a termite nest all over its body. In this way it is able to capture a termite at the opening of the nest without alarming the soldier termites. After sucking out the termite’s semifluid organs, the assassin bug jiggles the empty exoskeleton in front of the next opening in order to attract another termite worker…. When a second termite seizes the first, it is then captured and consumed itself…. [T]he process may be repeated continuously many times by the same assassin bug. The extraordinary complexity and coordination of these actions strongly suggest conscious thought, even though the assassin bug’s central nervous system is very small.
Griffin is considered to be the founder of the field of cognitive ethology. Classical ethology, as exemplified by the Nobel-Prize-winning work of Konrad Lorenz, Niko Tinbergen, and Karl von Frisch, was interested in the evolution of behavior -- imprinting in ducks, the zig-zag mating dance of the stickleback, and dancing in honeybees. Cognitive ethology is interested in the evolution of mental processes. (The term was later appropriated by the philosopher Daniel Dennett for somewhat different purposes.)
However, Griffin's behaviorist critics (e.g., Blumberg & Wasserman, 1995, 1996) accused him of making a behavioral analog to the creationists' argument from design -- that behavior is so complex that only consciousness and intelligence could produce it. They drew an analogy to arguments against evolution by proponents of creationism or "intelligent design" -- who assert, in much the same way, that nature is so complex that it could not have arisen by chance, but must have been designed by God. Thus, in their view, Mind occupies the same place in Griffin's cognitive ethology as God does in creationism (this was, of course, a particularly nasty argument to make against a biologist who accepts the theory of evolution!). Instead of a "mentalistic comparative psychology", Blumberg and Wasserman argue for "animal mindlessness" -- and just so their intentions are clear, they assert that human consciousness is epiphenomenal as well:
"[T]he mentalistic approach in vogue today is as useless for understanding human behavior as it is for understanding animal behavior.
Still,
many ethologists, like Gould and Gould (e.g., The
Animal Mind, 1994), find it compelling to
attribute mental states to nonhuman animals.
For example, they find evidence of consciousness
even in the behavior of invertebrates like
honeybees:
"This vibrating pollen forager is reporting a food source about 15 degrees to the right of the sun’s direction. Six attending bees are also being told of the distance to the food and the dancer’s opinion of its quality."
Still, it must be recognized that the cognitive ethology of Griffin and the Goulds was still essentially anecdotal. To be sure, they had more systematic data than was available to Romanes, but their analysis still relied on an interpretation that was easy to contest. What was needed was a clear demonstration of animal consciousness, which quickly came in two forms.
The
first was the mirror test of self-awareness
developed by Gordon Gallup (1970), based on
observations originally made by Darwin (1871, 1872)
himself. When a mirror was placed outside the
cage of some orangutans residing in the London Zoo,
Darwin observed three stages of response:
But what is really going on? Are they reacting to the image as if it were another ape -- first treating it as a threat, and later as benign? Or do they realize, eventually, that they are looking at themselves?
Inspired
by Darwin's test, Gordon Gallup embarked on a study
of mirror self-recognition in chimpanzees which has
become a classic in consciousness research.
When Gallup (1970)
repeated Darwin's test, he found that the initial response of chimpanzees
(and other animals) to exposure to a mirror is to
explore the mirror and engage in other-directed
behaviors -- i.e., to treat their reflection as if
it were another animal. With continued
exposure, however, chimps begin to engage in
self-directed behaviors, using the mirror to
explore themselves -- especially hidden parts of
their bodies:
In a formal test of
self-awareness, Gallup painted red marks on the
foreheads of mirror-habituated chimpanzees. The painting was performed
while the chimps were anesthetized and the paint
was odorless, so the animals could only notice the
spot when they looked at themselves in the mirror.
The chimps' response was to examine the spots
visually -- by looking at themselves in the
mirror, touching the spots, and visually
inspecting (and smelling) the fingers that had
touched the spots. Gallup argued that the
chimpanzees recognized a discrepancy between their
self-image and their image in the mirror.
By
comparison, a study by Nielsen and Dissanayake
(2004) showed that all human infants show mirror
self-recognition by 24 months of age.
Later primate experiments by Daniel Povinelli added methodological niceties, such as a comparison of marked and unmarked facial regions.
Another Povinelli study indicated that mirror self-recognition is relatively rare, even among chimpanzees. It happens, but it's rare.
Thus, not all chimpanzees
show self-recognition in mirrors. In
general, the effect is obtained from chimps who
are sexually mature (but not too old), have been
raised in groups, and have had prior mirror
exposure. In the photo at right, a group of
chimpanzees confront their reflections in the
windows of a farmhouse in western Uganda.
The group became so violent that the family who
lived there had to abandon the house (see "The
Conflict Zone" by Ronan Donovan, National
Geographic, 02/2022).
The same mirror self-recognition
effect is found in orangutans and human infants,
but not usually in other primate species (the
status of gorillas, such as the famous Koko, is
controversial). And not usually in
non-primate species, except that there is some
evidence for mirror self-recognition in:
However, a comprehensive review by Gallup and Anderson concludes that only human infants and some species of great apes show "clear, consistent, and convincing evidence" for mirror self-recognition ("Self-recognition in animals: Where do we stand 50 years later? Lessons from cleaner wrasse and other species", Psychology of Consciousness, 2019).
Back when Gallup first reported his experiments, some
behavioristically inclined researchers were having none of
it. In 1981, Robert Epstein and Robert Lanza, working in
B.F. Skinner's laboratory, claimed that pigeons could be trained
to perform mirror self-recognition, and that what looked like
evidence of self-awareness was just a product of operant
conditioning, and said nothing about consciousness in nonhuman
animals (or humans, for that matter). Subsequently Uchino
and Watanabe (JEAB 2014) confirmed their observations and
supported their conclusions. However, what appeared to be
self-directed behavior in pigeons was observed only after an
arduous "shaping" regime in which the pigeons were first
reinforced for pecking at visible marks on their own bodies, then
dots projected on a wall, and then dots observed only in a
mirror. By contrast, self-recognition in chimpanzees (and
human infants, for that matter) occurs more or less spontaneously,
without any special training. The chimps may go through a
stage where they incorrectly perceive their self-images as other
animals, but eventually they (often) come to recognize that the
image in the glass is of themselves. So Epstein et al.'s
experiment does not undermine the claims of Gallup and later
investigators about mirror-self-recognition. But they do
show the lengths to which radical behaviorists like Skinner would
go to avoid talking about consciousness or any other mentalistic
construct! (Epstein was, I believe, Skinner's last graduate
student and this paper was one of the last empirical papers that
Skinner published.)
Self-Recognition in Solaris

In the great science-fiction film Solaris (Russian-language original scripted and directed in 1972 by Andrei Tarkovsky, based on the novel of the same title by Stanislaw Lem; English-language remake, 2002, by Steven Soderbergh, starring George Clooney), a psychologist, Kris, visits a space station orbiting the planet Solaris that has been the site of mysterious deaths. It turns out that the planet is a living, sentient organism, which has inserted creatures into the station based on images stored in the cosmonauts' (repressed?) memories. As soon as Kris gets to the station, he encounters the spitting image of Hari (she's named Rheya in the book), his late wife, who had committed suicide 10 years before. Searching through his baggage, the woman comes upon a picture of herself, but she does not recognize the image until she views herself, while holding the picture, in a mirror. It is clear that until that moment she had no internal, mental representation of what she looked like. The episode does not appear in Lem's book, which appeared in 1961, long before Gallup's original article was published (at least I can't find it) -- though Lem does have some interesting remarks about Rheya's memory -- she doesn't have much, and one of her memories is illusory. But Tarkovsky's film appeared in 1972, so he would have had the opportunity to hear about Gallup's findings (which were prominently reported), and incorporate them into the script (losing the material about memory).
If it really turns out that orangutans have self-recognition, but gorillas do not, the implication is that the capacity for self-recognition arose independently at least twice in primate evolution.
The matter is complicated, however, because Hauser et al. (1995) obtained evidence for mirror self-recognition in cotton-top tamarins, a species of monkeys that are, in evolutionary terms, quite distant from the great apes. By his account, Hauser succeeded because he took the animals' normal behavioral ecology into account. His methods, however, were criticized by Anderson and Gallup (1997; for a reply, see Hauser & Kralik, 1997). In any event, in 2001 Hauser and his colleagues reported that they could not replicate their earlier observations.
Given the results with chimpanzees, investigators have asked whether mirror self-recognition occurs in species other than primates.
Here's a photo, contributed
by a neighbor, of her cat watching a diamondback
rattlesnake which has come up on her
patio. She reports that the snake
"danced", waving back and forth, for about five
minutes before slinking away (the cat in the
picture was safely on the other side of a
sliding glass door). One commentator
suggested that the snake saw its reflection (not
just the cat) in the door, and was "dancing" the
way it would to challenge another snake. I
suppose you could also suggest that the snake
was trying to match its
kinesthetic/proprioceptive feedback with what it
saw the image doing, in which case maybe even
snakes can engage in
mirror-self-identification! Too bad the
snake didn't stick around so someone could do
the experiment.
Me, I'm a cat person, so I'm always on the lookout for pictures of cats looking in mirrors (and apparently I'm not alone!).
Marten and Psarakos (1994) compared the reactions of bottle-nose dolphins to a mirror placed in their pool, to their reactions to a strange dolphin viewed through a gate. The dolphins paid more attention to the stranger than to their reflections -- a behavioral difference that might suggest that they recognized themselves.
In a followup study, Marten and Psarakos (1995) compared reactions to real-time self-views played over a television set installed in the pool, compared to taped playback. One dolphin (Keola) appeared to discriminate between the two videos, but another (Hot Rod) did not. Keola also was observed using the video setup to examine a dye mark on his side.
Gallup has argued that chimpanzees, at least, have bidirectional consciousness: they are responsive to events in the external world, and they are also aware of the relationship between these events and themselves. In his view, this last form of awareness is the hallmark of consciousness. Still, the fact remains that, even among chimpanzees, not every animal shows mirror self-recognition -- and that must mean something.
In another development, Premack and Woodruff (1978) argued that chimpanzees (and perhaps other primates, especially great apes) had a "theory of mind". In their study, they reported that at least one chimp, namely the famous Sarah, was able to solve novel problems that entailed attributing mental states -- beliefs and desires -- to humans. After she retired from research, she spent the last 13 years of her life at Chimp Haven, a sanctuary for chimpanzees, and died in 2019 ("The World's Smartest Chimp Has Died", by Lori Gruen, New York Times, 08/10/2019).
There's more to be said about the development of consciousness from the phylogenetic point of view, but before we go any further, let's shift for a moment to the ontogenetic view.
What Is It Like To Be an Elephant? Or an Earthworm?

This is the question raised, and addressed, with a combination of anecdote and rigorous scientific research, by Carl Safina in Beyond Words: What Animals Think and Feel (2015). Reviewing the book in the New York Review of Books, Tim Flannery wrote ("The Amazing Inner Lives of Animals", 10/08/2015):
Most work on animal consciousness focuses on mammals, for
the simple reason that they are the most like us in terms
of brain structure. But other animals haven't been
neglected. Consider the following recent book title:
What a Fish Knows: The Inner Lives of Our Underwater
Cousins by Jonathan Balcombe (2016). In Other
Minds: The Octopus, the Sea, and the Deep Origins of
Consciousness (2016), Peter Godfrey-Smith amasses
evidence that cephalopods, including squid, cuttlefish,
and octopus, are highly intelligent -- indeed, the most
intelligent of all the "water-bound" animals. He
argues that the octopus, with approximately as many
neurons in its brain as a dog has, also has
consciousness. But because the bulk of the octopus's
neural mass is located in its tentacles (the word cephalopod
is derived from the Greek words for "head" and "foot"),
rather than in its head, he claims that the consciousness
of an octopus must be very different from our own. Another book in the same vein is The Inner Life of
Animals by Peter Wohlleben (2017), a German
forester. In the later book, he presents
mostly anecdotal evidence, drawn from domestic and wild
animals, to argue that animals have levels of both
intelligence and consciousness that we don't usually
appreciate.
In an earlier book, The Hidden Life of Trees:
What They Feel, How They Communicate (2016), Peter
Wohlleben -- who, after all, began his career as a
forester -- animated plants, writing about how they
care for their young, feel fear and pain, mourn the death
of nearby trees, learn about their environment, etc.
Lincoln Taiz, a plant biologist at UC Santa Cruz, has led
a group of critics of Wohlleben's views, arguing that the
appearance of intelligence just testifies to the power of
evolution by natural selection, and that any thought
otherwise just shows how susceptible people are to animism
-- especially when it comes to trees (think The Wizard
of Oz or The Lord of the Rings).
Wohlleben himself doesn't seem too invested in the idea of
tree-consciousness, however, pretty much admitting that he
is using metaphor to stimulate his readers to make the
same kind of emotional connection to plant life as they
do, more naturally, to animal life, and consider that
trees, like animals, might be entitled to some kind of
rights, or at least deserve some kind of dignity and
respect (e.g., No clearcutting old-growth forests!).
But at another level, the idea of consciousness in trees
raises the question of consciousness in other kinds of
things, like computers and extra-terrestrials, which might
have nervous systems very different from ours -- and how
we would ever know they have it.
The "theory of mind" (ToM) now plays an important
role in theories of cognitive development (e.g., Flavell, 1999)
(although ToM researchers do not always acknowledge that the
concept has its origins in animal research).
A great deal of ToM research is based on the false-belief task (FBT), which assesses whether a child can impute mental states (beliefs and desires) to other people, and understand that his or her own mental states might differ from theirs. The FBT involves three participants: the child, a puppet, and an experimenter.
Children who pass the FBT show clearly that
they recognize that beliefs are mental representations of reality,
and that others' beliefs may be different. By the time they
are 60 months old, a clear majority of children pass the FBT.
Simon Baron-Cohen, a
British psychologist, has characterized the theory of mind as mindreading
-- the ability to make inferences about the contents of someone
else's mind. As such, the theory of mind is an important
concept in the study of social cognition. But here we are
only interested in ToM as an aspect of consciousness -- the
recognition of mental states as such, as representations of
reality.
Some neuroscientists have suggested that mindreading is served by
a dedicated module in the brain. Saxe and Kanwisher (2003)
conducted an fMRI study in which adults were asked to read three
different kinds of stories:
When subjects read the stories
about true and false beliefs, the researchers saw increased
activation in an area of the brain centered on the
temporo-parietal junction, which they dubbed the theory of
mind area. By contrast, another area of the brain,
known as the extra-striate body area, which is activated
when subjects think about the human body, showed no differential
activation.
Now, there's a seeming contradiction here. If we take mirror
self-recognition as our index of consciousness, then human infants
have it by the time they're 2 years old. But if we take the
FBT as our index, human children don't have it until they are 4 or
5 years of age. Which is it?
Clements and Perner (1994)
proposed one solution by looking at children's nonverbal behavior
in the FBT. When asked, 2- and 3-year-olds will incorrectly say that the puppet will look in the new location (the oatmeal container). But at the same time, they'll look at the original location (the box) -- the place where the puppet falsely believes the object to be. Apparently, these children have
an understanding of the situation that they cannot express
verbally, showing what is known as the competence-performance
distinction.
Accordingly, more recent investigators have developed non-verbal
versions of the FBT to assess the theory of mind in infants (and
others) with limited verbal abilities.
In a truly wonderful study, Onishi and Baillargeon (2005) tested
15-month-old infants on a totally nonverbal version of the
FBT. The experiment depends on the familiar finding that infants tend to look longer at counter-expectational events
(OK, sometimes they pay less attention to
counter-expectational events; still, researchers commonly use
looking-time to assess infants' expectations).
So, when you test them nonverbally, it appears that even very
young infants have at least a rudimentary theory of mind.
They understand that mental states are representations of reality,
they know that what others believe might be incorrect, and they
expect other people to act in accordance with their beliefs -- and
are surprised when they do not.
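To make the logic of the false-belief task concrete, here is a minimal sketch in Python (my own illustration; the "basket" and "box" scenario and all names are hypothetical, not the stimuli used in any of the studies described here). It captures the point just made: an agent's search behavior should be predicted from the agent's belief, which is updated only by what the agent observes, and not from where the object actually is.

```python
# A toy model of the false-belief logic: a "mindreading" observer predicts the
# agent's search from the agent's belief, not from reality.

from dataclasses import dataclass

@dataclass
class Agent:
    belief: str          # where the agent last saw the object

@dataclass
class World:
    location: str        # where the object actually is

def move_object(world: World, agent: Agent, new_location: str, agent_watching: bool) -> None:
    world.location = new_location
    if agent_watching:
        agent.belief = new_location   # beliefs update only through observation

def predict_search(agent: Agent) -> str:
    # Passing the task means predicting from the agent's (possibly false) belief.
    return agent.belief

# Hypothetical scenario: the object starts in the basket, and is moved to the box
# while the agent is out of the room.
world = World(location="basket")
agent = Agent(belief="basket")
move_object(world, agent, new_location="box", agent_watching=False)

print("Object really is in:", world.location)          # box
print("Agent will search in:", predict_search(agent))  # basket -- the false belief
```

A looking-time version of the task simply asks whether the infant is surprised (looks longer) when the agent searches where the object really is rather than where the agent's belief points.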
Now, with a nonverbal version of the FBT in hand, let's ask the
obvious question: Does the chimpanzee have a theory of mind -- that is, the capacity for mindreading?
We already know that (some) chimpanzees pass the mirror
self-recognition test, and have some degree of consciousness by
that standard. And Premack and Woodruff's study seemed to
show that the chimpanzee Sarah, at least, had a theory of
mind. But what we really need is systematic research on a
bigger sample of primates. So...
The results were quite
clear. The chimpanzees rarely chose the correct box.
Instead, they chose the box indicated by the communicator.
When 4-year-old human children were tested in the same situation,
they also performed relatively poorly -- though they still
performed better than the chimps. Another group of
5-year-old children, by contrast, almost always made the correct
choice, understanding that the communicator's beliefs were
incorrect. So, by this criterion, the chimps lacked a theory
of mind.
So, if consciousness entails having a theory of mind, then:
But doesn't this contradict the conclusions by Premack and Woodruff, who found evidence that Sarah, at least, did have a theory of mind? Well, in the first place, Sarah was a very special chimpanzee, who was raised with considerable and intimate contact with her human keepers, and she had acquired a symbolic vocabulary (semantics, anyway, if not syntax). So it would be hazardous to generalize from her performance.
Moreover, there are some problems with these laboratory
studies.
Based on the evidence available so far,
however, Call and Tomasello (2008) offered the following
tentative conclusions about "chimpanzee psychology".
This conclusion may seem to violate the Darwinian doctrine of psychological continuity, but there have to be some discontinuities, somewhere, unless you want to start looking for evidence of ToM in cockroaches. In commenting on this situation, Povinelli has turned Darwin on his head, reminding us that, despite the continuities uniting species, the whole point of evolution is to add new traits that aren't possessed even by close relatives. For humans, language appears to be such a trait. Perhaps consciousness, in the form of ToM, is another.
On the other hand, Frans de Waal (in Are We Smart Enough to Know
How Smart Animals Are?, 2016)
has argued that our assumptions concerning animal
intelligence, including consciousness,
have been distorted by a kind of human egotism --
that is, by the assumption that animal intelligence
(or consciousness) has to be measured by human
standards. Instead, de Waal has argued that the case for animal
intelligence is to be made on the animals' own
terms, by looking at their behavior in their natural world.
It's hard to argue with that!
Still, the line of research on mirror self-recognition initiated by Gallup seems to indicate that chimpanzees, at least, have a rudimentary sense of self. And that reveals a rudimentary consciousness.
There the matter stood until 2016, when Christopher Krupenye, a postdoctoral fellow working with Tomasello and Call, adapted to primates yet another paradigm already employed in the study of infant ToM. The paradigm in question, anticipatory looking, had previously been shown to reveal ToM in human infants. Humans, including infants, direct their gaze to a location where they expect something to happen. If infants watch while a human searches for an object, they will direct their gaze toward the location where the human believes the object is, even if the human's belief is false. So, apparently, do apes. In their experiment, Krupenye et al. constructed two scenarios in which chimpanzees, bonobos, and orangutans watch a human agent search for an object, and they tracked the apes' eye movements while they did so.
The answer is
that most of the apes directed their first look to the
location where the Actor thought
KK or the stone would be, given his knowledge. The apes knew better, of course, but
they expected the Actor to act in accordance with his beliefs, not their own. This is
the essence of the False
Belief test, and the apes passed it by looking where
they expected the Actor to look, based on their
understanding of his (false)
belief. At least most of them did. Most
of the apes passed one or the other test, and many
of them passed both of
them.
Krupenye et al. conclude that great apes -- at least some chimpanzees, bonobos, and orangutans -- do possess a theory of mind after all. They anticipate the goal-directed behavior of some agent, and attribute mental states -- at least, states of belief -- to an agent, even if these beliefs are incongruent with reality. Actually, Krupenye et al. argue that these apes have an implicit if not explicit understanding of false beliefs (I would prefer the term covert to implicit, because I want to reserve the term "implicit" for unconscious percepts, memories, thoughts, and the like). Although not reflected in their own overt behavioral choices, as indicated by their poor performance on other ToM tests, their understanding of false beliefs is reflected in their covert anticipatory looking behavior.
According to some
"futurists", we are quickly heading into a new
stage of evolution, satirized by Roz Chast in
this cartoon (New Yorker, 11/01/2021), in which computers will surpass humans in intelligence, and even become conscious like ourselves (or more
so). The chief proponent of this idea is
Ray Kurzweil, in books like The Age of
Intelligent Machines (1990), The Age
of Spiritual Machines (1999), The
Singularity is Near: When Humans Transcend
Biology (2005), and How to Create a
Mind: the Secret of Human Thought Revealed
(2012). There are others, including The Fourth Revolution: How the Infosphere is Reshaping Human Reality by Luciano Floridi (2014) and Superintelligence:
Paths, Dangers, Strategies by Nick
Bostrom (2014).
The evolution of machine intelligence begins in the medieval period with the construction of robots -- machines designed to perform some humanlike task (like serving wine -- see illustration at the left). For an interesting survey of these devices in medieval Europe, see Medieval Robots: Mechanism, Magic, Nature, and Art (2015) by E.R. Truitt. The figure at left is from a 15th-century illustrated edition of The Travels of Marco Polo, originally published in the 13th century, in which the author refers to a magical machine that fills goblets automatically.
Why shouldn't machines be intelligent? And if they achieve a certain level of complexity, why shouldn't they be conscious? If, as Marvin Minsky argued, the brain is a computer (or a machine) made of meat (Minsky, M., "Why People Think Computers Can't", AI Magazine, 3(4), 1982), and brains are conscious, then there's no principled reason why computers (or other machines), made out of other substances (silicon chips, or beer cans tied together with string), could not also become conscious. It's just a matter of having sufficient computational power -- and the right program. The claim that, at a certain level of organization, information-processing machines might become conscious is fairly explicitly stated in Chalmers's theory of consciousness, discussed in the lectures on The Mind-Body Problem.
Reviewing the books by Floridi and Bostrom,
John Searle makes some critical points that
suggest, to me at least, serious problems with
the idea that computers could become
conscious, simply by virtue of massive
information-processing power ("What Your
Computer Can't Know", New York Review of
Books, 10/09/2014; for a subsequent
exchange with Floridi, see "At the Information
Desk", NYRB, 12/18/2014).
First, remember from the lectures on Introspection
that the essence of consciousness is
subjectivity: there is something it's like to
be conscious. But Searle's basic point is that consciousness is an intrinsic, observer-independent feature of the world, whereas information (in the computational sense) is observer-relative. Therefore, the concept of information can't provide an objective, third-person account of subjective, first-person experience.
Let's review Searle's take on the objective-subjective distinction, which turns out to be a lot more complicated than it first appears. In what follows, I use examples from Searle's 2014 review.
Searle then goes on to clarify certain assumptions about ostensibly "intelligent" computers -- essentially, repeating his famous "Chinese Room" argument against Strong Artificial Intelligence.
It follows from this that
digital computers don't have any
observer-independent intelligence at all. All of their
intelligence is observer-relative, because it has been built into
the computer by a programmer who has real, intrinsic,
observer-independent consciousness and intelligence.
And it follows, too, that everything that a digital computer does is, in Searle's terms, nonconscious. It doesn't think, or feel, or desire anything -- it simply executes a program that has been written by a conscious programmer.
Searle also discusses the nature of information, making some points that are critical for understanding the possibility of computer consciousness.
Recall that David Chalmers's theory of consciousness is panpsychist,
because he proposes that any physical system that represents
information is conscious. Thus, thermostats and solar
systems are conscious, to the extent that they can represent
information. The implication is that computers are
conscious, too, or at least they can be conscious, once they
approach the information-processing capacity of the human brain,
with its 100 billion neurons and 100 trillion synaptic connections
between them. It is this kind of argument that leads some
theorists to conclude that computers can, in principle, become
conscious entities, once they become powerful-enough information
processors.
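To see how minimal "representing information" can be on this view, here is a toy sketch in Python (purely illustrative; it is my own example, not anything Chalmers himself offers). A thermostat's entire internal state registers a single distinction about the world, yet on the panpsychist reading even that one-bit representation would be associated with some vanishingly simple form of experience.

```python
# A thermostat as a minimal information-bearing system: it tracks one bit about
# the world (temperature below or above a set point) and acts on that distinction.

class Thermostat:
    def __init__(self, set_point: float):
        self.set_point = set_point      # the target temperature
        self.heater_on = False          # the thermostat's one-bit "representation"

    def sense(self, temperature: float) -> None:
        # The internal state registers a single distinction: too cold vs. not too cold.
        self.heater_on = temperature < self.set_point

    def act(self) -> str:
        return "heat" if self.heater_on else "idle"

# The thermostat's entire "mental life," on the panpsychist reading, is one bit wide.
t = Thermostat(set_point=20.0)
for temp in (18.0, 19.5, 20.5, 22.0):
    t.sense(temp)
    print(temp, "->", t.act())
```

The gap between this one-bit system and the brain's 100 billion neurons is, on Chalmers's view, a matter of degree; on Searle's view, discussed next, it is the wrong dimension entirely.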
In addressing this argument, Searle reminds us that there are two kinds of information.
The LaMDA Brouhaha at Google

In 2022, a controversy erupted at Google when Blake Lemoine, one of its software engineers,
publicly announced on Medium, an idea-sharing social
network, that one of the company's computational
systems, a "large language model" (LLM) known as the
Language Model for Dialogue Applications (LaMDA) was, in
fact, "sentient". It didn't just make meaningful,
appropriate responses to verbal input, carrying on
conversations. It's a souped-up version of Joseph
Weizenbaum's ELIZA program, which simulated a "Rogerian"
psychotherapist, coupled with a knowledge base like
Watson, the IBM supercomputer that defeated two GOAT
champions at "Jeopardy!" -- except, so Lemoine claimed,
LaMDA actually knew what it was talking about. For a short but provocative essay on the
political economy of artificial intelligence, see "You
Talking to Me?" by Dwayne Monroe, The Nation,
06/25/2022.
The Large Language Models that support contemporary approaches to AI are based on "connectionist" architectures familiar in cognitive psychology, as discussed in my lectures on Neuroscientific and Computational Models of Memory. An article in the New Yorker profiled Geoffrey Hinton, one of the earliest proponents of connectionist modeling and for that reason sometimes called "the godfather of AI". In 2018, he shared the Turing Award, often called the "Nobel Prize" in computer science, with Yoshua Bengio and Yann LeCun for their work on "deep learning" by machines. The profile ("Metamorphosis" by Joshua Rothman, 11/20/2023) is absolutely fascinating, including these remarks (by Rothman) on the relation between AI and human intelligence:
How should we describe the mental life of a digital intelligence without a mortal body or an individual identity? In recent months, some A.I. researchers have taken to calling GPT a “reasoning engine”—a way, perhaps, of sliding out from under the weight of the word “thinking,” which we struggle to define. “People blame us for using those words—‘thinking,’ ‘knowing,’ ‘understanding,’ ‘deciding,’ and so on,” Bengio told me. “But even though we don’t have a complete understanding of the meaning of those words, they’ve been very powerful ways of creating analogies that help us understand what we’re doing. It’s helped us a lot to talk about ‘imagination,’ ‘attention,’ ‘planning,’ ‘intuition’ as a tool to clarify and explore.” In Bengio’s view, “a lot of what we’ve been doing is solving the ‘intuition’ aspect of the mind.” Intuitions might be understood as thoughts that we can’t explain: our minds generate them for us, unconsciously, by making connections between what we’re encountering in the present and our past experiences. We tend to prize reason over intuition, but Hinton believes that we are more intuitive than we acknowledge. “For years, symbolic-A.I. people said our true nature is, we’re reasoning machines,” he told me. “I think that’s just nonsense. Our true nature is, we’re analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them.”
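For readers who have not seen what a "connectionist" architecture amounts to at its simplest, here is a minimal sketch in Python with NumPy (the class and variable names are my own; this illustrates the general idea, not Hinton's models or the code behind any actual LLM). A single unit computes a weighted sum of its inputs, passes it through a nonlinearity, and adjusts its connection weights from error feedback; modern deep networks and Large Language Models stack enormous numbers of such units in many layers.

```python
# A single connectionist unit trained by a simple delta rule (illustrative only).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ConnectionistUnit:
    def __init__(self, n_inputs, learning_rate=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=n_inputs)  # connection weights
        self.b = 0.0                                    # bias ("resting activation")
        self.lr = learning_rate

    def activate(self, x):
        # Activation = nonlinear function of the weighted sum of the inputs.
        return sigmoid(np.dot(self.w, x) + self.b)

    def learn(self, x, target):
        # Delta rule: nudge each weight in proportion to the prediction error.
        error = target - self.activate(x)
        self.w += self.lr * error * x
        self.b += self.lr * error

# Train the unit on a handful of examples of logical OR.
unit = ConnectionistUnit(n_inputs=2)
patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for epoch in range(2000):
    for x, t in patterns:
        unit.learn(np.array(x, dtype=float), t)

for x, _ in patterns:
    print(x, round(float(unit.activate(np.array(x, dtype=float))), 2))
```

After training, the unit behaves approximately like an OR gate; the "knowledge" it has acquired is stored nowhere but in the strengths of its connections, which is the sense in which such models are "connectionist".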
In addition to the phylogenetic and ontogenetic views familiar to psychology, there is another view of development that can be found in other social sciences, such as economics and political science. This is a cultural view of development, by which it is held that societies and cultures develop much like species evolve and individuals grow.
The origins of this cultural view of development lie in the political economy of Karl Marx, who argued that all societies went through four stages of economic development:
To which Marx and Engels later added two other stages:
In 1960, the American economic historian W.W. Rostow offered a non-Marxist alternative conception of economic growth, which he called the stages of growth:
Most recently, Francis Fukuyama traced political development through a series of stages in The Origins of Political Order: From Prehuman Times to the French Revolution (2011); a second volume, forthcoming as of 2012, will track political development since the French Revolution of 1789. According to his view:
Stage theories of political and economic development are about as popular in social science as stage theories of cognitive or socio-emotional development have been in psychology!
Note, however, the implications of the term development, which suggests that some societies are more "developed" -- hence, in some sense, better -- than others: thus the familiar distinction between developed and undeveloped or underdeveloped nations. The implication is somewhat unsavory, just as is the suggestion, based on a misreading of evolutionary theory, that some species (e.g., "lower animals") are "less developed" than others (i.e., humans). For this reason, contemporary political and social thinkers often prefer to talk of social or cultural diversity rather than social or cultural development, thereby embracing the notion that all social and cultural arrangements are equally good.
With the exception of Marx, few of these social scientists have had much to say about the relations between the economic and political development of whole societies and cultures and the psychological development of the individuals who live in them. For Marx, however, it was important that industrial workers develop a class consciousness -- an awareness of the class structure of the society that oppressed them -- as a precondition for the revolution(s) that would overthrow bourgeois capitalism in favor of socialism and, eventually, communism. Mao Tse-tung required the same class consciousness of the peasants who were to lead the revolution in pre-capitalist, feudal China.
Marx specifically discussed the development of consciousness in the preface to his Contribution to the Critique of Political Economy (1859):
The general conclusion at which I arrived and which, once reached, became the guiding principle of my studies can be summarized as follows. In the social production of their existence, men inevitably enter into definite relations, which are independent of their will, namely relations of production appropriate to a given stage in the development of their material forces of production. The totality of these relations of production constitutes the economic structure of society, the real foundation, on which arises a legal and political superstructure and to which correspond definite forms of social consciousness. The mode of production of material life conditions the general process of social, political, and intellectual life. It is not the consciousness of men that determines their existence, but their social existence that determines their consciousness.
This Marxist notion of consciousness can also be found in the work of W.E.B. DuBois (pronounced Doo-Boyz; 1868-1963), the African-American scholar who wrote in The Souls of Black Folk (1903) about the "double consciousness" of the American Negro (as African-Americans were then called), who is aware of himself as both American and Negro.
A related concept can be found in the works of the French-West Indian psychiatrist and social philosopher Frantz Fanon (1925-1961) -- especially in Black Skin, White Masks (1952).
In the late 1960s, this Marxist concept of consciousness could be found in the literature of the women's movement, as in the efforts of second-wave feminists such as Kate Millett to engage in "consciousness-raising" activities that would make women aware of their oppression by men (and, for that matter, men aware of their roles as oppressors).
Capping off the "Sixties Counterculture", at least so far as consciousness is concerned, Charles Reich, a legal scholar, wrote The Greening of America (1970), in which he described three stages in the evolution of cultural consciousness:
But little of this had anything to do with scientific psychology or cognitive science.
In fact, however, some psychologists in the Soviet Union who were influenced by Marx, including Lev Vygotsky, did explore the relations between the economic development of society and the cognitive development of individuals. In the West, this line of research was pursued most avidly by Michael Cole, Sylvia Scribner, and their colleagues at the Laboratory of Comparative Human Cognition at UC San Diego. These investigators explored such topics as literacy as a force in cognitive development. However, these researchers and theorists have had little to say about consciousness per se.
But that doesn't mean that the cultural development of consciousness has been ignored entirely.
If mirror self-recognition is your criterion for consciousness, then it's pretty clear that we've had it for a very long time. Archeological digs in Turkey have yielded mirrors, made of polished stone, dating to roughly 6,000 BC. And before that, early humans could see their reflections in pools of water (remember the myth of Narcissus). But how did they react? Did they recognize themselves? They did: jewelry such as beads has been found in some of the earliest Paleolithic cave-dwellings. This indicates that our earliest human ancestors knew what they looked like, and cared about it. (At least once they had been expelled from the Garden of Eden -- see below.)
In The Origin of Consciousness in the Breakdown of the Bicameral Mind (1976), Julian Jaynes offered the beginnings of an account of the cultural development of consciousness. Recall (from the Introduction) that Jaynes was able to find relatively few references to mental life in early Western literature until roughly the 6th century B.C.
For example, Jaynes points out that there is little or no evidence of consciousness in Homer's Iliad. With rare exceptions, nobody makes decisions, introspects, or reminisces. The people in Homer pretty much just do what the gods tell them to do.
The point here is that there's a lot of emotion in the Iliad, and plenty of desire, but not much by way of thinking, choosing, and deciding. It's as if the people of the Iliad are operating on what Paul MacLean called the "Reptilian Brain" (the brainstem and cerebellum) and the "Old Mammalian Brain" (the limbic system).
Actually, that's not strictly true. Achilles, whose pride and wrath are the subject of the Iliad, made a choice: "between a long, insignificant life and a brief, glorious one" (Daniel Mendelsohn, "Battle Lines", New Yorker, 11/07/2011). That's why he's so angry with Agamemnon: Agamemnon, who had to give up one of his slave girls, took one of Achilles' -- thus depriving Achilles of one of the spoils of war, and in turn threatening Achilles' reputation. So Achilles decides not to fight any longer, thus depriving the Greeks of their best warrior -- at which point the war starts going badly for them. And later on, he decides to rejoin the battle. And at the end, he decides to return Hector's body to Priam.
In the Odyssey (Book XI), by the way, Achilles even changes his mind. When Odysseus encounters him in the Underworld (Hades), and greets him as "blessed in life, blessed in death" he responds that "I'd rather serve as another man's laborer, as a poor peasant without land, and be alive on Earth, than be lord of all the lifeless dead".
Helen, too, sometimes displays glimmers of consciousness. When she stands on the walls of Troy, identifying for Priam the Greeks who have massed to attack him, she seems to regret what she has done. Mostly, though, she just names the warriors and the cities they represent.
But the Iliad is a long poem, and there's not a lot of consciousness in it. At least, not a lot of free will, choice, and decision-making. After all, Helen wasn't Achilles' wife. He's gone to battle because Agamemnon requires it. And it's not as if Helen eloped with Paris of her own free will. She was given to him by Aphrodite. And there's no reason to think that Paris had any thought about Helen until Aphrodite offered her as a bribe -- he didn't even know that Helen was already married.
There's consciousness aplenty in Homer's Odyssey, however.
(I hear you objecting, "What about the Trojan Horse?" But the story of the Trojan Horse isn't in the Iliad. It's in the Odyssey (Books IV and VIII; see also Euripides's Trojan Women and Virgil's Aeneid, Book II) -- and it was Odysseus's idea! Odysseus is a different sort of man, with a different sort of mind.)
There are other examples from ancient Greek literature. In the story of Jason and the Argonauts, for example, dating from the 3rd century BCE, we have a scene in which Jason, who has been promised the Golden Fleece by King Aietes on condition that he perform certain tasks, thinks hard about what he should do.
Dating Homer
Homer, of course, may not have ever lived at all, or there may have been many "Homers". The general view is that "Homer", whoever he was, collected and arranged the Iliad and the Odyssey sometime in the 8th century BCE, based on earlier oral traditions. Adam Nicolson, in Why Homer Matters (2014), argues that the epics represent "the violence and sense of strangeness of about 1800 B.C.". Bryan Doerries, reviewing Nicolson's book in the New York Times Book Review ("Songs of the Sirens", 12/28/2014), suggests that "the ancient poems appear as a bridge between the present and an otherwise inaccessible past, a rare window into a moment of cultural convergence around 2000 B.C., when East met West, North met South, and Greek consciousness was forged in the crucible of conflict between a savage warrior culture from the flat grasslands of Eurasia and the wealthy, sophisticated residents of cities in the eastern Mediterranean." The Iliad and the Odyssey, Nicolson writes, constitute "a miracle of transmission from one end of human civilization to the other." |
And in a provocative analysis of the Hebrew Bible, Jaynes contrasts Amos, who is repeating God's word, with Ecclesiastes, who is thinking for himself.
By analogy to the bicameral legislature (or, perhaps to the bicameral camel!), Jaynes suggests that these early humans possessed a bicameral mind consisting of a "decision-making part" and a "follower part". Mostly, humans of this time were creatures of habit. But when something frustrated their habitual behavior, or called for a novel response, the stress of decision instigated auditory hallucinations, which early humans interpreted as the gods telling them what to do.
Writing in 1976, at a time when knowledge of hemispheric specialization was just beginning to emerge, Jaynes sometimes identified the decision-making part with the "silent" right hemisphere, and the following part with the left hemisphere. But Jaynes didn't believe that, all of a sudden, one hemisphere got connected with another. After all, there has been no change in the human genome since Adam and Eve -- 6000 years simply isn't enough time. Rather, Jaynes attributes the development and breakdown of bicamerality to cultural changes -- to developments taking place in society, rather than in species or individuals.
Jaynes finds evidence of bicamerality in the most ancient writings, so in his view the bicameral mind was in operation at least by the invention of writing, about 3000 BCE. In fact, Jaynes believes that bicamerality emerged in the shift from the hunter-gatherer mode to agriculture, as a way of controlling large groups of people through a rigidly ordered social hierarchy with a god at the top, speaking through a king, who told everybody else what to do. Think of Moses bringing the Ten Commandments down from Mount Sinai.
Similarly, Jaynes traces the beginnings of the breakdown of the bicameral mind to about 1400 BCE, when large civilizations naturally produced lots of "voices", which often didn't agree with each other. At that point, people stop listening to the gods and start thinking for themselves. Or, perhaps, people simply found that the gods didn't talk to them anymore. It's about this time that literature is full of references to people being abandoned by their gods, as in the Psalmist's "My God, my God, why hast thou forsaken me?".
Anyway, Jaynes argues that the breakdown was fully consolidated by about 600 BCE -- when Solon initiated the Golden Age of Greece with the adage, "Know thyself".
Jaynes identifies similar trends occurring at about the same time in China and India. Actually, the entire period from about 800 to 200 BCE has come to be known as the Axial Age -- a term (an axis is a pivot) coined by the German philosopher Karl Jaspers in his treatise On the Origin and Goal of History (1949). Jaspers pointed out that the period from about 800 to 200 BC was characterized by a revolution in religious and philosophical thought, in which "the spiritual foundations of humanity were laid simultaneously and independently" in very different parts of the world:
The Axial Age is usually viewed as a turning point in the history of religion -- and, in particular the establishment of monotheism -- based on a transcendental vision of the presence and power of a divine entity. But it also was a turning point in philosophy. As Jaspers put it, "For the first time there were philosophers". Confucius, Gautama Buddha, and Socrates thought for themselves, and thought about thinking, and initiated a tradition in which thinkers exchanged their thoughts and debated with each other. Beginning about 800 BC, and for the first time, people began to reflect on themselves and their society.
For a highly readable treatment of the Axial Age, with special emphasis on its place in the history of religion, see The Great Transformation: The Beginnings of Our Religious Traditions (2006) by Karen Armstrong, a historian of religion (she also wrote A History of God, 1993). See also The Axial Age and Its Consequences, a collection of papers by sociologists and historians edited by Robert N. Bellah and Hans Joas (2013; reviewed in "A Different Turning Point for Mankind?" by G.W. Bowersock, New York Review of Books 05/09/2013).
Jaynes' arguments are certainly provocative, but he was not the only person to have this idea. Reviewing a new translation of the Odyssey by Emily Wilson, Gregory Hays, a classics scholar at the University of Virginia, reminds us that "Aristotle said that the "Iliad" was a poem in which things happened to people, while the "Odyssey" was a poem of character" ("A Version of Homer that Dares to Match Him Line for Line", New York Times Book Review, 12/10/2017).
As he was writing his book (and as he acknowledged there), Jaynes became aware of a similar argument by Bruno Snell, a German philologist, in his book The Discovery of the Mind (1953). Snell argued that the characters in Homer do not have any -- well, character, in the modern sense of a personal self. They don't seem to have minds of their own, and they don't seem to have personalities. Their actions appeared, to him, to result from divine intervention rather than from any beliefs and desires on their part. Further, he noted that Homer did not have words for "mind" or even "body" in our modern sense. For example, in Homer the word psyche refers not to mind or consciousness, exactly, but to some kind of spirit which departs the body at the time of death.
"[I]t appears that in the early period the 'character' of an individual is not yet recognized.... There is no denying that the great heroes... are drawn in firm outline and yet the reactions of Achilles, however grand and magnificent, are not explicitly presented in their volitional or intellectual form as character, i.e. as individual intellect and individual soul" (Snell, 1953, quoted in Knox, 1993, p. 39).
Bernard Knox, in his beautifully titled essay, "The Oldest Dead White European Males" (in his book of the same title, published in 1993; see also another of his essays, "The Human Figure in Homer", from 1991), argued to the contrary, also on philological grounds. He points out that all the individual characters have names, which individualizes them. And they also have mental states: Achilles is full of rage, obviously -- it's the first word in the Iliad. And Achilles himself makes a big decision: to become a hero with a short life rather than an ordinary man who lives longer. Moreover, Knox correctly criticizes Snell for making an argument from silence: just because Homer does not describe the thoughts and desires of Achilles and the rest doesn't mean that they didn't have them. Finally, Knox argues, it's a mistake to take the language of epic poetry as representative of the language of everyday life. Otherwise, you might conclude that pre-Homeric ship-captains gave sailors their orders in hexameter!
Still, most of the examples of modern consciousness -- of beliefs and desires leading to action -- come from the Odyssey, and that is Jaynes' point.
More recently, James Kugel, a biblical scholar, has offered an argument similar to Jaynes' in The Great Shift (2017). He points out that at one point God spoke to people -- think of Moses and the burning bush; now, however, people speak to God, in various forms of prayer.
What Kugel calls "The Great Shift" reflects a change in our understanding of God (as a being "out there", rather than a part of nature), but also a change in human consciousness. Kugel argues that the premodern mind was "semipermeable" (adopting Charles Taylor's idea that it was "porous") -- meaning that premoderns did not perceive themselves as individuals separate from the natural world, but rather as a part of it. One result was that they heard voices talking to them, instead of thinking to themselves -- Jaynes's essential argument, from a religious as opposed to a psychological perspective.
As far as psychologists, philosophers, and other cognitive scientists are concerned, the most upsetting thing about Jaynes is his insistence that humans were not always conscious.
Some of this misunderstanding is Jaynes' own fault. He writes about "the bicameral mind", with a right-hemisphere "decision-making part" and a left-hemisphere "follower part", with the implication that, one day, the two hemispheres came together, and that this was the origin of consciousness. Of course, the two hemispheres were always together -- there is no reason to think that the corpus callosum suddenly emerged about 1000 BCE to connect them.
And some of this misunderstanding is somewhat forensic in nature. For example, Daniel Dennett has written approvingly of Jaynes' arguments because, in Dennett's view, they support the idea that consciousness is a social construction -- that it doesn't "really" exist, but rather exists only as a figment of our Cartesian imaginations.
But another interpretation of Jaynes is that there was a point in historical time when humans realized that they were conscious -- that they had minds of their own, and that they could control their own minds -- in a way that they hadn't really understood before. The origin of consciousness, in this view, was much less an invention than a discovery -- a little like Moliere's Monsieur Jourdain, who discovered that he had been speaking prose all his life!
There are certainly precedents for this kind of discovery. For example, anthropologists often speak of the Neolithic Revolution -- the term was coined by V. Gordon Childe -- that occurred when humans replaced hunting and gathering with agriculture. That discovery apparently occurred once, at a particular time and place (the Fertile Crescent, in the land then known as Sumer, present-day Iraq), and then spread like wildfire into Europe, Asia, and Africa. There are, in fact, a number of such "firsts": Samuel Noah Kramer listed 39 of them in his book History Begins at Sumer (1956). Why couldn't the discovery that we are conscious -- that our mental states represent things other than themselves, and that we can control the contents of our own minds -- have been one of them?
That is, apparently, what happens in infant cognitive development, as the child achieves, and then fleshes out, a theory of mind. Jaynes wrote his book before there was much talk about the theory of mind (it was published in 1976, and Premack and Woodruff introduced the concept of "theory of mind" only in 1978). I can't help but think that, had he known of the concept, he would not have relied so much on bicamerality. Rather, he might have thought about the origins of consciousness as a cultural achievement -- a point where someone discovered that he was conscious, and told someone else about it, and the idea spread like wildfire.
Again, Jaynes was writing in the early and mid 1970s, before anyone had a concept of the theory of mind (the Premack & Woodruff paper was published in 1978). But what he's really talking about is the development of a theory of mind in human history.
All that is most human about us, this consciousness, this artificial space we imagine in other people and in ourselves, this living within our reminiscences, plans, and imaginings, all of this is indeed only 3000 years old. And that, ladies and gentlemen, is less than 100 generations. And from that I think we can conclude that we are all still very young.
This page last revised 01/08/2024.