Pre-Exam Reviews | Feedback on Current Exam | Previous Exams
This course has one midterm exam and a comprehensive final exam.
The best way to prepare for
exams is to keep up with assigned readings, attend lectures,
and deploy effective, efficient study skills.
For two essays by your
instructor concerning effective learning (and teaching)
strategies, see:
Chief among these is the
study strategy known as PQ4R (or, alternatively, SQ4R),
initially proposed as the SQ3R method by Francis P.
Robinson (1946), and expanded and promoted by John Anderson of
Carnegie-Mellon University, a cognitive psychologist who is
concerned with the applications of cognitive psychology to
education. I quote from Anderson's text, Cognitive
Psychology and Its Implications (Worth, 2000, pp. 5-6
and 192-193):
These pointers are based on
two principles familiar from the study of human learning and
memory:
Of course, PQ4R works for lectures, too.
That's one reason that I distribute the lecture illustrations
in advance -- despite the fact that doing so spoils some of
the "surprise value" that certain slides might otherwise
have. And that's also why I distribute the lecture
illustrations in a "3/page" format that facilitates taking
notes right on the printout.
Roald Hoffmann and Saundra McGuire, highly regarded professors of chemistry, have offered six strategies that facilitate effective learning of any academic subject -- whether in high school, college, or graduate school ("Learning and Teaching Strategies", American Scientist, 2010). Most of what follows is in their own words -- I've eliminated quotation marks for readability.
There is a lot of misinformation about
learning strategies and study skills. John Dunlosky and
his colleagues (2013) have pulled this literature together in a
few succinct points:
What Works:
What Doesn't Work:
For details, see "Improving Students' Learning with Effective Learning Techniques: Promising Directions from Cognitive and Educational Psychology" by John Dunlosky, Katherine A. Rawson, Elizabeth J. Marsh, Mitchell J. Nathan, and Daniel T. Willingham (Psychological Science in the Public Interest, 2013); this review is summarized in "What Works, What Doesn't" by the same authors in Scientific American Mind (September/October 2013).
Another important principle of learning and memory is the distinction between massed and distributed practice. In general, memory is better if practice is spread out over time (and even location) than if it is all lumped together at once. So take your time going through each chapter. Don't read it all in one sitting, and don't read it multiple times in a single sitting! Spread the reading out, pace yourself, and things will go better.
Another principle is that memory is improved by repeated testing. It's not so much a matter of repeated reading, as it is of repeated testing. Practice PQ4R several times for each topic.
Exam questions always focus on basic concepts and principles, as opposed to trivia such as names and dates. If I should mention a name or date, it's usually because that is relevant to the concept or principle that I'm really interested in.
Then again, if you've taken a course on consciousness, you should know who William James is, and that he was responsible for the concept of the "stream of consciousness". But that's rarely the point of the question.
In general, there are two
ways to get the right answer to one of my test questions:
Prior to exams, the instructor conducts a review session. For the midterm, the review is conducted during regular class times. For the final, the review session is conducted during one of the "Dead Days" between the end of classes and the beginning of the exam period.
The GSIs are encouraged not to conduct additional review sessions during discussion section. This is because discussion sections are intended to supplement lecture and text material; they aren't intended for review purposes.
Up until 2005, it was my practice to conduct a formal review prior to exams, accompanied by illustrations. It is now my practice to provide a written "narrative review", so that the review sessions themselves can be devoted to questions and answers.
The midterm exams are
noncumulative in nature.
In devising the test, I try to have at least one question explicitly drawn from each of the lectures. So, for example, on one exam or another:
I also try to have one question drawn from each major section of each chapter in the assigned reading. So, for example, with respect to the Revonsuo text, you can expect at least one question on each of the following topics:
And, of course, there's considerable overlap. For example, my lectures on Attention and Automaticity include material on automatic and controlled processing; and in my lectures on Meditation, I talk about non-Western approaches.
The midterm exam is noncumulative. So, one way or another, the balance of questions is about 50-50 between lectures and text.
In order to facilitate rapid, objective, and reliable grading, the midterm and final exams in this course are administered in short-answer format, with an occasional (very) short essay. If I had my way, the exams would be multiple-choice, a format which ensures rapid, objective, and reliable grading. Because of the periodic Forum postings, described in the syllabus, the short-essay portion is likely to be eliminated altogether beginning in Spring 2009.
Exams must be written in ink. In the event of a question about scoring, exams written in pencil will be ineligible for reconsideration.
I don't intentionally repeat questions from past exams. Nevertheless, all previous exams in my offerings of Psych 129 are available on the course website, as a guide to studying.
To retrieve a particular
exam, simply click on the links in the table below.
Fall 2016 | No exams -- in Fall 2016, the course was taught as a seminar.
Fall 2014 | Midterm | Final
Spring 2013 | Midterm | Final
Spring 2011 | Midterm | Final
Spring 2009 | Midterm | Final
Fall 2008 | Final -- Budapest Semester in Cognitive Science
Spring 2007 | The course was not offered this semester
Spring 2005 | Midterm | Final
Spring 2003 | Midterm | Final
Spring 2001 | Midterm | Final
Spring 1999 | Midterm | Final
Spring 1998 | Midterm | Final
Since I arrived at Berkeley, both my lectures and the textbook have undergone several changes. Accordingly, there are some questions on past exams that are not pertinent to the current version of the course. But because concepts and principles change more slowly than picky details, even the very oldest exams are still a good resource.
Moreover, due to the vagaries of scheduling, sometimes the coverage of exams differs from year to year. For example, the material on unconscious mental life may sometimes appear on the Midterm and sometimes on the Final. One way or another, the exams cover the entire course of readings and lectures.
No matter how carefully it's constructed,
an exam can have bad items -- they may be just too
difficult, or they may not tap the kinds of basic concepts and principles that should be
the subject of examination. I try not to write
bad items at the outset, but after the exam is over
there are ways to identify and correct items that,
despite our efforts, just aren't right.
The first thing is to score the exam
straight, assuming that all the items are good, and
calculate the mean score (and standard deviation). In
an upper-division course, experience tells me to look for an
average score of about 80%. An average much lower than
that indicates that there might be something wrong with the
exam. Of course, it might also be that students just
didn't study effectively! I can't do anything about
that, but I can do something to correct any problems
internal to the exam itself.
Then we look at the psychometric properties of the exam as a whole, particularly its reliability (coefficient alpha), which should be in the .70s or .80s.
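For the statistically inclined, here is a minimal sketch of how coefficient alpha can be computed from a students-by-items matrix of item scores, written in Python with NumPy. It is an illustration only, not the course's actual grading software; the array name and the sample numbers are hypothetical.

```python
import numpy as np

def coefficient_alpha(scores: np.ndarray) -> float:
    """Cronbach's coefficient alpha for a (students x items) matrix of item scores.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 students x 4 items (not real exam scores)
scores = np.array([
    [3, 2, 1, 2],
    [2, 2, 1, 1],
    [3, 3, 2, 2],
    [1, 1, 0, 1],
    [2, 3, 1, 2],
], dtype=float)

print(f"coefficient alpha = {coefficient_alpha(scores):.2f}")  # about .91 for this toy data
```

Alpha rises as the items covary more strongly with one another, which is why it serves as an index of the exam's internal consistency.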
In order to identify potentially bad
questions, I use a dual criterion of (1) extremely low
scores and (2) extremely low item-to-total correlations.
Individual items may be worth different numbers of points.
In order to put all the items on the same scale,
their scores are converted to percentage scores. For example, a
3-point question with a mean score of 1.99 would be
converted to a percentage score of .66.
In addition, I examine the item-to-total
correlations between each item and the total test score
(corrected by dropping the item in question). With a
large class, even low item-to-total correlations can be
statistically significant (for N = 100, r = .20 yields a p-value of .046). So, rather than relying on
statistical significance, I use a fixed cutoff of .20 to identify items with low item-to-total rs. Any such items are
also candidates for rescoring.
Any such items are rescored by giving all
students full credit.
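To make the dual criterion concrete, here is a sketch of the item-analysis and rescoring steps in Python/NumPy. The function and variable names are hypothetical, and the 50% threshold for an "extremely low" mean percentage score is an assumption on my part -- the description above specifies only the .20 cutoff for item-to-total correlations.

```python
import numpy as np

def flag_bad_items(scores: np.ndarray, max_points: np.ndarray,
                   pct_cutoff: float = 0.50,   # ASSUMED threshold for an "extremely low" mean % score
                   r_cutoff: float = 0.20):    # the item-to-total correlation cutoff described above
    """Return indices of items flagged by the dual criterion:
    (1) a very low mean percentage score, or
    (2) a low corrected item-to-total correlation."""
    pct_means = scores.mean(axis=0) / max_points         # e.g., mean 1.99 on a 3-point item -> .66
    flagged = []
    for i in range(scores.shape[1]):
        rest_total = scores.sum(axis=1) - scores[:, i]   # total score with item i dropped
        r = np.corrcoef(scores[:, i], rest_total)[0, 1]  # corrected item-to-total correlation
        if pct_means[i] < pct_cutoff or r < r_cutoff:
            flagged.append(i)
    return flagged

def rescore_full_credit(scores: np.ndarray, max_points: np.ndarray, bad_items) -> np.ndarray:
    """Rescore flagged items by giving every student full credit on them."""
    rescored = scores.astype(float)      # work on a copy
    for i in bad_items:
        rescored[:, i] = max_points[i]
    return rescored
```

A flagged item is thus treated exactly as described above: every student receives the item's full point value, and revised exam totals are then computed from the rescored matrix.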
Sometimes items will be graded so that
students receive “half-point” scores like 1.5 or 2.5. When these are
summed to determine a final revised exam score, the total
exam score is rounded up, as necessary -- e.g., from 39.5 to
40 or 45.5 to 46.
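In code, this rounding-up step is simply a ceiling operation; the totals below are made-up examples.

```python
import numpy as np

revised_totals = np.array([39.5, 45.5, 42.0])  # hypothetical revised exam totals
rounded_up = np.ceil(revised_totals)           # 39.5 -> 40, 45.5 -> 46, 42.0 unchanged
print(rounded_up)                              # [40. 46. 42.]
```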
As a result of the rescoring, and rounding up, scores on the exam should now be at least in the 80% range. If they aren't, at this point, it's not because of any problem with the exam!
I will post a general scoring guide to the course website as soon as possible after each exam has concluded, so that students can check their answers. This is also intended to enhance the value of past exams as a study guide.
GSIs are encouraged not to address questions about particular exam items. All the feedback you need is provided by the instructor on the course website.
Exam grades are posted to
the course website as soon as possible after scoring has
been completed. This can take a couple of days, but
is often completed much more quickly than that.
With anything other than
totally objective, multiple-choice exams, errors in grading can occur. After exam
grades and feedback have been posted to the course
website, and the exams
returned in discussion section, students who believe that
a serious error in grading has occurred may appeal.
This appeal must be in writing, with a paragraph
explaining why the student's answer is as good as, or
better than, the specimen answer given in the Scoring
Guide used in grading the exam.
Students are strongly encouraged not to "fish" for
extra points. Regrading is time-consuming, and regrading
requests rarely result in a raised
exam score. The question will be regraded
by the GSI who did the original grading. GSIs
have been instructed to approach each regrading
request "fresh", meaning that the regrade, influenced
by knowledge of how other students
answered the question, may be lower
than the original grade.
Requests for regrading can only be
honored for midterm exams. According to UCB
policy, final letter grades are due within 96 hours
of the final exam, which doesn't give
students time to review their exams; after grades are
posted, they may only be changed to
correct a clerical error.
Students registered with the Disabled Students Program are entitled to certain accommodations with respect to testing. Such students should consult with the instructor in advance of the exam to make appropriate arrangements.
Assignment of grades is, in some ways, the most problematic aspect of any course.
At some institutions, and in some individual departments and courses, there is a forced curve such that, for example, the average grade is set at C (no kidding!). Under such a scheme, scores one standard deviation above the mean might get some kind of B, while scores two standard deviations above the mean might be required for an A. But this means that, no matter how good overall class performance is, someone has to get a C, and someone has to fail. And that doesn't seem fair.
As an alternative, the
traditional academic criteria for letter grades are as follows:
In addition, I try to conform my grade distribution to that of the campus as a whole. Psychology is both a social science and a biological science, so I average the figures together for those two divisions of the College of Letters and Science.
According to the most recent data available to me, from the
2013-2014 academic year (Fall and Spring semesters combined):
With respect to grade inflation, the issue is not easy to resolve: what is the proper proportion of As in academically elite institutions like Berkeley? At one point recently, 50% of Harvard students were getting As in their courses, and 80% were graduating with honors. Harvard is Harvard, but even at Harvard there was a general feeling that those figures were too high. Princeton, for its part, established a goal of giving no more than 35% As in any course; the policy was quickly abandoned, but the fact that the faculty tried it in the first place tells you something about the general concern.
Given the model of the
normal curve, there should probably be more Bs than As,
and more Bs than Cs. But right now, that's not
what's happening.
But if everybody got 90% on my exams, they'd all get As -- and they'd deserve them.
This page last revised 12/29/2016.