This course has one midterm exam and a comprehensive final exam.
The best way to prepare for
exams is to keep up with assigned readings, attend lectures,
and deploy effective, efficient study skills.
For two essays by your
instructor concerning effective learning (and teaching)
strategies, see:
Chief among these is the
study strategy known as PQ4R (or, alternatively, SQ4R),
initially proposed as the SQ3R method by Francis P.
Robinson (1946), and expanded and promoted by John Anderson of
Carnegie-Mellon University, a cognitive psychologist who is
concerned with the applications of cognitive psychology to
education. I quote from Anderson's text, Cognitive
Psychology and Its Implications (Worth, 2000, pp. 5-6
and 192-193):
And, for that matter, Repeat.
Of course, PQ4R works for lectures, too.
That's one reason that I distribute the lecture illustrations in
advance -- despite the fact that doing so spoils some of the
"surprise value" that certain slides might otherwise have.
And that's also why I distribute the lecture illustrations in a
"3/page" format that facilitates taking notes right on the
printout.
And, again, Repeat.
Roald Hoffmann and Saundra McGuire, highly regarded
professors of chemistry, have offered six strategies that
facilitate effective learning of any academic subject -- whether
in high school, college, or graduate school ("Learning and
Teaching Strategies", American Scientist, 2010).
Most of what follows is in their own words -- I've eliminated
quotation marks for ease of readability.
What Works:
What Doesn't Work:
For details, see "Improving Students' Learning with Effective Learning Techniques: Promising Directions from Cognitive and Educational Psychology" by John Dunlosky, Katherine A. Rawson, Elizabeth J. Marsh, Mitchell J. Nathan, and Daniel T. Willingham (Psychological Science in the Public Interest, 2013); this review is summarized in "What Works, What Doesn't", by the same authors, in Scientific American Mind (September/October 2013).
Another important principle of learning and memory is the distinction between massed and distributed practice. In general, memory is better if practice is spread out over time (and even across locations) than if it is all lumped together at once. So take your time going through each chapter. Don't read it all in one sitting, and don't read it multiple times in a single sitting! Spread the reading out, pace yourself, and things will go better.
Another principle is that memory is improved by repeated testing. It's not so much a matter of repeated reading, as it is of repeated testing. Practice PQ4R several times for each topic.
Exam questions always focus on basic concepts and principles, as opposed to trivia such as names and dates. If I should mention a name or date, it's usually because that is relevant to the concept or principle that I'm really interested in.
Then again, if you've taken a course on social cognition, you should know who Asch is, and that he was influenced by the Gestalt school of perception. But that's rarely the point of the question.
In general, there are two ways
to get the right answer to one of my test questions:
Prior to exams, the instructor conducts a review session. For the midterm, the review is conducted during regular class times. For the final, the review session is conducted during one of the "Dead Days" between the end of classes and the beginning of the exam period.
The GSIs are encouraged not to conduct additional review sessions during discussion section. This is because discussion sections are intended to supplement lecture and text material; they aren't intended for review purposes.
It is now my practice to provide a written "narrative review", so that the review sessions themselves can be devoted to questions and answers. Here are the illustrations used in prior review sessions (in PDF format). These will be supplanted by a narrative review in the run-up to the exams.
The midterm exam is noncumulative in nature.
And, of course, there's considerable overlap. For example, in my lectures on Social Perception, I include material on face perception; and in my lectures on Social Judgment, I talk about automaticity.
So, one way or another, the balance of questions is about 50-50 between lectures and text.
To facilitate rapid, objective, and reliable grading, the midterm and final exams in this course are administered in short-answer format, with an occasional (very) short essay. If I had my way, the exams would be multiple-choice, a format that ensures rapid, objective, and reliable grading. Because of the periodic Forum postings described in the syllabus, the short-essay portion is likely to be eliminated altogether beginning in Spring 2008.
Exams must be written in ink. In the event of a question about scoring, exams written in pencil will be ineligible for reconsideration.
I don't intentionally repeat questions from past exams. Nevertheless, all previous exams in my offerings of Psych 164 are available on the course website, as a guide to studying.
To retrieve a particular exam,
simply click on the links in the table below.
Fall 2015 |
Midterm |
Final |
Spring 2014 |
Midterm |
Final |
Spring 2012 |
|
|
Spring 2010 |
Midterm |
Final |
Spring 2008 |
Midterm |
Final |
Spring 2006 |
Midterm |
Final |
Spring 2004 |
Midterm |
Final |
Spring 2002 |
|
|
Spring 2000 |
Midterm |
Final |
Since I arrived at Berkeley, both my lectures and the textbook have undergone several changes. Accordingly, there are some questions on past exams that are not pertinent to the current version of the course. But because concepts and principles change more slowly than picky details, even the very oldest exams are still a good resource.
Moreover, due to the vagaries of scheduling, sometimes the coverage of exams differs from year to year. For example, the material on Social Categorization sometimes appears on the Midterm and sometimes on the Final. One way or another, the exams cover the entire course of readings and lectures.
No matter how carefully it's constructed, an
exam can have bad items -- they may be just too difficult, or
they may not tap the kinds of basic
concepts and principles that should be the subject of
examination. I try not to write bad items at the
outset, but after the exam is over there are ways to
identify and correct items that, despite our efforts, just
aren't right.
The first thing is to score the exam straight,
assuming that all the items are good, and calculate the mean score
(and standard deviation). In an upper-division course,
experience tells me to look for an average score of about
80%. An average much lower than that indicates that there
might be something wrong with the exam. Of course, it might
also be that students just didn't study effectively! I can't
do anything about that, but I can do something to correct any
problems internal to the exam itself.
Then we look at the psychometric properties of the exam as a whole, particularly its reliability (coefficient alpha), which should be in the .70s or .80s.
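For readers curious about the computation, coefficient alpha can be sketched in a few lines. This is just an illustrative implementation of the standard formula, not the actual scoring software; the function name and the use of numpy are my own assumptions.

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for a (students x items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Two perfectly parallel items yield alpha = 1.0:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```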
In order to identify potentially bad questions,
I use a dual criterion of (1) extremely low scores and (2)
extremely low item-to-total correlations.
Individual items may be worth differing numbers of points.
To put all the items on the same scale, their mean scores are
converted to percentages.
For example, a 3-point question with a mean score of 1.99
would be converted to a percentage score of 66%.
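The conversion is just the ratio of the mean score to the item's maximum, as in this small sketch (the function name is mine):

```python
def pct(mean_score, max_points):
    """Convert an item's mean raw score to a proportion of its maximum."""
    return mean_score / max_points

# The 3-point item with a mean score of 1.99 from the example above:
print(round(pct(1.99, 3), 2))  # → 0.66
```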
In addition, I examine the item-to-total
correlations between each item and the total test score (corrected
by dropping the item in question). With a large class, even
low item-to-total correlations can be statistically significant
(for N = 100, r = .20 yields a p-value of .046). So I employ a cutoff of
.20 to identify items with low item-to-total rs. Any such items are also
candidates for rescoring.
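The two calculations described above -- the corrected item-to-total correlation, and the significance of r for a given N -- can be sketched as follows. This is illustrative only; the names and the use of numpy are my own, and the real item analysis is presumably done with standard statistical software.

```python
import math
import numpy as np

def corrected_item_total(scores):
    """Corrected item-to-total correlation for each item: each item is
    correlated with the total score computed with that item dropped."""
    scores = np.asarray(scores, dtype=float)
    rs = []
    for j in range(scores.shape[1]):
        rest = scores.sum(axis=1) - scores[:, j]   # total minus this item
        rs.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return np.array(rs)

def t_for_r(r, n):
    """t statistic for testing a correlation against zero (df = n - 2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# For N = 100 and r = .20, t ≈ 2.02 -- just past the two-tailed .05
# critical value of about 1.98, consistent with the p = .046 cited above:
print(round(t_for_r(0.20, 100), 2))  # → 2.02
```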
Any such items are rescored by giving all
students full credit.
Sometimes items will be graded so that students
receive "half-point" scores like 1.5 or 2.5. When these are summed to
determine a final revised exam score, the total exam score is
rounded up, as necessary -- e.g., from 39.5 to 40 or 45.5 to 46.
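In other words, the revised total is the ceiling of the sum of the item scores; a one-line sketch (the function name is mine):

```python
import math

def revised_exam_total(item_scores):
    """Sum item scores (which may include half points) and round up."""
    return math.ceil(sum(item_scores))

# Item scores summing to 39.5 round up to a total of 40:
print(revised_exam_total([10, 14.5, 15]))  # → 40
```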
As a result of the rescoring and rounding up, scores on the exam should now be at least in the 80% range. If they aren't, at this point, it's not because of any problem with the exam!
With anything other than totally objective, multiple-choice exams, errors in grading can occur. After exam grades and feedback have been posted to the course website, and the exams returned in discussion section, students who believe that a serious error in grading has occurred may appeal. This appeal must be in writing, with a paragraph explaining why the student's answer is as good as, or better than, the specimen answer given in the Scoring Guide used in grading the exam. Students are strongly encouraged not to "fish" for extra points. Regrading is time-consuming, and regrading requests rarely result in a raised exam score. The question will be regraded by the GSI who did the original grading. GSIs have been instructed to approach each regrading request "fresh", meaning that the regraded score -- informed by knowledge of how other students answered the question -- may be lower than the original grade.
Requests for regrading can only be honored for midterm exams. According to UCB policy, final letter grades are due within 96 hours of the final exam, which doesn't give students time to review their exams; after grades are posted, they may only be changed to correct a clerical error.
I will post a general scoring guide to the course website as soon as possible after each exam has concluded, so that students can check their answers. This is also intended to enhance the value of past exams as a study guide.
GSIs are encouraged not to address questions about particular exam items. All the feedback you need is provided in the instructor's feedback on the course website.
Exam grades are posted to the course website as soon as possible after scoring has been completed. This can take a couple of days, but is often completed much more quickly than that.
Students registered with the Disabled Students Program are entitled to certain accommodations with respect to testing. Such students should consult with the instructor in advance of the exam to make appropriate arrangements.
Assignment of grades is, in some ways, the most problematic aspect of any course.
At some institutions, and in some individual departments and courses, there is a forced curve such that, for example, the average grade is set at C (no kidding!). Thus, scores one standard deviation above the mean might get some kind of B, while scores two standard deviations above the mean might be required for an A. But this means that, no matter how good overall class performance is, someone has to get a C, and someone has to fail. And that doesn't seem fair.
As an alternative, the
traditional academic criteria for letter grades are as
follows:
In
addition, I try to conform my grade distributions to that of the
campus as a whole. Psychology is both a social science and a
biological science; so I average the figures together for those
two divisions of the College of Letters and Sciences.
According to the most recent data available to me, from the 2014-2015 academic year
(Fall and Spring semesters combined):
What do I mean "depending on how you count"? It turns out that UC Berkeley records letter grade distributions in two ways, as percentages and as counts. When it calculates percentages, it does so based on the number of letter grades. But not all students take a course for a letter grade. So, for example, of students taking upper-division courses for letter grades in 2014-2015, 49% got "some kind of A". But only about 2/3 of these courses (69%) were taken for a letter grade; most of the rest were taken on a Pass/No Pass or Satisfactory/Unsatisfactory basis (and a very few grades were Incomplete, In Progress, or Unknown).
P/NP and S/U are excellent options, as they make it easier for students to take intellectual risks. At the same time, experience indicates that most students who opt for P/NP are going to get grades in the B or C range -- perhaps because they've gotten in over their heads, or because the P/NP option gives them license to blow the course off. The result is that, while 49% of all letter grades in upper-division courses were "some kind of A", As comprised only 31% of all grades. Put another way, the P/NP option probably inflates the percentage of As and Bs. If you assume that students who received a grade of P or S would have received a C+ if the course had been taken for a letter grade, the percentage of As and Bs combined would drop to about 56% from 86%.
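As a back-of-the-envelope check on the adjustment described above, here is the arithmetic with the rounded percentages quoted in the text. Because the published figures are computed from raw counts (including the Incomplete and other categories) that I don't have, the results come out close to, but not exactly at, the 31% and 56% cited:

```python
# Rounded inputs taken from the text above:
letter_frac = 0.69    # share of upper-division enrollments taken for a letter grade
a_of_letter = 0.49    # share of letter grades that are "some kind of A"
ab_of_letter = 0.86   # share of letter grades that are A or B

# Counting P/NP and S/U enrollments in the denominator shrinks the A share:
a_of_all = a_of_letter * letter_frac
print(round(a_of_all, 2))   # → 0.34 (the text, from raw counts, reports 31%)

# Treating every P or S as a C+ shrinks the combined A-and-B share:
ab_of_all = ab_of_letter * letter_frac
print(round(ab_of_all, 2))  # → 0.59 (the text reports about 56%)
```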
For students taking the course on a "Pass/Fail" basis, the minimum for a passing grade is C-.
With respect to grade inflation, the issue is not easy to resolve: what is the proper proportion of As in academically elite institutions like Berkeley? At one point, 50% of Harvard students were getting As in their courses, and 80% were graduating with honors: Harvard is Harvard, but even at Harvard there was a general feeling that those figures were too high. Recently, Princeton established a goal of giving no more than 35% As in any course -- but then abandoned the policy as unworkable and probably unfair. Given the distribution of letter grades in all upper-division courses at UCB, presented above, I shoot for about 55% As and Bs together.
Moreover, given the model of the normal curve, there should probably be more Bs than As, and more Bs than Cs.
But, if everybody got 90% on my exams, they'd all get As -- and, they'd deserve them.
This page last revised 12/14/2015.