Wednesday, March 13, 2013

Quizzes as a Tool for Calibrating Student Expectations

With every passing semester, I believe ever more firmly that one of the fundamental keys to a successful class (like any good relationship) is the clear communication of expectations.  More recently, as I move to a much more student-centered classroom and model of instruction, I have come to see that, for me to communicate my expectations clearly, I need a good understanding of what my students' baseline expectations are.  This is particularly true when it comes to grades, and particularly true in a class with 75% underclassmen in the state of Texas.  These students gained admission because they were in the top 10% (or 8% or 9%) of their graduating high school class.  This probably means that they have rarely if ever received a grade lower than an A since middle school--if then.

This spring, working together with a colleague in UT's Center for Teaching and Learning, I administered a backgrounds, goals, and expectations survey to the students during the first week of the semester.  Nearly the entire class completed the survey (there was a big incentive attached).  I recently received an overview of the results.  In many respects, I was surprised: fewer of them work than I expected (only 25%); more of them were in the class because they were interested in the topic than I would have guessed (only about 25 of 335 registered for purely pragmatic reasons, e.g., "it fit my schedule").  The biggest shock was the grade that they expected to get in the course: 96% expected an A of some kind.  Only 0.6% expected a grade lower than a B+.  Historically, somewhere between 35% and 40% earn some kind of A in the course.  Another 30% earn some kind of B, and about 20% earn a C.  I fail very few students, because those who would fail tend to drop the course (sometimes even after it is over) or withdraw.  This particular cohort seems to be very good (I wrote about their performance on the first midterm recently).  They are performing at a high level on quizzes and midterms.  This means that more like 50% are in the A range, perhaps 55%.  I am fairly sure, though, that 96% of the class will not be earning some kind of A.

I'd be curious to know how this same cohort would answer that question now, at the midway point of the semester.  They have now sat for five quizzes and a midterm exam and have completed half of their graded discussion posts.  With the quizzes, they get weekly feedback on their performance and have the opportunity to calibrate not just their effort but also their expectations.  Despite the hassles of administering scantron quizzes (to limit cheating) to a class of nearly 400 students, I am completely sold on their many benefits--and these benefits extend well beyond motivating regular and consistent study.  With each quiz, the students can see how they are doing.  I post the questions with the correct answers and also review questions that were missed by more than 25% of the class (and frequently include those questions on future quizzes).  I also post a summary of the class performance, so that students have some sense of where they stand relative to the rest of the class.

When I first started teaching a decade ago, I was reluctant to ever post data that showed student performance relative to their classmates.  I wanted students to focus on themselves, not to compare themselves to others or become hyper-competitive.  Since I don't grade on a curve, it doesn't matter how someone else did.  Over the years, though, I've realized that this comparative data has an important role to play in helping my students calibrate their expectations, face reality, and grasp that they are no longer in high school.  They may well be small fish in a very big pond full of much larger fish.  It cuts back on complaints when they get a glimpse of just how many big fish are swimming around with them.

The other advantage of frequent graded assessments: they force students to confront reality.  When I used a three-midterm system for my lecture class, I regularly had students operating in extreme denial.  Even at the end of the semester, they believed that they were going to get a much better grade than they actually received (I once calculated this from the course evaluations).  They would persuade themselves that they would do better on the next exam, regardless of how they had done on previous exams.  I am stunned at the degree of denial that I see on a regular basis.  My sense is that these weekly quizzes are going a long way towards dispelling that denial and forcing students to fish or cut bait.  The quizzes are also giving them very specific information about their learning behaviors and reminding them that, if they don't keep up, it will be very bad for their grade.

We will do an end-of-semester survey that asks them to reflect on their performance in the course.  I am now very curious to read those surveys, and especially to see whether, over the course of the semester, the frequent graded feedback actually does help them not only adjust their expectations but also realize that, if they want an A, they are going to have to work very hard for it.  I would not be surprised if this cohort ends up earning significantly higher grades than I usually give--but it will be because they worked hard for them.  They knew how hard they would need to work because, each week, they got feedback on the success of their learning strategies.
