Thursday, June 6, 2013
The Myth of the Super Professor
Yesterday I wrote about the myth of the bad professor, a myth that has been a central feature of the narrative of a crisis in higher education. The myth has been a driving force behind the daily calls for the reform of higher education--preferably by Silicon Valley VCs who are entirely outside of the system (and who, on the whole, seem to understand few of the complexities of higher education in the US). The topic evidently hit a nerve with many because, in about 24 hours, the post had almost 500 hits. Among other things, I think that points to how fed up so many of us are with the constant attacks on our abilities and efforts. It is enough of a challenge to inspire often undermotivated students to do the hard work of learning; the last thing we need is to have to defend ourselves from the politically motivated attacks of those who have never walked in our shoes and who have no idea what is actually happening on the ground.
Yet, to an extent, we have left ourselves vulnerable to these attacks because, in all the years that college teaching has been professionalized, we've never really developed a coherent and consistent system for measuring student learning in our classes. In many respects, we ourselves are guilty of perpetuating the myth of the bad professor because we have not adequately challenged the myth of the great professor. If the bad professor is phoning it in, the great professor is the charismatic sage whose students hang on his (it's usually a he) every word. When faculty are evaluated for annual raises, tenure, and promotion, student evaluations are consulted; teaching awards are often the result of student nominations and recommendations. At no point in this process is any significant attention paid to student learning. The best professor is the one whose students love him and want to impress him (and, to be fair, are hopefully inspired to work hard in the course because of this love and desire to impress). But shouldn't the best professor be the one who produces the highest learning outcomes in a cohort of students? Shouldn't we be evaluating the quality of instruction not on the performance but on the results of the performance?
Most faculty will agree with the research that has repeatedly demonstrated that teaching evaluations are not a very effective measure of instructor quality or student learning. A range of age and gender biases is at play. One recent study found, unsurprisingly, a strong correlation between the attractiveness of the professor, the grade the students thought they were getting, and the positivity of their evaluations of the professor and course. We all know these things are true, yet we continue to play along. One of my favorite stories about gaming course surveys goes as follows: over the course of the semester, slip commentary on each of the survey questions into class discussion, telling the students exactly how you are doing whatever the question asks about (e.g., repeatedly emphasizing your accessibility). Do this over and over. By the end of the semester, the students will have internalized the narrative and will rate you and your course highly. In my own case, I know that if I want outstanding course evaluations, I just have to let everyone think they are getting an A up until the time they submit the evaluation--and then use a high-stakes final exam to sort them out. I've never actually done this, but I know plenty of people who do.
So what do we need to do to shift the focus from the personality of the professor--and the cult of personality more generally--to evidence-based arguments about instructional quality and student learning? First, we need to assess our courses and students much more deliberately. We need to be able to demonstrate, in some terms, the value added by our courses. Yes, I know this plays into the rhetoric of the "outcomes" crowd. It will inevitably undervalue all sorts of difficult-to-evaluate skills like critical thinking, and it assumes that a course's value is fully knowable at the end of the semester (though this second issue could be addressed with follow-up surveys). Many of us, myself included, feel more comfortable with the current system even while acknowledging its imperfections. At the same time, by allowing ourselves to be evaluated largely on features of our personality, we are laying the foundation for our own demise (because there is always someone else who is even more accessible, entertaining, etc.). MOOCs perpetuate the notion that good teaching is equivalent to skilled public performance rather than to demonstrable learning outcomes. An important first step in responding to the pedagogical claims of MOOCs is to insist that courses and instructors demonstrate student learning outcomes.
As part of the redesign of my Intro to Rome class this past year, I instituted a thorough system of assessments of the course. Students had multiple opportunities to comment on all parts of the course. This was interesting and instructive for a range of reasons, not least because we also had data that allowed for a comparison between their self-reports and direct assessment of various behaviors. It quickly became clear that there was a significant gap. One place where this gap was enlightening to all of us was in the assessment of the implementation of an "ethics flag." Three large-enrollment classes implemented the flag in Fall 2012. The first was lecture-based, with 220 students. The second was a mix of lecture and discussion sections, with 300 students. Mine was flipped, with discussion as the primary feature, with 400 students. When the students were assessed, they reported the most satisfaction and learning in the lecture-based class, followed by the mixed class, with my class in last place. When the flag implementation committee did a direct assessment of their work, the results were exactly the reverse. Students may have "liked" my class the least, but they learned the material much better--not surprising, since I was requiring them to engage with it actively rather than passively. This spring, the discussion-section course and my course were offered again, and the results were the same, though with a slightly smaller gap and enough student comments to make it pretty clear that perceived workload was inversely correlated with student satisfaction. Alas.
When we saw these results, they made sense--but they were also an important reminder that student self-reports often reflect students' sense of comfort. Many students are still far more comfortable learning passively via lecture than in more active formats. Indeed, as my campus has worked to "blend" a number of gateway courses through the Course Transformation Program, a consistent feature of the blended courses has been lower student evaluations. There is a clear inverse correlation between the level of student engagement a course requires and student satisfaction with it. What we are learning is that the courses (and instructors) that produce the greatest learning gains and best prepare students to progress through degree programs and graduate on time aren't necessarily the ones that students love the most. Often, this is because such courses require significant and consistent engagement. At the same time, these instructors are clearly doing their job of producing significant learning gains in their students.
The time has come to abandon the myth of the great professor--the lecturer who keeps the audience rapt in their seats, scribbling down his every word, chuckling at his witty jokes, in awe of his brilliance. Certainly, many professors are skilled performers and many of the students in their classes learn well. But there are plenty of boring, bad performers who are very good at designing courses and whose students learn at very high levels--even if those students didn't necessarily love the professor. Teaching is ultimately about student learning. If we continue to insist on defining our best professors as those who are the most gifted performers, but without any way to quantify what this means, we make it difficult to defend ourselves from the accusations of bad teaching. If we shift to defining good teaching through demonstrable student learning, we make it very difficult for these outside (and sometimes inside) attacks to carry much weight. Of course, this shift has to start at the highest levels of the university, by creating a system that evaluates and rewards student learning more than instructor likeability. Likeability matters. It matters that students enjoy a class. But, at the end of the day, what matters most is that they learned the course content.
[disclaimer: my students generally like me and give me high course evaluations. I'm not writing this out of sour grapes. Rather, after more than a decade of teaching, I've seen repeatedly that the system is set up to incentivize a kind of teaching that does not always serve students' best interests. And, at the moment, this system has left faculty very vulnerable to charges that they aren't "good," with no clear way to rebut such charges.]