Wednesday, April 17, 2013

What's in a Name?

It's been a busy but exciting few weeks. I've been hard at work gathering lots of data: objective data about student learning behaviors from Echo360 and Blackboard, as well as subjective data about their experiences in my "stealth" flipped class this semester. Together with my assessment specialist, Stephanie Corliss, I put together a poster that told the story of my efforts to transform Introduction to Ancient Rome into a course that emphasized active learning, high levels of student engagement, and cumulative learning. It was incredibly gratifying to see, in the charts and graphs, the positive effects of the stealth flipped model on student learning and achievement.


Subjectively, I can tell that there's a big difference in the depth of learning and engagement in the spring cohort as compared to the fall cohort.  Some of this can be attributed to the fact that it's spring semester and incoming freshmen have become more accustomed to the ways of college coursework.  Given that at least 50% of the students aren't freshmen, though, something else has to be going on.

In a recent post on her excellent Peer Instruction blog, Julie Schell highlighted two key elements of my approach to this spring cohort: avoidance of the term "flipped" (instead, I use words like active, engaged, student-centered); and the introduction of more structure and weekly formative assessments.  These two tweaks, both of which came about in response to survey and other data collected from the students in the fall semester, have been crucial to the success of the spring "stealth" flipped class.

In the fall, I followed all the advice in the published research, which said that, for students to "buy in" to a flipped class, it was important to make them full partners in the process. I spent an entire class period, on the second class meeting, explaining what a flipped class was, how it worked, and why we were going to be using it that semester. I also explained what we would be doing with class time now that content delivery had shifted outside of class. This was all recorded on Echo360 and available to the students throughout the semester. By the end of the semester, it was clear from the various surveys that about 30% absolutely loved the flipped class, about 30% hated it, and 40% didn't care one way or the other. One recurring theme was the extent to which the students who didn't like the flipped class blamed the model for things that had nothing to do with it (most obviously, the workload and the expectations on exams). It was also clear from activity on the class Facebook page that there was a very vocal minority (perhaps 5% of the class) who believed that they could force UT to abandon this pedagogical experimentation if they refused to play along. This vocal minority became the quintessential resistant students.

In retrospect, it's not surprising to me that some of my students would have a less than enthusiastic response to the news that they are part of an institutional teaching experiment. After all, nearly all of them are taking the course to satisfy a graduation requirement. The last thing they want is to have a random Classics professor experimenting on them! Even if they have some inherent interest in the ancient world, they are there to get their box checked off and move on--all without taking a hit to their GPA (after all, humanities classes are "easy"). Perhaps even more importantly: it's not a class in their major, and they aren't going to take additional classes on the subject. Arguments about increased learning don't carry much weight with this crowd. They care about grades, not learning (or, at least, not learning at the possible expense of their final grade)--they made this point very clearly on various surveys.

During the winter break, I spent a lot of time thinking about how to do a better job of getting students to see the payoff of buying into the flipped class. I knew that this would be a huge challenge since I have 400 students, and the nature of my relationship with them is markedly different from what it is in a class of 15 or 30 or even 50. No matter how nice and approachable and communicative I am, they feel a sense of distance from me. This distance was brought home to me when I had a TA take some pictures of the class from the back of the room.


It was fascinating to realize what I looked like to those students seated in the middle or back of the auditorium. When I teach a 25-student undergraduate class in a seminar room, I can build an effective, personalized relationship with each of my students and use that relationship to walk them through the flip process. That is not possible in a classroom of 400 non-majors. What we found in the fall was something very interesting: by labeling the class a "Flipped Class," by using the pedagogical jargon du jour, I was creating a massive sense of disorientation for my class. They were in a large auditorium, yet I was telling them that I wasn't going to lecture. They absolutely couldn't see or accept the rational--and evidence-based--arguments for practicing recall and application of content in class in place of traditional lecture. Even students who did very well in the course reported a sense of disorientation. As any experienced educator knows, disorientation is not conducive to learning.

My goal in the spring version of the flipped class was to find a way to orient the students, to get them to feel comfortable in my classroom while also pushing against that comfort zone. I had learned that if I pushed them entirely out of their comfort zone, they would respond with resistance--and nobody won. I was pouring a ton of effort into the course, but the class grades were essentially identical to those in the traditional lecture version. I consulted with my team of learning and assessment specialists, and we decided to try something out: we would make some important tweaks (adding quizzes and a discussion board; bringing a small amount of lecture back into class), but we would also try to avoid giving the students something to "blame" when they felt overworked or stressed.

Let me be clear: nobody ever deceived the students in any way. Far from it. The class syllabus spelled out in precise detail how the class would run and what was expected of the students. The second class day was devoted to a "class orientation," during which I went through each element of the class (e.g. i>clicker, discussion board, peer discussion) and explained how it would work and why we were doing it. I explained that this active approach would help them master Roman history. I simply avoided labeling the class. I used descriptors like "active," "engaged," and "using peers as important learning resources." This lecture was recorded using Echo360 and has been available to the students throughout the semester. In addition, the students did a learning strategies survey for the Center for Teaching and Learning (CTL), at the top of which they were told that CTL was doing research on student learning in large lecture courses.

Throughout the semester, I have had "meta" conversations with the class about how the different parts of the class are helping them to learn the material. My partners in the CTL administered carefully designed "exam wrappers" after each midterm so that students have the chance to reflect on their learning strategies and on the ways that the different parts of the class are helpful (or not) to their learning, and also to think about how they might improve their approach. On each exam wrapper, we ask them if there is anything that the teaching team can do to improve their learning experience. These comments are collected and sent to me by the CTL team, and we make adjustments when appropriate. Despite the enormous class size, there is a steady dialogue among the students, me, and the teaching team.

By abandoning the formal terminology and presenting the class as "just the way I teach a lecture class" rather than as a "flipped class," I've found a way to motivate the students to, in fact, flip the class. Indeed, one of the great lessons of the fall was that, while I could design a flipped class, only the students could truly "flip the class." It took a number of tweaks to get them to flip, to embrace a model of learning that both puts them at the center and expects a lot of them. It is deeply satisfying to see this spring cohort doing this--and, thereby, making marked improvements to their learning and achievement.

******************
6/13/2013: Robert Talbert has a great post on his CHE blog about student resistance to innovative teaching models.

Monday, April 1, 2013

Midterm #2

Tomorrow my Rome students will take the second midterm. This is the exam that has proven to be a real stumbling block for previous cohorts, and nothing I've done seems to have had much of an effect on student performance. I'm very interested to see how this Spring 2013 cohort does. They have had a chance to practice the material on quizzes; they have had a chance to experience why this chunk of material is particularly challenging. At the same time, many of them have midterms in other classes; and, by accident, I scheduled the exam on the Tuesday after Easter. Quiz performance dropped noticeably on the most recent quiz. As well, lecture viewing stats are a bit lower than I'd expect. I fear that those viewing numbers are going to surge upward in the next 24 hours. I only hope that the foundations the students set down with their weekly quiz preparation and in-class practice will be enough to propel them, even if they are doing a bit too much cramming.

It is not an easy exam. In fact, it's substantially more difficult than the ones in Fall 2011 and Fall 2012. As I wrote the exam, I realized that I needed to raise the bar a bit. I won't continue to do this, but it's nice to see how much I've been able to improve the quality of learning in the class in just a year. My sense is that the difficulty level is now about where I want it, and where it should be for a first-year course. The exam still covers a lot of pretty basic information, but it now involves much more analysis and a much deeper engagement with the complexities of different events.

I'm holding my breath, hoping for the best but ready for less. This exam is why I flipped my class; it's why I have made so many changes to the way the content is delivered and to what I do in class. The students have been coming to class and engaging. They have been doing well on the quizzes, despite the hiccup last week. I hope all of this shows in their performance--in part because I'd like to feel like I've finally come some distance in solving the problem of this middle part of the course.

Addendum 4/3/2013: Some preliminary observations about Midterm #2 (with no information about their actual performance): time was absolutely not a factor, and many of them handed in the exam after about 45-50 minutes. Only one person was writing up to the last minute. They asked very few questions during the exam. Anxiety levels seemed quite low, despite the fact that one of the boxes of exams arrived at the classroom 5 minutes late. There was no evidence of last-minute cramming in the Echo360 data or on Blackboard. I am sure that they did several hours of studying on Monday and Tuesday morning, but it seems to have taken the form of review rather than a desperate attempt to learn the material from scratch. They seemed to find the exam very predictable. Although this was a more difficult exam than those in Fall 2011 and 2012, my hunch is that scores will be significantly higher. I'll know more once we get the scantron data later this week, and once the short answers are graded over the weekend.

Addendum 4/6/2013: Here is the share of the class at each score bracket on the multiple choice section of Midterm #2, Fall 2012 vs. Spring 2013:

Fall 2012 multiple choice:
1.6% = 100% (no questions wrong)
7.6% = one question wrong
8.6% = two questions wrong
11.4% = three questions wrong (c. 85%)

Spring 2013 multiple choice:
11.4% = 100% (no questions wrong)
18.8% = one question wrong
17.1% = two questions wrong
14.5% = three questions wrong
11.4% = four questions wrong (c. 85%)
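Reading those brackets cumulatively (assuming each is an exclusive slice of the class): in Fall 2012, roughly 29% of students scored around 85% or better on the multiple choice (1.6 + 7.6 + 8.6 + 11.4); in Spring 2013, roughly 73% did (11.4 + 18.8 + 17.1 + 14.5 + 11.4).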
A MASSIVE difference. I expect that the correlation between the multiple choice and the short answers will be pretty strong (it generally is). Oh, and this was a more difficult exam than the one in Fall 2012. So what I am seeing is that low-stakes assessment dramatically improves performance on more comprehensive exams... Pretty cool to me (who would never have believed this in August 2012).

An (In)decent Proposal

One of the aggressively advertised promises of MOOCs is that they offer anyone in the world the opportunity to learn from "the best professors from the best institutions." There has been some criticism that, translated, this means mostly older white males, and that it perpetuates an outdated image of most university faculties (though, to be fair, at R1 institutions, men still far outnumber women in the ranks of full professors). Likewise, it has been pointed out that nothing about an appointment at a prestigious institution like Stanford or Harvard is evidence of good teaching (though I *would* argue that, on the whole, these faculty are very good performers in front of a camera: eloquent and self-possessed).

Another topic that gets substantial discussion among higher ed folks is the rapidly aging professoriate.  Without a mandatory retirement age and because of the recession, many faculty are continuing to work well into their 70s and even 80s.  Sometimes they continue to be productive members of the university community.   Too often, though, they are a drain on increasingly limited resources and, some would say, have contributed to the backup in the hiring of newly minted PhDs to tenure-track jobs (of course, others argue that, once the current senior faculty retire, their positions will be lost to attrition and their workload replaced by a flock of lecturers and adjuncts).

So here's a proposal that might, as the proverb goes, kill two birds with one stone: universities that are interested in producing MOOCs should hand the production of these over to all faculty who are over the age of 70 (or even 67). Emeritus faculty should also be asked to contribute to the project, perhaps as TAs, discussion leaders, and the like. These senior faculty, many of whom are very experienced lecturers and have long and impressive CVs as well as multiple books that they can market to their worldwide audience of students, would be in charge of producing content but also of staffing the courses (an ideal job for an emeritus who is missing intellectual engagement). These older faculty are less likely to care that they aren't getting compensated for their time (hopefully they already have good retirement accounts in place), and this model would move us away from the use of grad students and lecturers to staff the MOOCs, often on a "voluntary" basis. Finally, because MOOCs continue to be videos of lectures, these faculty won't need to get up to speed on the latest in education technology and pedagogy if they don't want to. They can leave that nonsense to their younger and less distinguished colleagues.

Think about it. This would be a great opportunity for faculty near the end of their careers (perhaps even their lives) to go out on top. One last worldwide lecture tour and an opportunity to attain a certain immortality. As well, it would allow universities to capitalize on the skills and reputations of a segment of the faculty population that has proven resistant to incentivized retirement deals and the like. At the moment, most of the behind-the-scenes work is done by grad students and low-paid lecturers. Why not shift that to the other end of the spectrum, to those faculty who are eligible to collect Social Security and Medicare, and to interested emeritus faculty?

Ok, I'm joking.  Sort of.

Late Drops

My institution allows students a number of opportunities to late drop (Q-drop) a course--far too many, in my opinion. The precise number of times a student can late drop a course varies from college to college. There are always a few students who have done absolutely no work for the course, ever, and show up to late drop. This happens with such regularity, semester after semester, that I suspect some sort of financial aid or other scam: they need to be enrolled as full-time students but never had any intention of completing the course. Then there are the students who have, say, taken the first midterm and most of the quizzes but have done very poorly--at a level that is really only possible if they were putting forth minimal effort. When I talk to these students, it becomes clear that, though they took some or all of the assessments, they did very little work for the class. Sometimes this was planned; other times personal issues came up and interfered with the student's ability to complete the assigned work. And then there are the students who are earning a decent grade, a B or C, but don't want their GPA to go down. Oddly, many of them concede that their GPA is already in the B or C range (they are often science/math majors), but it's somehow ok to get a C in a Computer Science class but not in a liberal arts course.

I was hoping that the frequent assessments, in the form of weekly quizzes and ethics worksheets, would cut down on the number of late drop forms I sign. Students would know where they stood, and it would be much more challenging for them to be in denial about their grade. Ultimately, it hasn't had any effect at all. I am signing the same number as always--perhaps even more than usual thus far. In most cases, the student has done very little work and is clearly failing the course. If they are in a position to earn some kind of B, I've encouraged them to stay in the class and dedicate some time to finishing it off. In the midst of talk about time to degree and 4-year graduation rates, it seems imperative that institutions tighten up the rules governing late drops. It appears that students are using these largely as a way to maintain full-time student status, even when they had little intention of completing a course. I am not opposed to a student dropping a course that s/he is failing after working hard at it; but it's ridiculous for someone to be able to drop a course in Week 10 that they never showed up for.

The one bright spot: all but three of the students who wanted to drop were of the "never showed up" sort rather than ones who didn't like the grade they were getting.  I'm sure that others have changed their grading option to pass/fail.  But overall, I do feel like the quizzes have done their job of keeping students informed of their performance and forcing them to be realistic about their chances of getting an A in the course.

One of my plans this summer is to work with our Center for Teaching and Learning assessment specialists to get a better sense of who drops the class and why.  The number isn't very big, but it's big enough that I think we might be able to sketch some "types" and think about how to address this.  Even though this course is just fulfilling an elective requirement, dropping it late means that a student is going to have to pay for and take another course down the line, perhaps instead of taking a course in his/her major that would move him/her closer to graduation.

Update 5/26/2013: In Fall 2011, I had 5/220 students (c. 2.3%) late drop (Q) or withdraw (W). In Fall 2012, 12/387 students (c. 3.1%) earned a Q or W. In Spring 2013, 16/387 students (c. 4.1%) earned a Q or W. The numbers between Fall 2011 and Fall 2012 stayed relatively stable; they increased slightly in Spring 2013. This was surprising to me since the students received significantly more feedback on their performance and had multiple opportunities to make necessary adjustments. Interestingly, though, the most influential factor was likely the fact that, in Spring 2013, I was no longer giving high-stakes midterm exams. In previous semesters, 30% or so of the grade was outstanding by the last class day. Some students clearly decided to dig in, study, and see if they could earn a passing grade. In Spring 2013, all but about 13% of the grade was determined before the final class day. The increase in Q drops actually represents students making informed decisions--they knew that their chances of earning whatever grade they wanted were slim (or non-existent) and so opted to drop the class. At least half of these students, I noticed, had never engaged on the class discussion board and had often done poorly on the weekly quizzes. In essence, they were "bad fits" for a class that required a "learn as you go" rather than a "cram for the exam" approach. I suspect that one way to decrease the number of late drops is simply to get a better sense of "who" the typical late-drop student is, and then to remind students who fit that profile that they will not find my class a good fit.