I awoke on Tuesday morning to find that the Chronicle had published a provocative article on the ethical obligations of MOOC instructors, with the headline "MOOC Professors Claim No Responsibility for How Courses Are Used." This led to a flurry of activity on my Twitter feed, with a range of academics and others contributing to the conversation. Jonathan Rees had a characteristically savvy and pointed response to the article as well (though I would not bundle MOOCs and flipped classes the way he does--but that's for another post).
It's a question that MOOC professors should have been asking themselves from the start, instead of assuming that administrators and legislators at institutions lower on the food chain would behave "reasonably." In fact, from the perspective of budgets and current politics, many believe that they *are* behaving reasonably when they make plans to license MOOCs and make them credit-bearing. They care not a whit about the authorial intentions of the MOOC professors (and, as the CHE article and Michael Sandel's comments about SJSU's use of his HarvardX Justice course suggest, at least some MOOC professors also don't care what happens to the commodities they created). Unfortunately, not caring--even remaining deliberately ignorant--is not a morally acceptable position to adopt. If someone decides to commodify a product--in this case a course--they have an ethical responsibility to ensure that the commodity is not used to do harm to the larger community.
Many MOOC professors seem not to grasp the significance of their participation in the grand MOOC Inc scheme. I won't pretend to understand individual motives, which must be highly varied. But I suspect that, whatever amount of ego is involved, at the root of it, many are motivated by the opportunity to touch the lives of tens of thousands, to achieve a kind of notoriety that is impossible in a brick-and-mortar university--even for award-winning senior faculty. As well, they are motivated by the propaganda of altruism that Coursera, in particular, has worked hard to spread. Certainly, there's no question that access to a high quality education is a significant national and global issue; and there's no question that working to increase accessibility is a good thing. The difficulties come when one realizes that the MOOC Incs need to make money; they need to monetize. When this need to monetize collides with the desire of many state legislatures to decrease the costs of higher education to their citizens, serious problems arise. It is at this point that the MOOC professor has a moral obligation to, at the very least, take a position. If the MOOC professor is in it for the money s/he can make from licensing fees (a process that will inevitably decrease salaries and job opportunities at institutions down the food chain), fine. But own this. Don't hide behind claims of altruism or, worse, indifference.
Alternatively, if your motives truly were about outreach, about making your passion available to a larger audience, own that and protect your product. Be sure that your commodified course cannot be sold without your permission; take an interest in the terms under which it can be run at a different institution. All of us share syllabuses and teaching ideas; there's no reason not to share videos, practice questions, and other learning activities with our colleagues around the world. The problems start when regents and legislators, most of whom have little experience with the nuts and bolts of the academic enterprise, treat a MOOC course as if it were the same as a classroom course or even a textbook. It shares things in common with both of these, yet it is neither. Faculty need to be vocal about these distinctions. We can't simply dismiss MOOCs out of hand; we have to explain and perhaps even demonstrate why they don't and can't do what regents and legislators want to claim they can do.
We also need to be sure not to demonize all MOOC "superprofessors." It's easy to do, especially because the media coverage has been so polarizing. In the CHE article on the ethics of MOOC creation, it was as if they went out of their way to find the most tone-deaf faculty, the most out of touch with the rest of the academic world, to speak on the record. Other MOOC faculty, though, are being thoughtful and responsible about their courses. On Monday afternoon, I spent a delightful hour on the phone with one such professor, Dr. Al Filreis.
Dr. Filreis offers his quirky and fascinating take on modern and contemporary American poetry in a MOOC titled ModPo that runs on the Coursera platform. The course runs for 10 weeks and, Dr. Filreis repeatedly stated, is not intended to replace a university course (though it could be used as part of a flipped class led by another instructor) or to run independently of him and his teaching team. It is, notably, focused on discussion rather than lecture--a great model for other faculty. Dr. Filreis is motivated by an altruistic desire to make his course more widely available, especially to adult learners; and, secondarily, to work in tandem with other faculty around the country to coordinate his MOOC with campus-based courses. He was clear that he would not willingly relinquish control over his course for Penn to market; and that he would shut the course down rather than allow it to be run independently of him. Of course, it remains to be seen whether the institutions that are building these courses at significant cost will respect the wishes of the MOOC professors if push comes to shove.
The current conversation about MOOCs, as polemical as it is, is a crucial one to have; and it is crucial that institutional faculty step up, engage in this discussion, and take a stand. A number of Harvard faculty have taken a first step in that direction, in a letter to the dean of the Faculty of Arts and Sciences signed by about 60 faculty. This is an important step and one that should be followed by faculty at every institution currently involved with one of the MOOC Incs, as well as by faculty at schools that are deliberating the licensing of MOOCs to be taught on their own campuses. At the moment, the quietism of faculty is being exploited; faculty are being caricatured as researchers who can't teach, don't care about teaching, and would happily do anything to avoid doing the real work of teaching. Legislators, regents, even the MOOC Incs promise that all of this is going to result in less work for us. They leave out the fact that it will also, certainly, result in even deeper cuts to the instructional staff and will decimate the ladder faculty. And that's the thing: whether we like it or not, we are all part of a community and we have moral responsibilities to other members of that community, including the graduate students who are training for teaching positions.
Friday, May 24, 2013
The Early Returns on Flipping a Large Enrollment Humanities Course
Back in Fall 2012, I embarked on an instructional experiment. I wanted to see if it was possible to run a large enrollment, core requirement lecture course as a flipped class, with the same sorts of results (e.g. increased learning, higher-order thinking, more student responsibility and self-direction, higher grades) that seem to accompany flipping smaller classes (15-30 students). To be completely honest, I had no idea what I was doing. I was working from instincts honed after a decade of teaching every possible kind of course at UT Austin; and with a lot of help from our IT department and our Center for Teaching and Learning.
It's been an interesting year, to say the least. I've chronicled many of the ups and downs, the lessons learned along the way, in this blog. In the coming weeks, I'll be adding substantially to the blog for those who are interested in trying something similar. Especially at this moment, it's crucial for faculty to be willing to step outside their comfort zones, to experiment with new ways of motivating learning in their students even in the hostile-to-learning environment of the large enrollment "lecture" class. What worked for me won't work for everyone, but I think it important to make the point that institutional money invested in the development of more effective "on the ground" instruction will go a long way towards addressing some of the loudest detractors of higher education. Unlike MOOCs, we can actually demonstrate improved learning and better learning experiences for our enrolled students.
Figuring out how to teach large enrollment classes in a student-focused way is not easy; it's a horrendous amount of work, trial and error. A lot of error. And work. That said, when you get it right (or pretty close to right), it's incredibly rewarding to see the students engaging with the content at a much deeper level; and to see their grades reflecting this more substantial grasp of the course material.
Some rough data. What I am posting is final course grades from Fall 2011 (traditional lecture course; 220 students); Fall 2012 (attempted flipped class that didn't really flip; 387 students); and Spring 2013 (redesigned and successful flipped class; 387 students). This is preliminary data, but nothing major will change.
A couple of interesting things: In Spring 2013, 41% earned As and 10% earned A-, for a total of 51% of the class in the A range. In Fall 2012, 25% earned As and 15% earned A-, for a total of 40%. In Fall 2011, 34.4% earned As and 12.6% earned A-, for a total of 47%. The Fall 2012 course was more difficult than the Fall 2011 course, which partially accounts for the drop in As. As well, though, the Fall 2012 group was, on the whole, resistant to "flipping," and about 2/3 of them used the familiar "cram for the exam" approach. Notably, the Spring 2013 course was substantially more challenging than the Fall 2012 course, yet this cohort had by far the highest grades, especially in the A category.
In Spring 2013, 76.6% of the class earned a B- or higher; 87.3% earned a C- or higher (1.3% earned credit). In Fall 2012, 79.3% earned a B- or higher; 90.2% earned a C- or higher. In Fall 2011, 80.6% earned a B- or higher; 94.1% earned a C- or higher (1% earned credit). The differences between the Fall 2011 and subsequent semesters are probably due to increased difficulty; and to the fact that, in the Fall 2012 version, the grading scheme was such that, while it was difficult to earn an A or high B, it was relatively easy to earn a B or lower. That was much less true in the Spring 2013 version, which valued consistency of performance over a wide range of activities. Still, it is worth noting that the doubled class size had very little effect on the overall grades of the students (though it did seem to increase substantially the number of people who failed the class--largely because they did not do any of the assigned work; and the number who failed to complete the class via W or Q drop).
In the Spring 2013 version, 23/371 students failed the class, compared to 17/375 in Fall 2012 and 10/215 in Fall 2011. In addition, in Spring 2013, 16 students did not finish the class; that number was 12 in Fall 2012 and 5 (of 220 enrolled) in Fall 2011. Overall, these are relatively small numbers and seem to remain stable as a percentage of the total enrollment. I was disappointed that these numbers did not decrease in the spring semester, when the students were given substantially more feedback on their performance in the class. Despite knowing that they had little chance of passing the class, this cohort of students made few adjustments and often waited until the very end of the semester to drop the class, having done almost nothing all semester.
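For readers who want to play with these figures, here is a minimal Python sketch that tabulates the numbers quoted above. The semester labels and counts come straight from this post; the script itself is just illustrative scaffolding.

```python
# Tabulate the course outcomes quoted above. The labels and numbers
# come from this post; the script is only illustrative.

semesters = {
    # label: (A %, A- %, fails, completers, did-not-finish, enrolled)
    "Fall 2011":   (34.4, 12.6, 10, 215, 5, 220),
    "Fall 2012":   (25.0, 15.0, 17, 375, 12, 387),
    "Spring 2013": (41.0, 10.0, 23, 371, 16, 387),
}

for label, (a, a_minus, fails, completed, dnf, enrolled) in semesters.items():
    a_range = a + a_minus                # share of the class in the A range
    fail_rate = 100 * fails / completed  # failures among students who finished
    dnf_rate = 100 * dnf / enrolled      # W/Q drops as a share of enrollment
    print(f"{label}: A range {a_range:.1f}%, fail rate {fail_rate:.1f}%, "
          f"did not finish {dnf_rate:.1f}%")
```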
Overall--and this is admittedly impressionistic--the most significant difference between the "lecture-based" model of instruction and the more active, flipped model is that the latter raised the grades of the B-C students. The A students performed at an even higher level (about 10% of the class scored over 100%); but the real beneficiaries were those who would normally have been in the B-C range. These students seem to have moved up at least half a grade, if not a full grade, from previous cohorts. They benefited most from the constant engagement, the increased structure, and a quiz/exam design that strongly incentivized a "study as you go" approach to the course material.
Sunday, May 19, 2013
Summer!
This crazy busy semester (and academic year) is finally wrapping up. I've got a bunch of things I want to post about in the coming several weeks. I am delighted to report that the "no jargon" or "stealth" approach to flipping the large enrollment class worked really well. The students performed very well--90% of them earned a C- or higher, in fact. In addition, I can truly say that I felt like I was facilitating an education for the majority of them. So, more to come in the near future.
Wednesday, April 17, 2013
What's in a Name?
It's been a busy but exciting few weeks. I've been hard at work gathering together lots of data: objective data about student learning behaviors from Echo360 as well as Blackboard; but also subjective data about their experiences in my "stealth" flipped class this semester. Together with my assessment specialist, Stephanie Corliss, I put together a poster that told the story of my efforts to transform Introduction to Ancient Rome into a course that emphasized active learning, high levels of student engagement, and cumulative learning. It was an incredibly gratifying experience to see in the charts and graphs the positive effects of the stealth flipped model on student learning and achievement.
Subjectively, I can tell that there's a big difference in the depth of learning and engagement in the spring cohort as compared to the fall cohort. Some of this can be attributed to the fact that it's spring semester and incoming freshmen have become more accustomed to the ways of college coursework. Given that at least 50% of the students aren't freshmen, though, something else has to be going on.
In a recent post on her excellent Peer Instruction blog, Julie Schell highlighted two key elements of my approach to this spring cohort: avoidance of the term "flipped" (instead, I use words like active, engaged, student-centered); and the introduction of more structure and weekly formative assessments. These two tweaks, both of which came about in response to survey and other data collected from the students in the fall semester, have been crucial to the success of the spring "stealth" flipped class.
In the fall, I followed all the advice in published research that said that, for students to "buy in" to a flipped class, it was important to make them full partners in the process. I spent an entire class period, on the second class meeting, explaining what a flipped class was; how it worked; and why we were going to be using it that semester. I also explained what we would be doing with class time now that content delivery was shifted outside of class. This was all recorded on Echo360 and available to the students throughout the semester. By the end of the semester, it was clear from the various surveys that about 30% absolutely loved the flipped class; about 30% hated it; and 40% didn't care one way or the other. One recurring theme was the extent to which the students who didn't like the flipped class blamed the model for things that had nothing to do with the model (most obviously, the workload and expectations on exams). It was also clear from activity on the class Facebook page that there was a very vocal minority (perhaps 5% of the class) who believed that they could force UT to abandon this pedagogical experimentation if they refused to play along. This vocal minority became the quintessential resistant students.
In retrospect, it's not surprising to me that some of my students would have a less than enthusiastic response to the news that they are part of an institutional teaching experiment. After all, nearly all of them are taking the course to satisfy a graduation requirement. The last thing they want is to have a random Classics professor experimenting on them! Even if they have some inherent interest in the ancient world, they are there to get their box checked off and move on--all without taking a hit to their GPA (after all, humanities classes are "easy"). Perhaps even more importantly: it's not a class in their major and they aren't going to take additional classes on the subject. Arguments about increased learning don't carry much weight with this crowd. They care about grades, not learning (or, at least, not learning at the possible expense of their final grade)--they made this point very clearly on various surveys.
During the winter break, I spent a lot of time thinking about how to do a better job of getting students to see the payoff of buying into the flipped class. I knew that this would be a huge challenge since I have 400 students and the nature of my relationship with them is markedly different than in a class of 15 or 30 or even 50. No matter how nice and approachable and communicative I am, they feel a sense of distance from me. This distance was brought home to me when I had a TA taking some pictures of the class from the back of the room.
It was fascinating to realize what I looked like to those students seated in the middle or back of the auditorium. When I teach a 25-student undergraduate class in a seminar room, I can build an effective personalized relationship with each of my students; and use that relationship with them to walk them through the flip process. That is not possible in a classroom of 400 non-majors. What we found in the fall was something very interesting: by labeling the class a "Flipped Class", by using the pedagogical jargon du jour, I was creating a massive sense of disorientation for my class. They were in a large auditorium yet I was telling them that I wasn't going to lecture. They absolutely couldn't see or accept the rational--and evidence-based--arguments for practicing recall and application of content in class in place of traditional lecture. Even students who did very well in the course reported a sense of disorientation. As any experienced educator knows, disorientation is not conducive to learning.
My goal in the spring version of the flipped class was to find a way to orient the students, to get them to feel comfortable in my classroom while also pushing against that comfort zone. I had learned that if I pushed them entirely out of their comfort zone, they would respond with resistance--and nobody won. I was pouring a ton of effort into the course but the class grades were essentially identical to the traditional lecture version. I consulted with my team of learning and assessment specialists, and we decided to try something out: we would make some important tweaks (adding quizzes and a discussion board; bringing back a small amount of lecture into class); but we would also try to avoid giving the students something to "blame" when they felt overworked or stressed.
Let me be clear: nobody ever deceived the students in any way. Far from it. The class syllabus spelled out in precise detail how the class would run and what was expected of the students. The second class day was devoted to a "class orientation," during which I went through each element of the class (e.g. i>clicker, discussion board, peer discussion) and explained how it would work and why we were doing it. I explained that this active approach would help them master Roman history. I simply avoided labeling the class. I used descriptors like "active," "engaged," and "using peers as important learning resources." This lecture was recorded using Echo360 and has been available to the students throughout the semester. In addition, the students did a learning strategies survey for CTL, at the top of which they were told that CTL was doing research on student learning in large lecture courses.
Throughout the semester, I have had "meta" conversations with the class about how the different parts of the class are helping them to learn the material. My partners in the Center for Teaching and Learning administered carefully designed "exam wrappers" after each midterm so that students have the chance to reflect on their learning strategies and the way that the different parts of the class are helpful (or not) to their learning; and also to think about how they might improve their approach. On each exam wrapper, we ask them if there is anything that the teaching team can do to improve their learning experience. These comments are collected and sent to me by the CTL team and we make adjustments when appropriate. Despite the enormous class size, there is a steady dialogue between the students, me, and the teaching team.
By abandoning the formal terminology and presenting the class as "this is just the way I teach a lecture class" rather than as a "flipped class", I've found a way to motivate the students to, in fact, flip the class. Indeed, one of the great lessons of the fall was that, while I could design a flipped class, only the students could truly "flip the class." It took a number of tweaks to get them to flip, to embrace a model of learning that both puts them at the center but also expects a lot out of them. It is deeply satisfying to me to see this spring cohort doing this; and, thereby, seeing marked improvements to their learning and achievement.
******************
6/13/2013: Robert Talbert has a great post on his CHE blog about student resistance to innovative teaching models.
Monday, April 1, 2013
Midterm #2
Tomorrow my Rome students will take the second midterm. This is the exam that has proven to be a real stumbling block for previous cohorts, and nothing I've done seems to have had much of an effect on student performance. I'm very interested to see how this Spring 2013 cohort does. They have had a chance to practice the material on quizzes; they have had a chance to experience why this chunk of material is particularly challenging. At the same time, many of them have midterms in other classes; and, by accident, I scheduled the exam on the Tuesday after Easter. Quiz performance dropped noticeably on the most recent quiz. As well, lecture viewing stats are a bit lower than I'd expect. I fear that they are going to surge upward in the next 24 hours. I only hope that the foundations the students set down with their weekly quiz preparation and in-class practice will be enough to propel them, even if they are doing a bit too much cramming.
It is not an easy exam. In fact, it's substantially more difficult than the ones in Fall 2011 and Fall 2012. As I wrote the exam, I realized that I needed to raise the bar a bit. I won't continue to do this, but it's nice to see how much I've been able to improve the quality of learning in the class in just a year. My sense is that the difficulty level is now about where I want it, and where it should be for a first year course. It's still a lot of pretty basic information, but now involves much more analysis and a much deeper engagement with the complexities of different events.
I'm holding my breath, hoping for the best but ready for less. This exam is why I flipped my class; it's why I have made so many changes to the way the content is delivered and to what I do in class. The students have been coming to class and engaging. They have been doing well on the quizzes, despite the hiccup last week. I hope all of this shows in their performance--in part because I'd like to feel like I've finally come a distance in solving the problem of this middle part of the course.
Addendum 4/3/2013: Some preliminary observations about Midterm #2 (with no information about their actual performance): time was absolutely not a factor, and many of them handed in the exam after about 45-50 minutes. Only one person was writing up to the last minute. They asked very few questions during the exam. Anxiety levels seemed quite low, despite the fact that one of the boxes of exams was 5 minutes late to the classroom. There was no evidence of last minute cramming in the Echo data or on Blackboard. I am sure that they did several hours of study on Monday and Tuesday morning, but it seems to have taken the form of review rather than desperately attempting to learn the material from scratch. They seemed to find the exam very predictable. Although this was a more difficult exam than those in Fall 2011 and 2012, my hunch is that scores will be significantly higher. I'll know more once we get the scantron data later this week; and once the short answers are graded over the weekend.
Addendum 4/6/2013:
Fall 2012 multiple choice:
1.6% = 100%
7.6% = one question wrong
8.6% = two questions wrong
11.4% = three questions wrong (c. 85%)

Spring 2013 multiple choice:
11.4% = 100%
18.8% = one question wrong
17.1% = two questions wrong
14.5% = three questions wrong
11.4% = four questions wrong (c. 85%)

A MASSIVE difference. I expect that the correlation between the multiple choice and short answer will be pretty strong (it generally is). Oh, and this was a more difficult exam than in Fall 2012. So, what I am seeing is that low-stakes assessments dramatically improve performance on more comprehensive exams... Pretty cool to me (who would never have believed this in August 2012).
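For what it's worth, here is a quick, illustrative Python check of just how massive the difference is: it sums the score buckets listed above to get the share of each class scoring roughly 85% or better on the multiple choice. The figures are from this post; the calculation is only a sketch.

```python
# Cumulative share of the class at c. 85% or better on the multiple
# choice, summed from the buckets quoted above. Figures are from this
# post; the arithmetic is only illustrative.

fall_2012 = [1.6, 7.6, 8.6, 11.4]             # 0-3 questions wrong
spring_2013 = [11.4, 18.8, 17.1, 14.5, 11.4]  # 0-4 questions wrong

print(f"Fall 2012:   {sum(fall_2012):.1f}%")    # 29.2%
print(f"Spring 2013: {sum(spring_2013):.1f}%")  # 73.2%
```

Roughly 29% of the Fall 2012 class was in that top band, versus about 73% in Spring 2013.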
An (In)decent Proposal
One of the aggressively advertised promises of MOOCs is that they offer anyone in the world the opportunity to learn from "the best professors from the best institutions." There has been some criticism that, translated, this means mostly older white males and perpetuates an outdated image of most university faculties (though, to be fair, at R1 institutions, men still far outnumber women in the ranks of full professors). Likewise, it has been pointed out that nothing about an appointment at a prestigious institution like Stanford or Harvard is evidence of good teaching (though I *would* argue that, on the whole, these faculty are very good performers in front of a camera: eloquent and self-possessed).
Another topic that gets substantial discussion among higher ed folks is the rapidly aging professoriate. Without a mandatory retirement age and because of the recession, many faculty are continuing to work well into their 70s and even 80s. Sometimes they continue to be productive members of the university community. Too often, though, they are a drain on increasingly limited resources and, some would say, have contributed to the backup in the hiring of newly minted PhDs to tenure-track jobs (of course, others argue that, once the current senior faculty retire, their positions will be lost to attrition and their workload replaced by a flock of lecturers and adjuncts).
So here's a proposal that might, as the proverb goes, kill two birds with one stone: universities that are interested in producing MOOCs should hand the production of these over to all faculty who are over the age of 70 (or even 67). Emeritus faculty should also be asked to contribute to the project, perhaps as TAs, discussion leaders, and the like. These senior faculty, many of whom are very experienced lecturers and have long and impressive CVs as well as multiple books that they can market to their worldwide audience of students, would be in charge of producing content but also of staffing the courses (an ideal job for an emeritus who is missing intellectual engagement). These older faculty are less likely to care that they aren't getting compensated for their time (hopefully they already have good retirement accounts in place); and this model would move us away from the use of grad students and lecturers to staff the MOOCs, often on a "voluntary" basis. Finally, because MOOCs continue to be videos of lectures, these faculty won't need to get up to speed on the latest in education technology and pedagogy if they don't want to. They can leave that nonsense to their younger and less distinguished colleagues.
Think about it. This would be a great opportunity for faculty near the end of their careers (perhaps even their lives) to go out on top. One last worldwide lecture tour and an opportunity to attain a certain immortality. As well, it would allow universities to capitalize on the skills and reputations of a segment of the faculty population that has proven resistant to incentivized retirement deals and the like. At the moment, most of the behind-the-scenes work is done by grad students and low-paid lecturers. Why not shift that to the other end of the spectrum, to those faculty who are eligible to collect Social Security and Medicare; and to interested emeritus faculty?
Ok, I'm joking. Sort of.
Late Drops
My institution allows students a number of opportunities to late drop (Q-drop) a course--far too many, in my opinion. The precise number of times a student can late drop a course varies from college to college. There are always a few students who have done absolutely no work for the course, ever, and show up to late drop. This happens with such regularity, semester after semester, that I suspect some sort of financial aid or other scam: they need to be enrolled as full-time students, but never had any intention of completing the course. Then there are the students who have, say, taken the first midterm and most of the quizzes, but have done very poorly--at a level that is really only possible if they were putting forth minimal effort. When I talk to these students, it becomes clear that, though they took some or all of the assessments, they did very little work for the class. Sometimes this was planned; other times personal issues came up and interfered with the student's ability to complete the assigned work. And then there are the students who are earning a decent grade, a B or C, but don't want their GPA to go down. Oddly, many of them concede that their GPA is already in the B or C range (they are often science/math majors), but it's somehow ok to get a C in a Computer Science class but not in a liberal arts course.
I was hoping that the frequent assessments, in the form of weekly quizzes and ethics worksheets, would cut down on the number of late drop forms I sign. Students would know where they stood and it would be much more challenging for them to be in denial about their grade. Ultimately, it hasn't had any effect at all. I am signing the same number as always--perhaps even more than usual thus far. In most cases, the student has done very little work and is clearly failing the course. If they are in a position to earn some kind of B, I've encouraged them to stay in the class and dedicate some time to finishing it off. In the midst of talk about time to degree and four-year graduation rates, it seems imperative that institutions tighten up the rules governing late drops. It appears that students are using these largely as a way to maintain full-time student status, even when they had little intention of completing a course. I am not opposed to a student dropping a course that s/he is failing after working hard at it; but it's ridiculous for someone to be able to drop a course in Week 10 that they never showed up for.
The one bright spot: all but three of the students who wanted to drop were of the "never showed up" sort rather than ones who didn't like the grade they were getting. I'm sure that others have changed their grading option to pass/fail. But overall, I do feel like the quizzes have done their job of keeping students informed of their performance and forcing them to be realistic about their chances of getting an A in the course.
One of my plans this summer is to work with our Center for Teaching and Learning assessment specialists to get a better sense of who drops the class and why. The number isn't very big, but it's big enough that I think we might be able to sketch some "types" and think about how to address this. Even though this course is just fulfilling an elective requirement, dropping it late means that a student is going to have to pay for and take another course down the line, perhaps instead of taking a course in his/her major that would move him/her closer to graduation.
Update 5/26/2013: In Fall 2011, I had 5/220 students late drop (Q) or withdraw (W). In Fall 2012, 12/387 students earned a Q or W. In Spring 2013, 16/387 students earned a Q or W. The numbers between Fall 2011 and Fall 2012 stayed relatively stable; they increased slightly in Spring 2013. This was surprising to me since the students received significantly more feedback on their performance and had multiple opportunities to make necessary adjustments. Interestingly, though, the most influential factor was likely the fact that, in Spring 2013, I was no longer giving high-stakes midterm exams. In previous semesters, 30% or so of the grade was outstanding by the last class day. Some students clearly decided to dig in, study, and see if they could earn a passing grade. In Spring 2013, all but about 13% of the grade was determined before the final class day. The increase in Q drops actually represents students making informed decisions--they knew that their chances of earning whatever grade they wanted were slim (or non-existent) and so opted to drop the class. At least half of these students, I noticed, had never engaged in the class discussion board and had often done poorly on the weekly quizzes. In essence, they were "bad fits" for a class that required a "learn as you go" rather than a "cram for the exam" approach. I suspect that one way to decrease the number of late drops is simply to have a better sense of "who" the typical late-drop student is; and then to remind students that, if this is them, they will not find my class a good fit.
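For reference, here is a quick Python sketch of what those counts look like as a share of total enrollment. The counts come from this post; the arithmetic is just illustrative.

```python
# Q/W drops as a share of total enrollment, using the counts above.
# Counts are from this post; the calculation is only for reference.

drops = {"Fall 2011": (5, 220), "Fall 2012": (12, 387), "Spring 2013": (16, 387)}

for label, (q_w, enrolled) in drops.items():
    print(f"{label}: {q_w}/{enrolled} = {100 * q_w / enrolled:.1f}%")
# Fall 2011: 5/220 = 2.3%
# Fall 2012: 12/387 = 3.1%
# Spring 2013: 16/387 = 4.1%
```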