Monday, February 25, 2013

MOOCs and the Liberal Arts

In his recent Salon article, Andrew Leonard put in words something that has been nagging at me for months, namely, the fact that humanities disciplines are badly served by the higher ed MOOC hype.  It has been common to analogize the rise of the MOOC (and, more generally, the push to deliver more online courses as a way of lowering costs) to seemingly similar revolutions in the music and news industries.  As Leonard perceptively observes, however:

"it’s become clear to me that there is a crucial difference in how the Internet’s remaking of higher education is qualitatively different than what we’ve seen with recorded music and newspapers. There’s a political context to the transformation. Higher education is in crisis because costs are rising at the same time that public funding support is falling. That decline in public support is no accident. Conservatives don’t like big government and they don’t like taxes, and increasingly, they don’t even like the entire way that the humanities are taught in the United States."

I want to emphasize Leonard's key insight: the transformation in higher education that is being touted has a decidedly political context that has been missing from transformations of other industries.  Whereas changes in the music and newspaper business models were driven by economics, those in education are largely being driven by politics (for a similar argument on the economics that are driving the higher education revolution, see Predatory Privatization: Exploiting Economic "Woes" to Transform Higher Education).  As well, the education of a populace has long been considered a social good in a way that the supply and medium of music or news delivery is not.  Thus, when The New York Times declares 2012 "The Year of the MOOC" and proclaims the rapid proliferation of these "courses"--and, more to the point, the emergence of businesses like Coursera and EdX--a game-changer, we need to think long and hard about what this means (see also Audrey Watters' excellent overview).  This is not a value-neutral declaration; and the consequences for the landscape of higher education are potentially vast.

A lot has been written, and more is published each day, about the inability of MOOCs to match the quality of a class delivered in a "bricks and mortar" classroom.  The dropout rates are massive (approaching 90% for most courses) and there's not a lot of quality control.  Some MOOCs are high quality while others seem to function more like teasers to attract applicants to the university or college that produced the course; to sell copies of the instructor's book(s); and to attract donations from alumni.  A great deal of a course's success depends on the instructor's ability to adapt the content and delivery of content for an enormous and international audience.  It also depends on the platform's suitability to the course's content.  The platforms were developed for math, computer science, and "hard" science courses and are reasonably good at supporting that content.  As well, the MOOC environment is particularly well-suited to the sort of course that has clearly correct and incorrect answers; and where much of the learning can (but probably should not) be reduced to memorizing and applying formulas.

A reasonably intelligent and self-motivated student can buy the textbook, follow the recorded explanations of concepts, and then apply those concepts to homework assignments.  Unclear concepts can be reasonably well-explained by peers, in part because it is reasonably clear when someone knows what s/he is talking about.  A peer might not walk a student through a difficult problem as smoothly or straightforwardly as an instructor, but I suspect that this process happens with success on a regular basis.  Reflecting on my own experiences with math, physics, and chemistry in college, I can also imagine that a peer sometimes has an easier time identifying and clarifying misunderstandings because they, too, just worked through the exact same set of steps.  In part, though, the success of this process depends on the fact that it is also obvious when a peer is NOT able to be helpful (because they fail to arrive at the correct answer).  It also depends on the fact that solutions follow a predictable set of steps.

The process of learning in an introductory programming or math course is entirely different from that in an introductory writing or even psychology course.  It's not that math/science courses have objective answers while the social sciences and humanities are subjective.  It's that there is sometimes more than one correct answer; and the process of arriving at correct answers can vary from student to student.  In addition, students often struggle to distinguish between good and poor advice from their peers in humanities courses.  They are more prone to persuasive rhetoric because they are less familiar with the facts of an issue--indeed, if they already knew the facts, they wouldn't need to be taking the course.  The learning process differs in significant ways, yet the MOOC platforms and modes of delivery don't account for this at all.  If Blackboard tries to do too much, to be everything for every course, the Coursera and EdX platforms don't do enough (yet) to support the delivery of a humanities MOOC that is more than a kind of living textbook.

The most significant challenge for humanities instructors is the role of written "free response" to the learning process, whether in the form of single words; short answers; or longer essays.  Those of us who use peer review in our courses can only stand agape when the Coursera founders naively suggest it as the solution to grading written work (Andrew Ng discussed this during a lecture at UT Austin in March 2013, c. min. 16).  Certainly, peer review can work very well, but it requires an immense amount of structure and oversight in relatively homogenous student groups.  In much larger groups, it's simply not a good approximation of instructor feedback (Audrey Watters on the problems with the Coursera peer review model).  The point of peer review isn't about grades--preliminary studies indicate that peers do a reasonable job on this front.  It's about providing sustained and thoughtful critiques that encourage deeper thinking and learning on the part of the writer.  Sure, some learning happens in the process of writing; but most of it happens in the process of reading and responding to insightful comments.  Peer review can be part of this process; but in a decade of using it as part of writing assignments at all levels, I've never seen it come close to replacing the comments of the instructor (or even teaching assistant) in an undergraduate course--and even in a graduate seminar, it is a nice supplement to rather than replacement of the specialist instructor's comments.

The implications of the lack of fit between current MOOC platforms and humanities content are significant.  If MOOCs are here to stay, as seems to be the assumption, then humanities faculty need to take this problem seriously.  They need to recognize the political dimensions to the rise of the MOOC; and recognize the potential threat these pose to the survival of the humanities at so-called state-funded institutions.  They need to take seriously the challenge to develop functionality that allows for the delivery of a serious and pedagogically sound humanities course to a large audience.  This does not necessarily require the use of the existing platforms.  Indeed, humanities faculty might well opt to develop their courses on a different platform, for instance, Instructure's Canvas LMS.  We can hope that an enterprising, humanities-focused company might come onto the scene with a platform that better serves our needs.

If we do not take this problem seriously, however, we run the risk of sending the message to the public at large that humanities topics are merely fun and light (the college student's "blow off course"); and don't involve complexity of thought, analysis, and real learning.  At the moment, MOOCs on humanities topics tend towards the sort of class that appeals to lifelong learners and those who want to do something fun but intellectual in their spare time.  This is not because these instructors are lightweights or because their "brick and mortar" classes aren't serious and demanding.  It's because the current functionality of the Coursera and EdX platforms is not sufficient for the kinds of things that support real learning in a humanities course.  The answer, I think, is not to lament this failing but rather, to work actively with Ed Tech experts to figure out how we might deliver a reasonable approximation of what we do in the classroom to a larger audience.

***********
6/8/2013: Cara Reichard, MOOCs Face Challenges in Teaching Humanities (Richard Saller, dean of the School of Humanities and Sciences, suggested that there are certain qualities of the humanities that are better suited to an intimate classroom setting than to a massive online format.  “The humanities have to deal with ambiguity [and] with multiple answers,” Saller said. “The humanities, I think, benefit hugely from the exchange of different points of view [and] different arguments.”)

************
6/12/2013: Posthegemony revisits this topic of MOOCs and the Humanities, and the ways in which they are not a good fit.

Wednesday, February 20, 2013

Midterm #1: The No-Cram Exam?

The first midterm for my "stealth-flipped" Intro to Rome class took place yesterday.  I was dreading this first exam, not only because I needed to make sure that the exam itself was more challenging than the one I gave in the fall; but because the 3-4 days before the exam were going to tell me how well my stealth flip was working to encourage a "learn as you go" approach.  It would also tell me a lot about how well I'd done to manage the students' expectations for the exam.  I focused a lot of energy on defusing potential anxieties: I created a short video about the short answer section of the exam (something that they had not done before in the class); the TA who is running Supplementary Instruction sections focused the week's meetings on preparation for the short answer questions; and I gave the students a list of all potential short answer questions and encouraged them to work collaboratively on it as a Google Doc.  I reminded them that the exam questions might be worded differently, but that the questions would test the same content.  I wanted them to focus their energy on study rather than fretting, speculating, and anxiety-posting on Facebook.

I suspect I was more nervous for the exam than some of my students.  In the fall, I could accurately predict class performance on an exam from the lecture viewing data.  The steeper the spike in the 48 hours before the exam, the lower the scores would be.  I felt like I was watching a car crash unfold in a dream, completely helpless to change the course of events. (I wrote about the more spectacular of the two crashes here).  As it turned out, the current class behaved nothing like the fall class.  The weekend passed quietly with few questions posted on Piazza--and all of them legitimate and reasonable questions rather than the "I am too lazy to look this fact up" sort.  Nobody on the teaching team, including me, was inundated with emails.  When I received data about their viewing of pre-recorded and in class lectures, there were no notable spikes.  Throughout the day on Monday, I was waiting for a spike that never came.

Instead, by all appearances, the students were refreshing material that they already knew pretty well; and working through the study tools I provided (practice multiple choice questions for the content not tested by a quiz + potential short answer questions).  Starting on Saturday, they were hard at work on the Google Doc, refining and adding details (and sometimes acting like trolls, but that's a topic for a separate post).  Their behavior suggested that they knew what to expect on the exam and that they were focusing their energy on study rather than anxiety-driven, unproductive speculation.  From what I heard, there wasn't much chatter on FB and Piazza remained almost completely silent throughout the lead-up to the exam yesterday afternoon.

I don't yet know how the students performed on the exam, but I am optimistic.  It was a difficult exam--intentionally so, because I don't want them to get overly confident and then perform poorly on the next exam, which covers much more complex content.  To be honest, I am a bit in shock that--at least so far as the data suggests--the class as a whole was not cramming.  I'm sure some students did cram, but the vast majority clearly did not.  Hopefully, they realized that the weekly quizzes kept them on top of their learning for the class and that preparation for the exam wasn't nearly so stressful because of this regular preparation along the way.  Certainly, their behaviors before and during the exam suggest that, on the whole, they knew what to expect and what to do to prepare.  They clearly felt oriented to the course expectations and understood that it was on them to put in the work to learn.

We are planning to do an exam wrapper after we return the exams next week, primarily to encourage the students to actively reflect on their study and learning process (but also to collect some preliminary feedback about their experience in the course thus far).  I am eager to see what they say, to see if their own sense of their preparation process matches up with my observations of their behavior.  I can say with a lot of confidence that this group approached the midterm in a completely different fashion from the group in the fall.  I won't declare the stealth flip a success until we are deeper into the semester (and until their time management skills are really being challenged).  Nonetheless, it does seem that the combination of weekly quizzes and providing a very long list of potential short answer questions has had a remarkable effect on changing student learning behaviors--and anxiety levels.  Most of all, it lets students who have little experience with learning in the humanities know what to expect and how to study.

I can tell them that it is not possible for 90% of students to do well in my course with a "cram for the exam" approach until I am blue in the face, but everyone wants to believe they are in the select 10% who *can* learn this way.  This false belief has been reinforced repeatedly during their secondary and even post-secondary education.  As well, many (most?) of them lack the time management skills to discipline themselves to learn regularly, even when they know better, unless a grade is involved.  It is absolutely fascinating to me to see that, by creating an assessment structure that rewards a "learn as you go" approach; making a very big effort to orient the students in the class; and training them in the methods of learning in a humanities/history course, I am seeing significant and positive changes in their behaviors and (so it seems) attitude.  Knock on wood.

Thursday, February 14, 2013

Posting PPTS

When I first started teaching large enrollment classes many moons ago, I refused to post extended notes or my PowerPoint slides.  I didn't take attendance (I had no efficient way to do so pre-i>clicker) and didn't want to encourage students to skip class and then rely on a skeleton outline of the lecture.  About three years ago, I realized that I needed to change my tactics.  Students were pooling notes via GoogleDocs; they were using their phones to snap pics of the PPTs in class (!); and some had disabilities accommodations that entitled them to all my notes and slides (which were then magically being distributed to other students in the class who did not have the same accommodation).  Around the same time that I realized I was fighting a losing battle, I chanced into teaching in a room equipped with lecture capture technology (Echo360).  After some back and forth, I opted to use the lecture capture for my class lectures--and loved it.  No more students asking for PPTs, no more pooling of notes.  Attendance wasn't fantastic, but it wasn't significantly lower than it would have been without lecture capture.  As well, because the lectures were recorded, I felt like I could hold the students to a higher standard of learning on the midterm exams.

Last fall, I used Echo360 extensively, to pre-record lectures and also to record in class lectures.  Within the first few weeks of the semester, I had students asking me to upload the PPT slides that went along with the pre-recorded lectures.  In part because I had decided to give them every possible tool and then observe how they used it, I did so.  I could appreciate the argument that they wanted to be able to take notes from the lectures directly on print outs of the slides.  Unfortunately, it soon became clear that, when pressed for time, the students were skipping the lectures entirely and simply reading through the PPT presentations.  These presentations had a fair amount of text on them, in part because I was worried I would forget important details while taping the lectures.  Still, they weren't a sufficient replacement for the actual lectures.

Once again this semester, I had students asking for the PPT slides of the Echo recordings.  The argument was that it was difficult to scroll through the recording to find specific details.  Certainly, depending on the speed of one's connection, this can be true.  It is nevertheless interesting to me that, just 18 months ago, students were delighted to have the recordings.  Nobody asked for the PPTs--after all, that is part of the point of using lecture capture.  I opted not to provide the PPTs this semester.  I explained why I wasn't doing so and suggested some techniques for finding individual slides within a lecture.  I want the students going back to the recordings; I want them listening to what I am saying and not just reading a phrase on a slide.  I hope that, maybe, in going back to the lecture they will listen to longer parts of it.  I am thinking that I might excerpt the images and provide those for them to review outside of the Echo recordings.  But, honestly, I feel like they are lucky to have the recordings (I'm one of the few professors on campus to use this technology) and am not inclined to do anything to encourage shortcuts.  Yes, it would take less time to look at a PPT slide than to scroll through a lecture; but the entire point of recording the lecture is so that they will go back through them and listen to all or parts of them, depending on where they have gaps in knowledge and understanding.

Wednesday, February 13, 2013

What Lecture Does (and Doesn't Do)

One helpful side effect of the national dialogue about higher education is the realization that it is very difficult to teach a large enrollment class (>100 students).  This realization is especially important as budgets are cut and fewer new faculty and lecturers are hired.  Consequently, fewer sections of courses are offered and class sizes are in turn rapidly increasing (my own course doubled in size from 220 to 400 because of the need for seats).  Large enrollment courses are traditionally taught via lecture with PowerPoint, Prezi, vel sim.  They are taught in large auditoriums with tiered seating and projection screens at the front of the room.  To the students in the audience, the experience of going to lecture is not particularly different from that of going to a film or a live performance of a ballet or play (apart from the fact that few professors deliver their lectures in a pink tutu).

It's no wonder that students sit back in their seats and assume the role of the passive observer.  Sure, they may take some notes (usually nothing beyond writing down the phrases on our presentation slides); but, on the whole, they are taking in the show rather than actively engaging in and helping to create their own learning experience on any given day.  It's also no wonder that they don't learn very much--a situation not aided by their own expectation that learning happens only in the classroom.  As many studies have documented, the lecture--however entertaining an individual lecturer might be--is not particularly conducive to student learning.  Certainly, a powerful lecture can inspire students to go off and learn on their own; but qua teaching tool, the lecture relies on the flawed idea that knowledge and conceptual understanding can magically travel from the mind of the instructor to the mind of the student.

If lecture doesn't work very well as a learning tool (and assuming that we care about learning and not just student evaluations that reflect not their learning but their enjoyment of their classroom experience), what are we left with?  More to the point, if learning--especially deep learning--requires active engagement, practice, and regular feedback, how can we facilitate that in a large enrollment classroom?  The techniques of blended learning provide some of the answers: shift (some) content delivery to outside of class--and set up a system of incentives for your students to do that work; and then use class time to facilitate active learning via i>clicker and other student response systems; peer discussion; and larger class discussion.  (Eric Mazur has spoken eloquently about his conversion from lecturer to facilitator of learning.)

Lecture may not be a very effective tool for the transmission of knowledge, especially more conceptual knowledge; but we should be careful not to throw out the proverbial baby with the bath water.  In a large enrollment class, a class in which the instructor has very little personal contact with individual students and is often a mysterious figure at the front of the class, lecture *does* transmit a tremendous amount of significant information.  First and foremost, it gives the students a sense of connection to the course.  It gives them a sense of who the instructor is, and lets us construct a professorial self for the students.  Sure, this can be done to some extent via pre-recorded lectures, but, given that we lack acting experience, it's difficult to do more than deliver content.  It feels weird to crack jokes and we have no audience to play off of.

I was reminded of this benefit of the in class lecture yesterday when I told my class the story of the Roman republican consul Regulus.  Regulus was captured by the Carthaginians and, eventually, killed in rather gruesome fashion.  The stories vary: in some versions, he was put in a box and spikes were driven in until he stopped screaming in agony.  In others, he was put in a spiked barrel and rolled down the hill.  After warning my class that I had a dark sense of humor, I told the story with a certain amount of glee, emphasizing the gruesome aspects.  As I told it, I watched their reactions and adjusted accordingly.  After class, one of my TAs found a great medieval illustration of Regulus' death, which we posted on Piazza.  This led to additional commentary by one of the students, who noted that I told the story in class with a smile on my face.  This exchange was such a reminder for me of why a certain kind of lecture matters.  I try to avoid using class time for the rehearsal of facts and figures, focusing instead on stories and interpretation.  But, after shifting all content--including the stories--outside of class last semester, I realized that I had thrown out the baby.  This semester, I've tried to leave the bathwater outside of class while bringing the baby back into class.

To my mind the truly significant challenge of teaching a large enrollment class these days is figuring out how to make the best use of technology.  To do so requires an intimate understanding of how students learn; and of what each learning tool can and cannot do.  It requires figuring out what to use the in class lecture for (in the case of my course, story-telling and modeling interpretation); and what to shift to outside of class.  It requires having a good sense of students' tolerance for watching lectures out of class.  It requires understanding their behavior and working with it to create an assessment structure that rewards them for making good choices.  It used to be that large lecture classes were a kind of plum for distinguished professors.  They walked into class, awed the students with their learning and, in some cases, their entertainment skills, and then went back to their research.  They had a team of teaching assistants to handle student questions (though, of course, instructors also held office hours) and to grade the 3-4 midterm exams.  These days, effective teaching in a large enrollment class requires enormous effort and a significant investment of time by the entire teaching team, from the instructor down to undergraduate student teaching assistants.  The in class lecture is just a small but significant part of the package of learning tools we instructors provide our students.

Tuesday, February 12, 2013

Early Warning System: The Low Tech Version

The integration of weekly quizzes into the assessments in my 400-student Intro to Rome course has had many benefits.  First and foremost, it requires the students to stay on top of the course material and learn it cumulatively, week by week, rather than trying to stuff it into their brains over a 48 hour, Red Bull-fueled cram session.  This means that, for all intents and purposes, my students are flipping.  They are therefore able to engage actively in class and make good use of the opportunities for reviewing and applying content in class, under my guidance.  The quizzes, which are machine graded, also provide the students with weekly feedback about their learning strategies.  I take care that the quizzes are challenging, so that there is good separation among the students; and so that they do not get lulled into thinking that they can get away with not knowing the week's material deeply.  So far, this is working well.  We saw many of the students making adjustments from the first quiz to the second quiz, and we also saw many students performing consistently well from quiz to quiz.

The other significant benefit of these weekly quizzes is the information they provide to the teaching team.  We are now starting the fifth week of the semester--1/3 of the way into the course.  The class has its first midterm on the 19th of February.  Those students who are scoring 10-12 on the quizzes are likely in good shape for the midterm and will be able to prepare with relatively little extra work, just a few hours of careful review.  Those students who are scoring around 50% on the first two quizzes, however, are likely not in such good shape.  In an effort to get them to confront this now, while there is still time to either a. drop the class before a significant investment of time and energy (which also detracts from their study for their other classes); or b. fully commit and bring up their grade, I had one of my teaching assistants go through the gradebook manually and identify students who had scored 50% or lower on both of the first two quizzes.

These "at risk" students--23 out of 387 (though there were about 20 more on the bubble)--were sent a short email expressing our concern and urging them to contact their assigned TA or me; and reminding them that we were there to help them be successful in the course.  I have emailed all of the TAs a list of the students who received an "early warning" email and will check to see how many follow up before the first midterm; and, also, how many are able to raise their scores on their own.  We will likely do something similar after the first midterm.  Our goal is to let these underperforming students know that there are resources to help them; and to do what we can to force them out of their denial, out of their belief that somehow everything will be ok without them actively making changes to their learning behaviors.

We also hope that some of these students will recognize that perhaps they should drop the course.  We want them to drop as early in the term as possible, so that they aren't wasting time on a class they aren't going to finish.  Nothing is more frustrating than signing a drop form for a student who has limped along for months and then, a week before the end of the semester, finally admitted that they aren't going to raise their grade.  It is a waste of the student's time but also of the time and resources of the teaching team.  Often, we will invest hours of time working with these students, knowing that they are likely to drop but unable to convince them to do so.  One goal of these early warnings is to force students to be honest with themselves, to force them to decide whether they are willing to make the necessary changes to their learning behaviors for my class.

Someday, hopefully in the near future, this early warning process will be mechanized, built into any LMS.  I will be able to set it to send warnings that I have written.  I can even imagine a time when the system will be able to develop algorithms for particular courses, on the basis of several years of data.  It will be able to identify less obvious at risk students because the data will tell us that students who perform a certain way on early quizzes are likely to score a certain way on later assessments.  After a decade of teaching and several iterations of this Rome class, I have an intuitive sense of these algorithms.  But I can imagine that data (not necessarily BIG DATA, but just a few years of 1000 students/year) will eventually be able to predict with some reasonable accuracy how students are likely to perform in a course after just a few assessments.  I can imagine that this will give me a big stick to use in persuading at risk students to change their approach (or decide to drop the class).
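In the meantime, the filter my TA applied by hand is simple enough to sketch in a few lines of code.  This is purely illustrative--the gradebook structure, field names, and 12-point quiz scale here are my own assumptions for the example, not an export from any actual LMS:

```python
# Flag students who scored 50% or lower on both of the first two quizzes.
# The gradebook format and quiz scale are illustrative assumptions.

QUIZ_MAX = 12  # each weekly quiz has 12 questions

def flag_at_risk(gradebook, threshold=0.5):
    """Return students at or below `threshold` on both early quizzes."""
    at_risk = []
    for student, scores in gradebook.items():
        if (scores["quiz1"] / QUIZ_MAX <= threshold
                and scores["quiz2"] / QUIZ_MAX <= threshold):
            at_risk.append(student)
    return at_risk

# Hypothetical students: only the first is weak on BOTH quizzes.
gradebook = {
    "student_a": {"quiz1": 5, "quiz2": 6},    # 42% and 50% -> flagged
    "student_b": {"quiz1": 10, "quiz2": 11},  # comfortably passing
    "student_c": {"quiz1": 4, "quiz2": 9},    # only one weak quiz
}
print(flag_at_risk(gradebook))  # ['student_a']
```

A real early warning system would layer onto this the messaging, the per-course thresholds, and eventually the predictive algorithms described above; the core query, though, is this trivial.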

As for the success of my low-tech early interventions, stay tuned....

Monday, February 11, 2013

Practice Makes Perfect

In his book Outliers, Malcolm Gladwell famously makes the claim that the key to success in any field is putting in around 10,000 hours of practice. Native intelligence matters, but not nearly as much as many people think.  Practice--and the willingness to practice long hours--is often what sets apart the very highest achievers.  Indeed, one of the guiding principles of the flipped classroom is that practice makes perfect--or, at least, makes for deeper learning and better grades.  Yet practice is often what our students resist the most and are least well-trained to do.  By shifting content delivery out of class, we instructors can be present with our students while they practice recalling and applying the content they learned outside of class.

This in class practice takes many forms: i>clicker questions and opinion polls; class discussion; and peer instruction.  As more BYOD technologies come on the market, like Learning Catalytics, I expect that the in class experience will become ever more interactive and sophisticated.  This in class practice of content is crucial to student learning.  In a history class like mine, however, much of this practice depends first of all on a clear understanding and ability to recall the basic facts.  Thus, I need to spend at least some time making sure that they are understanding key concepts and ideas (and highlighting certain people/events/concepts as important to know).

Unfortunately, these sorts of questions can become tedious very quickly, even when discussion questions are interspersed.  In the fall, I saw that the students came to dislike the i>clicker questions and viewed them as nothing more than a way for me to take attendance (and catch them out for coming to class late or leaving early).  Part of the problem was that my i>clicker questions focused too much on quizzing facts.  In redesigning the class for the spring, I realized that I could kill two (or, really, about five) birds with one stone by integrating weekly short quizzes into the assessments.  In addition to the graded 12 question quiz (10 min.), I also post a practice quiz of about 25 questions.  I draw a few of the graded quiz questions from the practice quiz to incentivize the students to do the practice quiz.  By shifting this kind of content practice to a highly structured online activity, I am able to use class time to focus i>clicker questions on things other than simple recall (though I still include 1-2 of those just to keep students on their toes).  Now I use them to initiate a debate, to get a discussion of a complex issue started (e.g. should the Romans have sown Carthage's fields with salt?).

Something I am realizing is that, for my subject, there are different kinds of practice that are necessary for success in the course.  Some kinds, such as recall of basic facts, are better done online, while practice with conceptual knowledge and application of those facts is best done in class or on the online discussion board.  The challenge is motivating the students to engage in the practice when they aren't being observed.  My solution, which seems to be working very well thus far, was to let them see how much easier the graded quiz is when they have done the practice quiz, seen some of the questions, etc.  We can see on Blackboard which students have done the practice quiz (but not anything about their results--I deliberately made that hidden so that they would not feel self-conscious).  One of my TAs is keeping track of this data so that we can correlate it with quiz performance.
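The comparison my TA is tracking could be computed along these lines -- a minimal sketch using entirely hypothetical gradebook records (the student IDs, flags, and scores below are invented for illustration, not real Blackboard data):

```python
# Hypothetical gradebook rows: (student_id, did_practice_quiz, graded_quiz_score)
records = [
    ("s01", True, 11), ("s02", True, 10), ("s03", False, 7),
    ("s04", True, 12), ("s05", False, 8), ("s06", False, 6),
]

def mean_score(rows, did_practice):
    """Average graded-quiz score for the subgroup that did (or skipped) the practice quiz."""
    scores = [score for _, flag, score in rows if flag == did_practice]
    return sum(scores) / len(scores)

practiced = mean_score(records, True)   # mean for practice-quiz takers
skipped = mean_score(records, False)    # mean for non-takers
print(f"practiced: {practiced:.1f}, skipped: {skipped:.1f}")
```

A real analysis would of course want more than a difference in means (class size, self-selection effects), but even this simple split makes the incentive visible to students.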

On this same issue, we are releasing a databank of possible short answer questions, about 100 for the first midterm.  From this set of 100 questions, we will take 8 for the exam.  Once again, the message we are trying to send is that learning is about practice and effort.  We expect that the students will create a Google doc for the class and that some students will try to just memorize the answers.  I am confident, though, that the number of potential questions is large enough that answers memorized in isolation are unlikely to stick in the memory.  Each week, we will add more and more questions to this databank, and the short answer portion of the exam is cumulative, so they will have to keep that knowledge relatively fresh.  But, if they put in the practice, they will almost certainly do well on the assessments.
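Mechanically, drawing the exam from the bank is a simple random sample.  A sketch, with a placeholder question bank standing in for the real one (the question labels and the fixed seed are mine, for reproducibility, not part of the actual course):

```python
import random

# Placeholder databank: ~100 short-answer questions, as described in the post.
bank = [f"Q{i:03d}" for i in range(1, 101)]

rng = random.Random(42)                # fixed seed so the draw is repeatable
exam_questions = rng.sample(bank, 8)   # 8 distinct questions, no repeats
```

`random.sample` guarantees the 8 questions are distinct, which matters when the bank grows weekly and the exam stays cumulative.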

Sunday, February 10, 2013

The Power (and Problem) of Discomfort

Educators know that moments of discomfort are very often also moments of significant learning.  Still, it can be difficult to persuade our students of this principle, and even more difficult to persuade them that it is normal to experience discomfort during a class--and that this doesn't necessarily mean that the instructor is doing something wrong.  At the same time, I am learning, we educators must also appreciate the value of comfort to the learning experience.  The tricky part is finding a good balance between letting students settle in and feel comfortable in a class; and forcing them out of their comfort zones from time to time.

In the fall semester version of the flipped class, I did the equivalent of throwing the students into the deep end.  I did explain to them what a flipped class was and how it would improve their learning, but did not really appreciate the extent to which their experience of a flipped class would be so disorienting.  In part, the disorientation came from the space of the classroom itself--a very large and badly designed lecture hall.  It is the kind of room one walks into and expects to hear a lecture from the sage on the stage.  At the front of the class, there is a small podium and two projection screens.  The design of the room encourages focus to be on that stage at the front, and it is very difficult to navigate the seats in the room.  When I didn't lecture, students were confused--even if I had told them at the outset that I was not going to lecture, they somehow still expected it and wanted it.  They also struggled to understand how the i>clicker and discussion questions connected to their out of class work.  To them, it just seemed like a review.  Even when I tried to explain that I was using the i>clicker polls to check comprehension of difficult concepts; and using peer discussion and class discussion to do higher order analytical thinking, they didn't get it.

Going into the fall semester, it never occurred to me that shifting lectures to online would result in such a high level of student disorientation and discomfort.  Yet it clearly did.  As well, it was at a level that was not conducive to learning.  Rather, it was causing frustration, anxiety, and rebellion against the flipped class model.  For my part, I was blindsided and struggled to make sense of their discomfort.  It was difficult for me to understand why they would be so disoriented by such a simple shift of content.  I didn't understand why they struggled to make what seemed to me to be a relatively small cognitive adjustment.  Ultimately, I suspect that there were many causes of their discomfort, including their own choice to cram for exams instead of preparing the work as assigned.  Still, it was incredible to observe how much power the space of the classroom exerts on student expectations.

In re-designing the class for the spring, I took very seriously the issue of student (dis)orientation.  I knew that the only way I could push them up against the edges of their comfort zones was if I let them find a comfort zone.  To that end, much of the first month of this semester has been devoted to orienting the students and meeting their explicit and implicit expectations.  I am lecturing (though I will be gradually shifting away from in class lecture in the coming weeks); every exercise that the students are asked to do has a practice component to it (practice quizzes; posts they can revise until they are satisfied with the grades that will count) and was modeled for them by the teaching team.  I have created numerous written documents and recordings to explain to them the expectations for different elements of the course.  A substantial amount of structure has been added to the course, so that students know each week what they will be doing on a given day.

As we start the fifth week, I am pleased with the redesign (but am also keeping in mind that, at this point in the fall, everything was going smoothly too).  I do feel like the students are much more oriented and that they have fallen into the weekly rhythm of the course.  We are getting very few logistical questions (unlike last fall, when we got many for weeks after the start of the semester) and my sense is that the students know my expectations of them and that I am meeting their general expectations as well.  I have worked hard to let them settle in to the class and feel a sense of comfort, perhaps even make some friends.  Then, once they have that sense of comfort, I can chip away at it from time to time.  I can encourage them to move out of their comfort zone, into a state of discomfort, where most formative learning happens.  I've come to appreciate, however, that the power of discomfort for the learning process only exists if the students start from a place of comfort.

Friday, February 8, 2013

The Teaching Team, or Adventures in Management

Another significant and (for me, at least) unanticipated challenge of teaching 400 students in a flipped class is managing the team of teaching assistants and graders.  The first challenge was to persuade my college that I needed more than two graduate student teaching assistants.  I spent the better part of two months campaigning and getting every administrator I could find to send emails to my dean explaining that two teaching assistants for 400 students was not going to work.  Finally, after several weeks of sweating it out, I was informed that I would have four teaching assistants.  I also persuaded my department chair to pay two advanced undergraduate students a small stipend to work as graders.  This meant that, in the fall, I had a team of 6 to work with: 4 graduate TAs and 2 undergraduate graders (who only graded and otherwise had no interactions with the students).  I depended on my department chair to assign me specific students, and I was given two fairly advanced students and two brand new MA students to work with.

I understood early on that, with a team of this size and with a range of jobs needing to be done, it would make sense to distribute duties.  I decided that one of the two advanced grad students would be in charge of conducting the weekly review sessions (held during classtime on Fridays).  The other would be the grade czar, coordinating all the grading and dealing with students who had questions about exam grading.  The MA students would come to class and help with in class activities; and would grade 1/4 of the exams apiece.  The two graders would grade 1/4 of the exams each.  This system worked fairly well at first, until suddenly my "Friday Review" TA was reassigned to another course.  This meant that I was now in front of the class MWF.  It also meant that I had one fewer TA to assist with classroom management, and as the semester progressed, it became clear that I really needed to have a second, experienced TA.  Losing this TA meant that I was now spending a lot of time dealing with issues and questions that should have been directed to an experienced TA (this became even more of an issue because of the inexperience of the two MA students).

The other complication that came up was in grading non-exam work.  During the semester, I offered the students the chance to do extra credit twice, in an effort to get them to learn some key concepts.  Unfortunately, this meant that 400 written assignments needed to be graded.  In retrospect, I should have involved several of the other TAs.  Instead, I graded them myself.  I spent the final 5 weeks of the course grading 800 written assignments plus a final exam with an extensive written section.  I did have some help with the final exam grading, thank heavens.  But, by the time grades were submitted, I was completely exhausted.  I had also learned some very important lessons about managing a teaching team in such a large class.

This spring, I once again have a team of 6: 4 graduate TAs and 2 undergraduate graders.  In this revised version of the course, however, I have made significant changes also to how I utilize my teaching team.  I have a "head TA", who is in charge of grades (the same TA I had in the fall, which has been a huge boon).  I have another very experienced graduate student who manages all things ethics.  When the students hand in worksheets connected to the ethics case studies, she records them and comments on them.  She will review their ethics portfolios mid-semester and grade them (with assistance from me and another TA) at the end of the semester.  She is the person to whom students can take their ethics questions.  She also hands out candy to students who participate during class.  A third TA will be working virtually (she is based in Iowa this semester).  Her job is to manage and grade the discussion threads on Piazza.  She has some help from a volunteer TA--a former student of mine who is now in medical school but enjoys keeping up with her Roman history/culture.  My fourth TA helps with things like bringing scantrons for quizzes, making copies of exams, and he will grade 1/3 of the exams.  The two undergraduate graders will each grade 1/3 of the exams.  As well, I have divided the class into thirds and assigned each third to one of the three campus-based TAs to serve as their "point-person", that is, the TA they should think of as their own.

So far, I am very pleased with my new system of role distribution.  Each TA knows very clearly what their responsibilities are; they are very good about doing their job with minimal drama; and thus far, I have not been inundated with questions from students that should be taken to the TAs.  It also helps, I suspect, that I set up a special email account for this class, so that I can easily find (and forward to the right TA) the emails that come in from students.  Certainly, some issues have to be handled by me.  But many are things that are easily handled by a teaching assistant.

I don't think I appreciated at all how important it would be to develop some basic "best practices" for managing a larger team of TAs.  With two TAs, it was pretty simple to just have them do the same thing and share the workload.  But with a larger class and a more complex course design, it suddenly requires a much clearer division of specific roles.  The other thing it requires is for me to leave the TAs alone to do their job.  I have a tendency to micromanage, to fret.  I am slowly but surely learning that, for me to do my part of the course well, I have to assume that my TAs are doing their parts and not waste mental energy double-checking or intervening.  Or, worse, deciding that I am the only one who can do a particular task (as happened when I graded 800 written assignments in the fall).

As faculty, not only are we not particularly trained in pedagogy, but we are also not trained in basic management.  It can be a pretty big adjustment to work with teaching assistants, and to figure out how to make good use of them (without overusing them, of course!).  That challenge becomes even greater as the class size and, consequently, the size of the teaching team, increases.  Eventually, I'd like to have an even larger teaching team, probably by adding more undergraduate student TAs who would come to class and help with discussion activities.  Other departments at my university have done this very successfully; yet it is clear that this is successful because a lot of time is devoted to mentoring and training.  Still, as we move away from traditional lecture and towards more active, student-centered classrooms, it will be imperative to have a fairly large teaching team--and that will require substantial management, division of roles, and basic oversight on the part of the instructor.

Thursday, February 7, 2013

Weekly Quizzes

One of the most significant changes I made to my 400 student Intro to Rome course this spring was to the assessment activities and structure.  I realized that if I wanted to send the message to students that, in a history class, learning was cumulative and not something that could be done effectively 48 hours before a midterm exam, I needed to put my percentages where my mouth was.  So I drastically reduced the weight placed on midterm exams and added in 10-minute, weekly quizzes.  The three cumulative midterms are worth 40% of the final grade.  The 9 quizzes (8 of which count towards the final grade) are also worth 40% of the final grade.  My students are very attuned to grades.  They care a lot about them and will, generally, work hard to get a good one.  By setting up the assessment this way, I am taking a characteristic of their learning behavior and using it to try to inculcate good study skills (or, at least, an approach to the course that is far more likely to help them learn the content in the short run and remember more of it in the long run).  I don't love the fact that years of standardized testing have trained my students to be overly focused on outcome, at the expense of the process.  Yet, with a lot of hard thinking and a bit of creativity, I feel like I have found a way to turn that potentially negative learning behavior into a positive.  They care about outcomes, so channel that into weekly studying; and give them weekly feedback on how well their learning strategies are working so that they can make changes (if necessary) before the midterm exam.
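The weighting scheme above can be sketched as a small grading function.  This is an illustration under my own assumptions, not the course's actual gradebook formula: I assume the remaining 20% is a single "other coursework" score, and that "8 of 9 quizzes count" means the lowest quiz is dropped; all scores are invented and on a 0-100 scale.

```python
def course_grade(midterms, quizzes, other):
    """Weighted final grade: 40% midterm average, 40% quiz average
    (lowest quiz dropped), 20% other coursework.  Inputs on a 0-100 scale."""
    midterm_avg = sum(midterms) / len(midterms)
    kept = sorted(quizzes)[1:]          # drop the single lowest quiz score
    quiz_avg = sum(kept) / len(kept)
    return 0.40 * midterm_avg + 0.40 * quiz_avg + 0.20 * other

# Example: three midterms, nine weekly quizzes (the 60 gets dropped).
grade = course_grade([80, 85, 90], [70, 90, 95, 88, 92, 85, 60, 100, 94], 90)
```

The drop-lowest rule does the motivational work: one bad week doesn't sink the quiz component, but skipping quizzes repeatedly does.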

In the fall, it was clear that I needed to implement weekly quizzes to motivate the students to stay caught up and, therefore, able to be active and engaged learners.  With weekly quizzes, they really cannot get more than a week behind without significant consequences for their course grade.  It was interesting that, in one of the fall student surveys, a common suggestion for improving the course was to incorporate weekly quizzes.  As well, in informal conversations in my office with various students from the fall course, they were clear that they needed the outside motivation of a graded quiz to "force" them to do assigned work.  They knew that it was better for their learning to do the assigned work incrementally rather than try to cram for the exam, but seemed incapable of managing their time in a way that allowed them to avoid cramming.

The real obstacle to incorporating graded, weekly quizzes into the class was logistics.  It is a big deal (and takes about 5-7 minutes) to hand out 400 scantron sheets and then collect them.  It uses a lot of paper, especially if we also hand out copies of the quiz.  I talked to a lot of people about possible ways to administer a short quiz to 400 students.  I was less worried about the time since the spring class is T/TH  (75 minutes rather than 50).  I was more worried about all the paper--including making sure we got the quiz itself back from the students (though, as it happens, I've decided to post them on Bb for the students to review).  Initially, I thought I might embed the quizzes in the Learning Management System and have them take them in class.  My classroom has the technological capacity for that.  Soon enough, though, we realized that a. there were bound to be tech issues ("I can't get online"; "Bb ate my quiz"); and b. we had no way of preventing someone from taking the quiz from another location.  To prevent that would have required something like having the students register their presence with an i>clicker and then us having to compare that to the names of 400 quizzes.  Not reasonable.

Finally, we settled on a fairly basic plan: we would hand out the scantrons and then use a doc cam to project the questions and a PPT slide for images.  This seemed like a good idea in theory; in practice, it didn't work very well, for reasons entirely out of our control.  First, the doc cam could only focus and magnify 2-3 questions at a time and I hadn't worked out the timing for individual questions.  I had to play it by ear (which seemed fine--I gave them more time than they needed).  Second, separating the images from the question meant that we had an image on one screen and the question on another.  The architecture of my classroom made it nearly impossible for large swaths of students to see either the image or the question.  I ended up reading the answer options aloud twice, which was a good bandaid.  Still, after the first quiz, it was clear that we needed to find another way to administer the questions.  Fortunately, one of my TAs (who was with me in the fall as well) came up with the perfect solution: a PPT slide show with timed slides.  This way, we could project everything on both screens at all times.  The images and the questions about them were on the same slide; and I could add timings so that easy questions got less time while harder questions allowed for more time.  This ended up shortening the 12-question quiz to about 8.5 minutes.  This second quiz went very smoothly.  I seemed to get the timing right for individual questions and nobody had any trouble seeing the questions.

With the first quiz we had a somewhat unusual issue--four quizzes went missing.  I have several procedures in place to ensure that, once they are in our hands, we don't lose them without knowing it.  This meant that they must have gone missing during the collection process--a reminder that, even with this, we need to take special care.  An enormous challenge of a large enrollment class is that you can't know if an individual student is lying to you; or really was the victim of some freak circumstance.  In this particular case, evidence is mounting that something funny happened.  When we ran the scantrons for the second quiz, those four quizzes magically re-appeared.  They had to have been slipped back in by students, either the same students or some other student.  It's difficult to know what to do since we can't prove who added the quizzes (perhaps they fell into a student's bag and, rather than giving them to us, he thought he would slip them into the stack?).

More than likely, these four students are trying to pull a fast one, but I do feel like I have to let them know that I know--while still letting them retake the quiz.  In the future, a TA will leave the room after we collect the quizzes and count them.  And I will be clear that we will only run the quizzes we have in our possession after they are collected.  This episode is a reminder to me of just how much effort, care, and time it takes to incorporate something as simple as a weekly, 10 minute quiz into a large enrollment class.  At the same time, the weekly quiz has proven to be very effective at getting the students--totally unaware of what they are doing--to engage in a flipped class.  In addition, it gives the students substantive and relevant feedback on their learning strategies long before the first midterm.

Wednesday, February 6, 2013

Online Education, #foemooc, and #moocmeltdown

The response to the surprisingly swift implosion--and shutdown--of Coursera's Fundamentals of Online Education in the higher ed community has been interesting to observe, particularly because I was a student in the course.  Much of what is being reported is true (Slate; Chronicle of Higher Ed; Inside Higher Ed; Washington Post + comments to each of these): the course was very obviously not up to the quality of most other MOOCs on the Coursera menu.  It was chaotic from the start.  My first contact with the instructor was an email informing me that there were problems: no welcome to the class, no effort to orient me to the experience to come. Things only got worse over the course of the week before, suddenly, we were informed that it was being suspended indefinitely.  As I wrote in an earlier post, I was surprised by the decision to pull the course rather than use the negative feedback as an opportunity to problem solve.  I suspect that the instructor was completely blindsided by the whole situation, including some rather harsh personal attacks on the discussion board, and felt too overwhelmed to innovate while the class was in session.  Certainly, one need only go to Twitter (e.g. #foemooc) to find plenty of criticism.

The decision to use Google Docs for students to organize themselves into groups at the very start of the semester was shockingly naive.  Google Docs are great for small classes; they are fine for large enrollment courses like mine, with 400 students.  They are useless in a course with 41,000 students (because only 50 people can be editing the doc at one time).  In the aftermath of the Google Docs disaster, Coursera is doing damage control and blaming the negative feedback from users entirely on the fact that the Google Docs feature wasn't properly tested beforehand.  To excerpt from the Inside Higher Education article: According to Richard A. DeMillo, director of Georgia Tech's Center for 21st Century Universities, there wasn't enough time to test the features for group discussions.  Asked if such testing should have taken place, DeMillo said that it was important to put the issue in perspective. "In a bricks and mortar course, it would have taken months to identify and make changes." DeMillo said it was important to let instructors experiment. "If we tell people to just do safe things, we'll stifle innovation," he said.  So, basically, it's ok to do shoddy work and try to pass off a half-baked course on 41,000 students if you are, by someone's definition, being innovative.  Andrew Ng seems to endorse this when, in the same article, he refers to the Google Docs discussion plan as "really innovative."  Uh, ok.

But let's be clear here.  This course didn't implode simply because of one bad (and, apparently, completely untested) decision.  It imploded because it was a badly designed course.  It imploded because the instructor, Dr. Fatima Wirth, seemed not to understand some basic principles for teaching a large number of students.  The fact that there were 41,000 students in this particular class meant that every small problem very quickly became a big one.  But I want to be clear that all of the issues that emerged with this course had very little to do with online education or the usefulness (or quality) of MOOCs.  The course imploded because it was badly designed for its audience.  It was poorly organized; it paid no heed to the need to orient students to the learning environment; it was difficult to figure out what to do; and the pre-recorded lectures were not well done.  It would have been just as disastrous if she had tried to use the same design and same learning tools (like the uninspired, pre-recorded lectures) on my group of 400 students in a face to face classroom.

Most obviously, Dr. Wirth appeared to lack a clear understanding of who her audience was likely to be, or why they would be enrolled in her course.  She had no sense of the effect that scale would have on a course that was clearly designed for a much smaller, North American audience with access to fast and reliable broadband.  The course design didn't seem to allow for the more casual auditor who wanted to lurk in the back row (as I wanted to do since I am knee-deep in a very hectic semester and don't have 7-10 free hours/week to devote to another activity).  In teaching a large class, whether 400 or 40,000 students, it is crucial to know your students, to understand where they are coming from and what expectations they have.  A big advantage that instructors of large classes have is the simple fact that, the larger the class, the easier it is to look around at similar classes and get a general sense of your own audience.  

For instance, even if I don't know my individual students all that well in a 400 student class, I can describe many of their characteristics (non-classics majors; pretty evenly distributed from freshman to senior; about 20% are highly motivated while 60% are intelligent and might get interested in the course content but struggle to manage their time without a lot of help; another 20% really are just there to check off a box on their graduation requirements).  This semester, I worked with two learning and assessment specialists from our Center for Teaching and Learning to develop a survey instrument that gathered information from my students about their background, other commitments (like paid work), reasons for taking the course, and expectations. With 41,000 students, it is even easier to get an understanding of the different kinds of students.  Surely Coursera is working to create abstract student profiles for their different courses, so that instructors can have an even better sense of their audience and how to best meet the needs of such a diverse community of learners?

The other striking absence from Dr. Wirth's course was an effort to orient students to the learning environment: no welcome to the class message; no clear instructions of what to do or how to do it (though, certainly, some students did figure all of this out on their own and were able to complete the first week's assignment without much hassle).  Whether teaching face to face or online, orientation and organization are crucial.  For the first several weeks of a large class, when students are doing every part of the course for the first time, it is necessary to provide as much modeling and guidance as possible.  Sure, some students will run ahead and figure things out for themselves.  But many won't, and those will be the dissatisfied students posting negative feedback on the discussion board or tweeting about their frustrations.  They will also be the students who fill up your inbox and end up costing hours more time than would have been spent creating a detailed "course map" with clear instructions in the first place.  I am very comfortable in an online environment yet, when I went to the course site, I had trouble figuring out what I was supposed to do or why I was supposed to do it.  Was I supposed to watch the videos and then do the readings?  What was I supposed to focus on in the readings?  I didn't really want to get involved in a group discussion, but would it be ok to post thoughts on the course discussion board?  At one point, I asked to be assigned to a group but never heard back.  Ultimately, I felt very disoriented--and I wasn't even that invested in the course.

Teaching large groups of students, regardless of the environment, also requires the instructor to develop a thick skin.  It is important not to close our ears to criticism--that's how we improve our course design and teaching.  At the same time, much of the grumbling will be unfair or uninformed.  It might even be unnecessarily personal and overly aggressive.  But it *will* happen.  We cannot make everyone happy and, in this day and age when everyone gets to play the critic via blogs, Twitter, Facebook, Yelp, various "rate your doctor" sites, etc., we are going to hear about every little dissatisfaction.  This will especially be true when running a course for the first time (because, without a doubt, it *will* have elements that can be improved).  So it goes. For this reason, instructors have to learn how to sort through the useful feedback and ignore the rest.  This was a really tough lesson for me.  I have the sense that the initial, very negative reaction from some of the course participants led to the halt of the FOE course.  In effect, this gave those voices too much power.  There were many others, including me, who recognized that the initial chaos wasn't terribly unusual for the "getting organized" part of a class.  But we figured that, once the kinks got worked out, the rest would flow relatively smoothly and we'd learn a lot.  Hopefully the course will resume shortly; but it's unfortunate that it was taken offline at all--especially since, in real life, we just have to problem solve on our feet as best we can.

One final point: the pre-recorded videos.  These are the cornerstone of MOOCs (and certainly will play an important role in other kinds of online courses).  They are also, increasingly, being used in face to face classes, as a way to free up some class time for student engagement/peer discussion (i.e. hybrid or blended learning).  There isn't yet a "best practices" guide for producing these videos, and it will certainly vary from discipline to discipline.  At the same time, the videos need to be thoughtfully done, engaging, and useful.  They can't simply be a narration of bullet points on a PPT slide.  Students will quickly grow bored and stop watching them.  Or will speed through them and merely write down the info on the slides without listening to the commentary.  At this stage, online courses of all sorts--including MOOCs--should probably also be thinking about ways to improve the content delivery via video (it gets boring to see the same person talking to you for 1-2 hours, even if the individual videos are shorter) and visuals.  It might also make sense to start investing time in developing other forms of content delivery (i.e. modules in a LMS like Canvas).

I hope FOE makes it back online.  I suspect that I can learn a lot from it.  I also hope that Dr. Wirth and her team are addressing the larger design flaws of the course and not simply focusing on the Google Docs fiasco.  This course imploded because it was badly designed.  It would have created chaos in any group larger than about 50 students.  Its implosion is not a sign that MOOCs are dead (or should be dead) or that online education doesn't work.  It's a sign that students are picky customers and they aren't going to tolerate a badly designed product from us instructors.  This is as true at a brick and mortar university as it is online.

Setting and Enforcing Boundaries

Any parent knows that a critical part of good parenting is setting and enforcing boundaries: bedtimes, appropriate snacks, how much (if any) television, computer time, etc. is permitted.  The particulars of these boundaries will differ household by household; and households where the setting and enforcement of boundaries is absent might be characterized as dysfunctional.  More and more, I am understanding that university teaching is not so different from parenting teenagers (in part, I suspect, because more of the parenting work is being left to us to do).  My house is my classroom, and in that classroom I'm the parent who has a responsibility to set and enforce rules.  My rules are likely to differ in their particulars from those of my colleagues around campus.  A well-run, functional university, however, would be characterized by a faculty who understood the need for setting and enforcing rules and boundaries in their classrooms and in their relationships with their students.  Among other things, this might remove the need for Boards of Regents to convene to fret over inappropriate relationships between faculty/staff and students (but that's another story...).

Back to classrooms and boundaries.  The first week of any class, but especially a large enrollment course, is the time to set the tone for the class.  Part of this tone-setting involves the explication of classroom rules.  That is, boundary-setting.  In my course syllabus, for instance, I have an entire section dedicated to classroom etiquette and I explain in detail what is and is not acceptable behavior for my classroom.  I also spend about 20 minutes during the second class meeting reviewing these rules and explaining why they are in place and how they will be enforced.   Finally, the students have to sign and hand in a "syllabus contract" stating that they have read and agree to the course policies, including policies governing classroom behavior.  This might seem rather extreme to some, but it is absolutely necessary.  Even though I know that we aren't possibly going to be able to enforce every single rule (e.g. no web surfing during class), it is important for those rules to be there if we have an egregious case that does need to be addressed.  It also communicates a lot to the students about me as an instructor.  In basic terms, it tells them not to mess with me.

Yet, like all children, some students will inevitably test the boundaries.  This can provoke a lot of discomfort in instructors, especially because most of us don't particularly like being cast in the role of parent vis-a-vis our students.  On some level, we want to be more akin to their wise older sibling rather than their mean parent.  In smaller classes, it might be possible to have loose boundaries and not have the class devolve into chaos (and have the students as a whole lose all respect for their instructor).   In a large enrollment class, however, a willingness to enforce boundaries is job requirement #1.

Let me give a personal example of boundary setting from my current class: I am having my students use Piazza for asking questions and also for guided discussions.  Last week, I had a student post (anonymously, though a quirk in Piazza lets us see their names once we make the thread private) something stating his dislike of the wording of an i>clicker question from class (of course posted while class was in session!).  A couple of students responded to him, telling him he was wrong and to chill out.  I then responded, explaining in more detail why the answer to the i>clicker question was correct.  I concluded by saying that Piazza wasn't a place to air beefs with me and that he would do better to talk about these in person or at least in a private email.  After a few hours, the student--still anonymous--responded angrily, saying that I was unprofessional and his fellow students were jerks.  The TA immediately took down the thread and instructed the student privately to raise his issue in person with a member of the teaching team.

In retrospect, I will take such threads down immediately (I actually didn't know I could do that and only learned from this experience) and respond directly to the poster in private.  It never would have occurred to me that I would need to tell a student not to be a troll (as happened a few times with the first discussion thread); or not to post private beefs with the professor on a public forum like Piazza, all in an effort to stir up the indignation of other classmates--and then react angrily when those classmates disagree with his position.  I have to remember that this Facebook generation has a very distorted sense of private and public, and that many of them have no real sense of propriety when it comes to different kinds of social media.  Part of why I incorporated the discussion board into the course is precisely to give them practice and feedback on the art of conducting a virtual discussion.  It's a skill they will certainly need to have.  At the same time, I'm learning, the rules of good behavior aren't intuitive to all of them.  In the fall 2013 version of my course, I will be giving the students a very detailed guide to appropriate discussion board etiquette.

Monday, February 4, 2013

Learning from Failure

There is a fascinating article in The Atlantic this month on the topic of failure.  Specifically, it is arguing that children who are not allowed to fail never learn how to recover from setbacks.  At some point, usually late adolescence, these over-protected children go to college, suffer a setback (perhaps so small as getting an A- in a class), and freak out.  They have few if any coping skills; they don't realize that they will survive and that their lives are still worth living.  I see these freak outs every semester (including what amount to threats of suicide), and their frequency is increasing exponentially.  I often feel like my colleagues and I are being cast in the role of parent, trying to teach our students the basic coping skills required to manage and bounce back from a setback.  All of it makes me very grateful to have grown up in an era when only the winning team (and maybe the 2nd place team) got trophies; when my parents and teachers didn't shield me from experiencing disappointment, frustration, and even failure. As an athlete, I learned at an early age that there were always winners and losers; and that, sometimes, I would lose no matter how hard I tried or how much I wanted to win.  It was good preparation for the academic life.

Learning from setbacks (and outright, ugly failures) has been a recurrent theme of my professional life since September.  I wasn't really prepared for it--generally, my students like me and rate my classes highly; this (often vocal) element of student dissatisfaction was hard to process.  After all, I had worked so hard to design and prepare a course that I was really proud of and that would have been great if only the students had done what I had asked them to do.  Yet, I had to admit, I should have anticipated that many of them would not behave like "ideal" students.  I dug in my heels, quietly gave thanks that I was well-prepared to respond constructively to setbacks, and tried to learn as much as I could from my fall class.  A lot of things were out of my control--problems with the communication between i>clicker and Bb when switching between Mac and PC; the time block and location of the class; the physical space of the classroom.  I worked really hard to address the issues that *were* in my control.  I observed how the students used the learning tools; and spent many hours thinking and talking to learning and tech specialists about my plans to tweak the design. 

These tweaks were, in the end, pretty simple: addition of some in class lecture; weekly quizzes; written worksheets for the ethics cases; fewer ethics cases but more time spent on them; overhaul of the assessment structure; addition of a discussion board with threads that feed into the class lecture.  So far, I am extremely pleased with how the spring class is going.  I know better than to assume that I am home free.  But, at this point in the semester, they've done every activity/exercise once.  The extra time given over to allowing them get settled and orienting them to all the different tools in the course seems to have paid off in the form of a noticeably higher comfort level among the students.  The discussion board has been an outstanding addition.

I've also come to appreciate that some lecture should always stay in the mix--it does far more than just communicate information.  It lets the students get to know me, experience my quirky sense of humor, and feel a human connection to me.  It also fulfills those expectations they have when they walk into a massive auditorium.  By making the videos serve as supplements to rather than replacements for in class lecture, my current students seem to feel much more comfortable, much more oriented.  In reality, my in class presentations aren't substantially different from what I did in the fall.  But, with the frame of a lecture around the i>clicker and peer discussion and group discussion, it doesn't seem to them that I am doing anything particularly different than what happens in other lecture classes (even if I likely am).

I've also learned how to choreograph a class lecture.  I've learned that I can't drone on for 20 minutes without letting the students do *something*.  I've learned to give them short amounts of time for i>clicker and discussion questions.  They can answer most i>clicker questions in 30 seconds; and after about 90 seconds of peer discussion, have accomplished a lot.  I'd rather cut them off early than have them chatting about their weekend plans.  I've learned to set clear boundaries and to reinforce them frequently.  I've learned how to manage a teaching team (and am forcing myself to stop micromanaging them).  I know from experience that this class will only work if I give the teaching team discrete tasks and then leave them alone to do their jobs.  In the fall, I ended up doing a substantial amount of the grading in the final month, largely because I convinced myself that it was necessary to avoid a lot of student complaints.  It wasn't necessary.  It was a nightmare.

Over this past week, I have been reflecting a lot on the principle of learning from failure as I watched the Coursera Fundamentals of Online Education course implode and eventually be taken offline after one week.  I was swamped with work, so my only real sign of trouble was the frequent emails from the instructor trying to address one or another issue (largely start-of-semester organizational issues).  Midweek, I logged on and started exploring the site and reading the discussion board.  It was certainly true that the course wasn't well organized and the instructor (as I did in the fall) seemed to design it with the assumption that tens of thousands of students would behave ideally and not require any instructions for tasks.  Instead, it was total chaos and plagued by technical problems (which really seem to have been design problems, in that poor choices were made about how to organize groups and where to do so).

I felt increasing empathy for the instructor, Dr. Fatima Wirth.  It was obvious that she was completely blindsided by the organizational difficulties and was becoming increasingly harried.  Given the absence of clear instructions for the first week's assignments and the apparent expectation that 41K students would intuitively know how to enroll themselves in a group, arrange discussions on vague topics, and navigate the course site with ease, it seemed apparent that the preparation was rushed to meet a deadline and that the organizational issues that cropped up were the result.  It also seemed more like a course designed for a small group of American students with fast and easy broadband access.  I was a little surprised that Coursera (apparently) didn't do more to advise the instructor; and that the course wasn't beta-tested ahead of time.  I've taken several Coursera courses as an auditor with no substantive issues.  In the case of Fundamentals of Online Education, it seemed that Dr. Wirth was utterly unaware of who her audience was; or of the multiplicity of ways that Coursera students take classes.  The design relied heavily on group discussion and assumed that every registered student would want the same (highly invested) experience.  Never mind that a substantial number of the students are faculty who are mid-semester and may not have the flexibility to spend 7-10 hours/week outside of work.

I was also disappointed that the decision was made to yank the class rather than to figure out how to solve the problems and keep it running.  I suspect that much would have been learned, by Dr. Wirth and by the Coursera people, in the process.  The users of this course were a particularly good audience for soliciting feedback about how to improve the experience since many teach blended or online classes and are well-versed in the topic.  I can understand that Coursera doesn't want bad press; and I am sure that Dr. Wirth was in shock.  Still, I can't help but think that things would have turned around and that, in the process, a much better design for future students could have been created.  To my mind, a great opportunity for learning was squandered (though, admittedly, I don't know the whole story and perhaps Dr. Wirth and her team simply did not have the time or energy to invest in a total overhaul while the course was in session).

I spent some time this morning reading through tweets at #foemooc.  It was very informative, but perhaps the most interesting thing to me was seeing a group of students working cooperatively to resume the class on their own.  This is perhaps the most important lesson from this #moocmeltdown: any design has to be driven by an informed understanding of the audience and their range of behaviors.  It must assume that users/students will sometimes behave in unpredictable ways and leave room for that.  At the same time, it must provide enough orientation (objectives, assignment directions, rationales for tasks) that students are herded towards the gates that will lead to a positive learning experience.  They will never "use" a course precisely as it was intended--just as readers will never read a novel and recapture the exact intention of the author.  As instructors, all we can do is know our audience well enough to create a design that takes account of a range of behaviors; put in place incentives to encourage students to use the learning tools largely as we intended; and not be afraid to spend a lot of time early in the semester on orientation as well as setting and enforcing boundaries.

The other thing we instructors have to do, though, is to be able to tolerate failure.  It sucks, to be sure, but it's also an opportunity to learn and do something better.  My recent experience with flipping my large enrollment Intro to Rome class has brought back memories of all the time I spent at pitching practice with my dad (my catcher).  There were days when things just weren't going my way and I wanted to throw in the towel.  One of the most important life lessons my dad taught me was the importance of tenacity, of finishing what I started.  It is a lesson that has stuck with me for 40 years, sometimes for the worse but very often for the better.  It was when I was struggling in practice that I worked the hardest; it was when I was getting hit hard in a game that I learned how to dig in, gut it out, and find a way to get out of the inning.

When challenges came up during my fall class, I was taken aback and found it hard to deal with the powerful force of student resistance.  I didn't like feeling like I was the enemy to some of my students.  It was a struggle to keep my game face on and finish the semester.  Yet, in doing so, I learned so many of the lessons--and learned them in a way that will stick with me!--that led to the design improvements for my spring class.  If everything had gone swimmingly well and the students had all been satisfied with my first effort at a flipped class, I wouldn't have been motivated to think hard about the mechanics of the learning process; and to figure out how to make the class work better for future students.