Over the past six months, as I have moved forward with redesigning and implementing my flipped class, I have adopted the phrase "evidence-based" as my mantra. Especially when it comes to teaching, we are too often forced to rely on our own impressions and on uncontextualized student self-report (usually in the form of end-of-semester course evaluations, administered during one of the most stressful weeks of the semester). I have countless conversations about teaching with other faculty members, in the course of which they tell me what their students are thinking and how their students are behaving. Of course it's possible to observe certain aspects of student behavior in the classroom. Still, I'd maintain that, too often, what we think we see isn't what is really happening. Speaking for myself, I'm too busy thinking about my own part (delivering the lecture) to really notice how many students are paying attention and taking notes, and how many are on Facebook, texting, or otherwise occupied. This became especially clear to me this past Friday. My TA was leading the review session, and I sat in the middle of the auditorium. As I looked around, I could see phones out on desks and students reading and writing occasional texts--this in a classroom where computers and tablets are banned (but apparently I wasn't specific enough?)!
One of the biggest problems in teaching large classes is getting students to read before class. They know that their chances of getting "called out" aren't very high; certainly, the risk isn't high enough to motivate preparation. And in a traditional lecture format, the rewards aren't high enough to motivate preparation either. After all, what's the point of knowing stuff if they can't share it or apply it?
Similarly, arguments against using lecture capture have centered on the claim that it will encourage students to skip class, save all their studying for the night before the exam, and then cram. I am sure this happens, but it is not clear that it is the norm even in a regular lecture class. Our ITS department surveys students at the end of each semester and, at least in my class, students report watching the lectures as the class goes along and then again to review for the exam.
Thanks to lecture capture (and the fact that we can see the number of views of each lecture), I know before I walk into my class how many views each lecture has had. This isn't identical to unique views, and it isn't broken down to tell me how long students watched each video (ITS has that data, but I just want a rough idea before each class). What I am seeing is that about 25% have watched the pre-recorded lectures prior to the Friday review (and, since they only finished working on the previous week's material on Wednesday, that is a lot). By Monday, when we use that material in class, about 75% of them have watched the lectures. That's incredible, and I suspect a much higher number than did the reading in previous semesters. I have some data to back this up: they consistently score about 75% accuracy on i>clicker questions quizzing the material. As the semester progresses, I will be gathering more data about when/if they do the textbook readings. I do know that they are doing the primary source readings, because they are able to answer questions about them (at about 75% accuracy) and apply them in peer discussion.
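For what it's worth, the pre-class check is nothing fancy. Here is a minimal sketch of the arithmetic, assuming a hypothetical CSV export from ITS; the file name, column names, and enrollment figure are all placeholders, not the real export format:

```python
# A minimal sketch of the back-of-the-envelope check before class.
# Assumes a hypothetical CSV export from ITS with one row per lecture
# (columns "lecture" and "total_views"); the real export may differ.
import csv

ENROLLMENT = 200  # placeholder class size

with open("its_view_counts.csv", newline="") as f:
    for row in csv.DictReader(f):
        views = int(row["total_views"])
        # Total views over-count students who re-watch, so this is only
        # a rough upper bound on the fraction who have seen the lecture.
        rough_fraction = min(views / ENROLLMENT, 1.0)
        print(f"{row['lecture']}: at most ~{rough_fraction:.0%} have watched")
```

Since total views can't distinguish one student watching four times from four students watching once, the number is only a ceiling, but as a quick gut check before walking into the room, it's good enough.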
What is motivating this level of preparation? I am pretty sure it's simply the chance to demonstrate and apply what they've learned, whether through i>clicker questions or peer discussion on various topics. Certainly, we saw an uptick in views in the two days before the exam, but it wasn't massive and seemed to indicate students reviewing the material, not encountering it for the first time.
Something I love about getting view data before I walk into class? I can adapt how I integrate peer instruction. If I know that only 25% have viewed the videos, then I know I am going to be doing more teaching than reviewing for most of them. We still do i>clicker questions, but I will often follow up a poll with a "turn to your peer," then retake the poll, and then review the question. If I know that 75% have done the viewing, then I might move through polls on basic facts a bit more quickly and focus on complex concepts. My prepared PowerPoint presentation remains the same, but I can improvise based on the numbers I get from ITS.
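Reduced to its bones, the improvisation is a simple threshold rule. Here's a toy sketch of that decision logic; the 50% cutoff is my own rough rule of thumb, not a calibrated number:

```python
# A toy sketch of the in-class improvisation, not a real lesson planner:
# given the rough fraction who watched, decide how to pace the polls.
def plan_class(fraction_watched: float) -> str:
    if fraction_watched < 0.5:
        # Mostly first exposure: teach the material, then run the full
        # poll / "turn to your peer" / re-poll / review cycle.
        return "teach-first: full poll, peer discussion, re-poll, review"
    # Mostly review: move quickly through basic-fact polls and
    # spend the class time on complex concepts instead.
    return "review mode: quick basic polls, focus on complex concepts"

print(plan_class(0.25))  # the Friday scenario
print(plan_class(0.75))  # the Monday scenario
```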
At the end of the semester, I will be working with ITS and some analysts from our campus Center for Teaching and Learning to engage in some more serious data analysis, in an effort to better understand student learning. We will be looking at unique views as well as patterns in how students watched the videos (including how often they closed the window with my talking head in it!). In this way, we will be able to move away from impressionistic ideas about student learning habits toward data-driven, evidence-based arguments. In doing so, hopefully, we will be able to work towards a better understanding of how learning worked in my particular class and, perhaps, start to describe some "best practices" for a large-enrollment flipped class in the humanities.
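To give a sense of what that analysis might look like, here is a hedged sketch in pandas. The log format is entirely hypothetical (the column names "student_id", "lecture", "seconds_watched", and "watched_at" are placeholders); the real analysis will depend on what ITS can actually export:

```python
# A sketch of the kind of end-of-semester analysis we're planning,
# assuming a hypothetical per-viewing-event log exported by ITS.
import pandas as pd

log = pd.read_csv("its_viewing_log.csv", parse_dates=["watched_at"])

# Unique viewers per lecture: a truer preparation measure than raw views.
unique_viewers = log.groupby("lecture")["student_id"].nunique()

# How long each student actually watched, summed across repeat sessions.
watch_time = log.groupby(["lecture", "student_id"])["seconds_watched"].sum()

# Timing patterns: were views spread across the week or bunched pre-exam?
views_by_day = log.set_index("watched_at").resample("D").size()

print(unique_viewers.head())
print(watch_time.groupby("lecture").median().head())
print(views_by_day.tail())
```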