The integration of weekly quizzes into the assessments in my 400-student Intro to Rome course has had many benefits. First and foremost, it requires the students to stay on top of the course material and learn it cumulatively, week by week, rather than trying to stuff it into their brains over a 48-hour, Red Bull-fueled cram session. This means that, for all intents and purposes, my students are flipping the classroom: they come to class prepared, and they are therefore able to engage actively and make good use of the opportunities to review and apply content in class, under my guidance. The quizzes, which are machine graded, also provide the students with weekly feedback about their learning strategies. I take care that the quizzes are challenging, both so that there is good separation among the students and so that they are not lulled into thinking they can get away with not knowing the week's material deeply. So far, this is working well. We saw many students making adjustments from the first quiz to the second, and we also saw many performing consistently well from quiz to quiz.
The other significant benefit of these weekly quizzes is the information they provide to the teaching team. We are now starting the fifth week of the semester--one third of the way into the course. The class has its first midterm on the 19th of February. Those students who are scoring 10-12 on the quizzes are likely in good shape for the midterm and will be able to prepare with relatively little extra work, just a few hours of careful review. Those students who are scoring around 50% on the first two quizzes, however, are likely not in such good shape. In an effort to get them to confront this now, while there is still time either (a) to drop the class before a significant investment of time and energy (which also detracts from their study for their other classes), or (b) to fully commit and bring up their grade, I had one of my teaching assistants go through the gradebook manually and identify students who had scored 50% or lower on both of the first two quizzes.
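The TA's manual pass through the gradebook amounts to a simple filter. Here is a minimal Python sketch of that filter, assuming a hypothetical CSV export with `quiz1` and `quiz2` columns scored out of 12 points; the column names, student names, and scores are all invented for illustration:

```python
import csv
import io

# Invented gradebook snippet; a real export would have hundreds of rows.
GRADEBOOK_CSV = """\
student,quiz1,quiz2
A. Agrippa,11,12
B. Brutus,5,6
C. Cicero,7,4
D. Drusus,12,10
"""

def flag_at_risk(csv_text, max_points=12, cutoff=0.5):
    """Return the students who scored at or below `cutoff` on BOTH quizzes."""
    threshold = max_points * cutoff  # 50% of 12 points = 6
    at_risk = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if float(row["quiz1"]) <= threshold and float(row["quiz2"]) <= threshold:
            at_risk.append(row["student"])
    return at_risk

print(flag_at_risk(GRADEBOOK_CSV))  # → ['B. Brutus']
```

Note that a student like the invented "C. Cicero," who is below 50% on only one quiz, is not flagged--that is the "on the bubble" group mentioned below.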
These "at risk" students--23 out of 387 (though there were about 20 more on the bubble)--were sent a short email expressing our concern and urging them to contact their assigned TA or me; and reminding them that we were there to help them be successful in the course. I have emailed all of the TAs a list of the students who received an "early warning" email and will check to see how many follow-up before the first midterm; and, also, how many are able to raise their scores on their own. We will likely do something similar after the first midterm. Our goal is to let these underperforming students know that there are resources to help them; and to do what we can to force them out of their denial, out of their belief that somehow everything will be ok without them actively making changes to their learning behaviors.
We also hope that some of these students will recognize that perhaps they should drop the course. We want them to drop as early in the term as possible, so that they aren't wasting time on a class they aren't going to finish. Nothing is more frustrating than signing a drop form for a student who has limped along for months and then, a week before the end of the semester, finally admits that they aren't going to raise their grade. It is a waste of the student's time, but also of the time and resources of the teaching team. Often we will invest hours working with these students, knowing that they are likely to drop but unable to convince them to do so. One goal of these early warnings is to force students to be honest with themselves--to decide whether they are willing to make the necessary changes to their learning behaviors for my class.
Someday, hopefully in the near future, this early warning process will be mechanized, built into any LMS. I will be able to set it to send warnings that I have written. I can even imagine a time when the system will be able to develop algorithms for particular courses on the basis of several years of data. It will be able to identify less obvious at-risk students, because the data will tell us that students who perform a certain way on early quizzes are likely to score a certain way on later assessments. After a decade of teaching and several iterations of this Rome class, I have an intuitive sense of these algorithms. But I can imagine that data (not necessarily BIG DATA, just a few years at 1,000 students/year) will eventually be able to predict with reasonable accuracy how students are likely to perform in a course after just a few assessments. I can imagine that this will give me a big stick to use in persuading at-risk students to change their approach (or decide to drop the class).
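The kind of prediction I have in mind could start as simply as a least-squares line fit from early-quiz averages to later midterm scores. A toy sketch, with invented historical numbers standing in for the several years of data such a system would actually need:

```python
# Toy sketch of the predictive idea: fit a straight line from early-quiz
# average to midterm score on (invented) past-semester records, then use
# the line to forecast a current student's midterm.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Invented historical data: average of quizzes 1-2 (out of 12) vs. midterm %.
quiz_avg = [4.5, 6.0, 8.0, 10.0, 11.5]
midterm = [48, 60, 74, 85, 93]

slope, intercept = fit_line(quiz_avg, midterm)

# Forecast for a current student averaging 5.5/12 on the early quizzes.
predicted = slope * 5.5 + intercept
```

In this fabricated example the fit predicts a midterm in the mid-50s for a student averaging 5.5/12 on the quizzes--exactly the kind of concrete number that might finally get an at-risk student's attention. A real system would of course need validation against actual course data before anyone acted on it.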
As for the success of my low-tech early interventions, stay tuned....