Saturday, August 31, 2013

The Real Magic Formula: Formative Assessment


Earlier this week, Dr. James Pennebaker and Dr. Sam Gosling, two professors in the Department of Psychology at UT Austin, launched a live-streaming online course that they are calling a SMOC (Synchronous Massive Online Course). The nomenclature is a clear riff on the MOOC (Massive Open Online Course), but the SMOC differs in a few ways. First, it is not open: it costs $550 (still less expensive than a classroom-based course at UT Austin). It is massive by most standards, enrolling about 1,000 students (they hope eventually to increase that number to 10,000), but not the tens or even hundreds of thousands who register for MOOCs. Finally, as they highlight, it is live rather than pre-recorded. Students are required to sign in at the start of class to take short "benchmark" quizzes. Since there is no participation grade, it seems students can sign out once they have finished the quiz and watch the archived recording of the class at their leisure to prepare for the next benchmark quiz. The course does include moments for live chats and polls, in an effort to engage students during the live broadcast. Still, as someone who teaches a similar audience, I wonder how many students will in fact engage regularly during the live broadcasts, especially as the semester goes on.

The start of the course has been accompanied by quite a lot of publicity: an op-ed in the Houston Chronicle; an article in Inside Higher Ed; several press announcements from UT Austin; a television appearance on Good Day Austin; two articles in The Daily Texan; and, now, articles in the Wall Street Journal and the New York Times. There is also a Twitter feed for the class: @PsyHorns. The press coverage emphasizes the innovation of the model, in particular the claim that the live-streaming broadcast and the delivery platform (an in-house product developed by UT Austin and aptly named Tower) facilitate connectivity and active engagement in a way that MOOCs and other large-enrollment online classes have not. This is certainly a noble aspiration, but a lot depends on the students, especially since no part of the final grade is connected to engagement in the live broadcast. I do hope that, at the end of the semester, Pennebaker and Gosling will release at least preliminary data about student engagement: How many logged off immediately after the quiz? How many engaged in one poll or pod discussion? All of them? Are there connections between final grades and participation in the live broadcast versus reviewing the archived broadcast? These are important questions for faculty who are designing their own versions of an online course.

I am grateful that I've been given the chance to audit this course. Mainly, I am interested in experiencing Tower from the student perspective as I work through the design of Rome Online. I am especially interested in the question of connectivity: how to facilitate connections to me and, especially, among students in a virtual classroom that may well contain hundreds of students. I am looking forward to seeing how Pennebaker and Gosling make use of the Tower platform, and how they manage the strengths and weaknesses of a synchronous broadcast to a large audience (though, at least in the current iteration, no larger than the roughly 1,000 students they've taught in previous years in various configurations).

One tool I am going to be watching closely is the benchmark quiz. The Tower platform has a very nice quiz tool that I am hoping to use in my own online course, and I am eager to experience it from the student point of view. As I discovered in my own rather large (though only 400-student) course last spring, a key element for improving student learning is structure, and especially frequent quizzes. This point is made clearly in the WSJ article: "Recently, they moved one class of students online and gave them multiple-choice tests that delivered instant feedback. That group performed better than their offline class—and the online students' grades even improved in their other classes. The professors hypothesize that it is because the regular quizzes helped the students acquire better study habits."

There's an important point here: improved student performance in a course (whether Intro to Psychology or any other large-enrollment course) is not a consequence of technological bells and whistles but of an instructional design decision to overhaul the class assessment and put much more emphasis (and weight) on frequent quizzes rather than high-stakes midterms. This was the case for Intro to Psych; it was also true in my Intro to Ancient Rome class. In large classes, regardless of the mode of instruction, students require a lot of structure. In my Rome class, I continued to give midterms so that I had a way to compare the different cohorts. I suspect that, if midterms were still offered in the Intro to Psych class, they'd find that the benchmark-quiz cohort outperformed the midterms-only cohort. There's a pretty basic reason for this improved performance: quizzes require students to study as they go rather than try to cram a month's worth of learning into a few days.
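For what it's worth, the cohort comparison I'm imagining is simple to run. Here is a minimal sketch, with invented midterm scores standing in purely for illustration (this is not data from either course), of how one might test whether a benchmark-quiz cohort outperforms a midterms-only cohort on a common midterm:

```python
# Hypothetical comparison of common midterm scores between a benchmark-quiz
# cohort and a midterms-only cohort. The score lists below are invented;
# real ones would come from the gradebook.
from scipy import stats

quiz_cohort = [78, 85, 82, 90, 74, 88, 81]      # midterm scores, quiz cohort
midterm_cohort = [72, 80, 75, 83, 70, 79, 77]   # midterm scores, midterms-only cohort

# Welch's t-test: compares the cohort means without assuming equal variances
t_stat, p_value = stats.ttest_ind(quiz_cohort, midterm_cohort, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With real gradebooks one would of course also want effect sizes and controls for cohort differences, but even this bare comparison would tell us whether the quiz effect I suspect is visible at all.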

One thing that would be interesting to know is whether, within a "benchmark quiz" cohort, there's any correlation between grades on quizzes and engagement during the live broadcast. If there is, that's a good argument for the high cost of producing a live-streaming broadcast. If there's not, that tells us that the value lies in the formative assessment rather than in the mode of delivery. These are the sorts of questions that are important to test as we move forward with offering online versions of our campus-based courses.
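Again as a hedged sketch, with invented numbers standing in for what would really come from Tower's engagement logs, the correlation itself is quick to compute:

```python
# Hypothetical check of whether benchmark-quiz scores track live engagement.
# Both lists are invented for illustration; real values would come from
# the platform's quiz and participation logs.
from scipy import stats

quiz_scores = [9, 7, 10, 6, 8, 9, 5, 10]        # average benchmark-quiz score per student
live_interactions = [4, 2, 5, 1, 3, 4, 0, 6]    # polls/chats joined per class session

# Pearson correlation between quiz performance and live participation
r, p_value = stats.pearsonr(quiz_scores, live_interactions)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```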

1 comment:

  1. On frequent quizzes: besides encouraging students not to cram, they likely exploit the "spacing effect" ( http://en.wikipedia.org/wiki/Spacing_effect ) and, if of sufficient difficulty, "deliberate practice" (see http://www.issues.org/29.1/carl.html ).
