For me, the hardest part about getting started was finding the right balance in my classroom – the right balance between lecture and activities; the right balance between in-class and out-of-class learning; the right balance between student accountability and student responsibility. None of this, however, really had much at all to do with the randomization-based curriculum. I had taught courses for pre-service and in-service K-12 teachers that focused on simulation-based methods . . . I knew it was effective pedagogically. The hard part came when a colleague and I decided that we would try to flip our classrooms the same semester we implemented the randomization-based curriculum. And we did it in a classroom with 2-3 times as many students as a “typical” intro class in our department.
It had been 15 years since I’d last taught a general, introductory statistics course. In those intervening years I had learned so much about the importance of stressing conceptual understanding over calculation, and I’d spent a lot of those years training our department’s TAs to emphasize statistical thinking over all else. I had seen how the in-service and pre-service teachers I’d worked with had fallen in love with the randomization-based approach, and developed richer statistical reasoning skills. The process of discovery-driven learning worked so well with the teachers . . . it would work with generic undergraduates too, right? I’d get a chance to show the TAs how the introductory class should really look, and implement all of the ideas I’d been pitching to them! This would be awesome!
In retrospect, I was clearly bound to fall on my face. Maybe we were a bit idealistic when we decided to try flipping. Surely, in exchange for no graded homework, students would read the book! We started the semester by giving daily reading assignments, and anticipated using class time solely for group activities and discussion. We planned biweekly quizzes to keep them honest. We reasoned that if they came to class, participated in the group activities, and did the readings, the quizzes should be a breeze! We even had the video resources (from Tintle et al.’s Introduction to Statistical Investigations)—if the students needed extra help, they could listen to Nathan explain the concepts.
And then . . . they didn’t read. They didn’t watch the videos. They (mostly) came to class and participated in the group activities, but it wasn’t enough to ensure understanding. I’d have an Exploration (Explorations are guided activities; examples can be found here) planned for class time, but the students were so unprepared that they didn’t even know where to start. A few weeks in, we decided we needed to start at least quasi-lecturing. I didn’t just want to repeat everything in the book—I still wanted them to read! Instead, I would find an example similar to one being used in the current section, and work through it, asking questions of the students along the way. For example, while covering Section 6.2 (simulation-based approach to comparing two means), we used the class data on haircut prices that we had collected in Section 6.1. Then, after we’d discussed the example as a class, they would be turned loose in their groups to tackle the corresponding Exploration. Our class met for 50 minutes three days per week, so we’d spend about 20-25 minutes going through an example as a class, and the remaining time would be spent in small-group work and/or whole-class discussion to wrap up the Exploration.
We still asked them to read the sections in preparation for class, but we added some accountability to the mix. Often, we’d ask them to complete the first few questions of an Exploration—the questions about experiment versus observational study, the hypotheses in words and symbols, etc. We still didn’t grade them for accuracy (there were just too many students), but we did start checking them for completeness. Once the students saw those check marks start showing up in the online grade book, did their level of engagement change? For some, yes. Not so much for others. Quiz and exam grades did seem to go up as the semester progressed, so maybe it worked?
I don’t think that by the end of the semester I had found the ideal balance between lecture and activities, or between expecting students to be responsible for their own learning and holding them accountable for assignments. Maybe there is no ideal balance. I know that next time I’ll start doing assignment checks from the very first day. I’ll also add “reading” questions to the quizzes. I’m also toying with the idea of adding a “group assessment” to the final overall grade . . . having students assess the engagement/preparedness of their fellow group members. Has anyone tried this? I’d love to hear about how it worked!