By Alex Lyford (University of Georgia)
Constructed-response questions, those in which students must answer a posed question in their own words, have been shown to give researchers and instructors a better picture of students' knowledge and understanding than multiple-choice questions. One principal advantage of these open-ended questions is that students are able to elaborate, often both correctly and incorrectly, on how they arrived at their answers. Unfortunately, instructors are often unable to take advantage of open-ended questions because of the time-intensive nature of their construction and scoring. One effective approach to reducing this time requirement is to leverage machine learning algorithms to classify student responses. Machine learning algorithms use a set of training data composed of pre-categorized responses to make predictions about future, uncategorized responses. In this presentation, I will use several machine learning algorithms together in an ensemble to classify student responses and provide real-time feedback to instructors. I will also show sample classifications for questions asking students to describe a right-skewed histogram and for questions asking students to compare the variability of two histograms or dot plots. I will then demonstrate how these classifications can be used to improve instruction in courses of all sizes and to offer a unique perspective on student knowledge in large-lecture courses.
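The abstract does not name the specific algorithms in the ensemble, but the idea of majority voting over several classifiers trained on pre-categorized responses can be illustrated with a minimal pure-Python sketch. Everything here is hypothetical: the training responses, the three toy classifiers (a keyword rule, a bag-of-words overlap score, and a response-length heuristic), and the two-category labels are stand-ins for whatever models and rubric categories the actual system uses.

```python
from collections import Counter

# Hypothetical labeled training responses: students describing a
# right-skewed histogram, pre-categorized by an instructor.
TRAIN = [
    ("most values are small with a long tail to the right", "correct"),
    ("the data are skewed right with a few large values", "correct"),
    ("the histogram is symmetric around the mean", "incorrect"),
    ("the bars are all the same height", "incorrect"),
]

def keyword_clf(text):
    # Rule-based learner: responses mentioning skew or a tail look correct.
    return "correct" if any(w in text for w in ("skew", "tail")) else "incorrect"

def bow_clf(text):
    # Bag-of-words learner: score a response by word overlap with each
    # category's training vocabulary and pick the higher-scoring category.
    vocab = {}
    for resp, label in TRAIN:
        vocab.setdefault(label, Counter()).update(resp.split())
    overlap = {label: sum(c[w] for w in text.split()) for label, c in vocab.items()}
    return max(overlap, key=overlap.get)

def length_clf(text):
    # Weak learner: in this toy data, elaborated (longer) answers are correct.
    return "correct" if len(text.split()) >= 8 else "incorrect"

def ensemble(text):
    # Majority vote across the individual classifiers.
    votes = Counter(clf(text) for clf in (keyword_clf, bow_clf, length_clf))
    return votes.most_common(1)[0][0]
```

A new, uncategorized response is then classified by the vote, even when the individual learners disagree; for example, `ensemble("the distribution has a long right tail")` returns `"correct"` because two of the three classifiers vote that way.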