Shiny Apps for Teaching Machine Learning


Abhishek Chakraborty (Lawrence University), Eric Friedlander (The College of Idaho)


Location: Memorial Union Great Hall

Abstract

 

Background. Many Shiny apps are used to teach statistical concepts, but most are intended for audiences at the introductory level (see references [1, 3, 6, 8, 10, 12, 14] in the Additional Information section). To our knowledge, very few apps are designed to teach concepts in upper-level statistics courses (see [4, 12]). Within machine learning specifically, the existing apps (see [9, 11, 13]) each focus on a particular modeling approach. Despite these resources, we have observed that students often lack an intuitive understanding of model complexity and performance, and of how predictive models behave under different (hyper)parameter settings. Online articles covering these concepts (see [2, 5]) typically rely on static visualizations that lack interactivity. This motivated us to develop several web-based, interactive Shiny apps to support instruction of fundamental ideas such as the bias-variance trade-off and various classification metrics. Our goal is to give students an interactive platform where they can experiment with predictive models and actively learn these seemingly abstract ideas. The apps display a range of regression and classification modeling strategies; we therefore believe our work connects well with this year’s USCOTS theme of “Useful Models”.
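To illustrate the kind of bias-variance behavior the apps let students explore interactively, the following sketch estimates squared bias and variance by refitting models on repeated training samples. (The apps themselves are written in R/Shiny; this Python simulation, with its sine signal and polynomial fits of degrees 1 and 10, is an illustrative assumption of ours, not code taken from the apps.)

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    """The noiseless signal the models try to recover."""
    return np.sin(2 * np.pi * x)

x_test = np.linspace(0.05, 0.95, 20)   # fixed test grid
n_train, n_reps, noise_sd = 30, 200, 0.3

def simulate(degree):
    """Fit `n_reps` degree-`degree` polynomials, each on a fresh
    training set; return the n_reps x len(x_test) prediction matrix."""
    preds = np.empty((n_reps, x_test.size))
    for r in range(n_reps):
        x = rng.uniform(0, 1, n_train)
        y = true_f(x) + rng.normal(0, noise_sd, n_train)
        coefs = np.polyfit(x, y, degree)
        preds[r] = np.polyval(coefs, x_test)
    return preds

for degree in (1, 10):
    preds = simulate(degree)
    # Bias^2: squared gap between the average fit and the truth.
    bias_sq = np.mean((preds.mean(axis=0) - true_f(x_test)) ** 2)
    # Variance: how much the fit wobbles across training sets.
    variance = np.mean(preds.var(axis=0))
    print(f"degree {degree:2d}: bias^2 = {bias_sq:.3f}, variance = {variance:.3f}")
```

Running this shows the trade-off numerically: the degree-1 fit has high squared bias and low variance, while the degree-10 fit reverses the pattern, which is exactly the behavior the apps visualize as students move a complexity slider.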

 

Methods. We are in the initial stages of class-testing these apps, which consists of having students complete surveys about the apps' design and usefulness. Because no assessment tool like ARTIST or CAOS exists for machine learning, students primarily provide qualitative feedback, similar to [7]. In addition, we have contacted several faculty members at other institutions to class-test the apps and are collecting survey data from both them and their students. Based on the results of these surveys, we intend to refine the apps and conduct a larger study in the future.

 

Findings. Class-testing is currently under way. At the poster session, we hope to present student and instructor feedback on their experiences interacting with these apps.

 

Implications for Teaching and for Research. We hope these interactive apps will help instructors effectively demonstrate complex, abstract machine learning ideas in their classes, and that the apps will improve student engagement with and understanding of machine learning concepts. Ideally, we would like to test more rigorously whether our apps improve students’ understanding. However, to our knowledge, no validated assessment tool exists for machine learning like those available for introductory statistics (e.g., ARTIST or CAOS). We would be very interested in discussing or contributing to such a project with other researchers at the satellite conference.

 

Additional Information. These apps are intended to be helpful for students from different backgrounds as they visually explore concepts without going into mathematical details. The references cited above are listed below.

 

[1] Arnholt, A. T., (2019), “Using a Shiny App to Teach the Concept of Power,” Teaching Statistics, 41: 79–84, https://doi.org/10.1111/test.12186  

 

[2] Brownlee, J., (2019), “Gentle Introduction to the Bias-Variance Trade-Off in Machine Learning,” MachineLearningMastery.com

 

[3] Doi, J., et al., (2016), “Web Application Teaching Tools for Statistics Using R and Shiny,” Technology Innovations in Statistics Education, 9(1), https://doi.org/10.5070/T591027492  

 

[4] Fawcett, L., (2018), “Using Interactive Shiny Applications to Facilitate Research-Informed Learning and Teaching,” Journal of Statistics Education, 26(1), 2–16, https://doi.org/10.1080/10691898.2018.1436999  

 

[5] Fortmann-Roe, S., (2012), “Understanding the Bias-Variance Tradeoff”

 

[6] Freire, S. M., (2019), “Using Shiny to Illustrate the Probability Density Function Concept,” Teaching Statistics, 41: 30–35, https://doi.org/10.1111/test.12176   

 

[7] González, J. A., et al., (2018), "Assessing Shiny Apps through Student Feedback: Recommendations from a Qualitative Study," Computer Applications in Engineering Education, 26: 1813–1824, https://doi.org/10.1002/cae.21932

 

[8] Lu, Y., (2023), "Web-Based Applets for Facilitating Simulations and Generating Randomized Datasets for Teaching Statistics," Journal of Statistics and Data Science Education, 31(3), 264–272, https://doi.org/10.1080/26939169.2022.2146614

 

[9] Sage, A. J., Liu, Y., and Sato, J., (2022), “From Black Box to Shining Spotlight: Using Random Forest Prediction Intervals to Illuminate the Impact of Assumptions in Linear Regression,” The American Statistician, 76(4), 414–429, https://doi.org/10.1080/00031305.2022.2107568 

 

[10] Sisso, D., et al., (2023), “Teaching One-Way ANOVA with Engaging NBA Data and R Shiny within a Flexdashboard,” Teaching Statistics, 45, 69–78, https://doi.org/10.1111/test.12332   

 

[11] von Borries, G. F., and Quadros, A. V. de C., (2022), "ROC app: an application to understand ROC curves," Brazilian Journal of Biometrics, 40(2), https://doi.org/10.28951/bjb.v40i2.566

 

[12] Wang, S. L., et al., (2021), "Student-developed Shiny Applications for Teaching Statistics," Journal of Statistics and Data Science Education, 29(3), 218–227, https://doi.org/10.1080/26939169.2021.1995545

 

[13] Wang, Q., and Cai, X., (2023), "Active-Learning Class Activities and Shiny Applications for Teaching Support Vector Classifiers," Journal of Statistics and Data Science Education, 32(2), 202–216, https://doi.org/10.1080/26939169.2023.2231065

 

[14] Williams, I. J., and Williams, K. K., (2018), “Using an R Shiny to Enhance the Learning Experience of Confidence Intervals,” Teaching Statistics, 40, 24–28, https://doi.org/10.1111/test.12145