Alright, it looks like the numbers are starting to level off a little bit, so I think most people have been able to get into the room. I'm going to go ahead and share my screen, and we can get started. Hey everyone, welcome back to the JSDSE/CAUSE webinar series. Today we're joined by Mine Dogucu from UC Irvine, Sibel Kazak from Middle East Technical University, and Josh Rosenberg from the University of Tennessee, Knoxville, who will be presenting their work on the design and implementation of a Bayesian data analysis lesson for pre-service mathematics and science teachers. Joining me in hosting and moderating the webinar today is Adam Loy from Carleton College, who is also an associate editor with JSDSE. I'd also like to give a special thank you to Andrew Ferguson, the CAUSE webmaster, who manages the CAUSE website and the listserv and handles the tech for these webinars. Before we turn it over to our speakers, we have a few announcements to make, and at the end of the talk you can put your questions in the chat and we can help share them with the speakers.

First, I want to highlight a couple of recent articles from the journal. These are just a few of the most recent papers, and they cover a variety of the types of articles that are published in JSDSE. Michael von Maltitz has a paper on portfolios of learning evidence and interview assessments in a mathematical statistics course. Laura DeLuca and co-authors have a paper on developing student statistical expertise through writing in the age of AI, and that's going to be the topic of a webinar coming up in the fall. Andrew Ackerman has a paper on teaching a course at the intersection of applied statistics and moral philosophy. And one of our speakers today, Mine Dogucu, has a paper with co-authors on a systematic literature review of undergraduate data science education research.

Next, a couple of USCOTS announcements. USCOTS is coming up pretty soon; you can still register for the conference, and Birds of a Feather proposals are due on June 20th. The next JSDSE/CAUSE webinar will be in September, and I'll send out further details and registration information as the fall semester gets closer.

All right, on to our wonderful speakers. Mine Dogucu is Associate Professor of Teaching and Vice Chair of Undergraduate Studies in the Department of Statistics at UC Irvine. Her work focuses on modern pedagogical approaches in the statistics curriculum, making data science education accessible, and undergraduate Bayesian education. She's a co-author of the book Bayes Rules! An Introduction to Applied Bayesian Modeling, and I'm going to go slightly off-script here and plug that as a really great resource for teaching undergraduate Bayesian statistics. Dr. Sibel Kazak is an Associate Professor of mathematics education at Middle East Technical University in Türkiye. Her research interests include the teaching and learning of probability, statistics, and data science, especially at the school level and in teacher education. She's an associate editor for JSDSE and the Statistics Education Research Journal, and Vice President of the International Association for Statistical Education. And Joshua Rosenberg is an Associate Professor of STEM education at the University of Tennessee, Knoxville. He studies the use of data in education, especially in STEM education contexts.
He's also an associate editor for JSDSE and a committee member for the forthcoming report, Developing Competencies for the Future of Data and Computing: The Role of K-12. Thank you all so much for coming and for sharing your work with us. I'm going to go ahead and stop sharing my screen here and turn it over to you all.

Hi everyone, I'm Mine, and my collaborators are here as well; I don't know if they want to say hello. (Hello.) We're actually going to present collectively today, because this was truly a work of collaboration, so we're each going to present the parts where our strengths contributed to this project.

We'll start with some motivational questions that got us doing this project in the first place. All of us have some teaching experience, and we know that students in schools have many questions they might be asking in everyday life that relate to uncertainty in science, such as: why are there more birds visiting the courtyard outside of their school today? Or, how come an investigation during a chemistry laboratory did not work out as well for students in one lab group as in another? These are scientific uncertainties that are part of their class life and their everyday life.

When we connect this scientific uncertainty with statistics education, one way we actually teach dealing with scientific uncertainty is statistical inference, and more precisely, we teach a lot of hypothesis testing. When we think about what scientists practice and what we teach, the questions might look something like this: given that we have observed some data, what is the probability that the hypothesis is correct? Or: if the hypothesis is incorrect, what is the probability of observing the data or more extreme data? These are the kinds of questions that come up in our statistics classrooms, perhaps the second one more so than the first. You might recognize the second one as the p-value question, and the first one is the Bayesian question. Since the p-value is something we teach a lot, and Bayesian data analysis is covered far less often in statistics classes, we wanted to focus on the Bayesian side of things.

So, our overall goal was this: we wanted to develop a Bayesian activity, and we wanted to design this activity and work with pre-service math and science teachers in delivering it. We designed the activity for the teachers, but our indirect goal was really K-12 learners using a Bayesian approach. So even though we were working with pre-service math and science teachers, our hope is that they will take this activity to their own students.

There are several examples of related research in both science education and mathematics education. With respect to science education, there have been a few papers taking a Bayesian approach to what science educators and science education researchers call the science and engineering practices. If you're involved with K-12 science at all, you're familiar with these as part of how the standards are written. So there's been some work taking a Bayesian perspective on one of the science and engineering practices, argumentation, which is a little bit different from using Bayes' theorem to analyze data.
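To write out the two questions Mine contrasted a moment ago: the Bayesian question asks for the posterior probability of a hypothesis given the data, while the p-value question conditions in the opposite direction, on the null hypothesis being true:

$$
P(H \mid \text{data}) \;=\; \frac{P(\text{data} \mid H)\,P(H)}{P(\text{data})}
\qquad \text{vs.} \qquad
p\text{-value} \;=\; P(\text{data or more extreme} \mid H_0).
$$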
When it comes to using a Bayesian data analysis approach, there is some relevant work that's been carried out at the undergraduate level. This is work by a physics education researcher, Aaron Warren, who used a sort of Bayesian shortcut for undergraduate students learning physics to interpret data in a Bayesian way. Mine and I and colleagues adapted this approach in the app we created, on the far right, and wrote a paper on what we call the confidence updater, basically using the approach Warren used with undergraduate students for work with K-12 students. So there's some work in science education, but nothing exactly like what we sought to do with pre-service teachers in the work we're sharing today. Mine, do you mind going to the next slide on mathematics education?

There's also relevant research in mathematics education, especially a strong foundation of research on how K-12 learners understand probability. And there's research a little more related to how probability plays into data analysis in terms of informal statistical inference. This is work by Makar and Rubin and colleagues especially, as well as work that Sibel has led with young learners, which relates to the next slide.

Thanks, Josh. Okay, so, as Mine pointed out, our indirect goal was to see the development of young learners' understanding of the Bayesian approach. When we look at the literature in mathematics education and statistics education at the K-12 level, there has been an ongoing debate about how and when to introduce Bayesian ideas, particularly in relation to interpreting probability, reasoning with uncertainty, and making informal statistical inferences.

There are two studies with young students. One is with 7- and 8-year-old children. As you can see in the image on the right-hand side, the children were asked: what is the likelihood of a bug randomly landing on a flower in this planter box? They made an initial estimate on a non-numeric scale, a happy-face scale. You can see their initial estimate, then their estimate after generating 24 trials using a spinner, and finally after a computer simulation with 500 trials in TinkerPlots, and you can see how their predictions converge to the expected likelihood. In another study, with slightly older students, 10- and 11-year-olds, students' reasoning about uncertainty was explored in relation to their personal degree of confidence in their proposition about whether a chance game is fair or not. This study also shows that students initially relied on their intuitions to decide whether the game was fair, which often led to incorrect conclusions. But as they collected more data through physical experiments and computer simulations in TinkerPlots, both their reasoning and their confidence seemed to improve.

In addition to what Sibel was saying about younger learners, we also have quite some literature on college learners. We know that college learners are not exposed to many Bayesian courses; there are a limited number of Bayesian courses, at least in high-ranking institutions, and these Bayesian courses have a high number of prerequisites.
But one good thing is that the courses that do exist are relevant to all STEM majors, including obviously statistics, math, and computer science, but also biology, astronomy, and so forth. We also have a rich literature, many of these papers coming from JSDSE actually, of exemplary Bayesian courses. So even though Bayesian courses are few in number, we do have example courses out there for sure. We have also seen Bayesian ideas introduced even in introductory-level courses, and we have seen many tools, in technological or tactile forms, like M&M candies or web simulators, for teaching Bayesian concepts and making them more accessible to college-level learners.

Between what Sibel was saying about young learners and what I was saying about college learners is perhaps where we would place our own activity. Our intended learners are really somewhere between young learners and college learners, though probably closer to the college learners than the young learners Sibel was covering. At the same time, as I mentioned, we were actually working with pre-service math and science teachers, so they are college learners themselves, even if they are going to be teachers very soon.

We implemented the Bayesian data analysis activity that we'll describe in more detail in just a moment with pre-service science and mathematics teachers, in the context of their teacher education courses. These are individuals who are earning their teaching license. They were split roughly half and half between future science teachers, in several different subjects (life sciences, physics, chemistry), and mathematics teachers, a few at the middle grades level and many at the high school level. Sixteen of those 35 consented to participate in the research, and we used data from most of those 16 students in the findings that we'll share in a little bit. We implemented the activity over one two-hour class period, with students working primarily in small groups, and the groups were deliberately constructed to have some science teachers and some mathematics teachers in each. One thing I remembered as I was reading the paper to prepare for the webinar was that we did this in a partially hybrid format: this was taught in 2021, in the aftermath of COVID, so some of the students were taking the course online and some were taking it in person.

We worked really hard to pick a context that worked for both science and math teachers. What that meant was that the topic we chose had to have some science ideas that we could at least connect to, if not be centrally about. In addition, we had to scope out the specific question that our students, who, again, were future math and science teachers, would be answering through data, so that it was amenable to mathematical modeling. There are a lot of scientific questions for which it is really difficult to realistically collect, analyze, and interpret data in a classroom context, and on the other hand there are a lot of problems that are mathematically workable but wouldn't have those connections to science. So we picked sustainability as a topic that we thought would work for both of these groups of students. Specifically, the question students were working to answer was: what is the proportion of unoccupied university rooms with lights on? Not that science-y on first read, but we wanted to connect this to ideas about how energy is generated, possibly calculating how much energy is used
by rooms with lights on over some period of time.

So, the activity started with the problem Josh just introduced, about the proportion of unoccupied rooms on campus with lights on, and there were several parts to it. The first was about prior ideas. We asked the pre-service teachers to make an initial estimate of the proportion of rooms with lights on, mark their confidence level on a scale from 0% to 100%, and explain the assumptions behind their estimate, so we would know how they were reasoning; starting with a prior estimate is an important idea in the Bayesian approach. They then started using a Shiny app, which Mine will talk about in more detail later. They used the app to express their prior idea about the proportion quantitatively, using the beta-binomial model. We chose this model in particular because it is the simplest, most basic model for teaching Bayesian data analysis.

In the next part, the pre-service teachers started recording the status of lights in unoccupied rooms in different large buildings on campus, using a Google Sheet. We assigned the buildings to each group in advance. In the third part, they started using data, but only from the first 5 rooms. We wanted to direct them to write what they noticed and wondered about their prior, likelihood, and posterior distributions in the app, and to mark their confidence about the estimate on the scale based on the posterior distribution. In the next part, they analyzed more data from their group, however much they had collected, and, as you can see on the right-hand side, they used the posterior from the previous part as the prior in this part. With more data, they compared the updated posterior with the previous one, and we asked them again whether there was an update in their confidence about the estimate based on the new information. The following part involved analyzing the class data, the whole set of 205 observations, and again they wrote about what they noticed about their estimate and evaluated their confidence on the scale. We finished the activity with reflections, asking three questions: what are your takeaways, remaining questions, and classroom applications?

Now I'll take over to show a little bit of the app really quickly. I should note that while the app is something we developed as this group, the basic idea comes from the bayesrules package, which is my collaboration with Alicia Johnson and Miles Ott, so the visualizations really come from there. The students start the activity by building their own prior, as Sibel said. To give you some examples, students can set the alpha and beta parameters of the beta distribution. If a student thinks that pi, the proportion of rooms with lights on, is some high value, they would perhaps set their beta distribution to look something like this. Or we could see a student with the opposite idea, who thinks most people turn off the lights when they leave, setting it to something like this. So in this tab of the app, they're building their prior. Then, as Sibel said, they go ahead and enter those first 5 observations. Let's say they have 3 rooms with lights on out of 5 rooms observed.
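For readers who want to reproduce the update the app shows at this point, here is a minimal base-R sketch (not the authors' app code) of the beta-binomial model just described. The Beta(2, 8) prior is a hypothetical choice for a student who expects a low proportion; the data are the 3 lights-on rooms out of 5 from the demo.

```r
# Beta-binomial update for pi, the proportion of unoccupied rooms with lights on.
alpha <- 2; beta <- 8   # hypothetical prior favoring low values of pi
y <- 3; n <- 5          # first five rooms: 3 with lights on

# Conjugacy: the posterior is Beta(alpha + y, beta + n - y).
post_alpha <- alpha + y
post_beta  <- beta + n - y

# Prior (dashed), likelihood rescaled to a density (dotted), posterior (solid),
# mirroring the three curves the Shiny app displays.
curve(dbeta(x, alpha, beta), from = 0, to = 1, lty = 2,
      xlab = expression(pi), ylab = "density")
lik <- function(p) dbinom(y, n, p)
curve(lik(x) / integrate(lik, 0, 1)$value, add = TRUE, lty = 3)
curve(dbeta(x, post_alpha, post_beta), add = TRUE, lwd = 2)

# Middle 95% of the posterior: the credible interval discussed later in the Q&A.
qbeta(c(0.025, 0.975), post_alpha, post_beta)

# Sequential analysis, as in the later parts of the activity: set
# alpha <- post_alpha; beta <- post_beta, then update with the next batch of rooms.
```

With these hypothetical numbers the posterior is Beta(5, 10), which sits between the prior (peaked near 0.1) and the likelihood (peaked at 0.6), the relationship the speakers walk through below.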
Back in the app, what they are doing here is observing what their prior was, what the data tells them, and then their posterior, after combining the prior and the likelihood. Of course, for these students, it's the first time they are doing Bayesian data analysis, and much of this is supported by the lecturer's instruction to make meaning of what's the prior, what's the likelihood, what's the posterior. So it's not just the app that comes into play here. You might have noticed, and this is very important here, that many of these students are not familiar with the binomial distribution, so we didn't use words like "success" or "number of trials"; we tried to use more generic names. Statisticians are not the best at naming things, and we thought "success" would not necessarily mean the same thing to everyone.

Let's say in the next step they viewed 20 rooms, and maybe 18 of those rooms had lights on. What they had to do was really notice that their posterior after analyzing the last step is now their prior at the next step. And then they have even more data; maybe they observed 150 rooms and saw 80 with lights on. (That's not a good y-axis, is it? Let me fix my y-axis.) What they should see at this point is that more data gives them more evidence, and things should approach the likelihood more closely. They should also see that Bayesians can do sequential analysis: they can keep collecting more data, and more data, and more data.

I also wanted to bring your attention to what we have in the appendix, so that it doesn't go unnoticed, because some people here are possibly here for the activity itself, not necessarily the paper. The app is hosted online for free. We also include the handouts; feel free to use and modify them however you need. And the slides I mentioned, with which the instructor really leads the students, are included in the appendix as well.

After the participants experienced working with the app, we asked for their reflections on the activity, and their responses seemed to focus on a few main areas. One is understanding the content; for example, one participant said, "I now have a better intuition regarding the differences between prior, likelihood, and posterior probabilities." Another theme was lesson format and intention for teaching; since they are going to be teachers, they focused on teaching as well, with one participant saying this is a great group activity for teachers to be introduced to the value and limitations of data. Another important idea that came out of the reflections was technology integration; the use of the Shiny app for data analysis and visualization in particular was valuable for the participants. We also saw in their responses a growing interest in learning probability and statistics, because these pre-service teachers have not necessarily had much background in these topics, whether they are math or science pre-service teachers, and they wanted to learn more about Bayes' theorem and how it is applied in other real-life situations. The last theme was the broad perceived applicability of this Bayesian data analysis: both math and science pre-service teachers made a connection between the Bayesian approach
and their subject-specific examples: in the context of measurement, surveys, experiments, and so on, they reported that they could use this approach.

So, we think the Bayesian data analysis approach could be useful for future math and science teachers, in-service math and science teachers, and their students, but there were some limitations. One is that we introduced this in a single, super fast lesson, so necessarily the scope and length of our teachers' engagement with these ideas was pretty limited. Another was the science focus we mentioned earlier; we think that could be emphasized more. Really leaning into a Bayesian data analysis approach to analyzing data in science classrooms has a lot of potential, we think, but there are a lot of challenges, too. We might need to move from a web app to software that lets learners run a regression model as part of this Bayesian approach, and to carry out specifying and estimating a regression model. And lastly, if we stuck within the constraint of using a web app, we're curious about whether we could extend it to other types of models beyond, as Sibel mentioned, the simple model that we used to introduce these ideas.

For anyone who might be interested in implementing this in their classes, in whatever capacity, I think there is enough out there, not just our work, to adapt this to students at a different math level. We were working with science and math teachers, and they had varying math levels. I'm pretty sure there are people in the audience thinking, how can you teach this without first teaching the binomial distribution, or without calculus, or something along those lines. If you have that kind of intuition, like "I can't do this in my class", the math level can easily be adapted; the web app provides accessibility for those who don't have that math background. This can also easily be adapted to different scientific contexts (we just used sustainability because we thought it worked well) and, likewise, to different statistical contexts and models.

One thing we realized, especially trying to do this in two hours for the first time, is that introducing Bayesian ideas is very difficult; people really do need time. One thing that can be modified very easily is taking the data collection outside of class time, which would give more time for discussion in class, focusing on the modeling aspects. And, as Josh mentioned, the web app is very limited. It served us well for the type of activity we did, but it isn't really useful for activities beyond beta-binomial models, so full-on software might be more useful. One example is JASP; it's point-and-click, so if you're not teaching programming, that might be useful. If you're more on the programming side, something like Stan or JAGS can take you all the way. Just like the math, the computing is also on a spectrum, depending on your students' backgrounds.
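To make that point about tools concrete without committing to any one of them, here is a minimal grid-approximation sketch in R, with hypothetical numbers. It computes the same beta-binomial posterior as the app, but by the generic prior-times-likelihood recipe that also works for non-conjugate models, which is essentially what the heavier software automates.

```r
# Grid approximation: the posterior is proportional to prior times likelihood.
pi_grid <- seq(0, 1, length.out = 1001)
prior   <- dbeta(pi_grid, 2, 8)       # swap in any prior density here
lik     <- dbinom(3, 5, pi_grid)      # swap in any likelihood here
post    <- prior * lik
post    <- post / (sum(post) * (pi_grid[2] - pi_grid[1]))  # normalize to a density

plot(pi_grid, post, type = "l",
     xlab = expression(pi), ylab = "approximate posterior density")
```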
We would also like to thank the National Science Foundation; Josh and I were each supported separately by NSF to work on this project. We are also active on social media, if you would like to connect. And that's it from us for now; we're ready for questions. Oh, Karen, were you going to say something? I didn't want to cut you off, okay.

Well, thank you so much. That was wonderful; I'd read your paper, and this really helped complement everything. For folks who have questions, please add them to the chat, and I can help moderate them. But I thought I'd get the ball rolling with a question of my own. You have talked us through this one activity, and I'm curious whether you've thought about, and if so how, expanding it past a one-day, two-hour encounter into something more like a unit.

I think maybe Josh might be in a better position to respond, because he took this app and idea into an NSF CAREER proposal and was awarded it. Well, yeah, thanks, Adam, and thanks, Mine, especially. Really briefly, I think the big challenge we're facing is trying to align the sophistication of the Bayesian data analytic approach with the goals of the teachers I'm lucky to be working with through this project. I'm thinking back to Mine's image of a more conceptual Bayesian approach at one end of the spectrum, and a full-on Bayesian data analysis like professional scientists or statisticians carry out at the other. So where do we lie on that spectrum? As Mine pointed out, the approach in this paper was a little more toward the mathematical end. I think we need to develop tools like this, and teaching strategies, that connect these ideas to things teachers are already doing or are concerned about.

I'll mention again that work by Aaron Warren that's a little more on the conceptual end. What it does is ask students to express their initial confidence in a hypothesis. Then, instead of using the data to, as I think of it intuitively, tell us how much signal there is and using Bayes' theorem to weigh that relative to the prior, students simply indicate qualitatively how strong the data they analyzed were: the data were really strong; they were a little strong; the data didn't tell me anything at all. A simple calculation then produces a confidence in the hypothesis based on the prior and the students' own evaluation of the weight of the evidence. So that's a little more conceptual, and to be honest, I'm not sure if that's enough. Back to the big picture: I think there's more work to be done to figure out how we can introduce these ideas, especially in K-12 science, in a way that is not too scary and math-y, but also is rigorous and includes the core elements. I hope that wasn't too much, but basically, we need to figure it out, and I'm working on that.

Great. I'm not seeing any questions yet; if I'm missing anyone, let me know. Oh, okay, here's one that just came in: the first question I always get when I'm teaching frequentist statistics is, why should I use the Bayesian approach? So how do you convince people who are seeing it for the first time?

I can take that one. I think, scientifically speaking, there are certain reasons why Bayesian statistics was historically opposed, or, should I say, less popular.
One was computational. The reason people use frequentist methods is this historical popularity, not necessarily that they answer the scientific questions people are actually asking. Many scientists use p-values, but they're not necessarily asking p-value questions. So one argument for Bayesian methods is that your question might actually be the one Bayesian methods answer.

Another reason, in terms of teaching, is intuition. Many people ask me, how do you teach Bayesian statistics to early learners, to intro students? Isn't it so difficult? The assumption is that, because many people who learn Bayesian statistics learn it after something like six statistics classes, you need a really big jump to get to Bayesian statistics. But I say: have you ever taught sampling distributions? Sampling distributions are a lot harder to teach. Try to have intro students make meaning of a confidence interval; it's so difficult. But making meaning of a credible interval is easy, because it's exactly what they're already intuitively feeling: that there's a 95% chance the parameter is in the interval. So intuitively, it's easier. Computationally, it's not as difficult as it used to be; for complex modeling, of course, computation is still a challenge, but for teaching it's easier.

I would also say that one objection to teaching or using Bayesian ideas is the prior. People don't always feel comfortable with it, especially students, because before they come to Bayesian courses, or before they face Bayesian ideas, there's always only one correct answer, and all of a sudden you're telling them, no, you can have your own prior, and they completely oppose that. One way to counteract that is to point out that we make assumptions in frequentist modeling, too; we just don't necessarily realize it. The likelihood is an assumption, too. And not just in statistical modeling: throughout scientific inquiry, you're making assumptions all the time. The prior is just one of the assumptions you make. So those are my ways of handling this question.

Maybe I can add to that from the younger learners' point of view. As we presented earlier, even young students can make this kind of intuitive reasoning based on a Bayesian, or Bayesian-like, approach, with the use of technology and so on. The problem is that at the school level, it took a while to bring in even the frequentist perspective; it used to be more classical probability calculations, and then the frequentist view came in with the use of different technology. But subjective probability is still left out, and we don't focus on it enough. Maybe if students become familiar with these ideas in their early years, they may have a different attitude when they come to college and start learning more formal Bayesian methods.

Great, great. We did have another question pop up, this one from the Q&A; it's a multi-part question asking for some opinions, I think. Is there any chance of getting Bayesian methods introduced into the AP Statistics curriculum? We have so many people taking AP Stats right now, even more than calc.
So maybe we'll start there, and we can always follow up about graphing calculators versus Shiny apps later. And if so, how can we convince them?

Yeah, I think… I don't know, did anyone else want to answer? I spoke first last time. This seems like a Mine question. Okay, I'll go then; there are so many parts to this question, but I'll take it. I think that in order for AP Stats to change, college-level stats courses have to change, because AP exists to accredit college courses. So I think that's one way AP Stats would change. If college professors cannot embrace Bayesian statistics, I think it would be very hard to argue for high school math teachers to embrace it, so the change has to start with college professors. At the same time, it shouldn't be college professors working on their own; the change should come collectively, not one side affecting the other but both ways. Still, I see AP Statistics very much as a corporation helping students gain college credit, so because of that business structure of the AP exams, I think the change will have to come from the college side. And I think there was another part to that question, about moving from graphing calculators to Shiny apps. That I'm not sure about, because I think part of that is exam logistics more than anything else.

Thanks, Mine. Just a connection to the other part of this. Thanks, Eugene, for the great question. There are a lot of students taking AP science classes as well, and statistics is a part of many of those curricula. If you look at, say, the ACT, a lot of it is something like graph interpretation, interpreting patterns in graphs. So I do think there's an opportunity. It's probably far off; there's a lot of work that has to be done to get Bayesian methods into the curricula of the various AP science-related subjects. Right now, if you look at how analyzing data is described in many of these curricula, it's kind of all over the place: a mix of some frequentist ideas and a lot of graphing and interpreting data. I think the appeal, my original appeal, of the Bayesian approach is that it gives you a structure for how to think about interpreting data in a science context. There's obviously a ton of depth to Bayesian data analysis, but at its core, you have an idea about how something in the world works, you collect some data, and you update your confidence in that idea. I think that has a lot of connections for science teachers and science curriculum developers.

Wonderful, thank you again. We have a question asking you to clarify and talk through the likelihood and posterior results based on the experiment you conducted in the research. Harrison, if you have any clarifications, let us know, but I take it as clarifying the role of the likelihood and the posterior. I think, maybe, going back to a figure. Oh yeah, maybe the app? One second, let me pull that up very quickly. (If this is a content question, I can also assign reading; I love giving homework.) So what's happening here is, let's say the students started with the prior idea that the proportion is some low number, with some variance, of course; they're not giving a point estimate but a set of plausible values for pi.
Then they observe 3 rooms with lights on out of 5 rooms, so when you look at the likelihood, it should reflect the data: 3 out of 5 says it should peak around 0.6. The likelihood is positive at other pi values as well, but the maximum likelihood estimate is 0.6. And the posterior falls between the prior and the likelihood, influenced by both. Different priors, such as more extreme priors, which we call highly informative, impact the posterior differently, but the posterior is basically influenced by both the prior and the likelihood. And sequentially, your posterior then becomes your prior at the next step, and again you have more data, and your analysis again combines your prior and likelihood, and so forth. That's the relationship between the prior, likelihood, and posterior, if I understood the question correctly. I don't want to promote my own book, but since the app comes as a sidekick of the book, some early chapters of the Bayes Rules! book might help with understanding these concepts.

I might just piggyback on that question, if that's okay, while you have that really nice visualization of the prior, the likelihood, and the posterior up there. There are these three quantities in Bayesian inference, whereas in frequentist inference we spend a lot of time thinking just about the likelihood. Of those three, the prior, the likelihood, and the posterior, which did the pre-service teachers find the easiest to reason about?

Yeah, I'll hop in. If I recall, it was not completely as easy as we hoped for our students to specify the prior. So maybe that's something where we could spend a little more time. I don't think, as Mine pointed out, that these need to change into classes on probability; it's more about helping people think about the range of values they find plausible, and how to map a distribution onto their intuitive sense of, say, "I'm 60% to 80% sure." What do you do with the distribution to have it correspond to that kind of intuition? I think we could spend a little more time there. Having done that step and seen the likelihood, I think interpreting the posterior was a little easier, just because of that limited but real engagement with those ideas.

Karen, I'm going to add something about college learners, since we're here. I think the likelihood is a lot more difficult for them, at least for college students, to grasp. Even teaching them that the likelihood is not a probability takes a learning bump, so the prior and posterior are usually easier to interpret: they're probabilities between 0 and 1. But the likelihood, even seeing it rescaled in the visualizations you might have seen, prompts questions; students initially ask what the scale is, and then they remember what the likelihood is, and so forth.

That makes a lot of sense, thank you. And that matches up with some of the challenges I've had with the likelihood part when I teach more frequentist inference. I'm trying to see here; I know we're running into our time. Do we have any last ones? I guess one question I see here is:
do you have a quick bit of advice on how to teach the Bayesian interpretation of credible intervals (that was a follow-up to a previous question), or do you find that students just naturally gravitate toward the credible interval interpretation?

I've never seen issues with that interpretation, just because it's already what students think the interval means. I see more problems interpreting confidence intervals, because as soon as you show one confidence interval, you make them imagine 100 confidence intervals, and they say, oh, we don't have 100 confidence intervals, and I say, I know you don't, but let's just imagine that we do. So I think credible intervals are quite easy to interpret: you just look at the middle 95%, and that's it. At least I have not experienced difficulty teaching that. But I teach college students, so…

Great. Oh, one more just came in: how do you get statistics educators interested in doing research into teaching Bayesian statistics? I think, of us all, Sibel maybe has the most research there, so we can go in reverse order this time.

Oh, okay. Well, it depends on which statistics educators we're talking about. (All of us, K-12 plus college.) Usually my interaction is with the K-12 people, but yes. I think if activities like this are shared in various venues, you know, the Statistics Teacher journal and so on, and we show how they were implemented in classrooms, that would be helpful, starting with young learners and going up through the different levels toward college. We need more practical work, I think: activities that researchers, or teachers and instructors, can use in their classrooms, so that they would have more interest in teaching Bayesian statistics.

I agree; this is a Sibel question, so I mean, I agree with you there. Again, as I've mentioned, broken record here, there's an interest in these ideas in science education. So I think, piggybacking on Sibel's point, there just needs to be more work, more examples documenting what's working and what's not. And, back to that spectrum that Mine put up, we should explore different degrees of sophistication or complexity in how Bayesian ideas are brought into different contexts. Maybe that's a high-level takeaway, but: try things out, share what's working and what's not. I kind of see it right now as, yeah, this is a really good idea, this really could work; we just have to do the work of creating those examples and doing the research. And, Juana, to your point about doing research, maybe also showing some of the benefit of these approaches relative to the frequentist approach; there was a question earlier on that, so that could be another way. Mine?

So, I'll answer in two ways. I think what Sibel was saying about activities is very important, and the reason is that in order to do research on Bayesian education, on teaching Bayesian statistics, you need to teach Bayesian statistics. So the starting point would be teaching it, and as you're teaching, sometimes the question might be activity-driven, like, does this activity work? But as you teach that activity, you might actually derive more questions.
For instance: do I teach frequentist ideas first and then Bayesian, or vice versa? Things like that; the act of teaching will bring the research questions along with it. And I will shamelessly plug my latest paper, the one Karen mentioned at the beginning, the systematic literature review. The reason I'm plugging it is that we looked at many different data science education papers, and I thought the classification we came up with by reading so many papers was useful: for instance, there are publications on course examples, activity examples, pedagogical questions, and educational technology. We obviously didn't go that route, but we could have asked, does this app work, specifically studying an educational technology tool, and so forth. So I think that classification can help education researchers think: is my question more about the activity itself? Is it more about the tool? Is it more about the pedagogical approach? Generating the appropriate research question would be the good starting point, I think.

Wonderful, wonderful. It looks like we're winding down now. Karen, do you want to… do you have last things to ask or say? Nothing more to ask, but I do want to thank all of our wonderful speakers for sharing their work with us. Adam, thank you so much for co-hosting. Andrew, thank you so much for doing the tech for the webinars. I've really enjoyed the talk today and getting to hear more about your work, so thank you all so much. Yeah, thank you, Karen. Thank you, Adam. Thank you, Andrew, and thank you all. Thank you for having us.