My first Shiny experience – CLT applet

When introducing the Central Limit Theorem for the first time in class, I used to use applets like the SOCR Sampling Distribution Applet or the OnlineStatBook Sampling Distribution Applet. If you are reading this post on Google Chrome, chances are those links did not work for you. On another browser they may have worked, but you may also have seen a warning like this one:

[Image: Java security warning]

Last year, when I tried using one of these applets in class and had students pull it up on their own computers as well, it was chaos. Between warnings like this and no simple way for everyone, on their various computers and operating systems, to update Java, most students got frustrated. As a class we had to give up playing with the applet, and the students just watched me go through the demonstrations on the screen.

In an effort to make things a little easier this year, I searched to see if I could find something similar created using Shiny. This one, created by Tarik Gouhier, looked pretty promising. However, it wasn't exactly what I was looking for. For example, it's pretty safe to assume that my students have never heard of the Cauchy distribution, and I didn't want to present something that might confuse them further.

Thanks to the code being available on GitHub, I was able to re-write the applet to match the functionality of the previous CLT applets: http://rundel.dyndns.org:3838/CLT.
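For anyone curious how much (or rather, how little) code an applet like this takes, here is a minimal sketch of the idea. To be clear, this is not the code behind the applet linked above; the input names, the distribution choices, and the single-file shinyApp() layout are all just illustrative.

library(shiny)

ui <- fluidPage(
  titlePanel("Central Limit Theorem demo"),
  sidebarLayout(
    sidebarPanel(
      selectInput("dist", "Population distribution",
                  choices = c("Normal" = "rnorm",
                              "Uniform" = "runif",
                              "Right skewed" = "rexp")),
      numericInput("n", "Sample size (n)", value = 30, min = 2),
      numericInput("k", "Number of samples", value = 1000, min = 10)
    ),
    mainPanel(
      plotOutput("pop_plot"),      # population distribution
      plotOutput("sampling_plot")  # sampling distribution of the mean
    )
  )
)

server <- function(input, output) {
  # Draw k samples of size n from the chosen population (an n-by-k matrix)
  samples <- reactive({
    rfun <- match.fun(input$dist)
    replicate(input$k, rfun(input$n))
  })

  output$pop_plot <- renderPlot({
    rfun <- match.fun(input$dist)
    hist(rfun(1e5), breaks = 50, main = "Population distribution", xlab = "x")
  })

  output$sampling_plot <- renderPlot({
    xbar <- colMeans(samples())  # one sample mean per column
    hist(xbar, breaks = 50,
         main = "Sampling distribution of the sample mean", xlab = "x-bar")
  })
}

shinyApp(ui, server)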

[Image: screenshot of the CLT applet]

I’m sure I’ll make some edits to the applet after I class-test it today. Among planned improvements are:

  • an intermediary step between the top (population distribution) and the bottom (sampling distribution) plots: the sample distribution.
  • sliders for input parameters (like mean and standard deviation) for the population distribution (a rough sketch of both additions follows below).
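To give a sense of what these might look like, here is a rough sketch of the UI pieces I have in mind. The input names (mu, sigma, sample_plot) and the use of conditionalPanel() are just one possible way to do it, not a finished design.

# Sliders for the normal population's parameters, shown only when
# that distribution is selected (hypothetical input names)
conditionalPanel(
  condition = "input.dist == 'rnorm'",
  sliderInput("mu",    "Population mean",               min = -10, max = 10, value = 0),
  sliderInput("sigma", "Population standard deviation", min = 0.5, max = 5,  value = 1)
)

# A middle plot, between the population and sampling distribution plots,
# showing a single sample of size n
plotOutput("sample_plot")

On the server side, the chosen mean and standard deviation would then be passed along to rnorm() when drawing samples, and output$sample_plot would simply plot a histogram of one of the generated samples.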

None of this is revolutionary, but it's great to be able to build on someone else's work so quickly. Plus, since all of the code is in R, which the students are learning anyway, those who are particularly motivated can dive deeper and see the connection between the demonstration and what they're doing in lab.

If you use such demonstrations in your class and have suggestions for improvements, leave a comment below. If you'd like to customize the applet for your use, the code is linked on the applet page, and I'll be transitioning it to GitHub as I work on creating a few more such applets.

(I should also thank Colin Rundel who helped with the implementation and is temporarily hosting the applet on his server until I get my Shiny Server set up — I filled out the registration form last night but I’m not yet sure what the next step is supposed to be.)

Thinking with technology

Just finished a stimulating, thought-provoking week at SRTL (the Statistics Research Teaching and Learning conference), held this year in Two Harbors, Minnesota, right on Lake Superior. SRTL gathers statistics education researchers, most of whom come with cognitive or educational psychology credentials, every two years. It's more of a forum for thinking and collaborating than a platform for presenting findings, which means there's much lively, constructive discussion about works in progress.

I had meant to post my thoughts daily, but (a) the internet connection was unreliable and (b) there was just too much to digest. One recurring theme that really resonated with me was the ways students interact with technology when thinking about statistics.
Much of the discussion centered on young learners, and most, though not all, of the researchers were in classrooms in which the students used TinkerPlots 2. TinkerPlots is a dynamic software system that lets kids build their own chance models. (It also lets them build their own graphics more-or-less from scratch.) They do this either by dropping "balls" into "urns" and labeling the balls with characteristics, or through spinners on which they can shade different areas with different colors. They can connect a series of spinners and urns to create sequences of independent or dependent events, and can collect the outcomes of their trials. Most importantly, they can carry out a large number of trials very quickly and graph the results.
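TinkerPlots is entirely point-and-click, but the same kind of chance model translates naturally into a few lines of R. As a purely made-up illustration (the events, labels, and probabilities below are invented, not taken from any classroom example), chaining an "urn" to a "spinner" and repeating the trial many times might look like this:

# A two-stage chance model: draw a ball from an urn, then spin a spinner
# whose shading depends on which ball was drawn (all values are made up)
set.seed(1)

one_trial <- function() {
  ball  <- sample(c("red", "blue"), size = 1, prob = c(0.3, 0.7))        # the urn
  p_win <- if (ball == "red") 0.8 else 0.4                               # the spinner
  spin  <- sample(c("win", "lose"), size = 1, prob = c(p_win, 1 - p_win))
  paste(ball, spin)
}

outcomes <- replicate(10000, one_trial())                     # many trials, very quickly
barplot(table(outcomes), main = "Outcomes of 10,000 trials")  # graph the results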

What I found fascinating was the way in which students would come to judgements about situations, and then build a model that they thought would "prove" their point. After running some trials, when things didn't go as expected, they would go back and assess their model. Sometimes they'd realize that they had made a mistake, and they'd fix it. Other times, they'd see there was no mistake, and then realize that they had been thinking about it wrong. Sometimes, they'd come up with explanations for why they had been thinking about it incorrectly.

Janet Ainley put it very succinctly (more succinctly and precisely than my re-telling): this technology imposes a sort of discipline on students' thinking. Using the technology is easy enough that they can be creative, but the technology is rigid enough that their mistakes are made apparent. This means that mistakes are cheap, and attempts to repair them are easily made. And so the technology itself becomes a form of communication that forces students into a greater level of precision than they can put into words.

I suppose that mathematics plays the same role, in that speaking with mathematics imposes great precision on the speaker. But that language takes time to learn, and few students reach a level of proficiency that allows them to use the language to construct new ideas. TinkerPlots, and software like it, gives students the ability to use a language to express new ideas with very little expertise. It was impressive to see 15-year-olds build models that incorporated both deterministic trends and fairly sophisticated random variability. More impressive still, the students were able to use these models to solve problems. In fact, I'm not sure they really knew they were building models at all, since their focus was on the problem solving.

TinkerPlots is aimed at a younger audience than the one I teach. But for me, the take-home message is to remember that statistical software isn't simply a tool for calculation, but a tool for thinking.