The birthday problem and voter fraud

I was traveling at the end of last week, which means I had some time to listen to podcasts while in transit. This American Life is always a hit for me, though sometimes I can’t listen to it in public because the stories can be too sad, and then I get all teary-eyed in airports…

This past week’s episode, though, was both fun and informative. I’m talking about Episode 630: Things I Mean to Know. This post is about a specific segment of this episode: Fraud Complex. You can listen to it here, and here is the description:


We’ve all heard reports that voter fraud isn’t real. But how do we know that’s true? David Kestenbaum went on a quest to find out if someone had actually put in the work—and run the numbers—to know for certain. (17 minutes)
Source: TAL – Episode 630: Things I Mean to Know – Act One – Fraud Complex

The segment discusses a specific type of voter fraud: double voting. David Kestenbaum interviews Sharad Goel (Stanford University) for the piece, and they discuss this paper of his. Specifically, there is a discussion of the birthday problem in there. If you’re not familiar with the birthday problem, see here. Basically, it concerns the probability that, in a set of randomly chosen people, some pair of them will share a birthday. The episode walks through applying the same logic used to solve this problem to calculate the probability of finding people with the same name and birthdate on voter records. It turns out, however, that the simple calculation assuming a uniform distribution of births over the year does a poor job of estimating this probability, in part because people born at certain times of the year are more likely to be given certain names (e.g., June for babies born in June, Autumn for babies born in the fall). I won’t tell the whole story, because the producers of the show do a much better job of telling it.
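For reference, the classic calculation is only a few lines in R. Here is a quick sketch assuming, as the standard version of the problem does, that birthdays are uniformly distributed over 365 days (the very assumption the episode pokes holes in):

    # Probability that at least two of n people share a birthday,
    # assuming birthdays are uniform over 365 days
    p_shared_birthday <- function(n) {
      1 - prod((365 - seq_len(n) + 1) / 365)
    }

    p_shared_birthday(23)
    #> [1] 0.5072972

(Base R’s pbirthday(23) gives the same answer: with just 23 people, the chance of a shared birthday already exceeds 50%.)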

If you’re teaching probability, or discussing the birthday problem in any way in your class, I highly recommend you have your students listen to this segment. It’s a wonderful application, and I think interesting applications tend to be hard to come by in probability theory courses.


Hand Drawn Data Visualizations

Recently the blog Brain Pickings wrote about the set of hand-drawn visualizations that Civil Rights activist W.E.B. Du Bois commissioned for the 1900 World’s Fair in Paris. (In a previous post, Rob wrote about an art exhibit he saw that featured artistic interpretations of these plots.)

Every time I see these visualizations I am amazed: they are gorgeous, and the detail (and penmanship) is remarkable. The visualizations included bar charts, area plots, and maps, all hand-drawn! You can read more and see several of the data visualizations here.

My Ideal Bookshelf

This summer I read My Ideal Bookshelf, a book of hand-drawn illustrations depicting well-known cultural icons’ bookshelves. Each person was asked to identify a small shelf of books that represented her/him. These could be books that “changed your life, that have made you who you are today, your favorite favorites.”

My Ideal Bookshelf

I really enjoyed looking at the illustrations of the books, recognizing several that sat on my own bookshelf. The accompanying text also told the backstory of why these books were chosen. Although this isn’t data visualization per se (at least in the traditional sense), the hand-drawn illustrations were what drew me to the book in the first place.

Dear Data

Giorgia Lupi and Stefanie Posavec also created hand-drawn data visualizations for their Dear Data project. The project, as described on their website, was:

Each week, and for a year, we collected and measured a particular type of data about our lives, used this data to make a drawing on a postcard-sized sheet of paper, and then dropped the postcard in an English “postbox” (Stefanie) or an American “mailbox” (Giorgia)!

They ultimately turned their visualizations into a book. You can learn more at their website or from the Data Stories podcast related to the project.

Postcards from the Dear Data project

Mapping Manhattan

Another project that was also turned into a book was Mapping Manhattan: A Love (and Sometimes Hate) Story in Maps by 75 New Yorkers. Becky Cooper, the project’s creator, passed out blank maps of Manhattan (with return postage) to strangers she met walking around the city and asked them to “map their Manhattan.” (Read more here.)

Map drawn by New Yorker staff writer Patricia Marx

The hand-drawn maps conveyed each person’s story in a way that a computer-drawn image never could. This was so compelling that we have thought about including a similar assignment in our Data Visualization course. (Although people don’t connect to Minneapolis as much as they do to New York.)

There is something simple about a hand-drawn data visualization. Because of the time involved in creating one, the creator needs to plan the execution of the final product more carefully. This implies, of course, that hand-drawing is not the best mode for exploratory work. For expository work, however, hand-drawn plots can be powerful and, I feel, connect with the viewer a bit more. (Maybe this connection is pure illusion… after all, I also prefer typewritten letters to word-processed ones, despite never receiving either these days.)


Data Visualization Course for First-Year Students

A little over a year ago, we decided to propose a data visualization course at the first-year level. We had been thinking about this for a while, but never had the time to teach it given our scheduling constraints. When one of the other departments on campus was shut down and its faculty were merged into other departments, we felt the time was ripe to make this proposal.

Course description of the EPsy 1261 data visualization course

In putting together the proposal, we knew that:

  • The course would be primarily composed of social science students. My department, Educational Psychology, attracts students from the College of Education and Human Development (e.g., Child Psychology, Social Work, Family Social Science).
  • To attract students, it would be helpful if the course would fulfill the University’s Liberal Education (LE) requirement for Mathematical Thinking.

This led to several challenges and long discussions about the curriculum for this course. For example:

  • Should the class focus on producing data visualizations (very exciting for the students) or on understanding/interpreting existing visualizations (useful for most social science students)?
  • If we were going to produce data visualizations, which software tool would we use? Could this level of student handle R?
  • In order to meet the LE requirement, the curriculum for the course would need to show a rigorous treatment of students actually “doing” mathematics. How could we do this?
  • Which types of visualizations would we include in the course?
  • Would we use a textbook? How might this inform the content of the course?

Software and Content

After several conversations among the teaching team, with stakeholder departments, and with colleagues teaching data visualization courses at other universities, we eventually proposed that the course:

  • Focus both on students being able to read and understand existing visualizations and on producing a subset of these visualizations, and
  • Use R (primary tool) and RAWGraphs for the production of these plots.

Software: Use ggplot2 in R

The choice to use R was not an immediate one. We initially looked at using Tableau, but the default choices made by the software (e.g., to immediately plot summaries rather than raw data) and the cost to students after matriculating from the course eventually sealed its fate (we don’t use it). We contemplated using Excel for a minute (gasp!), but we vetoed that even more quickly than Tableau. The RAWGraphs website, we felt, held a lot of promise as a software tool for the course. It had an intuitive drag-and-drop interface and could be used to create many of the plots we wanted students to produce. Unfortunately, we were not able to get the bar graph widget to produce side-by-side bar plots easily (actually, at all). The other drawback was that the drag-and-drop interactions would make it a harder sell to the LE committee as a primary tool for building students’ computational and mathematical thinking.

Once we settled on using R, we had to decide between using the suite of base plots or ggplot2 (lattice was not in the running). We decided that ggplot2 made the most sense in terms of extensibility. Its syntax is based on a theoretical foundation for creating and thinking about plots, which also made it a natural choice for a data visualization course. The idea of mapping variables to aesthetics was also consistent with the language used in RAWGraphs, so it helped reinforce core ideas across the tools. Lastly, we felt that the ggplot syntax would help students transition to other tools (such as ggvis or plotly) more easily.
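To give a flavor of the aesthetic-mapping idea, here is the kind of minimal example students see (using the mpg dataset that ships with ggplot2):

    library(ggplot2)

    # Map engine displacement to x, highway mileage to y,
    # and car class to color; the geom then draws the points
    ggplot(mpg, aes(x = displ, y = hwy, color = class)) +
      geom_point()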

One thing that the teaching team completely agreed on (and that was mentioned by almost everyone we talked to who taught data visualization) was that we wanted students to be producing graphs very early in the course, giving them a sense of power and reinforcing that they could be successful. We felt this might be difficult for students with the ggplot syntax. To ameliorate this, we wrote a course-specific R package (epsy1261; available on GitHub) that allows students to create a few simple plots interactively, employing functionality from the manipulate package. (We could have also done this via Shiny, but I am not as well-versed in Shiny and only had a few hours to devote to this over the summer given other responsibilities.)

Interactive creation of the bar chart using the epsy1261 package. This allows students to input minimal syntax, barchart(data), and then use interaction to create plots.
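For the curious, here is a minimal sketch of how such a manipulate-based helper could be put together. This is an illustration of the idea, not the actual epsy1261 implementation (which lives on GitHub):

    library(ggplot2)
    library(manipulate)  # manipulate only works within RStudio

    barchart <- function(data) {
      manipulate(
        # Re-draw the bar chart whenever a control changes
        print(
          ggplot(data, aes(x = .data[[variable]])) +
            geom_bar(fill = fill)
        ),
        variable = picker(as.list(names(data)), label = "Variable"),
        fill     = picker("skyblue", "gray50", "salmon", label = "Bar color")
      )
    }

    # Usage: barchart(mpg), then choose a variable and color interactively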

Course Content

We decided on a three-pronged approach to the course content. The first prong would be based on the production of common statistical plots: bar charts, scatterplots, and maps, along with some variations of these (e.g., donut plots, treemaps, bubble charts). The second prong focused on reading more complex plots (e.g., networks, alluvial plots), but not producing them, except perhaps by hand. The third prong was a group project. This would give students a chance to use what they had learned and also, perhaps, explore plots we had not covered. In addition, we wanted students to consider narrative in the presentation of these plots, to tell a data-driven story.

Along with this, we had hoped to introduce students to computational skills such as data summarization, tidying, and joining data sets. We also wanted to introduce concepts such as smoothing (especially for helping describe trends in scatterplots), color choice, and projection and coordinate systems (in maps). Other things we thought about were using R Markdown and data scraping.

Reality

The reality, as we are finding now that we are over a third of the way through the course, is that this amount of content was overambitious. We grossly underestimated the amount of practice time these students would need, especially when working with R. Two things play a role in this:

  1. The course attracted way more students than we expected for the first offering (our class size is 44), and there is a lot of heterogeneity in students’ experiences and academic backgrounds. For example, we have graduate students from the School of Design, some first-years, and mostly sophomores and juniors. We also have a variety of majors, including design, the social sciences, and computer science.
  2. We hypothesize that students are not practicing much outside of class. This means they are essentially using R only twice a week, for 75 minutes at a time, in class. This amount of practice is too infrequent for students to really learn the syntax.

Most of the students’ computational experience prior to taking this course is minimal. They are very competent at using point-and-click software (e.g., Google Docs), but have a great deal of trouble when forced to use syntax. The precision demanded by case sensitivity, commas, and parentheses is outside their wheelhouse.

I would go so far as to say that several of these students are intimidated by the computation and completely panic when facing an error message. This has led us to really think through and spend time discussing computational workflows and how to “debug” syntax to find errors. All of this has added more time than we anticipated to the actual computing. (While this adds time, it is still educationally useful for these students.)

The teaching team meets weekly for 90 minutes to discuss and reflect on what happened in the course. We also plan what will happen in the upcoming week based on what we observed and what we see in students’ homework. As of now, we clearly see that students need more practice, so we have begun giving students the end result of a plot and asking them to re-create it.

I am still hoping to get to scatterplots and maps in the course. However, some of the other computational ideas (scraping, joining) may have to be relegated to conceptual ideas in a reading. We are also considering scrapping the project, at least for this semester. At the very least, we will change it to a more structured set of plots they need to produce rather than letting them choose the data sets, etc. Live and learn. Next time we offer the course it will be better.

*Technology note: RAWGraphs can be extended by designing additional chart types, so in theory, if one had time, we could write our own version to be more compatible with the course. We are also considering the ggplotgui package, which provides a Shiny dashboard for creating ggplot2 plots.


Mapping Irma, but not really…

We’re discussing data visualization these days in my course, and today’s topic was supposed to be mapping. However, late last night I realized I was going to run out of time, so I decided to table the hands-on mapping exercises until a bit later in the course (after we do some data manipulation as well, which I think will work better).

That being said, talking about maps seemed timely, especially with Hurricane Irma developing. Here is how we went about it:

In addition to what’s on the slide, I told the students that they can assume the map is given; they should only think about how the forecast lines would be drawn.

Everyone came up with “we need latitude and longitude and time.” However, some teams suggested each column would represent one of the trajectories (wide data), while others came up with the idea of having an indicator column for the trajectory (long data). We sketched out on the board what these two data frames would look like and evaluated which would be easier to plot directly using the tools we’ve learned so far (plotting in R with ggplot2).
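To make the contrast concrete, here is a small sketch of the long format with made-up coordinates (the model names and values below are hypothetical), along with how directly it maps onto ggplot2:

    library(ggplot2)

    # Long format: one row per (trajectory, time) observation,
    # with an indicator column for the trajectory; rows are ordered by time
    irma_long <- data.frame(
      model = rep(c("Model A", "Model B"), each = 3),
      time  = rep(1:3, times = 2),
      lat   = c(16.9, 17.5, 18.2, 16.9, 17.8, 18.9),
      lon   = c(-59.2, -61.0, -63.1, -59.2, -61.5, -64.0)
    )

    # Each forecast line is just a path, colored by the indicator column
    ggplot(irma_long, aes(x = lon, y = lat, color = model)) +
      geom_path()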

While this was a somewhat superficial activity compared to a hands-on mapping exercise, I thought it worked well for a variety of reasons:

  1. It was a timely example that grabbed students’ attention.
  2. It generated lively discussion around various ways of organizing data into data frames (which hopefully will serve as a good primer for the data manipulation unit where we’ll discuss how data don’t always come in the format you need and you might need to get it in shape first before you can visualize/analyze it).
  3. Working backwards from a visualization to source data (as opposed to from data to visualization) provided a different challenge/perspective, and a welcome break from “how do I get R to plot this?”.
  4. We got to talk about the fact that predictions based on the same source data can vary depending on the forecasting model (foreshadowing of concepts we will discuss in the modeling unit coming up later in the course).
  5. It was quick to prepare! And quick to work through in class (~5 mins of team discussion + ~10 mins of class discussion).

I also suggested to students that they read the underlying NYTimes article as well as this Upshot article if they’re interested in finding out more about modeling the path of a hurricane (or modeling anything, really) and uncertainty.

Data Science Webinar Announcement

I’m pleased to announce that on Monday, September 11, 9-11am Pacific, I’ll be leading a Concord Consortium Data Science Education Webinar. Oddly, I forgot to give it a title, but it would be something like “Towards a Learning Trajectory for K-12 Data Science”. This webinar, like all Concord webinars, is intended to be highly interactive. Participants should have their favorite statistical software at the ready. A detailed abstract as well as registration information can be found here:
https://www.eventbrite.com/e/data-science-education-webinar-rob-gould-tickets-35216886656

At the same site you can view recent wonderful webinars by Cliff Konold, Hollylynne Lee, and Tim Erickson.

Envisioning Data Science Webinar Series and Call for Input

Webinar Series: Data Science Undergraduate Education

Join the National Academies of Sciences, Engineering, and Medicine for a webinar series on undergraduate data science education. Webinars will take place on Tuesdays from 3-4pm ET, starting on September 12 and ending on November 14. See below for the list of dates and themes for each webinar.

This webinar series is part of an input-gathering initiative for a National Academies study on Envisioning the Data Science Discipline: The Undergraduate Perspective. Learn more about the study, read the interim report, and share your thoughts with the committee on the study webpage at nas.edu/EnvisioningDS.

Webinar speakers will be posted as they are confirmed on the webinar series website.

Webinar Dates and Topics

  • 9/12/17 – Building Data Acumen
  • 9/19/17 – Incorporating Real-World Applications
  • 9/26/17 – Faculty Training and Curriculum Development
  • 10/3/17 – Communication Skills and Teamwork
  • 10/10/17 – Inter-Departmental Collaboration and Institutional Organization
  • 10/17/17 – Ethics
  • 10/24/17 – Assessment and Evaluation for Data Science Programs
  • 11/7/17 – Diversity, Inclusion, and Increasing Participation
  • 11/14/17 – Two-Year Colleges and Institutional Partnerships

All webinars take place from 3-4pm ET. If you plan to join us online, please register to attend. You will have the option to register for the entire webinar series or for individual webinars.

Share Your Input

The study committee is seeking public input for consideration in its upcoming report, which will set forth a vision for the emerging discipline of data science at the undergraduate level. To share your input with the committee, please fill out this form.

Revisiting that first day of class example

About a year ago I wrote this post: 

I wasn’t teaching that semester, so I couldn’t take my own advice then, but thankfully (or the opposite of thankfully) Trump’s tweets still make for timely discussion.

I had two goals for presenting this example on the first day of my data science course (to an audience of all first-year undergraduates, with little to no background in computing and statistics):

  1. Give a data analysis example with a familiar context
  2. Show that if they take the time to read the code, they can probably understand what it’s doing, at least at a high level

First, I provided them some context: “The author wanted to analyze Trump’s tweets: both the text, and some other information on the tweets, like when and from what device they were posted.” And I asked the students, “If you wanted to do this analysis, how would you go about collecting the data?” Some suggested manual data collection, which we all agreed is too tedious. A few suggested there should be a way to get the data from Twitter. So then we went back to the blog post and worked our way through some of the code. (My narrative is roughly outlined in handwriting below.)

The moral of the story: You don’t need to figure out how to write a program that gets tweets from Twitter. Someone else has already done it, and packaged it up (in a package called twitteR), and made it available for you to use. Here, the important message I tried to convey was that “No, I don’t expect you to know that this package exists, or to figure out how to use it. But I hope you agree that once you know the package exists, it’s worth the effort to figure out how to use its functionality to get the tweets, instead of collecting the data manually.”
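For the curious, the data collection boils down to a few lines with twitteR. (This is a sketch of the idea, not the blog post’s exact code; the credentials are placeholders you would get by registering an app on Twitter’s developer site.)

    library(twitteR)

    # Authenticate with your own (placeholder) credentials
    setup_twitter_oauth(consumer_key    = "YOUR_CONSUMER_KEY",
                        consumer_secret = "YOUR_CONSUMER_SECRET",
                        access_token    = "YOUR_ACCESS_TOKEN",
                        access_secret   = "YOUR_ACCESS_SECRET")

    # Fetch recent tweets and flatten them into a data frame
    trump_tweets    <- userTimeline("realDonaldTrump", n = 3200)
    trump_tweets_df <- twListToDF(trump_tweets)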

Then, we discussed the following plot in detail:

First, I asked the students to come up with a list of variables we would need in our dataset to make this plot: we need to know what time each tweet was posted, what device it came from, and what percentage of tweets were posted in a given hour.

Here is the breakdown of the code (again, my narrative is in the handwritten comments):
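In text form, the gist of the code is roughly this (modeled on David Robinson’s analysis; the details may differ from his original post):

    library(dplyr)
    library(lubridate)
    library(ggplot2)

    trump_tweets_df %>%
      # statusSource is an HTML link; strip the tags to get the device name
      mutate(source = gsub("<.*?>", "", statusSource, perl = TRUE)) %>%
      # Tally tweets by device and hour of day (Eastern time)
      count(source, hour = hour(with_tz(created, "America/New_York"))) %>%
      # Convert counts to the percentage of each device's tweets
      group_by(source) %>%
      mutate(percent = n / sum(n)) %>%
      ggplot(aes(x = hour, y = percent, color = source)) +
      geom_line() +
      scale_y_continuous(labels = scales::percent_format()) +
      labs(x = "Hour of day (EST)", y = "% of tweets", color = "Device")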

Once again, I wanted to show the students that if they take some time, they can probably figure out roughly what each line (ok, maybe not each, but most lines) of code is doing. We didn’t get into discussing what’s a geom, what’s the difference between %>% and +, what’s an aesthetic, etc. We’ll get into those, but the semester is young…

My hope is that next time I present how to do something new in R, they’ll remember this experience of being able to mostly figure out what’s happening by taking some time staring at the code and thinking about “if I had to do this by hand, how would I go about it?”.

Modernizing the Undergraduate Statistics Curriculum at #JSM2017

I’m a bit late in posting this, but travel delays post-JSM left me weary, so I’m just getting around to it. Better late than never?

Wednesday at JSM featured an invited statistics education session on Modernizing the Undergraduate Statistics Curriculum. This session brought together two types of speakers: those who are currently involved in undergraduate education and those who are on the receiving end of graduating majors. The speakers involved in undergraduate education presented their recent efforts to modernize the undergraduate statistics curriculum to provide the essential computational and problem-solving skills expected of today’s modern statistician while also providing a firm grounding in theory and methods. The speakers representing industry discussed their expectations (or hopes and dreams) for new graduates and where they find gaps in the knowledge of new hires.

The speakers were Nick Horton (Amherst College), Hilary Parker (Stitch Fix), Jo Hardin (Pomona College), and Colin Rundel (Duke University). The discussant was Rob Gould (UCLA). Here are the slides for each of the speakers. If you have any comments or questions, let us know in the comments.

Modernizing the undergraduate statistics curriculum: what are the theoretical underpinnings? – Nick Horton

Hopes and dreams for statistics graduates – Hilary Parker

Expectations and Skills for Undergraduate Students Doing Research in Statistics and Data Science – Jo Hardin

Moving Away from Ad Hoc Statistical Computing Education – Colin Rundel

Discussion – Rob Gould

Novel Approaches to First Statistics / Data Science Course at #JSM2017

Tuesday morning, bright and early at 8:30am, was our session titled “Novel Approaches to First Statistics / Data Science Course”. For some students the first course in statistics may be the only quantitative reasoning course they take in college. For others, it is the first of many in a statistics major curriculum. The content of this course depends on the audience it is aimed at as well as its place in the curriculum. However, a data-centric approach with an emphasis on computation and algorithmic thinking is essential for all modern first statistics courses. The speakers in our session presented the approaches they have developed and taught for the various first courses in statistics and data science. The discussion also highlighted pedagogical and curricular choices they have made in deciding what to keep, what to eliminate, and what to modify from the traditional introductory statistics curriculum. The speakers in the session were Ben Baumer from Smith College, Rebecca Nugent from CMU, myself, and Daniel Kaplan from Macalester College. Our esteemed discussant was Dick DeVeaux, and our chair, the person who managed to keep this rambunctious bunch on time, was Andrew Bray from Reed College. Here are the slides for each of the speakers. If you have any comments or questions, let us know in the comments, or find us on social media!

Ben Baumer – Three Methods Approach to Statistical Inference

Rebecca Nugent – Lessons Learned in Transitioning from “Intro to Statistics” to “Reasoning with Data”

Mine Cetinkaya-Rundel – A First-Year Undergraduate Data Science Course

Daniel Kaplan – Teaching Stats for Data Science

Dick DeVeaux – Discussion


My JSM 2017 itinerary

JSM 2017 is almost here. I just landed in Maryland, and I finally managed to finish combing through the entire program. What a packed schedule! I like writing an itinerary post each year, mainly so I can come back to it during and after the event. I obviously won’t make it to all sessions listed for each time slot below; my decision for which one(s) to attend during any time period will likely depend on proximity to the previous session, and potentially also proximity to the childcare area.

The sessions I selected focus on education, data science, computing, visualization, and social responsibility. In addition to talks on topics I actively work in, I also enjoy listening to talks in application areas I’m interested in, hence the last topic on this list.

If you have suggestions for other sessions (on these topics or others) that you think would be of interest, let me know in the comments!

Sun, 7/30/2017

Sunday will be mostly meetings for me, and I’m skipping any evening stuff to see Andrew Bird & Belle and Sebastian!

Mon, 7/31/2017

  • DataFest meeting: 10am – 12pm at H-Key Ballroom 9. Stop by if you’re already an ASA DataFest organizer, or if you’d like to be one in the future!
    • The first hour will be spent discussing what worked and what didn’t, any concerns, kudos, advice for new sites, etc.
    • The second hour will be a drop-in for addressing any questions about organizing an ASA DataFest at your institution.
  • Computing and Graphics mixer: 6 – 8pm at H-Key Ballroom 1.
  • Caucus for Women in Statistics Reception and Business Meeting: 6:30 – 8:30pm at H-Holiday Ballroom 1&2.

8:30 AM – 10:20 AM

10:30 AM – 12:20 PM

2:00 PM – 3:50 PM

4:00 PM – 5:50 PM

ASA President’s Invited Speaker: It’s Not What You Said. It’s What They Heard – Jo Craven McGinty, The Wall Street Journal

Tue, 8/1/2017

8:30 AM – 10:20 AM

10:30 AM – 12:20 PM

2:00 PM – 3:50 PM

4:00 PM – 5:50 PM

Deming Lecture: A Rake’s Progress Revisited – Fritz Scheuren, NORC at the University of Chicago

Wed, 8/2/2017

  • Statistical Education Business Meeting – 6-7:30pm

8:30 AM – 10:20 AM

10:30 AM – 12:20 PM

2:00 PM – 3:50 PM

4:00 PM – 5:50 PM

COPSS Awards and Fisher Lecture: The Importance of Statistics: Lessons from the Brain Sciences – Robert E. Kass, Carnegie Mellon University

Thu, 8/3/2017

8:30 AM – 10:20 AM

10:30 AM – 12:20 PM