You can read about DataFest, which is quickly going national, at the FiveThirtyEight blog:
The L.A. Times ran an article on data privacy today, which, I think it’s fair to say, puts “Big Data” in approximately the same category as fire. In the right hands, it can do good. But…
I was at the vet yesterday, and just like with any doctor’s visit experience, there was a bit of waiting around — time for re-reading all the posters in the room.
And this is what caught my eye on the information sheet about feline heartworm (I’ll spare you the images):
The question asks: “My cat is indoor only. Is it still at risk?”
The way I read it, this question is asking about the risk of an indoor only cat being heartworm positive. To answer this question we would want to know P(heartworm positive | indoor only).
However the answer says: “A recent study found that 27% of heartworm positive cats were identified as exclusively indoor by their owners”, which is P(indoor only | heartworm positive) = 0.27.
Sure, this gives us some information, but it doesn’t actually answer the original question. The original question is asking about the reverse of this conditional probability.
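To see just how different the two conditional probabilities can be, here is a quick numerical sketch (in Python, for illustration). The 0.27 comes from the flyer; the overall prevalence and the indoor-only fraction are made-up numbers, not figures from the study:

```python
# Bayes' theorem: P(positive | indoor) = P(indoor | positive) * P(positive) / P(indoor)
p_indoor_given_pos = 0.27   # from the information sheet
p_pos = 0.01                # hypothetical overall heartworm prevalence in cats
p_indoor = 0.60             # hypothetical fraction of cats that are indoor-only

p_pos_given_indoor = p_indoor_given_pos * p_pos / p_indoor
print(round(p_pos_given_indoor, 4))  # 0.0045 -- nowhere near 0.27
```

Under these (invented) inputs, the probability an indoor-only cat is heartworm positive is under half a percent, while the flyer's 27% answers a different question entirely.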
When we talk about Bayes’ theorem in my class and work through examples about the sensitivity and specificity of medical tests, I always tell my students that doctors are actually pretty bad at these. It looks like I’ll need to add vets to my list, too!
The city of Minneapolis recently elected a new mayor. This is not newsworthy in and of itself, however the method they used was—ranked choice voting. Ranked choice voting is a method of voting allowing voters to rank multiple candidates in order of preference. In the Minneapolis mayoral election, voters ranked up to three candidates.
The interesting part of this whole thing was that it took over two days for the election officials to declare a winner. It turns out that the official procedure for calculating the winner of the ranked-choice vote involved cutting and pasting spreadsheets in Excel.
The algorithm, described by Bill Bushey, is
As an example consider the following sample data:
Voter  Choice1  Choice2  Choice3
1      James    Fred     Frank
2      Frank    Fred     James
3      James    James    James
4      Laura
5      David
6      James    Fred
7      Laura
8      James
9      David    Laura
10     David
In this data, James has the most 1st choice votes (4), but it is not enough to win the election (a candidate needs 6 votes = 50% of the 10 votes cast + 1). So at this point we find the candidate with the fewest 1st choice votes…Frank…and delete him from the entire structure:
Voter  Choice1           Choice2  Choice3
1      James             Fred     <del>Frank</del>
2      <del>Frank</del>  Fred     James
3      James             James    James
4      Laura
5      David
6      James             Fred
7      Laura
8      James
9      David             Laura
10     David
Then the 2nd choice of any voter who voted for Frank becomes the new “1st” choice. In the sample data this affects only Voter #2: Fred becomes Voter #2’s 1st choice and James becomes Voter #2’s 2nd choice:
Voter  Choice1  Choice2  Choice3
1      James    Fred
2      Fred     James
3      James    James    James
4      Laura
5      David
6      James    Fred
7      Laura
8      James
9      David    Laura
10     David
James still has the most 1st choice votes, but not enough to win (he still needs 6 votes!). Fred has the fewest 1st choice votes, so he is eliminated, and his voter’s 2nd and 3rd choices are moved up:
Voter  Choice1  Choice2  Choice3
1      James
2      James
3      James    James    James
4      Laura
5      David
6      James
7      Laura
8      James
9      David    Laura
10     David
James now has five 1st choice votes, but still not enough to win. Laura has the fewest 1st choice votes, so she is eliminated, and her voters’ 2nd and 3rd choices are moved up:
Voter  Choice1  Choice2  Choice3
1      James
2      James
3      James    James    James
4
5      David
6      James
7
8      James
9      David
10     David
James retains his lead with five first-place votes…but now he is declared the winner. Since Voters #4 and #7 do not have a 2nd or 3rd choice vote, they no longer count toward the number of voters. Thus, to win, a candidate needs only 5 votes = 50% of the 8 remaining 1st choice votes + 1.
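The elimination procedure walked through above can be sketched in a few lines of code. This is a Python sketch for illustration (the blog's actual solution, linked below, is in R), and it ignores ties and the official Minneapolis rules for over/undervotes:

```python
def rcv_winner(ballots):
    """Repeatedly eliminate the candidate with the fewest first-choice
    votes until someone reaches 50% of the still-active ballots + 1.
    A sketch only: ties are broken arbitrarily."""
    ballots = [list(b) for b in ballots]  # don't mutate the caller's data
    while True:
        active = [b for b in ballots if b]          # non-exhausted ballots
        counts = {}
        for b in active:
            counts[b[0]] = counts.get(b[0], 0) + 1  # tally first choices
        threshold = len(active) // 2 + 1            # 50% of active ballots + 1
        leader = max(counts, key=counts.get)
        if counts[leader] >= threshold:
            return leader
        loser = min(counts, key=counts.get)         # fewest first-choice votes
        ballots = [[c for c in b if c != loser] for b in ballots]

# The sample election above: James wins after Frank, Fred, and Laura
# are eliminated and Voters #4 and #7 are exhausted.
sample = [["James", "Fred", "Frank"], ["Frank", "Fred", "James"],
          ["James", "James", "James"], ["Laura"], ["David"],
          ["James", "Fred"], ["Laura"], ["James"],
          ["David", "Laura"], ["David"]]
print(rcv_winner(sample))  # James
```

Note how the threshold is recomputed each round from the ballots that still rank someone, which is exactly why the winning bar drops from 6 votes to 5 in the example.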
The actual data from Minneapolis include over 80,000 votes for 36 different candidates. There are also ballot issues such as overvoting, which occurs when a voter gives multiple candidates the same ranking, and undervoting, which occurs when a voter does not select a candidate.
The animated GIF below shows the results after each round of elimination for the Minneapolis mayoral election.
The Minneapolis mayoral data is available on GitHub as a CSV file (along with some other smaller sample files to hone your programming algorithm). There is also a frequently asked questions webpage available from the City of Minneapolis regarding ranked choice voting.
You can also listen to the Minnesota Public Radio broadcast in which they discussed the problems with the vote counting. The folks at the R Users Group Meeting were featured, and Winston brought the house down when, commenting on the R program that computed the winner within a few seconds, he said, “it took me about an hour and a half to get something usable, but I was watching TV at the time.”
See the R syntax I used here.
Fitbit, you know I love you and you’ll always have a special place in my pocket. But now I have to make room for the Moves app to play a special role in my capture-the-moment-with-data existence.
Moves is a free iOS 7 app. It eats up some extra battery power and, in exchange, records your location, merges it with various databases, syncs it with still other databases, and produces some very nice “story lines” that remind you of the day you had and, as a bonus, can motivate you to improve your activity levels. I’ve attached two example storylines that do not make it too embarrassingly clear how little exercise I have been getting. (I have what I consider legitimate excuses, and once I get the dataset downloaded, maybe I’ll add them as covariates.) One of the timelines is from a day that included an evening trip to Disneyland. The other is a Saturday spent running errands and capped with dinner at a friend’s. It’s pretty easy to tell which day is which.
But there’s more. Moves has an API, thus allowing developers to tap into their datastream to create apps. There’s an app that exports the data for you (although I haven’t really had success with it yet) and several that create journals based on your Moves data. You can also merge Foursquare, Twitter, and all the usual suspects.
I think it might be fun to have students discuss how one could go from the data Moves collects to the storylines it creates. For instance, how does it know I’m in a car, and not just a very fast runner? Actually, given LA traffic, a better question is how it knows I’m stuck in traffic and not just strolling down the freeway at a leisurely pace. (Answering these questions requires a different type of inference from what we normally teach in statistics.) Besides journals, what apps might students create with these data, and what additional data would they need?
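As a discussion starter, here is a deliberately naive Python sketch of the speed-only approach (the thresholds are invented, and the real app presumably uses much more than speed), which fails in exactly the way the freeway question suggests:

```python
def guess_mode(avg_speed_kmh):
    """Guess an activity from average speed alone.
    Thresholds are hypothetical; a real app would likely also use
    accelerometer patterns, GPS noise, and road/transit maps."""
    if avg_speed_kmh < 7:
        return "walking"
    elif avg_speed_kmh < 25:
        return "cycling"
    else:
        return "transport"

print(guess_mode(5))    # walking
print(guess_mode(65))   # transport
print(guess_mode(6))    # "walking" -- or a car crawling through LA traffic
```

The last line is the point: a classifier built on speed alone cannot distinguish a stroll from gridlock, so students have to propose what extra data would break the tie.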
The L.A. Times had a nice editorial on Thursday (Oct 30) encouraging City Hall to make its data available to the public. As you know, fellow Citizens, we’re all in favor of making data public, particularly if the public has already picked up the bill and if no individual’s dignity will be compromised. For me this editorial comes at a time when I’ve been feeling particularly down about the quality of public data. As I’ve been looking around for data to update my book and for the Mobilize project, I’m convinced that data are getting harder, not easier, to find.
More data sources are drying up, or selling their data, or using incredibly awkward means for displaying their public data. A basic example is to consider how much more difficult it is to get, say, a sample of household incomes from various states for 2010 compared to the 2000 census.
Another example is gasbuddy.com, which has been one of my favorite classroom examples. (We compare the participatory data in gasbuddy.com, which lists prices for individual stations across the U.S., with the randomly sampled data the federal government provides, which gives mean values for urban districts. One data set gives you detailed data, but data that might not always be trustworthy or up-to-date. The other is highly trustworthy, but only useful for general trends and not for, say, finding the nearest cheap gas.) It used to be that you could type in a zip code and get a nice data set showing current prices, names and locations of gas stations, dates of the last reported price, and the username of the person who reported the price. Now, you can scroll through an unsorted list of cities and states and get the same information only for the 15 cheapest and most expensive stations.
About two years ago I downloaded a very nice, albeit large, data set that included annual particulate-matter ratings for 333 major cities in the US. I’ve looked and looked, but the data.gov AirData site now requires that I enter each city’s name one at a time and download very raw data for each city separately. Now, raw data are a good thing, and I’m glad to see them offered. But is it really so difficult to provide some sensibly aggregated data sets?
One last example: I stumbled across this lovely website, wildlife crossing, which uses participatory sensing to maintain a database of animals killed at road crossings. Alas, this apparently very clean data set is spread across 479 separate screens. All it needs is a “download data” button to drop the entire file onto your hard disk, and they could benefit from many eager statisticians and wildlife fans examining their data. (I contacted them and suggested this, and they do seem interested in sharing the data in its entirety. But it is taking some time.)
I hope Los Angeles, and all governments, make their public data public. But I hope they have the budget and the motivation to take some time to think about making it accessible and meaningful, too.
Just finished a stimulating, thought-provoking week at SRTL (the Statistics Research, Teaching and Learning conference), this year held in Two Harbors, Minnesota, right on Lake Superior. SRTL gathers statistics education researchers, most of whom come with cognitive or educational psychology credentials, every two years. It’s more of a forum for thinking and collaborating than a platform for presenting findings, and this means there’s much lively, constructive discussion about works in progress.
I had meant to post my thoughts daily, but (a) the internet connection was unreliable and (b) there was just too much to digest. One recurring theme that really resonated with me was the way students interact with technology when thinking about statistics.
Much of the discussion centered on young learners, and most of the researchers (but not all) were in classrooms in which the students used TinkerPlots 2. TinkerPlots is a dynamic software system that lets kids build their own chance models. (It also lets them build their own graphics more or less from scratch.) They do this either by dropping “balls” into “urns” and labeling the balls with characteristics, or through spinners whose areas they can shade in different colors. They can connect series of spinners and urns to create sequences of independent or dependent events, and can collect the outcomes of their trials. Most importantly, they can carry out a large number of trials very quickly and graph the results.
What I found fascinating was the way in which students would come to judgments about situations and then build a model that they thought would “prove” their point. After running some trials, when things didn’t go as expected, they would go back and assess their model. Sometimes they’d realize that they had made a mistake, and they’d fix it. Other times, they’d see there was no mistake, and then realize that they had been thinking about the problem wrong. Sometimes they’d come up with explanations for why they had been thinking about it incorrectly.
Janet Ainley put it very succinctly. (More succinctly and precisely than my re-telling.) This technology imposes a sort of discipline on students’ thinking. Using the technology is easy enough that they can be creative, but the technology is rigid enough that their mistakes are made apparent. This means that mistakes are cheap, and attempts to repair mistakes are easily made. And so the technology itself becomes a form of communication that forces students into a level of greater precision than they can put in words.
I suppose that mathematics plays the same role in that speaking with mathematics imposes great precision on the speaker. But that language takes time to learn, and few students reach a level of proficiency that allows them to use the language to construct new ideas. But TinkerPlots, and software like it, gives students the ability to use a language to express new ideas with very little expertise. It was impressive to see 15-year-olds build models that incorporated both deterministic trends and fairly sophisticated random variability. More impressive still, the students were able to use these models to solve problems. In fact, I’m not sure they really knew they were building models at all, since their focus was on the problem solving.
Tinkerplots is aimed at a younger audience than the one I teach. But for me, the take-home message is to remember that statistical software isn’t simply a tool for calculation, but a tool for thinking.
What do we fear more? Losing data privacy to our government, or to corporate entities? On the one hand, we (still) have oversight over our government. On the other hand, the government is (still) more powerful than most corporate entities, and so perhaps better situated to frighten.
In these times of Snowden and the NSA, the L.A. Times ran an interesting story about just what tracking various internet companies perform, and it’s alarming (“They’re watching your every move,” July 10, 2013; interestingly, the story does not seem to appear on their website as of this posting). Like the government, most of these companies claim that (a) their ‘snooping’ is algorithmic, with no human seeing the data, and (b) their data are anonymized. And yet…
To my knowledge, businesses aren’t required to adhere to, or even acknowledge, any standards or practices for dealing with private data. Thus, a human could snoop on particular data. We are left to ponder what that human will do with the information. In the best case scenario, the human would be fired, as, according to the L.A. Times, Google did when it fired an engineer for snooping on emails of some teenage girls.
But the data are anonymous, you say? Well, there’s anonymous and then there’s anonymous. As Latanya Sweeney taught us in the ’90s, knowing a person’s zip code, gender, and date of birth is sufficient to uniquely identify 87% of Americans. And the L.A. Times reports a similar study in which just four hours of anonymized tracking data were sufficient to identify 95% of the individuals examined. So while your name might not be recorded, by merging enough data files, they will know it is you.
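A back-of-the-envelope calculation (in Python, with rounded, assumed figures) shows why those three fields are so identifying: there are far more {zip code, gender, birth date} cells than there are Americans, so most occupied cells contain exactly one person:

```python
# Rounded, assumed figures for illustration only.
zip_codes   = 42_000        # roughly the number of U.S. ZIP codes
genders     = 2
birth_dates = 79 * 365      # ~79 years of plausible birth dates

cells = zip_codes * genders * birth_dates
population = 310_000_000

print(cells)                # ~2.4 billion possible cells
print(cells / population)   # several cells available per person, on average
```

With billions of cells and only a few hundred million people, a pigeonhole-style argument says the typical combination is unique, which is exactly what merging "anonymized" files exploits.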
This article fits in really nicely with a fascinating, revelatory book I’m currently midway through: Jaron Lanier’s Who Owns The Future? A basic theme of this book is that internet technology devalues products and goods (files) and values services (software). One process through which this happens is that we humans accept the marvelous free stuff that the internet provides (free google searches, free amazon shipping, easily pirated music files) in exchange for allowing companies to snoop. The companies turn our aggregated data into dollars by selling to advertisers.
A side effect of this, Lanier explains, is that there is a loss of social freedom. At some point, a service such as Facebook gets to be so large that failing to join means that you are losing out on possibly rich social interactions. (Yes, I know there are those who walk among us who refuse to join Facebook. But these people are probably not reading this blog, particularly since our tracking ‘bots tell us that most of our readers come from Facebook referrals. Oops. Was I allowed to reveal that?) So perhaps you shouldn’t complain about being snooped on, since you signed away your privacy rights. (You did read the entire user agreement, right? Raise your hand if you did. Thought so.) On the other hand, if you don’t sign, you become a social pariah. (Well, an exaggeration. For now.)
Recently, I installed Ghostery, which tracks the automated snoopers that follow me during my browsing. Not only “tracks”, but also blocks. Go ahead and try it. It’s surprising how many different sources are following your every on-line move.
I have mixed feelings about blocking this data flow. The data-snooping industry is big business and is responsible, in part, for the boom in stats majors and, more importantly, the boom in stats employment. So, indirectly, data-snooping is paying my salary. Lanier has an interesting solution: individuals should be paid for their data, particularly when it leads to value. This means the era of ‘free’ is over: we might end up paying for searches and for reading Wikipedia. But he makes a persuasive case that the benefits exceed the costs. (Well, I’m only half-way through the book. But so far, the case is persuasive.)
DataFest is growing larger and larger. This year, we hosted an event at Duke (Mine organized this) with teams from NCSU and UNC, and at UCLA (Rob organized) with teams from Pomona College, Cal State Long Beach, University of Southern California, and UC Riverside. We are very grateful to Vaclav Petricek at eHarmony for providing us with the data, which consisted of roughly one million “user-candidate” pairs, and a couple of hundred variables including “words friends would use to describe you”, ideal characteristics in a partner, the importance of those characteristics, and the all-important ‘did she email him’ and ‘did he email her’ variables.
The students had a great time, and worked hard for 48 hours to prepare short presentations for the judges. This is the third year we’ve done this, and I’m growing impressed with the growing technical skills of the students. (Which makes our life a lot easier, as far as providing help goes.) Or maybe it’s just that I’ve been lucky enough to get more and more “VIP Consultants” (statisticians from off-campus) and talented and dedicated grad students to help out, so that I can be comfortably oblivious to the technical struggles. Or all of the above.
One thing I noticed that will definitely require some adjustment to our curriculum: our students had a hard time generating interesting questions from these data. Part of the challenge is to look at a large, rich dataset and think “What can I show the world that the world would like to know?” Too many students went directly to model fitting without making visuals or engaging with the content of the materials (a surprise, since we thought they would find this material much more engaging than last year’s micro-lending transaction data) or strategizing around some Big Questions. Most of them managed to pull it off in the end, but they would have done better to brainstorm some good questions to pursue, and much better to start with the visuals.
One of the fun parts of DataFest is the presentations. Students have only 5 minutes and 2 slides to convince the judges of their worthiness. At UCLA, because we were concerned about having too many teams for the judges to endure, we had two rounds. First came a “speed dating” round in which participants had only 60 seconds and one slide. We surprised them by announcing, at the start, that to move on to the next round they would have to merge their team with one other team, and so these 60-second presentations should be viewed as pitches to potential partners. We had hoped that teams would match on similar themes, and this did happen; but many matches were between teams of friends. The “super teams” were then allowed to make a 5-minute presentation, and awards were given to these large teams. The judges gave two awards for Best Insight (one to a super-team from Pomona College and another to a super-team from UCLA) and one for Best Visualization (to the super-team from USC). We did have two inter-collegiate super-teams (UCLA/Cal State Long Beach and UCLA/UCR) make it to the final round.
If you want to host your own DataFest, drop a line to Mine or me and we can give you lots of advice. And if you sit on a large, interesting data set we can use for next year, definitely drop us a line!
I’m often on the hunt for datasets that will not only work well with the material we’re covering in class, but will (hopefully) pique students’ interest. One sure choice is to use data collected from the students, as it is easy to engage them with data about themselves. However I think it is also important to open their eyes to the vast amount of data collected and made available to the public. It’s always a guessing game whether a particular dataset will actually be interesting to students, so learning from the datasets they choose to work with seems like a good idea.
Below are a few datasets that I haven’t seen in previous project assignments. I’ve included the research question the students chose to pursue, but most of these datasets have multiple variables, so you might come up with different questions.
1. Religious service attendance and moral beliefs about contraceptive use: The data are from a February 2012 Pew Research poll. To download the dataset, go to http://www.people-press.org/category/datasets/?download=20039620. You will be prompted to fill out some information and will receive a zipped folder including the questionnaire, methodology, the “topline” (distributions of some of the responses), as well as the raw data in SPSS format (.sav file). Below I’ve provided some code to load this dataset in R, and then to clean it up a bit. Most of the code should apply to any dataset released by Pew Research.
```r
# read data
library(foreign)
d_raw = as.data.frame(read.spss("Feb12 political public.sav"))

# clean up
library(stringr)
d = lapply(d_raw, function(x) str_replace(x, " \\[OR\\]", ""))
d = lapply(d, function(x) str_replace(x, "\\[VOL. DO NOT READ\\] ", ""))
d = lapply(d, function(x) str_replace(x, "\222", "'"))
d = lapply(d, function(x) str_replace(x, " \\(VOL.\\)", ""))
d$partysum = factor(d$partysum)
levels(d$partysum) = c("Refused", "Democrat", "Independent", "Republican",
                       "No preference", "Other party")
```
The student who found this dataset was interested in examining the relationship between religious service attendance and views on contraceptive use. The code below organizes the levels of these variables in a meaningful way and takes a quick peek at a contingency table.
```r
# variables of interest
d$attend = factor(d$attend, levels = c("More than once a week", "Once a week",
                                       "Once or twice a month", "A few times a year",
                                       "Seldom", "Never", "Don't know/Refused"))
d$q40a = factor(d$q40a, levels = c("Morally acceptable", "Morally wrong",
                                   "Not a moral issue", "Depends on situation",
                                   "Don't know/Refused"))
table(d$attend, d$q40a)
```
2. Social network use and reading: Another student was interested in the relationship between number of books read in the last year and social network use. This dataset is provided by the Pew Internet and American Life Project. You can download a .csv version of the data file at http://www.pewinternet.org/Shared-Content/Data-Sets/2012/February-2012–Search-Social-Networking-Sites-and-Politics.aspx. The questionnaire can also be found at this website. One of the variables of interest, number of books read in the past 12 months (q2), is recorded using the following scheme:
This could be used to motivate a discussion about the importance of doing exploratory data analysis before jumping into inferential tests (asking, for example, “Why are there no people who read more than 99 books?”), and also about the importance of checking the codebook.
3. Parental involvement and disciplinary actions at schools: The 2007-2008 School Survey on Crime and Safety, conducted by the National Center for Education Statistics, contains school level data on crime and safety. The dataset can be downloaded at http://nces.ed.gov/surveys/ssocs/data_products.asp. The SPSS formatted version of the data file (.sav) can be loaded in R using the read.spss() function in the foreign library (used above in the first data example). The variables of interest for the particular research question the student proposed are parent involvement in school programs (C0204) and number of disciplinary actions (DISTOT08), but the dataset can be used to explore other interesting characteristics of schools, like type of security guards, whether guards are armed with firearms, etc.
4. Dieting in school-aged children: The Health Behavior in School-Aged Children is an international survey on health-risk behaviors of children in grades 6 through 10. The 2005-2006 US dataset can be found at http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/28241. You will need to log in to download the dataset, but you can do so using a Google or a Facebook account. There are multiple versions of the dataset posted, and the Delimited version (.tsv) can be easily loaded in R using the read.delim() function. The student who found this dataset was interested in exploring the relationship between race of the student (Q6_COMP) and whether or not the student is on a diet to lose weight (Q30). The survey also asks questions on body image, substance use, bullying, etc. that may be interesting to explore.
One common feature of the above datasets is that they are all observational/survey based as it’s more challenging to find experimental (raw) datasets online. Any suggestions?