{ "metadata": { "name": "130418-Data-driven journalism" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "\"Literate computing\" and computational reproducibility: IPython in the age of data-driven journalism" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As \"software eats the world\" and we become awash in the flood of quantitative information denoted by the \"Big Data\" buzzword, it's clear that informed debate in society will increasingly depend on our ability to communicate information that is based on data. And for this communication to be a truly effective *dialog*, it is necessary that the arguments made from data can be deconstructed, analyzed, rebutted or expanded by others. Since these arguments in practice often rely critically on the execution of code (whether an Excel spreadsheet or a proper program), we really need tools to effectively communicate narratives that combine code, data and the interpretation of the results.\n", "\n", "I will point out here two recent examples, taken from events in the news this week, where IPython has helped this kind of discussion, in the hopes that it can motivate a more informed style of debate where all the moving parts of a quantitative argument are available to all participants." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Insight, not numbers: from literate programming to literate computing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The computing community has known for decades about the [\"literate programming\"](http://en.wikipedia.org/wiki/Literate_programming) paradigm introduced by [Don Knuth](http://www-cs-faculty.stanford.edu/~uno/lp.html) in the 1970s and fully formalized in his famous 1992 book. Briefly, Knuth's approach proposes writing computer programs in a format that mixes the code and a textual narrative together, and from this format generating separate files: either actual code that can be compiled/executed by the computer, or a narrative document that *explains* the program and is meant for human consumption. The idea is that by allowing the authors to maintain a close connection between code and narrative, a number of benefits will ensue (clearer code, fewer programming errors, more meaningful descriptions than mere comments embedded in the code, etc.).\n", "\n", "I don't take any issue with this approach per se, but I don't personally use it because it's not very well suited to the kinds of workflows that I need in practice. These require the frequent execution of small fragments of code, in an iterative cycle where code is run to obtain partial results that inform the next bit of code to be written. Such is the nature of interactive exploratory computing, which is the bread and butter of many practicing scientists. This is the kind of workflow that led me to create IPython over a decade ago, and it continues to inform basically every decision we make in the project today.\n", "\n", "As [Hamming](http://en.wikipedia.org/wiki/Richard_Hamming) famously said in 1962, *\"The purpose of computing is insight, not numbers.\"* IPython tries to help precisely in this kind of usage pattern of the computer, in contexts where there is no clear notion in advance of what needs to be done, so the user is the one driving the computation.
However, IPython also tries to provide a way to *capture* this process, and this is where we join back with the discussion above: while literate programming focuses on providing a narrative description of the structure of an algorithm, our working paradigm is one where the *act of computing* occupies center stage.\n", "\n", "From this perspective, we therefore refer to the workflow exposed by these kinds of computational notebooks (not just IPython, but also Sage, Mathematica and others) as \"literate computing\": it is the weaving of a narrative directly into a live computation, interleaving text with code and results to construct a complete piece that relies equally on the textual explanations and the computational components. For the goals of communicating results in scientific computing and data analysis, I think this model is a better fit than the literate programming one, which is rather aimed at developing software in tight concert with its design and explanatory documentation. I should note that we have some ideas on how to make IPython stronger as a tool for \"traditional\" literate programming, but it's a bit early for us to focus on that, as we first want to solidify the computational workflows possible with IPython.\n", "\n", "As I mentioned [in a previous blog post about the history of the IPython notebook](http://blog.fperez.org/2012/01/ipython-notebook-historical.html), the idea of a computational notebook is neither new nor ours. Several IPython developers had used other similar systems extensively for a long time, and we took lots of inspiration from them. What we have tried to do, however, is to take a fresh look at these ideas, so that we can build a computational notebook that provides the best possible experience for computational work *today*. That means taking the existence of the Internet as a given in terms of using web technologies, an architecture based on well-specified protocols and reusable low-level formats (JSON), a language-agnostic view of the problem and a concern for the entire cycle of computing from the beginning. We want to build a tool that is just as good for individual experimentation as it is for collaboration, communication, publication and education." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Government debt, economic growth and a buggy Excel spreadsheet: the code behind the politics of fiscal austerity" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the last few years, extraordinarily contentious debates have raged in the circles of political power and fiscal decision making around the world regarding the relation between government debt and economic growth. One of the centerpieces of this debate was a paper from Harvard economists C. Reinhart and K. Rogoff, later turned into a [best-selling book](http://www.reinhartandrogoff.com), that argued that beyond 90% debt ratios, economic growth would plummet precipitously.\n", "\n", "This argument was used (amongst others) by politicians to justify some of the extreme austerity policies that have been foisted upon many countries in the last few years. On April 15, a team of researchers from U. Massachusetts [published a re-analysis](http://www.peri.umass.edu/236/hash/31e2ff374b6377b2ddec04deaa6388b1/publication/566) of the original data, showing that Reinhart and Rogoff had made both fairly obvious coding errors in their original Excel spreadsheets and some statistically questionable manipulations of the data. Herndon, Ash and Pollin (the U.
Mass authors) published all their scripts in R so that others could inspect their calculations.\n", "\n", "Two posts from [the Economist](http://www.economist.com/news/finance-and-economics/21576362-seminal-analysis-relationship-between-debt-and-growth-comes-under) and [the Roosevelt Institute](http://www.nextnewdeal.net/rortybomb/researchers-finally-replicated-reinhart-rogoff-and-there-are-serious-problems) nicely summarize the story with a more informed policy and economics discussion than I can make. James Kwak has a [series of posts](http://baselinescenario.com/2013/04/19/fatal-sensitivity) that dive into [technical detail](http://baselinescenario.com/2013/04/18/are-reinhart-and-rogoff-right-anyway/) and question the horrible choice of [using Excel](http://baselinescenario.com/2013/04/18/more-bad-excel/), a tool that should for all intents and purposes be banned from serious research as it entangles code and data in ways that more or less guarantee serious errors in anything but trivial scenarios. Victoria Stodden just wrote [an excellent new post](http://blog.stodden.net/2013/04/19/what-the-reinhart-rogoff-debacle-really-shows-verifying-empirical-results-needs-to-be-routine/) with specific guidance on practices for better reproducibility; here I want to take a narrow view of these same questions focusing strictly on the tools.\n", "\n", "As reported in Mike Konczal's piece at the Roosevelt Institute, Herndon et al. had to reach out to Reinhart and Rogoff for the original code, which hadn't been made available before (apparently causing much frustration in economics circles). It's absolutely unacceptable that major policy decisions that impact millions worldwide had until now hinged effectively on the unverified word of two scientists: no matter how competent or honorable they may be, we know everybody makes mistakes, and in this case there were both egregious errors and debatable assumptions. As Konczal says, \"all I can hope is that future historians note that one of the core empirical points providing the intellectual foundation for the global move to austerity in the early 2010s was based on someone accidentally not updating a row formula in Excel.\" To that I would add the obvious: this should *never* have happened in the first place, as we should have been able to inspect that code and data from the start.\n", "\n", "Now, moving over to IPython, something interesting happened: when I saw the report about the Herndon et al. paper and realized they had published their R scripts for all to see, I posted this request on Twitter:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
> Economics experts to turn analysis from Herndon, Ash & Pollin into IPython notebook? Data-driven journalism @jseabold peri.umass.edu/236/hash/31e2f\u2026\n", ">\n", "> — Fernando Perez (@fperez_org) April 18, 2013
\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It seemed to me that the obvious thing to do would be to create a document that wove the analysis and a bit of narrative together using IPython, so it could more easily serve as a starting point for further discussion. What I didn't really expect is that it would take less than *three hours* for [Vincent Arel-Bundock](http://www-personal.umich.edu/~varel), a PhD student in Political Science at U. Michigan, to come through with a solution:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
> @jseabold @fperez_org here you go. only things missing: loess & linear hypo. code could be cleaner, but hey, it works nbviewer.ipython.org/5409848\n", ">\n", "> — Vincent Arel-Bundock (@VincentAB) April 18, 2013
\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I suggested that he turn this example into a proper repository on github with the code and data, which he quickly did:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
> @fperez_org @jseabold makes sense: github.com/vincentarelbun\u2026\n", ">\n", "> — Vincent Arel-Bundock (@VincentAB) April 19, 2013
\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So now we have a [full IPython notebook](http://nbviewer.ipython.org/urls/raw.github.com/vincentarelbundock/Reinhart-Rogoff/master/reinhart-rogoff.ipynb), kept in a proper github repository. This repository can enable an informed debate about the statistical methodologies used for the analysis, and now anyone who simply installs the SciPy stack can not only run the code as-is, but explore new directions and contribute to the debate in a properly informed way." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "On to the heavens: the New York Times' infographic on NASA's Kepler mission" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As I was discussing the above with Vincent on Twitter, I came across this post by [Jonathan Corum](http://13pt.com), an information designer who works as NY Times science graphics editor:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
> That Kepler graphic was my first try using ipython+pandas and coffeescript+d3 in the same project. nyti.ms/115HXag\n", ">\n", "> — Jonathan Corum (@13pt) April 18, 2013
\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The post links to a gorgeous, [animated infographic](http://www.nytimes.com/interactive/science/space/keplers-tally-of-planets.html?smid=tw-share) that summarizes the results that [NASA's Kepler spacecraft](http://kepler.nasa.gov) has obtained so far, and which accompanies a full [article](http://www.nytimes.com/2013/04/19/science/space/2-new-planets-are-most-earth-like-yet-scientists-say.html?pagewanted=all) at the NYT on Kepler's most recent results: a pair of planets that seem to have just the right features to possibly support life, a quick 1,200-light-year hop from us.\n", "\n", "Jonathan indicated that he later converted his notebook to a Python script for version control and automation, though I explained to him that he could have continued using the notebook, since the `--script` flag would give him a `.py` file if needed, and it's also possible to execute a notebook just like a script, with a bit of additional support code:\n", "\n", "
> @13pt pleasure! Note: with --script at startup, a .py is saved alongside nb.You can also 'run' a notebook: gist.github.com/jonathan-taylo\u2026\n", ">\n", "> — Fernando Perez (@fperez_org) April 19, 2013
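\n", "\n", "As a rough sketch of the idea (an illustration of my own, not the gist linked in the tweet above), such support code only needs to read the notebook's JSON and execute its code cells in order; in the nbformat-3 layout, code cells live under `worksheets` / `cells`, with their source stored in `input`. Cells that rely on IPython magics would still need the real machinery from the gist, and the filename below is hypothetical:\n", "\n", "```python\n", "import json\n", "\n", "def run_notebook(path, ns=None):\n", "    # Naive runner for an nbformat-3 notebook: execute its code cells in order.\n", "    # Cells using IPython magics or shell escapes will not work under plain exec.\n", "    ns = {} if ns is None else ns\n", "    with open(path) as f:\n", "        nb = json.load(f)\n", "    for worksheet in nb['worksheets']:\n", "        for cell in worksheet['cells']:\n", "            if cell['cell_type'] == 'code':\n", "                exec(''.join(cell['input']), ns)\n", "    return ns\n", "\n", "# Hypothetical usage (the filename is made up for illustration):\n", "# results = run_notebook('kepler_analysis.ipynb')\n", "```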
\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case Jonathan's code isn't publicly available, but I am still very happy to see this kind of usage: it's already a step in the right direction, and as more of this analysis is done with open-source tools, we move further towards the possibility of an informed discussion around data-driven journalism.\n", "\n", "I also hope he'll perhaps release some of the code later on, so that others can build upon it for similar analyses. I'm sure lots of people would be interested, and it wouldn't detract in any way from the interest in his own work, which is strongly tied to the rest of the NYT editorial resources and strengths." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Looking ahead from IPython's perspective" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our job with IPython is to think deeply about questions regarding the intersection of computing, data and science, but it's clear to me at this point that we can contribute in contexts beyond pure scientific research. I hope we'll be able to provide folks who interact directly with the public, such as journalists, with tools that support a more informed and productive debate.\n", "\n", "Coincidentally, UC Berkeley will be hosting a [symposium on data and journalism](http://multimedia.journalism.berkeley.edu/blog/2013/mar/20/our-first-ever-data-journalism-symposium) on May 4, and in recent days I've had very productive interactions with folks in this space on campus. [Cathryn Carson](http://history.berkeley.edu/people/cathryn-carson) currently directs the newly formed [D-Lab](http://dlab.berkeley.edu), whose focus is precisely the use of quantitative and data methods in the social sciences, and her team has recently been teaching workshops on using Python and R for social scientists. And just last week I lectured in [Raymond Yee's](http://www.ischool.berkeley.edu/people/faculty/raymondyee) course (from the School of Information), where they are using the notebook extensively, following Wes McKinney's excellent [Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do) as the class textbook. Given all this, I'm fairly optimistic about the future of productive dialog and collaborations on campus, especially since we have a lot of the IPython team working full-time here." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** as usual, this post is available as an [IPython notebook](http://nbviewer.ipython.org/urls/raw.github.com/fperez/blog/master/130418-Data-driven%20journalism.ipynb) in my [blog repo](https://github.com/fperez/blog)." ] } ], "metadata": {} } ] }