The Iraq War Diary – An Initial Grep

Posted: October 29th, 2010 | Filed under: Data Analysis, Philosophy of Data | 16 Comments »

Editors’ Note: While data itself is rarely a source of controversy, dataists.com supports pursuing a data-centric view of conflict. Here, Mike Dewar examines the Wikileaks Iraq war documents with sobering results. Hackers should note the link to his source code and methods at the end of the post.

“…any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.” — John Donne, Meditation XVII

The Iraq War logs recently released by Wikileaks are contained in a 371,729,121-byte CSV file with 390,849 rows and 34 columns. The columns contain dates, locations, reports, category information, and counts of the killed and wounded. The events span 2004-11-06 12:37:00 to 2009-04-23 12:30:00 and lie within the bounding box defined by (22.5, 22.4) and (49.6, 51.8). Row 4 describes a female walking into a crowd and detonating the explosives and ball bearings she was wrapped in, killing 35 and wounding 36. Searching for events mentioning ‘ball bearing’ returns 503 events.
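By way of illustration, counting rows and grepping for a phrase like ‘ball bearing’ needs only a few lines. Here is a rough sketch in R; the file name "iraq.csv" and the column name "Summary" are assumptions standing in for whatever the released CSV actually uses.

```r
# Hypothetical sketch: load the war logs and grep the report text.
# The file name and the report-text column name are assumptions.
logs <- read.csv("iraq.csv", stringsAsFactors = FALSE)

nrow(logs)   # number of events (rows)
ncol(logs)   # number of columns

# Events whose report text mentions ball bearings (case-insensitive).
hits <- grepl("ball bearing", logs$Summary, ignore.case = TRUE)
sum(hits)
```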

There were 65,349 Improvised Explosive Device (IED) explosions between the start of 2004 and the end of 2009. Of these, 1,794 had one enemy killed in action. The month that saw the highest number of explosions was May of 2007, when Iraq experienced 2,080 IED explosions. During this month 693 civilians were killed, 85 enemies were killed and 93 friendlies were killed, a ratio of 3.89 civilian deaths per combatant death. On the first day of May there were 49 IED explosions in which 3 people were killed.
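The monthly counts behind figures like these come from a simple aggregation. A rough sketch, reusing the hypothetical `logs` data frame above and assuming made-up column names (Date, Category, CivilianKIA, EnemyKIA, FriendlyKIA) in place of the real ones:

```r
# Hypothetical sketch: monthly IED counts and the civilian-to-combatant ratio.
ieds  <- logs[logs$Category == "IED explosion", ]
month <- format(as.POSIXct(ieds$Date), "%Y-%m")

sort(table(month), decreasing = TRUE)[1]   # the month with the most explosions

may07      <- ieds[month == "2007-05", ]
civilians  <- sum(may07$CivilianKIA, na.rm = TRUE)
combatants <- sum(may07$EnemyKIA, na.rm = TRUE) + sum(may07$FriendlyKIA, na.rm = TRUE)
civilians / combatants   # the ratio quoted above
```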

Location of all IED explosions as reported in the Wikileaks Iraq War Diary

108 different categories are used to categorise all but 6 events. The category with the most events is ‘IED explosion’ with 65,439 events, followed by ‘direct fire’ with 57,815 events. The category ‘recon threat’ has a single event, which occurred at 8am on the 17th of April, 2009, when 25 people with 6 cars were noticed in front of a police station in Basra. There are 325 ‘rock throwing’ events and 325 ‘assassination’ events.

There are 1,211 mentions of the word ‘robot’, 4,710 mentions of the word ‘UAV’, 1,332 mentions of the word ‘predator’ and 443 mentions of the word ‘reaper’. The first appearance of one of these keywords is on the 3rd of October, 2006. There are 445 mentions of one or more of the words ‘contractor’, ‘blackwater’ or ‘triple canopy’.

Density showing the distribution over time of events mentioning contractors and drones

The joint forces report that 108,398 people lost their lives in Iraq during 2004-2010: 65,650 civilians, 15,125 members of the host nation forces, 23,857 enemy combatants, and 3,766 friendly combatants.

The number of deaths per month as reported in the Wikileaks Iraq War Diary

Please don’t believe any of this. Go instead to the data and have a look for yourself. All the code that generated this post is available on GitHub at http://github.com/mikedewar/Iraq-War-Diary-Analysis. You can also see what others have been saying; the Guardian and the New York Times, for example, have great write-ups.


What’s the use of sharing code nobody can read?

Posted: October 21st, 2010 | Filed under: Philosophy of Data | 8 Comments »

The basic data science pipeline is on its way to becoming an open one. From Open Data, through open-source analysis, to results released under the Creative Commons, every step of data science can be performed openly.

The problems of releasing data openly are being overcome aggressively via sites such as Wikileaks, peacefully through movements such as OpenGov and data.gov.uk, and commercially via sites like Infochimps.

The concept of Open Source is now well known. Through programs from sed to Firefox, open source software is a thriving part of the software ecosystem. This is especially important when performing analysis on open data: why should we be trusted if we don’t tell everyone how we analyzed the data?

At the end of the pipeline, the Creative Commons is becoming more mainstream: for example, much of the image content on Flickr is CC licensed. Authors like Cory Doctorow are proving that creative people can build a career around releasing their work under Creative Commons licenses. Larry Lessig, in a brilliant interview with Stephen Colbert, shows how value can be added incrementally to a creative work without anyone losing out.

The central part of this pipeline – Open Analysis – has a basic problem: what’s the use of sharing analysis nobody can read or understand? It’s great that people put their analysis online for the world to see, but what’s the point if that analysis is written in dense code that can barely be read by its author?

This is still a problem even when your analysis code is beautifully laid out in a high-level scripting language and well commented. The chances are that the reader who is deeply moved by some statistical analysis of the latest Wikileaks release still can’t read, or critique, the code that generated it.

The technological problems of sharing code are now all but solved: sites like SourceForge and GitHub allow the sharing, dissemination, and tracking of source code. Projects such as CRAN (for R packages) and MLOSS (Machine Learning Open Source Software) colocate code, binaries, documentation, and package information, making finding the code an easy job. There have been several attempts at making the code itself easy to read. We’ve got beautiful documentation generators, but these require careful commenting, and all you really end up with are those comments pretty-printed – not so great for expressing your modeling assumptions. Another attempt at readable code is Literate Programming, which encourages you to write the code and its documentation at the same time but, again, is labour intensive. And this, I think, is at the heart of the problem of writing readable code: it’s just plain hard to do.

Who’s got the time to write a whole PDF every time you want to draw a bar chart? We’ve got press deadlines, conference deadlines, and a public attention span measurable in hours. Writing detailed comments is not only time consuming, it’s often a seriously complicated affair akin to writing an academic paper. And, unless we’re actually writing an academic paper, it’s mostly a thankless task.

My contention is this: nobody is going to consistently write readable code, ever. It’s simply too time consuming and the immediate rewards to the coder are negligible. Yet it’s important for others to be able to understand our analysis if they’re making decisions, as citizens or as subjects, based on this analysis. What is to be done?

The answer lies, I think, in convention. The web development community has nailed this with projects like Ruby on Rails and Django. If I’m working within Ruby on Rails and I name my objects according to convention, then I get a lot of code for free – I actually save time by writing good code. This saving is not a projected one (“you’re not going to be able to read that code in two years”) but an immediate and obvious one. If I abide by the Ruby on Rails structure, then I don’t have to build my databases from scratch. Web forms are automagically generated. My life is made considerably easier and, without trying, my code has a much better structure.

So do we have any data science conventions? My argument is ‘hell yes’: if I don’t abide by some strong data science conventions then I’ll get into well-justified trouble. Are the raw data available? Have I made the preprocessing steps clear? Are my data properly normalized? Are my assumptions valid and openly expressed? Has my algorithm converged? Have the functions I have written been unit tested? Have I performed a proper cross validation of the results?
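As a purely illustrative example of what encoding one of these conventions might look like, here is a minimal unit-test sketch using the testthat package; the function and the test are hypothetical, not taken from any real project.

```r
# Hypothetical sketch: a convention captured as a unit test with testthat.
library(testthat)

# A preprocessing step whose behaviour readers should be able to verify.
normalise <- function(x) {
  (x - mean(x)) / sd(x)
}

test_that("normalise() centres and scales its input", {
  z <- normalise(c(1, 2, 3, 4, 5))
  expect_equal(mean(z), 0)
  expect_equal(sd(z), 1)
})
```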

I think that a project like ProjectTemplate, which imposes a useful structure on a project written in R, is a great start. ProjectTemplate treads a fine line: not upsetting those who like to code close to the metal, whilst rewarding those who follow some simple conventions. ProjectTemplate coaxes us into writing well-structured projects by saving us time. For example, it currently provides generic helper functions that read and format most common data files placed into the data/ folder, producing a well-structured R object with virtually zero effort on the part of the coder.
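For a flavour of the convention-by-default approach, here is a minimal sketch based on ProjectTemplate’s documented workflow; the project name is made up, and the package’s own documentation remains the authoritative reference for its interface.

```r
# Hypothetical sketch: create and load a conventionally structured R project.
library(ProjectTemplate)

create.project("wikileaks-analysis")   # sets up data/, munge/, src/ and friends

setwd("wikileaks-analysis")
load.project()   # reads files in data/, runs munging scripts, loads declared libraries
```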

A lot of code already exists to implement standard data science conventions. From cross validation packages to unit tests, our conventions are already well encapsulated. Collecting these tools together into a data science templating system would allow us to formalize best practices and help with teaching the ‘carpentry’ aspects of data science. Most importantly, it would allow readers to get a clear view of the analysis, using well-documented data science conventions as a navigational tool.

At a recent meeting in NYC a well-known data scientist said something like “are awk and grep the best we can do?” which, though a little incendiary, raised a serious question. Are we really destined, time and time again, to re-create a data science pipeline every time a new data set comes our way? Or could we come to some agreement that there is a set of common procedures that underlie all our projects?

So I’m interested in hearing what the data science community thinks our conventions are, and then in building these into software like ProjectTemplate. Please leave your ideas in the comments; by automating these conventions, we can start to build more readable code structures. I’ll report on how these conventions evolve as I go along. Maybe we don’t have to reinvent the wheel over and over again, even if it does mean accepting some loose conventions. In return, we can focus on the important aspects of analysis, and everyone else will find it much easier to trust what we have to say.


What Data Visualization Should Do: Simple Small Truth

Posted: October 14th, 2010 | Filed under: Philosophy of Data | 5 Comments »

Yesterday the good folks at IA Ventures asked me to lead off the discussion of data visualization at their Big Data Conference. I felt rather out of place among the high-profile venture capitalists and technologists in the room, but I welcome any opportunity to wax philosophical about the power and danger of conveying information visually.

I began my talk by referencing the infamous Afghanistan war PowerPoint slide because I believe it is a great example of spectacularly bad visualization, and of how good intentions can lead to disastrous results. As it turns out, the war in Afghanistan is actually very complicated, so by attempting to represent that complex problem in its entirety, much more is lost than gained. Sticking with that theme, yesterday I focused on three key things that I think data visualization should do:

  1. Make complex things simple
  2. Extract small information from large data
  3. Present truth, do not deceive

The emphasis is added to highlight the goal of all data visualization: to present an audience with simple small truth about whatever the data are measuring. To explore these ideas further, I provided a few examples.

As the Afghanistan war slide illustrates, networks are often the most poorly visualized data. This is frequently because those visualizing network data think it is a good idea to include all nodes and edges in the visualization. This, however, is not making a complex thing simple; it is making a complex thing ugly.

Below is an example of exactly this problem. On the left is a relatively small network (V: ~2,220 and E: ~4,400) with weighted edges. I have used edge thickness to illustrate weights, and used a basic force-directed algorithm in Gephi to position the nodes. This is a network hairball, and while it is possible to observe some structural properties in this example, many more subtle aspects of the data are lost in the mess.

The raw network hairball (left) and the simplified rendering of the same data (right)

On the right are the same data, but I have used information contained in the data to simplify the visualization. First, I performed a k-core analysis to remove all pendants and pendant chains in the data; an extremely useful technique I have mentioned several times before. Next, I used the weighted in-degree of each node as a color scale for the edges, i.e., the darker the blue, the higher the in-degree of the node the edges connect to. Then, I simply dropped the nodes from the visualization entirely. Finally, I added a threshold weight for the edges so that any edge below the threshold is drawn in the lightest blue. Using these simple techniques, the community structures are much more apparent and, more importantly, the means by which those communities are related are easily identified (note the single central node connecting nearly all communities).
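For those who want to experiment with these steps outside of Gephi, here is a rough sketch of the same ideas using the igraph package in R; the `edges` data frame (columns: from, to, weight) is a hypothetical stand-in for the network data, and the colour scale and threshold are arbitrary choices.

```r
# Hypothetical sketch: k-core pruning plus edge colouring by weighted in-degree.
library(igraph)

g <- graph_from_data_frame(edges, directed = TRUE)   # `edges` is assumed

# 1. Drop pendants and pendant chains: keep only vertices with coreness >= 2.
core <- coreness(g, mode = "all")
g <- delete_vertices(g, V(g)[core < 2])

# 2. Colour each edge by the weighted in-degree of the node it points to
#    (the darker the blue, the higher the in-degree).
win   <- strength(g, mode = "in", weights = E(g)$weight)
heads <- ends(g, E(g))[, 2]
blues <- colorRampPalette(c("#deebf7", "#08306b"))(100)
E(g)$color <- blues[cut(win[heads], breaks = 100, labels = FALSE)]

# 3. Draw any edge below a weight threshold in the lightest blue.
E(g)$color[E(g)$weight < quantile(E(g)$weight, 0.5)] <- blues[1]

# 4. Plot edges only; the nodes themselves are dropped from the picture.
plot(g, vertex.size = 0, vertex.label = NA, edge.arrow.size = 0)
```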

To discuss the importance of extracting small information from large data I used the visualization of the WikiLeaks Afghanistan War Diaries that I worked on this past summer. The original visualization is on the left, and while many people found it useful, its primary weakness is the inability to distinguish among the various attack types represented on the map. It is clear that activity gradually increased in specific areas over time; however, it is entirely unclear what activity was driving that evolution. A better approach is to focus on one attack type and attempt to glean information from that single dimension.

The original Afghanistan War Diary visualization (left) and the same map showing only explosive-hazard events (right)

On the right I have extracted only the ‘Explosive Hazard’ data from the full set and visualized it as before. Now it is easy to see that the technology of IEDs was a primary force in the war and, as has been observed before, that the main highway in Afghanistan significantly restricted the operations of forces.

Finally, to show the danger of data deception I replicated a chart published at the Monkey Cage a few months ago on the sagging job market for political science professors. On the left is my version of the original chart. At first glance, the decline in available assistant professorships over time is quite alarming. The steep slope conveys a message of general collapse in the job market. This, however, is not representative of the truth.

The assistant professorship chart with the y-axis scaled to the data limits (left) and scaled from zero (right)

Note that in the visualization on the left the y-axis goes from 450 to 700, which happen to be the limits of the data. Many data visualization tools, including ggplot2, which is used here, will scale their axes to the data limits by default. Often this is desirable (hence the default behavior), but in this case it conveys a dishonest perspective on the job market decline. As you can see from the visualization on the right, by scaling the y-axis from zero the decline is much less dramatic, though still relatively troubling for those of us who will be going on the job market in the not-too-distant future.
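To make the point concrete, here is a small ggplot2 sketch; the `jobs` data frame (columns: year, openings) is a hypothetical stand-in for the Monkey Cage data.

```r
# Hypothetical sketch: default data-limit scaling versus anchoring the y-axis at zero.
library(ggplot2)

p <- ggplot(jobs, aes(x = year, y = openings)) + geom_line()

p                        # default: y-axis spans only the data range, exaggerating the decline
p + expand_limits(y = 0) # anchored at zero: a more honest picture of the same data
```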

These ideas are very straightforward, which is why I think they are so important to consider when doing your own visualizations. Thanks again to IA Ventures for providing me a small soapbox in front of such a formidable crowd yesterday. As always, I welcome any comments or criticisms.

Cross-posted at Zero Intelligence Agents


Data Tools Contest Update

Posted: October 10th, 2010 | Filed under: R Explorations | No Comments »

At midnight this morning, Kaggle began accepting submissions for the data hacking contest that we announced on Thursday. Hopefully you’ve used the last few days to build predictions for the test data set. Once you submit your predictions, you’ll be able to see your position on the leaderboard. Good luck!


Using Data Tools to Find Data Tools, the Yo Dawg of Data Hacking

Posted: October 7th, 2010 | Filed under: R Explorations | 15 Comments »

by John Myles White and Drew Conway

Editors’ Note: One theme likely to recur on dataists.com is that data hackers love using their tools to analyze, visualize, and predict everything. Data hackers also love discovering and learning about new tools. So it should come as no surprise that Dataist contributors John Myles White and Drew Conway thought to develop a model that can predict which R packages a particular user would like. And in the spirit of friendly competition, they’re opening it up for others to participate!

Introduction

A graphical visualization of packages’ “suggestion” relationships. Affectionately referred to as the R Flying Spaghetti Monster. More info below.

As part of the kickoff for dataists, we’re announcing a data hacking contest tailored to the statistical computing community. Contestants will build a recommendation engine for R packages. The contest is being administered in collaboration with Kaggle. If you’re interested in the details of the contest, please read on.

By sponsoring this contest, we’re hoping to encourage the data hacking community to use its skills to build a recommendation engine that will help R programmers find the best packages on CRAN, the standard repository for R libraries. As with many data-driven projects, the question has evolved with the availability of data. We started with the question, “which packages are best?” and replaced it with the empirical question, “which packages are used most often?” This is quite a difficult question to answer as well, because the appropriate data set is neither readily available nor easily acquired. For that reason, we’ve settled on the more manageable question, “which packages are most often installed by normal R users?”

This last question could potentially be answered in a variety of ways. Our current approach uses a convenience sample of installation data that we’ve collected from volunteers in the R community, who kindly agreed to send us a list of the packages they have on their systems. We’ve anonymized this data and compiled a set of metadata-based predictors that allow us to predict the installation probabilities quite well. We’re releasing all of our current work, including the data we have and all of the code we’ve used so far for our exploratory analyses. The contest itself will go live on Kaggle on Sunday and will end four months from Sunday on February 10, 2011. The rules, prizes and official data sets are all described below.

Rules and Prizes

To win the contest, you need to predict the probability that a user U has a package P installed on their system for every pair (U, P). We’ll assess your performance using ROC methods, evaluated against a held-out test data set. The winning team will receive 3 UseR! books of their choosing. In order to win the contest, you’ll have to provide your analysis code to us by creating a fork of our GitHub repository. You’ll also be required to provide a written description of your approach. We’re asking for so much openness from the winning team because we want this contest to serve as a stepping stone for the R community. We’re also hoping that enterprising data hackers will extend the lessons learned through this contest to other programming languages.
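As a rough illustration of the ROC-based scoring (the official evaluation is handled by Kaggle), here is one way to compute the area under the ROC curve in base R; `installed` and `predicted` are hypothetical stand-ins for the true 0/1 labels and your predicted probabilities.

```r
# Hypothetical sketch: AUC via the Wilcoxon/Mann-Whitney rank formulation.
auc <- function(installed, predicted) {
  r     <- rank(predicted)
  n_pos <- sum(installed == 1)
  n_neg <- sum(installed == 0)
  # Sum of the positives' ranks, rescaled to lie between 0 and 1.
  (sum(r[installed == 1]) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
}

# Toy example: one pair is mis-ranked, giving an AUC of 0.75.
auc(c(0, 0, 1, 1), c(0.1, 0.4, 0.35, 0.8))
```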

Getting Started

To get started, you can go to GitHub to download the primary data sets and code. The sections below describe the data sets that you can download and the baseline model you should try to beat.

Data Sets

For this contest, there are really three data sets. At the start, you’ll want to download the heavily preprocessed data set that we’ll be providing to you through Kaggle. This data set is also available on GitHub, where it is labeled as training_data.csv. This file contains a matrix with roughly 100,000 rows and 16 columns, representing installation information for all existing R packages and 52 users. The test data set against which your performance will be evaluated contains approximately another 30,000 rows.

Each row of this matrix contains the following information:

  1. Package: The name of the current R package.
  2. DependencyCount: The number of other R packages that depend upon the current package.
  3. SuggestionCount: The number of other R packages that suggest the current package.
  4. ImportCount: The number of other R packages that import the current package.
  5. ViewsIncluding: The number of task views on CRAN that include the current package.
  6. CorePackage: A dummy variable indicating whether the current package is part of core R.
  7. RecommendedPackage: A dummy variable indicating whether the current package is a recommended R package.
  8. Maintainer: The name and e-mail address of the package’s maintainer.
  9. PackagesMaintaining: The number of other R packages that are being maintained by the current package’s maintainer.
  10. User: The numeric ID of the current user who may or may not have installed the current package.
  11. Installed: A dummy variable indicating whether the current package was installed by the current user.

In addition to these central predictors, we are including logarithmic transforms of the non-binary predictors, as we find that this improves the model’s fit to the full data set. For that reason, the last five columns of our data set are the following (a short sketch of this kind of transformation appears after the list):

  1. LogDependencyCount
  2. LogSuggestionCount
  3. LogImportCount
  4. LogViewsIncluding
  5. LogPackagesMaintaining
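Here is a minimal sketch of this kind of transformation, assuming a data frame `training` holding the columns above; since the counts can be zero, the log of one plus the count is used in the sketch, though the preprocessing script on GitHub remains the authoritative version.

```r
# Hypothetical sketch: add log-transformed versions of the count predictors.
count_cols <- c("DependencyCount", "SuggestionCount", "ImportCount",
                "ViewsIncluding", "PackagesMaintaining")
for (col in count_cols) {
  # log1p() avoids -Inf when a count is zero.
  training[[paste("Log", col, sep = "")]] <- log1p(training[[col]])
}
```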

The Kaggle data set is really the minimal amount of data you should use to build your model. Most of you will quickly want to move on to the raw metadata that we’re providing on GitHub. This second-level data set is contained in several normalized CSV files inside of the data directory:

  1. core.csv: The R and base packages are listed here as core packages.
  2. depends.csv: The full dependency graph for CRAN as of 8/28/2010. An edge between A and B indicates that A depends upon B. For example, ggplot2 depends upon plyr, but plyr does not depend upon ggplot2.
  3. imports.csv: The full import graph for CRAN as of 8/28/2010. An edge between A and B indicates that A imports B.
  4. installations.csv: A list of the packages installed on 52 users’ systems. Each row indicates whether or not user A has installed package B.
  5. maintainers.csv: A list of the current maintainers for each package. We use this instead of the Author field because it is generally easier to parse.
  6. packages.csv: A list of all of the packages contained in CRAN on 8/28/2010.
  7. recommended.csv: A list of the packages recommended for installation by the R Core team.
  8. suggests.csv: The full suggestion graph for CRAN as of 8/28/2010. An edge between A and B indicates that A suggests B.
  9. views.csv: A list of all of the packages indicated in each of the task views on CRAN as of 8/28/2010.

To give you a taste of this richer data set we’re providing, we’ve built a visualization of the suggestion graph found in suggests.csv:

In the graph above, the package names are sized and colored by in-degree centrality (i.e., larger and darker nodes have higher centrality), which you can think of as a very rough proxy for importance. If you’re interested in producing similar visualizations of this data, you can use Gephi to produce new graphics like this. To better explore the graph, toggle to full-screen mode.

For those interested, we’re also providing the R scripts we used to generate the metadata predictors we’re providing, in case you’d like to use them as examples of how to work with the raw data from CRAN. The relevant scripts are:

  1. extract_graphs.R: Extracts the dependency, import and suggestion graphs from CRAN.
  2. get_maintainers.R: Extracts the package maintainers from CRAN.
  3. get_packages.R: Extracts the names of all of the packages on CRAN.
  4. get_views.rb: Extracts the packages that are contained in each of the task views on CRAN. This program is written in Ruby, not R.

All of the other data sets described earlier were compiled by hand.

Please note that these data sets are normalized, so we are also providing preprocessing scripts that build one large data frame that contains all of the information we’ve used to build our predictive model. The lib/preprocess_data.R script performs the relevant merging and transformation operations, including logarithmic transformations that we’ve found improve predictive accuracy. The result of this merging is the training data set that we’re providing through Kaggle.
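To give a sense of how the merged data might be turned into baseline predictions, here is a hedged sketch of a simple logistic regression; the column names follow the list above, but the model, and the test file name, are illustrations rather than the official baseline.

```r
# Hypothetical sketch: a logistic-regression baseline on the log-scale predictors.
training <- read.csv("training_data.csv")

fit <- glm(Installed ~ LogDependencyCount + LogSuggestionCount + LogImportCount +
             LogViewsIncluding + LogPackagesMaintaining +
             CorePackage + RecommendedPackage,
           data = training, family = binomial())

# Predicted probability that each (user, package) pair in the test set is installed.
test <- read.csv("test_data.csv")   # file name is an assumption
predictions <- predict(fit, newdata = test, type = "response")
```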

For the truly dedicated, you should consider CRAN itself to be the raw data set for this contest. If you want to use predictors beyond those we’re giving you, you’ll want to download a copy of CRAN that you can work with locally. You can do this using the Perl script, fetch_cran.pl, that we’re providing. To be kind to the CRAN maintainers, this download script sleeps for ten seconds between each step in the spidering process. Obviously you can change this, but please be considerate about the amount of bandwidth you use if you do make changes.

Please note: until you are familiar with the preprocessed data sets that we’re providing, we suggest that you do not download a copy of CRAN. For many users, working directly with a raw copy of CRAN will not be efficient.

Closing Remarks

We think this contest can help focus data hackers on an unsolved problem: using our current data tools to help us find the best data tools. We hope you’ll consider participating and even extending this work to new contexts. Happy hacking!

More Info

If you have further questions about this contest, please direct them to John Myles White.


Outlier: Code Quarterly

Posted: October 3rd, 2010 | Filed under: outliers | No Comments »

by Vince Buffalo

Outlier: Something a data hacker keeps their eye on.

Peter Seibel is a hacker extraordinaire. He is the author of the best-selling introduction to Common Lisp, Practical Common Lisp, as well as Coders at Work, a collection of interviews with 15 great programmers. His new project is a “Hackademic Journal” entitled Code Quarterly.

This is something to watch for in the future. The format is excitingly different: less up-to-date buzz, more technical, in-depth pieces. One particularly exciting article type is the code read: an actual “guided tour” through beautiful code. Book reviews, Q&A interviews with prominent programmers, and technical explanations of concepts will also be featured.

Currently Peter is looking for writers and future readers. If you’re interested in writing for Code Quarterly, email him at editorQUACK@codequarterly.com (duck sound removed) and have a look at the writing guidelines and the topics they’re looking for. If you’re just interested in following the project, complete the form on Code Quarterly’s website.