Tutorial

About this tutorial

The best way to learn to use any tool is to actually use it. In this tutorial we will use agate to answer some basic questions about a dataset.

The data we will be using is a copy of the National Registry of Exonerations made on August 28th, 2015. This dataset lists individuals who are known to have been exonerated after having been wrongly convicted in United States courts. At the time the data was exported there were 1,651 entries in the registry.

Installing agate

Installing agate is easy:

pip install agate

Note

You should be installing agate inside a virtualenv. If for some crazy reason you aren’t using virtualenv you will need to add a sudo to the previous command.

Getting the data

Let’s start by creating a clean workspace:

mkdir agate_tutorial
cd agate_tutorial

Now let’s download the data:

curl -L -O https://github.com/onyxfish/agate/raw/master/examples/realdata/exonerations-20150828.csv

You will now have a file named exonerations-20150828.csv in your agate_tutorial directory.

Getting set up

Launch the Python interpreter:

python

Now let’s import our dependencies:

import agate

Loading data from a CSV

The Table is the basic class in agate. To create a table from a CSV we use Table.from_csv():

exonerations = agate.Table.from_csv('exonerations-20150828.csv')

With no other arguments specified, agate will automatically create an instance of TypeTester and use it to figure out the type of each column. TypeTester is a “best guess” approach to determining the kinds of data in your table. It can guess wrong. In that case you can create a TypeTester manually and use the force argument to override its guess for a specific column:

tester = agate.TypeTester(force={
    'false_evidence': agate.Boolean()
})

exonerations = agate.Table.from_csv('exonerations-20150828.csv', column_types=tester)

If you already know the types of your data you may wish to skip the TypeTester entirely. You may pass sequences of column names and column types to Table.from_csv() as the column_names and column_types arguments, respectively.
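For example, here is a quick sketch using a hypothetical three-column CSV; the real exonerations file has many more columns, and you would need to supply one name and one type per column:

column_names = ['last_name', 'first_name', 'age']
column_types = [agate.Text(), agate.Text(), agate.Number()]

people = agate.Table.from_csv('some_other_file.csv', column_names, column_types)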

For larger datasets the TypeTester can be slow to evaluate the data. In that case you can specify a limit argument to restrict the amount of data it will use to infer types:

tester = agate.TypeTester(limit=100)

exonerations = agate.Table.from_csv('exonerations-20150828.csv', column_types=tester)

The dataset we are using in this tutorial is simple enough that we can rely on the built-in TypeTester to guess quickly and accurately.

Note

agate’s built-in CSV reader() and writer() support unicode and other encodings for both Python 2 and Python 3.

Note

agate also has Table.from_json() for creating tables from JSON data.
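For example (a sketch only; the tutorial doesn’t ship a JSON version of this data, so the filename here is hypothetical):

exonerations_from_json = agate.Table.from_json('exonerations-20150828.json')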

Aggregating column data

With the basics out of the way, let’s do some actual analysis. Analysis begins with questions, so let’s ask some.

Q: How many exonerations involved a false confession?

Answering this question involves counting the number of True values in the false_confession column. When we created the table we specified that the data in this column contained Boolean data. Because of this, agate has taken care of coercing the original text data from the CSV into Python’s True and False values.

We’ll answer the question by using an instance of Count, which is a type of Aggregation. Aggregations are used to perform “column-wise” calculations. That is, they derive a new single value from the contents of a column. In the case of Count, it will tell us how many times a particular value appears in the column.

An Aggregation is applied to a table using Table.aggregate().

It sounds complicated, but it’s really simple. Putting it all together looks like this:

num_false_confessions = exonerations.aggregate(agate.Count('false_confession', True))

print(num_false_confessions)
211

Let’s look at another example, this time using a numerical aggregation.

Q: What was the median age of exonerated individuals at time of arrest?

median_age = exonerations.aggregate(agate.Median('age'))

print(median_age)

Answer:

/Users/onyxfish/src/agate/agate/warns.py:17: NullCalculationWarning: Column "age" contains nulls. These will be excluded from Median calculation.
  ), NullCalculationWarning)
/Users/onyxfish/src/agate/agate/warns.py:17: NullCalculationWarning: Column "age" contains nulls. These will be excluded from Percentiles calculation.
  ), NullCalculationWarning)
26

The answer to our question is “26 years old.” However, as the warnings indicate, not every exonerated individual in the data has a value for the age column. The Median statistical operation has no standard way of accounting for null values, so it removes them before running the calculation.
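If you’ve decided the nulls are acceptable, you can silence these warnings with Python’s standard warnings module. (A sketch; it assumes NullCalculationWarning is exposed at the top level of the agate package.)

import warnings

# Suppress agate's null-calculation warnings for subsequent aggregations.
warnings.filterwarnings('ignore', category=agate.NullCalculationWarning)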

Q: How many individuals do not have an age specified in the data?

Now that we know there are null values in the age column, we might worry about our sample size. What if most of the rows don’t have an age?

num_without_age = exonerations.aggregate(agate.Count('age', None))

print(num_without_age)

Answer:

9

Only nine rows in this dataset don’t have age, so it’s certainly still useful to compute a median. However, we might still want to filter those rows out so we could have a consistent sample for all of our calculations. In the next section you’ll learn how to do just that.

Different aggregations can be applied depending on the type of data in each column. If none of the provided aggregations suit your needs you can use Summary to apply an arbitrary function to a column. If that still doesn’t suit your needs you can always create your own aggregation from scratch by subclassing Aggregation.
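As a quick sketch of Summary (the lambda and the under-18 threshold here are just for illustration), this counts how many exonerated individuals were younger than 18 at the time of arrest. The function receives the whole Column object:

num_minors = exonerations.aggregate(agate.Summary('age', agate.Number(),
    lambda col: sum(1 for v in col.values() if v is not None and v < 18)))

print(num_minors)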

Selecting and filtering data

So what if those rows with no age were going to flummox our analysis? Agate’s Table class provides a full suite of SQL-like operations including Table.select() for grabbing specific columns, Table.where() for selecting particular rows and Table.group_by() for grouping rows by common values.
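Table.select() is the simplest of the three. For instance, this sketch keeps just three of the columns:

names_and_ages = exonerations.select(['last_name', 'first_name', 'age'])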

Let’s use Table.where() to filter our exonerations table to only those individuals that have an age specified.

with_age = exonerations.where(lambda row: row['age'] is not None)

You’ll notice we provide a lambda function to Table.where(). This function is applied to each row and, if it returns True, the row is included in the output table.

A crucial thing to understand about these table methods is that they return new tables. In our example above exonerations was a Table instance and we applied Table.where(), so with_age is a new, different Table. The tables themselves can’t be changed. You can create new tables with these methods, but you can’t modify them in-place. (If this seems weird, just trust me. There are lots of good computer science-y reasons to do it this way.)

We can verify this did what we expected by counting the rows in the original table and rows in the new table:

old = len(exonerations.rows)
new = len(with_age.rows)

print(old - new)
9

Nine rows were removed, which matches the number of nulls we had already identified in the column.

Now if we calculate the median age of these individuals, we don’t see the warning anymore.

median_age = with_age.aggregate(agate.Median('age'))

print(median_age)
26

Computing new columns

In addition to “column-wise” aggregations there are also “row-wise” computations. Computations go through a Table row-by-row and derive a new column using the existing data. To perform row computations in agate we use subclasses of Computation.

When one or more instances of Computation are applied with the Table.compute() method, a new table is created with additional columns.

Q: How long did individuals remain in prison before being exonerated?

To answer this question we will apply the Change computation to the convicted and exonerated columns. Each of these columns contains the individual’s age at the time of that event. All that Change does is compute the difference between two numbers. (In this case each of these columns contains a Number, but this will also work with Date or DateTime.)

with_years_in_prison = exonerations.compute([
    ('years_in_prison', agate.Change('convicted', 'exonerated'))
])

median_years = with_years_in_prison.aggregate(agate.Median('years_in_prison'))

print(median_years)
8

The median number of years an exonerated individual spent in prison was 8 years.

Sometimes the built-in computations, such as Change, won’t suffice. I mentioned before that you could perform arbitrary column-wise aggregations using Summary. You can do the same thing for row-wise computations using Formula. This is somewhat analogous to Excel’s cell formulas.

For example, this code will create a full_name column from the first_name and last_name columns in the data:

full_names = exonerations.compute([
    ('full_name', agate.Formula(agate.Text(), lambda row: '%(first_name)s %(last_name)s' % row))
])

For efficiency’s sake, agate allows you to perform several computations at once (though they can’t depend on one another):

with_computations = exonerations.compute([
    ('full_name', agate.Formula(agate.Text(), lambda row: '%(first_name)s %(last_name)s' % row)),
    ('years_in_prison', agate.Change('convicted', 'exonerated'))
])

If Formula is not flexible enough (for instance, if you needed to compute a new value based on the distribution of data in a column) you can always implement your own subclass of Computation. See the API documentation for computations to see all of the supported ways to compute new data.
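As a rough sketch of what that might look like (this assumes the current Computation API, in which get_computed_data_type() declares the output type and run() returns one value per row; check the computations API documentation for your version), here is a hypothetical computation that measures each individual’s years in prison as a deviation from the table-wide median:

class DeviationFromMedian(agate.Computation):
    """
    Hypothetical example: compute how far each value in a column is from
    that column's median, which requires seeing the whole column at once.
    """
    def __init__(self, column_name):
        self._column_name = column_name

    def get_computed_data_type(self, table):
        # The new column will contain numbers.
        return agate.Number()

    def run(self, table):
        # First derive a statistic from the whole column...
        median = table.aggregate(agate.Median(self._column_name))

        # ...then return one value per row for the new column.
        return [
            row[self._column_name] - median if row[self._column_name] is not None else None
            for row in table.rows
        ]

with_deviation = with_years_in_prison.compute([
    ('deviation_from_median_years', DeviationFromMedian('years_in_prison'))
])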

Sorting and slicing

Q: Who are the ten exonerated individuals who were youngest at the time they were arrested?

Remembering that methods of tables return tables, we will use Table.order_by() to sort our table:

sorted_by_age = exonerations.order_by('age')

We can then use Table.limit() to get only the first ten rows of the data.

youngest_ten = sorted_by_age.limit(10)

Now let’s use Table.print_table() to help us pretty-print the results in a way we can easily review:

youngest_ten.print_table(max_columns=7)
|------------+------------+-----+-----------+-------+---------+---------+------|
|  last_name | first_name | age | race      | state | tags    | crime   | ...  |
|------------+------------+-----+-----------+-------+---------+---------+------|
|  Murray    | Lacresha   | 11  | Black     | TX    | CV, F   | Murder  | ...  |
|  Adams     | Johnathan  | 12  | Caucasian | GA    | CV, P   | Murder  | ...  |
|  Harris    | Anthony    | 12  | Black     | OH    | CV      | Murder  | ...  |
|  Edmonds   | Tyler      | 13  | Caucasian | MS    |         | Murder  | ...  |
|  Handley   | Zachary    | 13  | Caucasian | PA    | A, CV   | Arson   | ...  |
|  Jimenez   | Thaddeus   | 13  | Hispanic  | IL    |         | Murder  | ...  |
|  Pacek     | Jerry      | 13  | Caucasian | PA    |         | Murder  | ...  |
|  Barr      | Jonathan   | 14  | Black     | IL    | CDC, CV | Murder  | ...  |
|  Brim      | Dominique  | 14  | Black     | MI    | F       | Assault | ...  |
|  Brown     | Timothy    | 14  | Black     | FL    |         | Murder  | ...  |
|------------+------------+-----+-----------+-------+---------+---------+------|

If you find it impossible to believe that an eleven year-old was convicted of murder, I encourage you to read the Registry’s description of the case.

Note

In the previous example we could have omitted the Table.limit() and passed max_rows=10 to Table.print_table() instead. In this case they accomplish exactly the same goal.
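For example:

sorted_by_age.print_table(max_rows=10, max_columns=7)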

What if we were more curious about the distribution of ages, rather than the highest or lowest? agate includes the Table.counts() and Table.bins() methods for counting data individually or by ranges. Let’s try binning the ages. Then, instead of using Table.print_table(), we’ll use Table.print_bars() to generate a simple, text bar chart.

binned_ages = exonerations.bins('age', 10, 0, 100)
binned_ages.print_bars('age', 'count', width=80)
age        count
[0 - 10)       0 ▓
[10 - 20)    307 ▓░░░░░░░░░░░░░░░░░░░░░░░░
[20 - 30)    718 ▓░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
[30 - 40)    377 ▓░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
[40 - 50)    176 ▓░░░░░░░░░░░░░░
[50 - 60)     53 ▓░░░░
[60 - 70)     10 ▓░
[70 - 80)      0 ▓
[80 - 90)      1 ▓
[90 - 100]     0 ▓
None           9 ▓░
                 +---------------+--------------+--------------+---------------+
                 0              200            400            600            800

Notice that we specify we want 10 bins spanning the range 0 to 100. If these values are omitted agate will attempt to infer good defaults. We also specify that we want our bar chart to span a width of 80 characters. This can be adjusted to a suitable width for your terminal or document.

Note

If you use a monospaced font, such as Courier, you can copy and paste agate bar charts into emails or documents. No screenshots required.

Grouping and aggregating

Q: Which state has seen the most exonerations?

This question can’t be answered by operating on a single column. What we need is the equivalent of SQL’s GROUP BY. agate supports a full set of SQL-like operations on tables. Unlike SQL, agate breaks grouping and aggregation into two discrete steps.

First, we use Table.group_by() to group the data by state.

by_state = exonerations.group_by('state')

This takes our original Table and groups it into a TableSet, which contains one table per state. As mentioned much earlier in this tutorial, TableSets are instances of MappedSequence. That means they work very much like Column and Row.
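For example, because a TableSet maps group values to tables, you can pull out the table for a single state by key. (A quick sketch; 'TX' is just one group key we know appears in this data.)

print(len(by_state['TX'].rows))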

Now we need to aggregate the total for each state. This works in a very similar way to how it did when we were aggregating columns of a single table, except that we’ll use the Length aggregation to count the total number of values in the table.

state_totals = by_state.aggregate([
    ('count', agate.Length())
])

sorted_totals = state_totals.order_by('count', reverse=True)

sorted_totals.print_table(max_rows=5)
|--------+--------|
|  state | count  |
|--------+--------|
|  TX    | 212    |
|  NY    | 202    |
|  CA    | 154    |
|  IL    | 153    |
|  MI    | 60     |
|  ...   | ...    |
|--------+--------|

You’ll notice we pass a sequence of tuples to TableSet.aggregate(). Each one includes two elements. The first is the new column name being created. The second is an instance of some Aggregation. Unsurprisingly, in this case the results appear to be roughly proportional to population.

Q: What state has the longest median time in prison prior to exoneration?

This is a much more complicated question that’s going to pull together a lot of the features we’ve been using. We’ll repeat the computations we applied before, but this time we’re going to roll those computations up in state-by-state groups and then take the Median of each group. Then we’ll sort the data and see where people have been stuck in prison the longest.

with_years_in_prison = exonerations.compute([
    ('years_in_prison', agate.Change('convicted', 'exonerated'))
])

state_totals = with_years_in_prison.group_by('state')

medians = state_totals.aggregate([
    ('count', agate.Length()),
    ('median_years_in_prison', agate.Median('years_in_prison'))
])

sorted_medians = medians.order_by('median_years_in_prison', reverse=True)

sorted_medians.print_table(max_rows=5)
|--------+-------+-------------------------|
|  state | count | median_years_in_prison  |
|--------+-------+-------------------------|
|  DC    | 15    | 27                      |
|  NE    | 9     | 20                      |
|  ID    | 2     | 19                      |
|  VT    | 1     | 18                      |
|  LA    | 45    | 16                      |
|  ...   | ...   | ...                     |
|--------+-------+-------------------------|

DC? Nebraska? What accounts for these states having the longest times in prison before exoneration? I have no idea! Given that the group sizes are small, it would probably be wise to look for outliers.

As with Table.aggregate() and Table.compute(), the TableSet.aggregate() method takes a list of aggregations to perform. You can aggregate as many columns as you like in a single step and they will all appear in the output table.

Multi-dimensional aggregation

Before we wrap up, let’s try one more thing. I’ve already shown you that you can use TableSet to group instances of Table. However, you can also use a TableSet to group other TableSets. To put that another way, instances of TableSet can be nested.

The key to nesting data in this way is to use TableSet.group_by(). Before we used Table.group_by() to split data up into a group of tables. Now we’ll use TableSet.group_by() to further subdivide that data. Let’s look at a concrete example.

Q: Is there a collective relationship between race, age and time spent in prison prior to exoneration?

I’m not going to explain every stage of this analysis as most of it repeats patterns used previously. The key part to look for is the two separate uses of group_by:

# Filters rows without age data
only_with_age = with_years_in_prison.where(
    lambda r: r['age'] is not None
)

# Group by race
race_groups = only_with_age.group_by('race')

# Sub-group by age cohorts (20s, 30s, etc.)
race_and_age_groups = race_groups.group_by(
    lambda r: '%i0s' % (r['age'] // 10),
    key_name='age_group'
)

# Aggregate medians for each group
medians = race_and_age_groups.aggregate([
    ('count', agate.Length()),
    ('median_years_in_prison', agate.Median('years_in_prison'))
])

# Sort the results
sorted_groups = medians.order_by('median_years_in_prison', reverse=True)

# Print out the results
sorted_groups.print_table(max_rows=10)
|------------------+-----------+-------+-------------------------|
|  race            | age_group | count | median_years_in_prison  |
|------------------+-----------+-------+-------------------------|
|  Native American | 20s       | 2     | 21.5                    |
|                  | 20s       | 1     | 19                      |
|  Native American | 10s       | 2     | 15                      |
|  Native American | 30s       | 2     | 14.5                    |
|  Black           | 10s       | 188   | 14                      |
|  Black           | 20s       | 358   | 13                      |
|  Asian           | 20s       | 4     | 12                      |
|  Black           | 30s       | 156   | 10                      |
|  Caucasian       | 10s       | 76    | 8                       |
|  Caucasian       | 20s       | 255   | 8                       |
|  ...             | ...       | ...   | ...                     |
|------------------+-----------+-------+-------------------------|

That’s it! You made it through the tutorial. Now it’s your turn!

Where to go next

This tutorial only scratches the surface of agate’s features. For many more ideas on how to apply agate, check out the Cookbook, which includes dozens of examples of specific features of agate as well as recipes for substituting agate for Excel, SQL, R and more. Also check out agate’s Extensions, which add support for reading/writing SQL tables, performing statistical analysis and more.

Also, if you’re going to be doing data processing in Python you really ought to check out proof, a library for building data processing pipelines that are repeatable and self-documenting. It will make your code cleaner and save you tons of time.

Good luck in your reporting!