
Selecting and Modifying Rows/Columns

import pandas as pd
import numpy as np

This subchapter will go into greater detail on how you can select/modify rows and columns in pandas.

First, let us load the same dataset as in the first subchapter of the Pandas chapter. As a reminder, this dataset records beer sales across US states. It is sourced from Salience and Taxation: Theory and Evidence by Chetty, Looney, and Kroft (AER 2009), and it includes 7 columns:

  • st_name: the state abbreviation

  • year: the year the data was recorded

  • c_beer: the quantity of beer consumed, in thousands of gallons

  • beer_tax: the ad valorem tax, as a percentage

  • btax_dollars: the excise tax, represented in dollars per case (24 cans) of beer

  • population: the population of the state, in thousands

  • salestax: the sales tax percentage

df = pd.read_csv('data/beer_tax.csv')
df
Loading...

Extracting Rows/Columns that Satisfy a Given Condition

An important task when exploring data is filtering: keeping only the rows that satisfy a certain condition. Conditional selection achieves exactly this, extracting all rows for which a condition holds.

In the previous subchapter, we briefly gave an example of how we can use .loc to extract all rows that match certain values. To be more precise, when we pass a condition to .loc or .iloc, the condition evaluates to an array of True/False values, which determines which rows (or columns) to include. This is very similar to our prior discussion of NumPy boolean indexing.

For example, let’s say we want all the data for the state of Alabama. Then, we would want all rows where st_name is AL. We can generate a Series of True/False values corresponding to whether each row has a st_name of AL using the code below.

df['st_name'] == 'AL'
0        True
1        True
2        True
3        True
4        True
        ...  
1703    False
1704    False
1705    False
1706    False
1707    False
Name: st_name, Length: 1708, dtype: bool

Now, to extract all rows where the state is Alabama, we can simply pass this boolean Series into the .loc operator.

df.loc[df['st_name'] == 'AL',:].head(5) # only showing the first 5 rows
Loading...
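The same pattern can be tried on a small, self-contained DataFrame (toy data invented here, since the beer_tax.csv file may not be available outside the notebook):

```python
import pandas as pd

# Toy stand-in for the beer tax data (hypothetical values)
toy = pd.DataFrame({
    'st_name': ['AL', 'AL', 'CA', 'CA', 'TX'],
    'c_beer': [100, 110, 500, 520, 300],
})

mask = toy['st_name'] == 'AL'  # boolean Series: True where st_name is 'AL'
alabama = toy.loc[mask, :]     # keeps only the rows where mask is True
print(alabama)
```

The mask and the selection can also be built in a single expression, as the examples on the real dataset do.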

Similarly, we can also use the context-dependent [] operator.

df[df['st_name'] == 'AL'].tail(5) # only showing the last 5 rows
Loading...

We could also use the same technique to extract all columns whose dtype is integer (not float, so their values carry no decimal component).

df.loc[:,df.dtypes == int]
Loading...
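As a self-contained sketch with toy data (note that comparing dtypes to int matches the default 64-bit integer dtype on most platforms):

```python
import pandas as pd

toy = pd.DataFrame({
    'a': [1, 2, 3],        # int64 column
    'b': [1.0, 2.0, 3.0],  # float64 column
    's': ['x', 'y', 'z'],  # object (string) column
})

# Keep only the columns whose dtype compares equal to int
int_cols = toy.loc[:, toy.dtypes == int]
print(int_cols.columns.tolist())
```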

Alternatively, if we wanted to extract all the rows where the index is less than 30, we could similarly use the .iloc operator.

df.index < 30
array([ True, True, True, ..., False, False, False])
df.iloc[df.index < 30,:].tail(5) # only showing the last 5 rows
Loading...

As you can imagine, quickly and easily generating these boolean arrays/Series is very important for accessing our elements!

Logical Operators

What if we have multiple conditions we want to filter on? Let’s say we want to look at all the rows where beer_tax is less than 3 and btax_dollars is less than 0.5. We can generate boolean filters for both of those conditions.

df['beer_tax'] < 3
0       False
1       False
2       False
3       False
4       False
        ...  
1703     True
1704     True
1705     True
1706     True
1707     True
Name: beer_tax, Length: 1708, dtype: bool
df['btax_dollars'] < 0.5
0       False
1       False
2       False
3       False
4       False
        ...  
1703     True
1704     True
1705     True
1706     True
1707     True
Name: btax_dollars, Length: 1708, dtype: bool

How can we combine both of these boolean arrays? Since we’re interested in cases where both conditions hold, we can use the & operator, which keeps only the positions where the elements of both arrays are True.

The code is shown below. Note the parentheses around both boolean conditions - this is a crucial piece of syntax!

df[(df['beer_tax'] < 3) & (df['btax_dollars'] < 0.5)]
Loading...

Alternatively, if we were looking for the rows where either beer_tax is less than 3 or btax_dollars is less than 0.5, we could use the logical | operator, which keeps the positions where the element of either array is True.

df[(df['beer_tax'] < 3) | (df['btax_dollars'] < 0.5)]
Loading...

What about cases where either beer_tax is less than 3 or btax_dollars is less than 0.5, but not both? Here we can use the ^ operator, which keeps the positions where exactly one of the two elements is True.

df[(df['beer_tax'] < 3) ^ (df['btax_dollars'] < 0.5)]
Loading...

Notice how the number of rows returned by the & operator plus the number of rows returned by the ^ operator sums to the number of rows returned by the | operator (628 + 516 = 1144).
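This identity (AND rows plus XOR rows equals OR rows) can be checked on toy data:

```python
import pandas as pd

toy = pd.DataFrame({
    'beer_tax': [1.0, 2.5, 4.0, 2.0],
    'btax_dollars': [0.2, 0.8, 0.3, 0.4],
})

a = toy['beer_tax'] < 3
b = toy['btax_dollars'] < 0.5

n_and = len(toy[a & b])  # rows where both conditions hold
n_xor = len(toy[a ^ b])  # rows where exactly one condition holds
n_or = len(toy[a | b])   # rows where at least one condition holds

assert n_and + n_xor == n_or
```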

Finally, what if you’re interested in the cases where something isn’t true? For example, what if you want all the rows where st_name isn’t AL? Then you can use the ~ operator, which logically inverts a boolean array (it converts all True values into False and all False values into True).

df['st_name'] == 'AL'
0        True
1        True
2        True
3        True
4        True
        ...  
1703    False
1704    False
1705    False
1706    False
1707    False
Name: st_name, Length: 1708, dtype: bool
~(df['st_name'] == 'AL')
0       False
1       False
2       False
3       False
4       False
        ...  
1703     True
1704     True
1705     True
1706     True
1707     True
Name: st_name, Length: 1708, dtype: bool
df[~(df['st_name'] == 'AL')]
Loading...

Here’s a table summarizing all the different logical operators to combine boolean arrays.

Symbol   Usage   Meaning
~        ~p      Returns the negation of p
|        p | q   p OR q
&        p & q   p AND q
^        p ^ q   p XOR q (exclusive or)
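The reason the parentheses above are crucial is Python’s operator precedence: &, |, and ^ bind more tightly than comparisons like <, so without parentheses pandas first tries to evaluate expressions such as 3 & df['btax_dollars']. A toy illustration:

```python
import pandas as pd

toy = pd.DataFrame({'beer_tax': [1.0, 4.0], 'btax_dollars': [0.2, 0.8]})

# Correct: each comparison is parenthesized before combining with &
ok = toy[(toy['beer_tax'] < 3) & (toy['btax_dollars'] < 0.5)]

# Without parentheses, this parses as toy['beer_tax'] < (3 & toy['btax_dollars']) < 0.5
# and fails, because 3 & <float Series> is not a valid operation
try:
    toy[toy['beer_tax'] < 3 & toy['btax_dollars'] < 0.5]
except (TypeError, ValueError) as e:
    print('failed as expected:', type(e).__name__)
```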

Multiple Labels

If you want to keep all rows where a certain column is in a list of given values, you can use the .isin() method. This is logically equivalent to chaining the | operator multiple times, but it’s more concise. For example, below we get all rows corresponding to the states of California, Florida, Texas, and Nebraska.

df['st_name'].isin(['CA','FL','TX','NE'])
0       False
1       False
2       False
3       False
4       False
        ...  
1703    False
1704    False
1705    False
1706    False
1707    False
Name: st_name, Length: 1708, dtype: bool
df[df['st_name'].isin(['CA','FL','TX','NE'])]
Loading...
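The equivalence between .isin() and chained | conditions can be verified on toy data:

```python
import pandas as pd

toy = pd.DataFrame({'st_name': ['AL', 'CA', 'TX', 'NY', 'FL']})

via_isin = toy[toy['st_name'].isin(['CA', 'FL', 'TX'])]
via_or = toy[(toy['st_name'] == 'CA') | (toy['st_name'] == 'FL') | (toy['st_name'] == 'TX')]

# Both approaches select exactly the same rows
assert via_isin.equals(via_or)
```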

Editing Rows

We can also use conditional selection to edit certain rows. For example, let’s say we found out we made an error in measuring beer_tax for the state of Alabama, and the true beer tax is actually 0.1 higher than what we’ve measured. We can easily use boolean filtering to select the rows where we’ve made this mistake and fix them!

df.loc[df['st_name']=='AL', 'beer_tax']
0     72.341130
1     69.304600
2     67.149190
3     63.217026
4     56.933796
5     52.171677
6     49.329280
7     46.317430
8     43.049637
9     38.661655
10    34.063545
11    30.878284
12    29.086386
13    28.181087
14    27.014784
15    26.085838
16    25.609820
17    24.708065
18    23.726425
19    22.635775
20    21.475412
21    20.608194
22    20.005960
23    19.424470
24    18.939516
25    18.417560
26    17.889332
27    17.488075
28    17.219852
29    16.847755
30    16.299862
31    15.848877
32    15.602203
33    15.254544
Name: beer_tax, dtype: float64
df.loc[df['st_name']=='AL', 'beer_tax'] + 0.1
0     72.441130
1     69.404600
2     67.249190
3     63.317026
4     57.033796
5     52.271677
6     49.429280
7     46.417430
8     43.149637
9     38.761655
10    34.163545
11    30.978284
12    29.186386
13    28.281087
14    27.114784
15    26.185838
16    25.709820
17    24.808065
18    23.826425
19    22.735775
20    21.575412
21    20.708194
22    20.105960
23    19.524470
24    19.039516
25    18.517560
26    17.989332
27    17.588075
28    17.319852
29    16.947755
30    16.399862
31    15.948877
32    15.702203
33    15.354544
Name: beer_tax, dtype: float64
df.loc[df['st_name']=='AL', 'beer_tax'] = df.loc[df['st_name']=='AL', 'beer_tax'] + 0.1
df
Loading...

It’s that simple! However, since our data was (presumably) correctly measured, let us revert this change.

df.loc[df['st_name']=='AL', 'beer_tax'] = df.loc[df['st_name']=='AL', 'beer_tax'] - 0.1
df
Loading...
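The same edit-in-place pattern, sketched on toy data so the effect on untouched rows is easy to see:

```python
import pandas as pd

toy = pd.DataFrame({
    'st_name': ['AL', 'AL', 'CA'],
    'beer_tax': [1.0, 2.0, 5.0],
})

# Add 0.1 only to Alabama's rows; the CA row is untouched
toy.loc[toy['st_name'] == 'AL', 'beer_tax'] = toy.loc[toy['st_name'] == 'AL', 'beer_tax'] + 0.1
print(toy['beer_tax'].tolist())
```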

Modifying Columns and Indices

Now, we move on to modifying the columns in our DataFrames. The first task we will consider is how to add a new column.

Adding Columns

Let’s say we want to add a new column where we calculate the total percentage taxed by combining beer_tax and salestax. Both of these percentage taxes compound, so the total is not simply their sum - we need to be careful with the math! Below, we calculate a Series with the combined tax, as a percent.

((df['beer_tax'] / 100 + 1) * (df['salestax'] / 100 + 1) - 1) * 100
0       79.234775
1       76.076784
2       73.835158
3       69.745707
4       63.211148
          ...    
1703     4.332690
1704     4.321871
1705     4.312965
1706     4.308094
1707     4.301229
Length: 1708, dtype: float64

Now, we want to assign this Series to a new column named total_percent_tax in our DataFrame. The syntax for doing this is df['new_column_name'] = series_or_array, as shown below.

df['total_percent_tax'] = ((df['beer_tax'] / 100 + 1) * (df['salestax'] / 100 + 1) - 1) * 100
df.head()
Loading...

Voila, our new column has been added!
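To see why the compounding matters, consider toy values of 10% and 5%: the compounded total is 15.5%, not 15%:

```python
import pandas as pd

toy = pd.DataFrame({'beer_tax': [10.0], 'salestax': [5.0]})

# (1 + 0.10) * (1 + 0.05) - 1 = 0.155, i.e. 15.5%
toy['total_percent_tax'] = ((toy['beer_tax'] / 100 + 1) * (toy['salestax'] / 100 + 1) - 1) * 100
print(toy['total_percent_tax'].iloc[0])
```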

Renaming Columns

Looking at the table above, c_beer feels like a confusing name. Instead, let us change it to beer_consumption, a more informative name! To do so, we can use the DataFrame rename method, as shown below.

df.rename(columns={"c_beer":"beer_consumption"})
Loading...

As we can see, the columns parameter of the rename method takes in a dictionary where all the keys are the old column names and all the values are the new column names.

Note, however, that the original DataFrame df has not changed yet. Most pandas methods return a new, modified copy of the DataFrame rather than altering the original. To change the original DataFrame, you can either manually reassign the result to df or set the inplace parameter of the rename method to True.

df # original dataframe has not changed
Loading...
df = df.rename(columns={"c_beer":"beer_consumption"}) # you can manually reassign the `df` variable
df
Loading...
df.rename(columns={"c_beer":"beer_consumption"}, inplace = True) # or you can use the inplace parameter
df
Loading...

Dropping Columns

Finally, let’s say we want to drop a column. For this, we can use the DataFrame drop method. For example, we drop the salestax column below.

df.drop(columns=["salestax"])
Loading...

Once again, the drop method has a columns parameter that takes in a list of the columns to drop (no need for a dictionary here as there are no key-value pairs).

However, you can also simply pass in a list of the columns to drop and then use the axis parameter to specify they represent column names, as done below. We encourage you to read the documentation if you’re interested in further detail!

df.drop(["salestax","beer_tax"], axis = 'columns')
Loading...
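Both spellings of drop produce identical results, as a quick toy check confirms:

```python
import pandas as pd

toy = pd.DataFrame({'salestax': [1], 'beer_tax': [2], 'population': [3]})

a = toy.drop(columns=['salestax', 'beer_tax'])
b = toy.drop(['salestax', 'beer_tax'], axis='columns')

# The two calls are equivalent; only the population column remains
assert a.equals(b)
print(a.columns.tolist())
```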

Finally, once again the drop method does not modify the original DataFrame. To modify it, you can reassign the result to the original variable or use the inplace parameter.

df = df.drop(columns=["salestax"]) # reassigning to df
df
Loading...
df.drop(["salestax","beer_tax"], axis = 'columns', inplace = True) # using inplace parameter
df
Loading...

Modifying the Index

To modify the index of a DataFrame, there are three main methods, as discussed below.

.set_index() sets the index of the DataFrame to a particular column.

df.set_index('st_name', inplace = True)
Loading...

.reset_index() converts the current index into a column and resets the index to the default integer numbering from 0 up to the last row of the DataFrame.

df.reset_index()
Loading...

Finally, you can manually set the index by reassigning the DataFrame’s index attribute.

df.index = np.arange(len(df)-1, -1, -1)
df
Loading...
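All three approaches can be sketched together on toy data:

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({'st_name': ['AL', 'CA'], 'c_beer': [1, 2]})

indexed = toy.set_index('st_name')  # st_name becomes the index
print(indexed.index.tolist())

restored = indexed.reset_index()    # st_name moves back to a column; default 0..n-1 index
print(restored.columns.tolist())

restored.index = np.arange(len(restored) - 1, -1, -1)  # manual reassignment (reversed here)
print(restored.index.tolist())
```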
References
  1. Chetty, R., Looney, A., & Kroft, K. (2009). Salience and Taxation: Theory and Evidence. American Economic Review, 99(4), 1145–1177. doi:10.1257/aer.99.4.1145