Install SQL packages:
# !conda install -y psycopg2
# !conda install -y postgresql
# !pip install ipython-sql
# !pip install sqlalchemy
Standard imports + sqlalchemy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sqlalchemy
%matplotlib inline
%load_ext sql
Establish a database connection to the PostgreSQL database ds100 running on localhost:
postgresql_uri = "postgres://jegonzal:@localhost:5432/ds100"
%sql $postgresql_uri
Note that we don't need to specify the database URI with each %%sql command. If one is not provided, the previous connection is used.
%%sql
-- Drop the table if it already exists
DROP TABLE IF EXISTS students;
-- Create the table students
CREATE TABLE students(
    name TEXT PRIMARY KEY,
    gpa FLOAT CHECK (gpa >= 0.0 AND gpa <= 4.0),
    age INTEGER,
    dept TEXT,
    gender CHAR);
-- Populate the table of students
INSERT INTO students VALUES
('Sergey Brin', 2.8, 40, 'CS', 'M'),
('Danah Boyd', 3.9, 35, 'CS', 'F'),
('Bill Gates', 1.0, 60, 'CS', 'M'),
('Hillary Mason', 4.0, 35, 'DATASCI', 'F'),
('Mike Olson', 3.7, 50, 'CS', 'M'),
('Mark Zuckerberg', 4.0, 30, 'CS', 'M'),
('Cheryl Sandberg', 4.0, 47, 'BUSINESS', 'F'),
('Susan Wojcicki', 4.0, 46, 'BUSINESS', 'F'),
('Marissa Meyer', 4.0, 45, 'BUSINESS', 'F');
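As an aside, the PRIMARY KEY and CHECK constraints in the schema above are enforced on every insert; a row that violates them is rejected. For instance, this hypothetical insert would fail the gpa bound check (left commented out so the cell doesn't error):
%%sql
-- Rejected if run: violates CHECK (gpa >= 0.0 AND gpa <= 4.0)
-- INSERT INTO students VALUES ('Test Person', 5.0, 25, 'CS', 'M');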
It is often assumed that when working with a database, all relations (tables) must come from outside or be derived from other sources of data. But it is also possible to construct tables directly in SQL.
Sometimes it's useful to auto-generate data in queries, rather than examine data in the database. This is nice for testing, but can also be useful for playing some computational tricks, as you'll see in your homework.
SQL has a simple scalar function called random that returns a random value between 0.0 and 1.0. You can use this if you need to generate a column of random numbers. (The PostgreSQL manual doesn't promise much about the statistical properties of this random number generator.)
Let's roll a 6-sided die for each of the students:
%%sql
SELECT *, ROUND(RANDOM() * 6) as roll_dice
FROM students;
Is this a good implementation of a fair 6-sided die? Let's run it again and see:
%%sql
SELECT *, ROUND(RANDOM() * 6) AS roll_dice
FROM students;
Quiz:
http://bit.ly/ds100nodice
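One way to check: simulate ROUND(RANDOM() * 6) with numpy before peeking at the answer (a quick sketch; we run the analogous check in SQL below):
# Simulate ROUND(RANDOM() * 6) to check whether it yields a fair die
rolls = np.round(6 * np.random.rand(100000)).astype(int)
print(np.bincount(rolls) / len(rolls))
# 0 and 6 each come from a half-width interval ([0, 0.5) and [5.5, 6]),
# so each shows up with probability ~1/12, while 1 through 5 each get ~1/6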
Suppose we want to generate a whole bunch of random numbers, not tied to any particular stored table -- can we do that in SQL?
SQL has a notion of table-valued functions: functions that return tables, and hence can be used in a FROM clause of a query. The standard table-valued function is called generate_series, and it's much like numpy's arange:
%%sql
SELECT *
FROM GENERATE_SERIES(1, 5);
%%sql
SELECT *
FROM GENERATE_SERIES(1, 10, 2);
Let's test the distribution of our earlier generator:
%%sql
SELECT ROUND(6*RANDOM()) AS rando, COUNT(*)
FROM GENERATE_SERIES(1, 100000) AS flip(trial)
GROUP BY rando
ORDER BY count
And if we want integers, we can use a PostgreSQL typecast operator (postfix ::<type>):
%%sql
-- NOTE WE ALSO TAKE THE CEIL
-- What would happen if we did not?
SELECT CEIL(6*RANDOM())::INTEGER AS rando, COUNT(*)
FROM generate_series(1, 100000) AS flip(trial)
GROUP BY rando
ORDER BY count
Now suppose we want to populate a "matrix" relation my_matrix(r, c, v) full of random values. Consider the following numpy code:
import numpy as np
np.random.seed(43)
# random integers from 1 to 5 (randint's upper bound is exclusive), cast to float
my_matrix = np.random.randint(1, 6, (3, 2)).astype('float')
my_matrix
How could we store the above matrix as a table?
Building the table in Numpy
my_matrix.flatten()
# Advanced numpy (you don't need to know this ...)
(col_id, row_id) = np.meshgrid(np.arange(2), np.arange(3))
mat_a = pd.DataFrame(
    np.vstack([row_id.flatten(), col_id.flatten(), my_matrix.flatten()]).T,
    columns=['r', 'c', 'v'])
mat_a
%%sql
-- This fails: we have not created mat_a in the database yet
select * from mat_a
engine = sqlalchemy.create_engine(postgresql_uri)
with engine.connect() as conn:
    conn.execute("DROP TABLE IF EXISTS mat_a")
    mat_a.to_sql("mat_a", conn, index=False)
%%sql
SELECT * FROM mat_a
In this relational version we need to explicitly generate the r and c values. We can do this via SQL's built-in cartesian product!
%%sql
SELECT rows.r, columns.c, CEIL(6*RANDOM())::INTEGER AS v
FROM generate_series(0,2) AS rows(r),
generate_series(0,1) AS columns(c);
We may want to store a matrix as a table—in which case we should set up the schema properly to ensure that it remains a legal matrix.
%%sql
DROP TABLE IF EXISTS my_matrix;
CREATE TABLE my_matrix(r INTEGER, c INTEGER, val FLOAT, PRIMARY KEY(r, c));
INSERT INTO my_matrix
SELECT rows.r, columns.c, CEIL(6*RANDOM())::INTEGER AS v
FROM generate_series(0,2) AS rows(r),
generate_series(0,1) AS columns(c);
%%sql
SELECT * FROM my_matrix;
A few take-aways from the previous cell:
- The schema of my_matrix reflects the fact that val is a function of the row (r) and column (c) IDs.
- We used an INSERT statement that contains a SELECT query rather than the VALUES we saw before. You might want to experiment and see what would happen if the SELECT query produces a different schema than my_matrix: try having it produce too few columns, too many columns, columns in different orders, etc.
- In the INSERT...SELECT statement, notice the definition of output column names via the AS in the SELECT clause. Is that necessary here?
- In the INSERT...SELECT statement, notice the definition of table and column names in the FROM clause via AS, and the way they get referenced in the SELECT clause. Do we need the table names specified in the SELECT clause? Try it and see!
Sometimes we may want a custom scalar function that isn't built into SQL. Some database systems allow you to register your own user-defined functions (UDFs) in one or more programming languages. Conveniently, PostgreSQL allows us to register user-defined functions written in Python. Be aware of two things:
1. Calling Python for each row in a query is quite a bit slower than using the pre-compiled built-in functions in SQL; this is akin to using Python loops instead of numpy calls. If you can avoid Python UDFs, you should do so to get better performance.
2. Python is a full-featured programming language with access to your operating system's functionality, which means it can reach outside the scope of the query and wreak havoc, including running arbitrary UNIX commands. (PostgreSQL refers to this as an untrusted language.) Be very careful with the Python UDFs you use in your Postgres queries! If you want to be safer, write UDFs in a trusted language. PostgreSQL has a number of other languages to choose from, including Java and even R!
First we tell PostgreSQL we want to use the plpythonu extension (so named because "pl" stands for "programming language" and "u" for "untrusted"):
%%sql
CREATE EXTENSION IF NOT EXISTS plpythonu;
Now let's write some trivial Python code and register it as a UDF using the create function
command. Since SQL is a typed language, we need to specify the SQL types for the input and output to our function, in addition to the code (within $$ delimiters) and the language:
%%sql
DROP FUNCTION IF EXISTS fib(x INTEGER);
CREATE FUNCTION fib(x INTEGER) RETURNS INTEGER
AS $$
def fib(x):
    if x < 2:
        return x
    else:
        return fib(x-1) + fib(x-2)
return fib(x)
$$ LANGUAGE plpythonu;
%%sql $postgresql_uri
SELECT x, fib(x)
FROM generate_series(1,10) AS row(x);
It is possible to create transactions that isolate changes. This is done by starting a transaction with BEGIN. We can then proceed to make changes to the database; during this time others will not be able to see our changes, until we end the transaction with ROLLBACK or COMMIT:
BEGIN;
UPDATE students SET gpa = 3.0 WHERE name = 'Bill Gates';
SELECT * FROM students;
ROLLBACK;
SELECT * FROM students;
Try running this in the postgres shell...
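If you'd rather try it from Python, here is a minimal sketch using the sqlalchemy engine created earlier (the explicit begin()/rollback() pair mirrors BEGIN/ROLLBACK):
# Run the same experiment through sqlalchemy: update inside a
# transaction, inspect the change, then roll it back.
with engine.connect() as conn:
    trans = conn.begin()  # BEGIN
    conn.execute("UPDATE students SET gpa = 3.0 WHERE name = 'Bill Gates'")
    print(conn.execute(
        "SELECT gpa FROM students WHERE name = 'Bill Gates'").fetchone())
    trans.rollback()      # ROLLBACK: the update is discarded
    print(conn.execute(
        "SELECT gpa FROM students WHERE name = 'Bill Gates'").fetchone())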
Statistics doesn't deal with individuals, it deals with groups: distributions, populations, samples and the like. As such, computing statistics in SQL focuses heavily on aggregation functions.
All SQL systems have simple descriptive statistics built in as aggregation functions:
- min, max
- count
- sum
- avg
- stddev and variance, the sample standard deviation and variance
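For example, a quick check of a few of these on our toy students table:
%%sql
SELECT COUNT(*), AVG(gpa), STDDEV(gpa), MIN(gpa), MAX(gpa)
FROM students;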
PostgreSQL offers many more. Some handy ones include:
- stddev_pop and var_pop: the population standard deviation and variance, which you should use rather than stddev and variance if you know your data is the full population, not a sample
- covar_samp and covar_pop: sample and population covariance
- corr, Pearson's correlation coefficient
You'll notice that a number of handy statistics are missing from this list, including the median and quartiles. That's because those are order statistics: they are defined based on an ordering of the values in a column.
SQL provides for this by allowing what it calls "ordered set functions", which require a WITHIN GROUP (ORDER BY <columns>) clause to accompany the order-statistic aggregate. For example, to compute the 50th percentile (median) in SQL, we can use the following (the 25th and 75th percentiles work the same way, as we'll see below):
%%sql
SELECT
percentile_cont(0.5) WITHIN GROUP (ORDER BY x)
FROM generate_series(1,10) AS data(x);
There are two versions of the percentile function:
- percentile_cont(inuous): interpolates between values
- percentile_disc(rete): returns an actual entry from the table
What will the following expressions return?
%%sql $postgresql_uri
SELECT
percentile_disc(0.5) WITHIN GROUP (ORDER BY x)
FROM generate_series(1,10) AS data(x);
We can compute the edges and middle of the box in a box plot:
%%sql $postgresql_uri
SELECT
percentile_disc(0.25) WITHIN GROUP (ORDER BY x) as lower_quartile,
percentile_disc(0.5) WITHIN GROUP (ORDER BY x) as median,
percentile_disc(0.75) WITHIN GROUP (ORDER BY x) as upper_quartile
FROM generate_series(1,10) AS data(x);
psql
In a separate notebook (load_fec.ipynb) you'll find the commands to load publicly-available campaign finance data from the Federal Election Commission into a PostgreSQL database.
To see what we have in the database, it's simplest to use the PostgreSQL shell command psql to interact with the database. You can run man psql to learn more about it. A few handy tips:
- psql supports some useful non-SQL "meta-"commands, which you access via backslash (\). To find out about them, run psql in a bash shell, and at the prompt you can type \?.
- psql has built-in documentation for SQL. To see that, at the psql prompt type \help.
- psql is an interactive SQL shell, so not suitable for use inside a Jupyter notebook. If you want to invoke it within a Jupyter notebook, you should use !psql -c <SQL statement>; the -c flag tells psql to run the SQL statement and then exit.
Let's see what tables we have in our database after loading the FEC data:
!psql ds100 -c "\d"
And let's have a look at the individual table's schema:
!psql ds100 -c "\d individual"
If you are curious about the meaning of these columns, check out the FEC data description.
How big is this table?
%%sql
SELECT COUNT(*)
FROM individual
LIMIT and sampling
This is not the first topic usually taught in SQL, but it's extremely useful for exploration.
OK, now we have some serious data loaded and we're ready to explore it.
Database tables are often big--hence the use of a database system. When browsing them at first, we may want to look at exemplary rows: e.g., an arbitrary number of rows, or a random sample of the rows.
To look at all of the data in the individual table, we would simply write:
select * from individual;
But that would return 20,347,829 rows into our Jupyter notebook's memory, and perhaps overflow the RAM in your computer. Instead, we could limit the size of the output to the first four rows as follows:
%%sql
SELECT *
FROM individual
LIMIT 4;
Sampling: beyond the limit clause
As data scientists, we should be concerned about spending much time looking at a biased subset of our data. Instead, we might want an i.i.d. random sample of the rows in the table. There are various methods for sampling from a table. A simple one built into many database systems, including PostgreSQL, is Bernoulli sampling, in which the decision to return each row is made randomly and independently. As a metaphor, the database engine "flips a coin" for each row to decide whether to return it. We can influence the sampling rate by choosing the probability of a "true" result of the coin flip.
This is done on a per-table basis in the FROM
clause of the query like so:
%%sql
SELECT *
FROM individual TABLESAMPLE BERNOULLI(.00001);
To learn more about the TABLESAMPLE clause, check out the SELECT docs. Note that there is a second sampling method called block sampling, which is a lot like cluster sampling at the level of pages on disk! Adding a REPEATABLE clause seeds the sampler so the same rows come back each time:
%%sql
SELECT *
FROM individual TABLESAMPLE BERNOULLI(.00001) REPEATABLE(42);
Three things to note relative to our previous limit construct:
- The number of rows returned is random: each row is included independently with the given probability, so repeated runs return different numbers of rows.
- The rows are drawn from all over the table, not just the beginning, so the result is not biased by storage order.
- Bernoulli sampling scans the entire table, so it is far slower than a LIMIT query.
For these reasons, if we want a proper i.i.d. sample, it's a good idea to compute a nice-sized sample and store it, keeping it reasonably large for more general use. Since we will not be updating any rows in our individual table, we can do this without worrying that the sample will get "out of date" with respect to the contents of individual.
We can use the CREATE TABLE AS SELECT ... (a.k.a. CTAS) pattern to create a table that saves the output of a query:
%%sql $postgresql_uri
DROP TABLE IF EXISTS indiv_sample;
CREATE TABLE indiv_sample AS
SELECT *
FROM individual TABLESAMPLE BERNOULLI(.1) REPEATABLE(42);
Here is an alternative method to construct a random sample of a fixed size. Note that this is not as efficient and will take several minutes to complete.
CREATE TABLE indiv_sample2 AS
SELECT *, RANDOM() AS u
FROM individual
ORDER BY u
LIMIT 20000;
%%sql
SELECT *, RANDOM() AS u
FROM individual
ORDER BY u
LIMIT 5;
# %%sql
# SELECT SETSEED(0.5);
# DROP TABLE IF EXISTS indiv_sample2;
# CREATE TABLE indiv_sample2 AS
# SELECT *, RANDOM() AS u
# FROM individual
# ORDER BY u
# LIMIT 20000;
%%sql
SELECT COUNT(*) FROM indiv_sample2
OK, we already had a peek at the individual table. Now let's look at how specific attributes (columns) relate to who is donating how much.
In addition to referencing the columns of individual in the select clause, we can also derive new columns by writing field-level (so-called "scalar") functions. Typically we reference some table columns in those functions.
In our case, let's compute the log of transaction_amt for subsequent plotting. SQL comes with many typical functions you can use in this way, and PostgreSQL is particularly rich on this front; see the PostgreSQL manual for details.
We'll look at indiv_sample rather than individual while we're just exploring.
%%sql
SELECT name, state, cmte_id,
transaction_amt, log(transaction_amt)
FROM indiv_sample
LIMIT 10;
We can combine SQL with Python in the following way:
query = """
SELECT transaction_amt AS amt
FROM indiv_sample
WHERE transaction_amt > 0;
"""
result = %sql $query
_ = sns.distplot(result.DataFrame()['amt'])
query = """
SELECT LOG(transaction_amt) AS log_amt
FROM indiv_sample
WHERE transaction_amt > 0;
"""
result = %sql $query
df = result.DataFrame()['log_amt']
sns.distplot(df.astype('float'))
scales = np.array([1, 10, 20, 100, 500, 1000, 5000])
_ = plt.xticks(np.log10(scales), scales)
query = """
SELECT transaction_amt AS amt
FROM indiv_sample
WHERE transaction_amt > 5000;
"""
result = %sql $query
_ = sns.distplot(result.DataFrame()['amt'], rug=True)
query = """
SELECT transaction_amt AS amt
FROM individual
WHERE transaction_amt > 5000;
"""
result = %sql $query
_ = sns.distplot(result.DataFrame()['amt'])
query = """
SELECT log(transaction_amt) AS log_amt
FROM individual
WHERE transaction_amt > 5000;
"""
result = %sql $query
sns.distplot(result.DataFrame()['log_amt'])
scales = np.array([5000, 20000, 100000])
_ = plt.xticks(np.log10(scales), scales)
query = """
SELECT log(transaction_amt) AS log_amt
FROM individual
WHERE transaction_amt > 1000000;
"""
result = %sql $query
sns.distplot(result.DataFrame()['log_amt'], rug=True)
scales = np.array([1000000, 5000000, 50000000])
_ = plt.xticks(np.log10(scales), scales)
CASE statements: SQL conditionals in the SELECT clause
What about smaller donations?
# %%sql $postgresql_uri
# SELECT name, state, cmte_id,
# transaction_amt, LOG(transaction_amt)
# FROM indiv_sample
# WHERE transaction_amt < 10
# LIMIT 10;
Uh oh, log is not defined for numbers <= 0! We need a conditional statement in the select clause to decide what function to call. We can use SQL's case construct for that.
%%sql $postgresql_uri
SELECT name, state, cmte_id, transaction_amt,
CASE WHEN transaction_amt > 0 THEN log(transaction_amt)
WHEN transaction_amt = 0 THEN 0
ELSE -1*(log(abs(transaction_amt)))
END AS log_magnitude
FROM indiv_sample
WHERE transaction_amt < 10
LIMIT 10;
query = """
SELECT transaction_amt,
CASE WHEN transaction_amt > 0 THEN log(transaction_amt)
WHEN transaction_amt = 0 THEN 0
ELSE -1*(log(abs(transaction_amt)))
END AS log_amt
FROM indiv_sample
WHERE transaction_amt < 10
"""
result = %sql $query
sns.distplot(result.DataFrame()['log_amt'])
# scales = np.array([1000000, 5000000, 50000000])
# _ = plt.xticks(np.log10(scales), scales)
%%sql
SELECT transaction_amt, cmte_id, transaction_dt, name, city, state, memo_text, occupation
FROM individual
ORDER BY transaction_amt DESC
LIMIT 10
%%sql
SELECT transaction_amt, cmte_id, transaction_dt, name, city, state, memo_text, occupation
FROM individual
ORDER BY transaction_amt
LIMIT 10
%%sql
SELECT name, SUM(transaction_amt) AS total_amt
FROM individual
GROUP BY name
ORDER BY total_amt DESC
LIMIT 10
Filtering with WHERE
%%sql
SELECT name, SUM(transaction_amt) AS total_amt
FROM individual
WHERE city = 'SAN FRANCISCO'
GROUP BY name
ORDER BY total_amt DESC
LIMIT 20;
%%sql
SELECT name, SUM(transaction_amt) AS total_amt
FROM individual
WHERE city = 'BERKELEY'
GROUP BY name
ORDER BY total_amt DESC
LIMIT 20;
Up to now we've looked at a single query at a time. SQL also allows us to nest queries in various ways. In this section we look at the cleaner examples of how to do this in SQL: views and Common Table Expressions (CTEs).
In earlier examples, we created new tables and populated them from the result of queries over stored tables. There are two main drawbacks of that approach that may concern us in some cases: the derived table redundantly takes up storage, and it can get out of date if the tables it was computed from are updated later.
For this reason, SQL provides a notion of logical views: these are basically named queries that are re-evaluated upon each reference.
The syntax is straightforward:
CREATE VIEW <name> AS
<SELECT statement>;
The resulting view <name> can be used in a SELECT query, but not in an INSERT, DELETE, or UPDATE query!
As an example, we might want a view that stores just some summary statistics of transaction_amt values for each date:
%%sql $postgresql_uri
DROP VIEW IF EXISTS date_stats;
CREATE VIEW date_stats AS
SELECT
transaction_dt AS day,
min(transaction_amt),
avg(transaction_amt),
stddev(transaction_amt),
max(transaction_amt)
FROM individual
GROUP BY transaction_dt
ORDER BY day;
%%sql
SELECT * from date_stats limit 5;
Notice that this did not create a table:
!psql ds100 -c "\dt"
Instead it created a view:
!psql ds100 -c "\dv"
We can list more about the view using the \d+ option:
!psql ds100 -c "\d+ date_stats"
Let's create a view of random values; we will even seed the random number generator.
%%sql $postgresql_uri
SELECT setseed(0.3);
DROP VIEW IF EXISTS rando;
CREATE VIEW rando(rownum, rnd) AS
SELECT rownum, round(random())::INTEGER
FROM generate_series(1,50) AS ind(rownum)
What is the sum of the rows in rando?
%%sql $postgresql_uri
SELECT SUM(rnd) FROM rando;
What was that value again?
%%sql $postgresql_uri
SELECT SUM(rnd) FROM rando;
The value changes with each invocation: the view's query is re-executed every time the view is referenced.
Views can help: they give a query a name that can be reused without storing its results. Problem: the database ends up littered with views named temp1, temp1_joey, temp1_joey_fixed, ... We need a mechanism to decompose a query into views for the scope of a single query.
Common Table Expressions (WITH)
Think of these as views that exist only during the query.
If we're only going to use a view within a single query, it is a little inelegant to CREATE it, and then have to DROP it later to recycle the view name.
Common Table Expressions (CTEs) are like views that we use on-the-fly. (If you know about lambdas in Python, you can think of CTEs as lambda views.) The syntax for CTEs is to use a WITH clause in front of the query:
WITH <name> [(renamed columns)] AS
(<SELECT statement>)
[, <name2> AS (<SELECT statement>)...]
If you need multiple CTEs, you separate them with commas. We can rewrite our query above without a view as follows:
%%sql $postgresql_uri
WITH per_day_stats AS (
SELECT
to_date(transaction_dt, 'MMDDYYYY') as day, -- Date Parsing
min(transaction_amt),
avg(transaction_amt),
stddev(transaction_amt),
max(transaction_amt)
FROM indiv_sample
GROUP BY transaction_dt
)
SELECT day, stddev, max - min AS spread
FROM per_day_stats
WHERE stddev IS NOT NULL
ORDER by stddev DESC
LIMIT 5
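As noted above, multiple CTEs are separated by commas. Here is a minimal sketch of that pattern using generate_series, so it doesn't depend on the FEC data:
%%sql
WITH evens(x) AS (
    SELECT i FROM generate_series(0, 8, 2) AS t(i)
),
odds(x) AS (
    SELECT i FROM generate_series(1, 9, 2) AS t(i)
)
SELECT evens.x AS even, odds.x AS odd
FROM evens, odds
WHERE odds.x = evens.x + 1;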
Suppose now we want to determine which committees received the most money.
%%sql $postgresql_uri
SELECT cmte_id, SUM(transaction_amt) AS total_amt
FROM individual
GROUP BY cmte_id
ORDER BY total_amt DESC
LIMIT 10
!psql ds100 -c "\d"
!psql ds100 -c "\d cm"
We can join the committee description to get the names of the committees that received the most funds.
%%sql $postgresql_uri
WITH indv2cm AS
(
SELECT cmte_id, SUM(transaction_amt) AS total_amt
FROM individual
GROUP BY cmte_id
ORDER BY total_amt DESC
)
SELECT cm.cmte_nm, indv2cm.total_amt
FROM cm, indv2cm
WHERE cm.cmte_id = indv2cm.cmte_id
ORDER BY indv2cm.total_amt DESC
LIMIT 10
!psql ds100 -c "\d"
!psql ds100 -c "\d cn"
!psql ds100 -c "\d ccl"
%%sql
SELECT cn.cand_name, SUM(indiv.transaction_amt) AS total_amt
FROM individual AS indiv, ccl, cn
WHERE indiv.cmte_id = ccl.cmte_id AND
ccl.cand_id = cn.cand_id
GROUP BY cn.cand_name
ORDER BY total_amt DESC
LIMIT 10
%%sql
SELECT cn.cand_name, SUM(indiv.transaction_amt) AS total_amt
FROM individual AS indiv, ccl, cn
WHERE indiv.cmte_id = ccl.cmte_id AND
ccl.cand_id = cn.cand_id AND
indiv.state = 'CA'
GROUP BY cn.cand_name
ORDER BY total_amt DESC
LIMIT 10
%%sql
SELECT cn.cand_name, SUM(indiv.transaction_amt) AS total_amt
FROM individual AS indiv, ccl, cn
WHERE indiv.cmte_id = ccl.cmte_id AND
ccl.cand_id = cn.cand_id AND
indiv.state = 'FL'
GROUP BY cn.cand_name
ORDER BY total_amt DESC
LIMIT 10
%%sql
SELECT cn.cand_name, SUM(indiv.transaction_amt) AS total_amt
FROM individual AS indiv, ccl, cn
WHERE indiv.cmte_id = ccl.cmte_id AND
ccl.cand_id = cn.cand_id AND
indiv.state = 'TX'
GROUP BY cn.cand_name
ORDER BY total_amt DESC
LIMIT 10
%%sql
SELECT cm.cmte_nm, SUM(transaction_amt) AS total_amt
FROM pas, cm
WHERE pas.cmte_id = cm.cmte_id
GROUP BY cm.cmte_nm
ORDER BY total_amt DESC
LIMIT 5
%%sql
SELECT cn.cand_name, SUM(transaction_amt) AS total_amt
FROM pas, cn
WHERE pas.cand_id = cn.cand_id
GROUP BY cn.cand_name
ORDER BY total_amt DESC
LIMIT 5