In this notebook we walk through basic SQL queries and review how to interact with SQL from Python. We will be using the core sqlalchemy library as an abstraction layer on top of the underlying database management system. We will not be using the sophisticated object-relational mapping (ORM) features available in sqlalchemy.
import sqlalchemy
For the first part of this assignment we will connect to a local SQLite database. SQLite is a simple file-oriented database management "library." In the following we launch a SQLite database stored in the data100.db file in the local folder.
# Delete the database if it already exists.
from pathlib import Path
dbfile = Path("data100.db")
if dbfile.exists():
    dbfile.unlink()
sqlite_uri = "sqlite:///data100.db"
sqlite_engine = sqlalchemy.create_engine(sqlite_uri)
On the first call there should be no tables present in the file.
sqlite_engine.table_names()
To start, we are going to define a toy relation (a.k.a. table), populate it with some toy data, and work through some basic SQL. First, let's create the table of students.
The table contains the following columns:
- name: Text and unique (Primary Key)
- gpa: Float with range constraints
- age: Integer
- dept: Text
- gender: Char (e.g., 'M', 'F')
Note that each column has a fixed data type.
The DBMS will enforce these types as data is inserted.
Note also the definition of a primary key, as we discussed in the EDA lecture.
The DBMS will enforce the uniqueness of values in the key columns.
sql_expr = """
CREATE TABLE students(
name TEXT PRIMARY KEY,
gpa FLOAT CHECK (gpa >= 0.0 and gpa <= 4.0),
age INTEGER,
dept TEXT,
gender CHAR);
"""
result = sqlite_engine.execute(sql_expr)
The query returns a sqlalchemy ResultProxy object:
result?
We can check to see if the result object contains any data:
result.returns_rows
Now let's manually insert some values into the table. Note that string values are quoted with the ' character, and that the values must appear in the same order as the columns in the create table statement!
sql_expr = """
INSERT INTO students VALUES
('Sergey Brin', 2.8, 45, 'CS', 'M'),
('Danah Boyd', 3.9, 40, 'CS', 'F'),
('Bill Gates', 1.0, 63, 'CS', 'M'),
('Hillary Mason', 4.0, 39, 'DATASCI', 'F'),
('Mike Olson', 3.7, 53, 'CS', 'M'),
('Mark Zuckerberg', 3.8, 34, 'CS', 'M'),
('Cheryl Sandberg', 3.6, 49, 'BUSINESS', 'F'),
('Susan Wojcicki', 3.8, 50, 'BUSINESS', 'F'),
('Marissa Meyer', 2.6, 43, 'BUSINESS', 'F')
;
"""
result = sqlite_engine.execute(sql_expr)
Again we see that this query returns nothing:
result.returns_rows
Now that we have populated the table we can construct a query to extract the results.
sql_expr = """
SELECT * FROM students;
"""
result = sqlite_engine.execute(sql_expr)
result.returns_rows
The query returns a persistent connection in the form of a cursor which can be used to read data from the database.
[r for r in result.cursor]
However, as we read the cursor we advance it and so it can only be used once:
[r for r in result.cursor]
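To make the one-pass behavior concrete, here is a self-contained sketch using the standard-library sqlite3 module with a toy in-memory database (not the notebook's data100.db engine): reading the cursor a second time yields nothing, and you must re-run the query to get a fresh cursor.

```python
import sqlite3

# Build a tiny in-memory table to demonstrate cursor exhaustion.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])

cur = conn.execute("SELECT x FROM t")
first = [r for r in cur]   # consumes the cursor
second = [r for r in cur]  # cursor is already exhausted

# Re-executing the query produces a fresh cursor.
again = conn.execute("SELECT x FROM t").fetchall()
print(first, second, again)
```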
You can also use Pandas to query the database. Here we pass the engine (or a connection) into the pandas.read_sql function:
import pandas as pd
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
What happens if we try to insert another record with the same primary key (name)?
sql_expr = """
INSERT INTO students VALUES ('Bill Gates', 4.0, 60, 'BUSINESS', 'M')
"""
try:
    result = sqlite_engine.execute(sql_expr)
except Exception as e:
    print(e)
Notice in the above block of code we use try: and except Exception as e:. This accomplishes two goals: we get to see the error message the DBMS returns, and the notebook keeps running instead of stopping at the failed statement.
We can also populate the database using Pandas. In the following we first obtain the Tips dataset from the seaborn visualization library as a dataframe:
import seaborn as sns
tips_df = sns.load_dataset("tips")
tips_df.head()
We can then use the pandas to_sql method to put the data in our SQLite database:
tips_df.to_sql("tips", sqlite_engine)
We can see that a new table has been added:
sqlite_engine.table_names()
We can also query the table:
sql_expr = """
SELECT * FROM tips
"""
pd.read_sql(sql_expr, sqlite_engine)
There is no mechanism in standard SQL to access the schema associated with a database management system. However, sqlalchemy provides a simple abstraction layer. I found the following SQLAlchemy Quick Reference Sheet to be very helpful.
inspector = sqlalchemy.inspect(sqlite_engine)
inspector.get_table_names()
# Get column information
for col in inspector.get_columns("students"):
    print(col)
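The inspector can also report constraint information. Below is a self-contained sketch with a fresh in-memory engine (separate from the notebook's data100.db), using inspector.get_pk_constraint to recover the primary key we declared:

```python
import sqlalchemy

engine = sqlalchemy.create_engine("sqlite://")  # in-memory database
with engine.begin() as conn:
    conn.execute(sqlalchemy.text(
        "CREATE TABLE students(name TEXT PRIMARY KEY, gpa FLOAT)"))

inspector = sqlalchemy.inspect(engine)
pk = inspector.get_pk_constraint("students")
print(pk["constrained_columns"])  # the primary key column(s)
```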
We can delete rows using the DELETE statement.
sql_expr = """
DELETE FROM students
WHERE LOWER(name) = 'sergey brin'
"""
sqlite_engine.execute(sql_expr)
Notice we can rerun the above command multiple times. Why?
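The reason: DELETE removes all rows matching the predicate, and matching zero rows is not an error. A self-contained sketch with the standard-library sqlite3 module (a toy in-memory table, not the notebook's engine) shows this via the cursor's rowcount:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students(name TEXT PRIMARY KEY)")
conn.execute("INSERT INTO students VALUES ('Sergey Brin')")

# First delete removes one row; the second matches nothing but still succeeds.
rc1 = conn.execute("DELETE FROM students WHERE name = 'Sergey Brin'").rowcount
rc2 = conn.execute("DELETE FROM students WHERE name = 'Sergey Brin'").rowcount
print(rc1, rc2)
```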
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Restoring Sergey
sql_expr = """
INSERT INTO students VALUES
('Sergey Brin', 2.8, 45, 'CS', 'M');
"""
sqlite_engine.execute(sql_expr)
The above computation cannot be run more than once:
try:
    sql_expr = """
    INSERT INTO students VALUES
        ('Sergey Brin', 2.8, 45, 'CS', 'M');
    """
    sqlite_engine.execute(sql_expr)
except Exception as e:
    print(e)
Notice that the order of the records has changed. We actually have no guarantee on where Sergey is inserted in the database.
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
What is Bill Gates' GPA?
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Wow, Bill has a low GPA; let's lend him a hand.
sql_expr = """
UPDATE students
SET gpa = 1.0 + gpa
WHERE LOWER(name) = 'bill gates';
"""
sqlite_engine.execute(sql_expr)
And let's check the table now:
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
In the update statement we decide which rows get updated based entirely on the values in each row, as checked by the where clause. There is no notion of any information outside the values in the row--e.g. there are no "object identifiers" or "row numbers"... everything is just the data and only the data.
The integrity constraints we imposed earlier can be used to improve data quality.
Suppose we try to update everyone's GPA in CS to be one point higher.
try:
    sql_expr = """
    UPDATE students
    SET gpa = 1.0 + gpa
    WHERE dept = 'CS';
    """
    sqlite_engine.execute(sql_expr)
except Exception as e:
    print(e)
The above code fails. Why? (Check the gpa column: several CS students already have GPAs above 3.0, so adding one would violate the CHECK constraint.)
The DROP TABLE command is used to remove tables from the database (be careful!):
sql_expr = """
DROP TABLE tips;
"""
sqlite_engine.execute(sql_expr)
Notice that the tips
table no longer exists:
sqlite_engine.table_names()
Now let's start looking at some slightly more interesting queries. The canonical SQL query block includes the following clauses, in the order they appear. Square brackets indicate optional clauses.
SELECT ...
FROM ...
[WHERE ...]
[GROUP BY ...]
[HAVING ...]
[ORDER BY ...]
[LIMIT ...];
Query blocks can reference one or more tables, and be nested in various ways. Before we worry about multi-table queries or nested queries, we'll work our way through examples that exercise all of these clauses on a single table.
SELECT and WHERE Clauses
The list that follows SELECT determines which columns to include in the output.
sql_expr = """
SELECT name
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
The WHERE clause determines which rows to include by specifying a predicate (a boolean expression). Rows (tuples) that satisfy this expression are returned.
sql_expr = """
SELECT name, gpa
FROM students
WHERE dept = 'CS'
"""
pd.read_sql(sql_expr, sqlite_engine)
We can use the OR and AND keywords to combine predicates.
sql_expr = """
SELECT name, gpa, dept
FROM students
WHERE dept = 'CS' OR name < 'E';
"""
pd.read_sql(sql_expr, sqlite_engine)
And of course we can specify both rows and columns explicitly. If we have a primary key, we can filter things down to even the cell level via a select list of one column, and a where clause checking equality on the primary key columns:
sql_expr = """
SELECT gpa
FROM students
WHERE name = 'Bill Gates';
"""
pd.read_sql(sql_expr, sqlite_engine)
Note that even this "single-celled" response still has a uniform data type of a relation.
SQL expressions take in tables and always produce tables. This is unlike pandas dataframe operations where some dataframe operations return a Series or even a single value.
SQL treats query results as a multiset, i.e. a relation can have tuples that are identical to each other. Sometimes, however, we don't want to keep the duplicates; we want to eliminate them. This is done simply by adding the keyword DISTINCT after the SELECT statement:
sql_expr = """
SELECT DISTINCT dept
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Which rows are used when taking the distinct entries? Does it really matter?
Note that distinct operates at the level of the tuple, not at the level of the column.
sql_expr = """
SELECT DISTINCT dept, gender
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
SQL has a wide range of functions that can be applied to each attribute in the select list. Notice that we can alias (name) the columns with AS. The complete list of built-in PostgreSQL functions is available here.
sql_expr = """
SELECT UPPER(name), LOWER(dept) AS d, gpa/4.0 AS GPA_ratio
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Unfortunately, SQLite has a comparatively limited set of built-in functions. However, most modern full-featured database management systems offer a rich set of transformations.
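That said, SQLite does ship a core set of scalar functions such as UPPER, LOWER, ROUND, and LENGTH. A self-contained sketch (standard-library sqlite3 module, toy in-memory table rather than the notebook's engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students(name TEXT, gpa FLOAT)")
conn.execute("INSERT INTO students VALUES ('Bill Gates', 1.0)")

# UPPER and ROUND are among SQLite's built-in scalar functions.
row = conn.execute(
    "SELECT UPPER(name), ROUND(gpa / 4.0, 2) FROM students").fetchone()
print(row)
```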
SQL also provides a number of useful aggregation functions, for example AVG.
sql_expr = """
SELECT AVG(age)
FROM students
WHERE dept = 'CS'
"""
pd.read_sql(sql_expr, sqlite_engine)
sql_expr = """
SELECT AVG(age)
FROM students
"""
pd.read_sql(sql_expr, sqlite_engine)
Other aggregation functions include SUM, COUNT, MAX, and MIN.
sql_expr = """
SELECT SUM(age)
FROM students
"""
pd.read_sql(sql_expr, sqlite_engine)
The COUNT function can take * as an argument: COUNT(*) counts every row, while COUNT(column) counts only the rows where that column is non-null.
sql_expr = """
SELECT COUNT(*)
FROM students
"""
pd.read_sql(sql_expr, sqlite_engine)
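The distinction between COUNT(*) and COUNT(column) matters once NULLs appear. A self-contained sketch with the standard-library sqlite3 module and a toy table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (3,)])

# COUNT(*) counts all rows; COUNT(x) skips rows where x is NULL.
all_rows, non_null = conn.execute(
    "SELECT COUNT(*), COUNT(x) FROM t").fetchone()
print(all_rows, non_null)
```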
Now that you can slice and dice tables into columns, rows and cells, you have enough knowledge to poke around in a database. Let's move on to skills that you'll need as a data scientist.
GROUP BY aggregation in SQL is a lot like the group by in Pandas. SQL provides a family of aggregate functions for use in the select clause. In the simplest form, queries with aggregates in the select clause generate a single row of output, with each aggregate function summarizing all the rows of input. You can have many aggregate functions in your select clause. A list of built-in aggregate functions in PostgreSQL is here.
In the following we compute the average age of students in each department.
sql_expr = """
SELECT dept, AVG(age) as avg_age
FROM students
GROUP BY dept
"""
pd.read_sql(sql_expr, sqlite_engine)
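To see the correspondence with Pandas, this self-contained sketch (hypothetical toy data, standard-library sqlite3 connection) computes the same per-department average with DataFrame.groupby:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students(name TEXT, dept TEXT, age INTEGER)")
conn.executemany("INSERT INTO students VALUES (?, ?, ?)",
                 [("A", "CS", 20), ("B", "CS", 30), ("C", "BUSINESS", 40)])

df = pd.read_sql("SELECT * FROM students", conn)
# Pandas analogue of: SELECT dept, AVG(age) FROM students GROUP BY dept
avg_age = df.groupby("dept")["age"].mean()
print(avg_age)
```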
We can also include the number of students in each department using the COUNT aggregation function.
sql_expr = """
SELECT dept, AVG(age) as avg_age, COUNT(*) as count
FROM students
GROUP BY dept
"""
pd.read_sql(sql_expr, sqlite_engine)
Suppose we wanted to retrieve only the groups with more than 2 students. We cannot use a WHERE clause for this, because WHERE filters individual rows before grouping; referencing an aggregate there causes an error.
try:
    sql_expr = """
    SELECT dept, AVG(age) as avg_age
    FROM students
    WHERE COUNT(age) > 2
    GROUP BY dept
    """
    pd.read_sql(sql_expr, sqlite_engine)
except Exception as e:
    print(e)
We can use the HAVING clause to apply a predicate to groups.
sql_expr = """
SELECT dept, AVG(age) as avg_age, COUNT(*) as cnt
FROM students
GROUP BY dept
HAVING COUNT(*) >= 2
"""
pd.read_sql(sql_expr, sqlite_engine)
The ORDER BY clause in SQL allows you to order your output rows, in either ascending (ASC) or descending (DESC) order of the values in columns. For example:
sql_expr = """
SELECT name, gpa, age
FROM students
WHERE name > 'E'
ORDER BY gpa, name;
"""
pd.read_sql(sql_expr, sqlite_engine)
sql_expr = """
SELECT name, gpa, age
FROM students
WHERE name > 'E'
ORDER BY gpa DESC, name ASC;
"""
pd.read_sql(sql_expr, sqlite_engine)
The limit clause limits the number of elements returned.
sql_expr = """
SELECT * FROM students
LIMIT 3
"""
pd.read_sql(sql_expr, sqlite_engine)
Why do we use the LIMIT clause?
Often the database we are querying is massive, and retrieving the entire table while debugging a query can be costly in time and system resources. However, we should avoid using LIMIT when constructing a sample of the data.
Which elements are returned?
Whichever rows happen to come first: the order is arbitrary beyond anything specified by an ORDER BY clause.
Is this a random sample?
No, be careful!
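If a (pseudo-)random sample is what you want, SQLite can shuffle the rows before limiting with ORDER BY RANDOM(); note this forces a full scan, so it can be expensive on large tables. A self-contained sketch with the standard-library sqlite3 module and a toy table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

# ORDER BY RANDOM() shuffles the rows, so LIMIT takes a random subset.
sample = conn.execute(
    "SELECT x FROM t ORDER BY RANDOM() LIMIT 5").fetchall()
print(len(sample))
```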
Try to figure out what the query below will compute.
sql_expr = """
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*) as count
FROM students
WHERE gender = 'F'
GROUP BY dept
HAVING COUNT(*) >2
"""
pd.read_sql(sql_expr, sqlite_engine)
Try to write a SQL statement that returns the average percent tip for diners who came on the weekends, grouped by sex and smoker status and ordered by increasing tip percentage.
tips_df = sns.load_dataset("tips")
tips_df.to_sql("tips", sqlite_engine)
sql_expr = """
SELECT * from tips
LIMIT 5
"""
pd.read_sql(sql_expr, sqlite_engine)
sql_expr = """
SELECT sex, smoker, avg(tip/total_bill) as pct
FROM tips
WHERE day = 'Sun' OR day = 'Sat'
GROUP BY sex, smoker
ORDER BY pct
"""
pd.read_sql(sql_expr, sqlite_engine)
sql_expr = """
CREATE TABLE Sailors (
sid INTEGER,
sname CHAR(20),
rating INTEGER,
age REAL,
PRIMARY KEY (sid));
"""
result = sqlite_engine.execute(sql_expr)
sql_expr = """
CREATE TABLE Reserves (
sid INTEGER,
bid INTEGER,
day DATE,
PRIMARY KEY (sid, bid, day),
FOREIGN KEY (sid) REFERENCES Sailors,
FOREIGN KEY (bid) REFERENCES Boats);
"""
result = sqlite_engine.execute(sql_expr)
sql_expr = """
CREATE TABLE Boats (
bid INTEGER,
bname CHAR (20),
color CHAR(10),
PRIMARY KEY (bid));
"""
result = sqlite_engine.execute(sql_expr)
sql_expr = """
INSERT INTO Sailors VALUES
(1, 'Popeye', 10, 22),
(2, 'OliveOyl', 11, 39),
(3, 'Garfield', 1, 27),
(4, 'Bob', 5, 19),
(5, 'SpongeBob', 11, 2);
"""
result = sqlite_engine.execute(sql_expr)
sql_expr = """
INSERT INTO Boats VALUES
(101, 'Nina', 'red'),
(102, 'Pinta', 'green'),
(103, 'Santa Maria', 'blue');
"""
result = sqlite_engine.execute(sql_expr)
sql_expr = """
INSERT INTO Reserves VALUES
(1, 102, '9/12/2017'),
(2, 102, '9/13/2017'),
(1, 101, '10/01/2017');
"""
result = sqlite_engine.execute(sql_expr)
sql_expr = """
SELECT * FROM Sailors
"""
pd.read_sql(sql_expr, sqlite_engine)
sql_expr = """
SELECT * FROM Reserves
"""
pd.read_sql(sql_expr, sqlite_engine)
sql_expr = """
SELECT s.sid, s.sname, r.bid
FROM Sailors s, Reserves r
WHERE s.sid = r.sid
AND s.age > 20;
"""
pd.read_sql(sql_expr, sqlite_engine)
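The comma-separated FROM clause above expresses an implicit join; the same query can be written with explicit INNER JOIN ... ON syntax. A self-contained sketch with the standard-library sqlite3 module and a cut-down version of the Sailors/Reserves tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Sailors(sid INTEGER PRIMARY KEY, sname TEXT, age REAL)")
conn.execute("CREATE TABLE Reserves(sid INTEGER, bid INTEGER)")
conn.executemany("INSERT INTO Sailors VALUES (?, ?, ?)",
                 [(1, "Popeye", 22), (4, "Bob", 19)])
conn.executemany("INSERT INTO Reserves VALUES (?, ?)", [(1, 101), (1, 102)])

# Explicit-join form of: FROM Sailors s, Reserves r WHERE s.sid = r.sid ...
rows = conn.execute("""
    SELECT s.sid, s.sname, r.bid
    FROM Sailors s INNER JOIN Reserves r ON s.sid = r.sid
    WHERE s.age > 20
""").fetchall()
print(sorted(rows))
```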