In this notebook we walk through basic SQL queries and review how to interact with SQL from Python. We will be using the core sqlalchemy library as an abstraction layer on top of the underlying database management system. We will not be using the more sophisticated object relational mapping (ORM) features available in sqlalchemy.
import sqlalchemy
For the first part of this assignment we will connect to a local SQLite database. SQLite is a simple file-oriented database management "library." In the following we launch a SQLite database stored in the data100.db file in the local folder.
# Delete the database if it already exists.
from pathlib import Path
dbfile = Path("data100.db")
if dbfile.exists():
    dbfile.unlink()
sqlite_uri = "sqlite:///data100.db"
sqlite_engine = sqlalchemy.create_engine(sqlite_uri)
On the first call there should be no tables present in the file.
sqlite_engine.table_names()
To start, we are going to define a toy relation (a.k.a. table), populate it with some toy data, and work through some basic SQL. First, let's create the table of students.
The table contains the following columns:

name: Text and unique (the primary key)
gpa: Float with range constraints
age: Integer
dept: Text
gender: Char (e.g., 'M', 'F')

Note that each column has a fixed data type. The DBMS will enforce these types as data is inserted.
Note also the definition of a primary key, as we discussed in the EDA lecture.
The DBMS will enforce the uniqueness of values in the key columns.
sql_expr = """
CREATE TABLE students(
name TEXT PRIMARY KEY,
gpa FLOAT CHECK (gpa >= 0.0 and gpa <= 4.0),
age INTEGER,
dept TEXT,
gender CHAR);
"""
result = sqlite_engine.execute(sql_expr)
The query returns a sqlalchemy ResultProxy object:
result?
We can check to see if the result object contains any data:
result.returns_rows
Now let's manually insert some values into the table. Note that:

1. strings in SQL must be quoted with the single quote (') character, and
2. the inserted values must appear in the same order as the columns in the create table statement!

sql_expr = """
INSERT INTO students VALUES
('Sergey Brin', 2.8, 40, 'CS', 'M'),
('Danah Boyd', 3.9, 35, 'CS', 'F'),
('Bill Gates', 1.0, 60, 'CS', 'M'),
('Hillary Mason', 4.0, 35, 'DATASCI', 'F'),
('Mike Olson', 3.7, 50, 'CS', 'M'),
('Mark Zuckerberg', 4.0, 30, 'CS', 'M'),
('Sheryl Sandberg', 4.0, 47, 'BUSINESS', 'F'),
('Susan Wojcicki', 4.0, 46, 'BUSINESS', 'F'),
('Marissa Meyer', 4.0, 45, 'BUSINESS', 'F');
"""
result = sqlite_engine.execute(sql_expr)
Again we see that this query returns nothing:
result.returns_rows
Now that we have populated the table we can construct a query to extract the results.
sql_expr = """
SELECT * FROM students;
"""
result = sqlite_engine.execute(sql_expr)
result.returns_rows
The query returns a persistent connection in the form of a cursor, which can be used to read data from the database.
[r for r in result.cursor]
However, as we read the cursor we advance it and so it can only be used once:
[r for r in result.cursor]
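If we need the rows again, the simplest fix is to re-execute the query and obtain a fresh result. A minimal sketch:
# Re-running the query gives a new result object with a fresh cursor.
result = sqlite_engine.execute("SELECT * FROM students;")
result.fetchall()  # fetchall() buffers all remaining rows into a Python list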
You can also use Pandas to query the database. Here we pass the engine (or a connection) into the pandas.read_sql function:
import pandas as pd
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
What happens if we try to insert another record with the same primary key (name)?
sql_expr = """
INSERT INTO students VALUES ('Bill Gates', 4.0, 60, 'BUSINESS', 'M')
"""
try:
    result = sqlite_engine.execute(sql_expr)
except Exception as e:
    print(e)
Notice in the above block of code we use try: and except Exception as e:. This accomplishes two goals: we can catch the error and print its message, and the notebook continues to run instead of stopping at the exception.
We can also populate the database using Pandas. In the following we first obtain the Tips dataset from the seaborn visualization library as a dataframe:
import seaborn as sns
tips_df = sns.load_dataset("tips")
tips_df.head()
We can then use the DataFrame.to_sql method to put the data in our SQLite database:
tips_df.to_sql("tips", sqlite_engine)
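By default, to_sql also stores the dataframe's index as a column and raises an error if the table already exists. Both behaviors can be controlled with keyword arguments; a sketch:
# if_exists="replace" drops and recreates the table; index=False omits the index column.
tips_df.to_sql("tips", sqlite_engine, if_exists="replace", index=False)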
We can see that a new table has been added:
sqlite_engine.table_names()
We can also query the table:
sql_expr = """
SELECT * FROM tips
WHERE size > 4;
"""
pd.read_sql(sql_expr, sqlite_engine)
There is no mechanism in standard SQL to access the schema associated with each database management system. However, sqlalchemy provides a simple abstraction layer. I found the following SQL Alchemy Quick Reference Sheet to be very helpful.
inspector = sqlalchemy.inspect(sqlite_engine)
inspector.get_table_names()
# Get column information
for col in inspector.get_columns("students"):
    print(col)
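The inspector can also report constraint information. For example, a sketch that asks for the primary key of the students table:
# get_pk_constraint returns a dict naming the primary key columns.
inspector.get_pk_constraint("students")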
The DROP command is used to remove tables from the database (be careful!):
sql_expr = """
DROP TABLE tips;
"""
sqlite_engine.execute(sql_expr)
Notice that the tips table no longer exists:
sqlite_engine.table_names()
What is Bill Gates' GPA?
sql_expr = """
SELECT * FROM students
WHERE name LIKE '%Bill%' -- LIKE does simple pattern matching
"""
pd.read_sql(sql_expr, sqlite_engine)
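In a LIKE pattern, % matches any (possibly empty) substring and _ matches exactly one character. A small illustration with a hypothetical pattern:
sql_expr = """
SELECT name FROM students
WHERE name LIKE '_ill%'  -- one leading character, then 'ill', then anything
"""
pd.read_sql(sql_expr, sqlite_engine)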
Wow, Bill has a low GPA. Let's lend him a hand.
sql_expr = """
UPDATE students
SET gpa = 1.0 + gpa
WHERE LOWER(name) = 'bill gates';
"""
sqlite_engine.execute(sql_expr)
And let's check the table now:
sql_expr = """
SELECT * FROM students
WHERE name LIKE '%Bill%' -- LIKE does simple pattern matching
"""
pd.read_sql(sql_expr, sqlite_engine)
In the update statement we decide which rows get updated based entirely on the values in each row, as checked by the where clause. There is no notion of any information outside the values in the row; e.g., there are no "object identifiers" or "row numbers". Everything is just the data and only the data.
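For example, a single UPDATE can modify every row that satisfies the WHERE predicate at once. A sketch (not run here) that would add a year to the age of every CS student:
sql_expr = """
UPDATE students
SET age = age + 1
WHERE dept = 'CS';
"""
# sqlite_engine.execute(sql_expr)  # would update all CS rows in one statement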
The integrity constraints we imposed earlier can be used to improve data quality.
Suppose Mark logged into the database and tried to give himself a 5.0?
try:
    sql_expr = """
    UPDATE students
    SET gpa = 5.0
    WHERE LOWER(name) LIKE '%zuck%';
    """
    sqlite_engine.execute(sql_expr)
except Exception as e:
    print(e)
The above code fails. Why? (Check the GPA constraint we declared when creating the table.)
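We can verify that the failed update left the data untouched:
sql_expr = """
SELECT name, gpa FROM students
WHERE LOWER(name) LIKE '%zuck%';
"""
pd.read_sql(sql_expr, sqlite_engine)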
We can delete rows in much the same way we update rows:
sql_expr = """
DELETE FROM students
WHERE name = 'Sergey Brin'
"""
sqlite_engine.execute(sql_expr)
Notice we can rerun the above command multiple times. Why?
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Restoring Sergey:
sql_expr = """
INSERT INTO students VALUES
('Sergey Brin', 4.0, 40, 'CS', 'M');
"""
sqlite_engine.execute(sql_expr)
The above insertion cannot be run more than once:
try:
    sql_expr = """
    INSERT INTO students VALUES
    ('Sergey Brin', 4.0, 40, 'CS', 'M');
    """
    sqlite_engine.execute(sql_expr)
except Exception as e:
    print(e)
Notice that the order of the records has changed. We actually have no guarantee of where Sergey is inserted in the database.
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Now let's start looking at some slightly more interesting queries. The canonical SQL query block includes the following clauses, in the order they appear. Square brackets indicate optional clauses.
SELECT ...
FROM ...
[WHERE ...]
[GROUP BY ...]
[HAVING ...]
[ORDER BY ...]
[LIMIT ...];
Query blocks can reference one or more tables, and be nested in various ways. Before we worry about multi-table queries or nested queries, we'll work our way through examples that exercise all of these clauses on a single table.
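To preview where we are headed, here is a sketch of a single-table query that exercises every one of these clauses at once:
sql_expr = """
SELECT dept, AVG(gpa) AS avg_gpa
FROM students
WHERE gender = 'F'
GROUP BY dept
HAVING COUNT(*) >= 1
ORDER BY avg_gpa DESC
LIMIT 5;
"""
pd.read_sql(sql_expr, sqlite_engine)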
The SELECT List

The SELECT list determines which columns to include in the output.
sql_expr = """
SELECT name
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
SQL has a wide range of functions that can be applied to each attribute in the select list. Notice that we can alias (rename) the columns with AS. The complete list of built-in PostgreSQL functions is available here.
sql_expr = """
SELECT UPPER(name) AS n, LOWER(dept) as d, gpa/4.0 AS GPA_ratio
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Unfortunately, SQLite has very limited functionality. However, most modern full-featured database management systems offer a rich set of transformations.
As we know, SQL is a multiset logic: the number of duplicate rows in a query result is meaningful and preserved. Sometimes, however, we don't want to keep the duplicates; we want to eliminate them. This is done simply by adding the keyword DISTINCT after the SELECT keyword:
sql_expr = """
SELECT DISTINCT dept
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Which rows are used when taking the distinct entries? Does it really matter?
The WHERE Clause

The WHERE clause determines which rows to include by specifying a predicate (a boolean expression). Rows (tuples) that satisfy this expression are returned.
sql_expr = """
SELECT name, gpa
FROM students
WHERE dept = 'CS'
"""
pd.read_sql(sql_expr, sqlite_engine)
And of course we can specify both rows and columns explicitly. If we have a primary key, we can filter things down to even the cell level via a select list of one column and a where clause checking equality on the primary key columns:
sql_expr = """
SELECT gpa
FROM students
WHERE name = 'Bill Gates';
"""
pd.read_sql(sql_expr, sqlite_engine)
Note that even this "single-celled" response still has a uniform data type of a relation.
SQL expressions take in tables and always produce tables. How does this compare to Pandas?
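By contrast, Pandas indexing is not closed in the same way: depending on how we index, we can get back a DataFrame, a Series, or a bare scalar. A small sketch of the same single-cell lookup (students_df here is just the table pulled into Pandas):
students_df = pd.read_sql("SELECT * FROM students;", sqlite_engine)
# A boolean mask with a column list returns a DataFrame;
# selecting a single column would yield a Series, and .iloc[0] a scalar.
students_df.loc[students_df["name"] == "Bill Gates", ["gpa"]]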
Now that you can slice and dice tables into columns, rows and cells, you have enough knowledge to poke around in a database. Let's move on to skills that you'll need as a data scientist.
GROUP BY aggregation in SQL is a lot like the groupby operation in Pandas. SQL provides a family of aggregate functions for use in the select clause. In the simplest form, queries with aggregates in the select clause generate a single row of output, with each aggregate function performing a summary of all the rows of input. You can have many aggregate functions in your select clause:
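For instance, a sketch combining several aggregates over the whole table:
sql_expr = """
SELECT COUNT(*) AS num_students, AVG(gpa) AS avg_gpa, MAX(age) AS max_age
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)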
A list of built-in aggregate functions in PostgreSQL is available here. In the following query we compute the average GPA as well as the number of students in each department:
sql_expr = """
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*) as count
FROM students
GROUP BY dept
"""
pd.read_sql(sql_expr, sqlite_engine)
We can use the HAVING clause to apply a predicate to groups.
sql_expr = """
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*)
FROM students
GROUP BY dept
HAVING COUNT(*) >= 2
"""
pd.read_sql(sql_expr, sqlite_engine)
sql_expr = """
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*) as count
FROM students
WHERE gender = 'F'
GROUP BY dept
HAVING COUNT(*) >= 2
"""
pd.read_sql(sql_expr, sqlite_engine)
As a nicety, SQL allows you to order your output rows, in either ascending (ASC) or descending (DESC) order of the values in columns. For example:
sql_expr = """
SELECT *
FROM students
ORDER BY gpa;
"""
pd.read_sql(sql_expr, sqlite_engine)
sql_expr = """
SELECT *
FROM students
ORDER BY gpa, age;
"""
pd.read_sql(sql_expr, sqlite_engine)
sql_expr = """
SELECT *
FROM students
ORDER BY gpa DESC, age ASC;
"""
pd.read_sql(sql_expr, sqlite_engine)
The LIMIT clause limits the number of rows returned.
sql_expr = """
SELECT * FROM students
LIMIT 3
"""
pd.read_sql(sql_expr, sqlite_engine)
Why do we use the LIMIT clause?

Often the database we are querying is massive, and retrieving the entire table while debugging a query can be costly in time and system resources. However, we should avoid using LIMIT when constructing a sample of the data.

Which rows are returned?

This depends on the order of the rows, which can be arbitrary beyond anything specified by ORDER BY clauses.
Is this a random sample?
No, be careful!
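If we actually want a random sample, one option in SQLite is to shuffle the rows before limiting. A sketch (RANDOM() is SQLite-specific, and sorting the whole table can be expensive):
sql_expr = """
SELECT * FROM students
ORDER BY RANDOM()  -- SQLite-specific; other systems offer e.g. TABLESAMPLE
LIMIT 3;
"""
pd.read_sql(sql_expr, sqlite_engine)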