Introduction to SQL

In this notebook we walk through basic SQL queries and review how to interact with SQL from Python. We will use the core sqlalchemy library as an abstraction layer on top of the underlying database management system; we will not be using the more sophisticated object-relational mapping (ORM) features available in sqlalchemy.

In [1]:
import sqlalchemy

Connecting to the Database

For the first part of this assignment we will connect to a local SQLite database. SQLite is a simple, file-oriented database management "library." In the following we set up a SQLite database stored in the data100.db file in the local folder.

In [2]:
# Delete the database if it already exists.
from pathlib import Path
dbfile = Path("data100.db")
if dbfile.exists():
    dbfile.unlink()

sqlite_uri = "sqlite:///data100.db"
sqlite_engine = sqlalchemy.create_engine(sqlite_uri)

On the first call there should be no tables present in the file.

In [3]:
sqlite_engine.table_names()
Out[3]:
[]

Creating a Table

To start, we are going to define a toy relation (a.k.a. table), populate it with some toy data, and work through some basic SQL. First, let's create the table of students.

The table contains columns:

  • name: Text and unique (Primary Key)
  • gpa: Float with range constraints
  • age: Integer
  • dept: Text
  • gender: Char (e.g., 'M', 'F')

Note that each column has a fixed data type.

The DBMS will enforce these types as data is inserted.

Note also the definition of a primary key, as we discussed in the EDA lecture.

The DBMS will enforce the uniqueness of values in the key columns.

In [4]:
sql_expr = """
CREATE TABLE students(
    name TEXT PRIMARY KEY, 
    gpa FLOAT CHECK (gpa >= 0.0 and gpa <= 4.0), 
    age INTEGER, 
    dept TEXT, 
    gender CHAR);
"""
result = sqlite_engine.execute(sql_expr)

The query returns a sqlalchemy ResultProxy object:

In [5]:
result?

We can check to see if the result object contains any data:

In [6]:
result.returns_rows
Out[6]:
False

Inserting Values into the Table

Now let's manually insert some values into the table. Note that:

  • strings in SQL must be quoted with a single quote ' character.
  • insertions need to have values in the same order as the columns in the create table statement!
In [7]:
sql_expr = """
INSERT INTO students VALUES 
  ('Sergey Brin', 2.8, 40, 'CS', 'M'),
  ('Danah Boyd', 3.9, 35, 'CS', 'F'),
  ('Bill Gates', 1.0, 60, 'CS', 'M'),
  ('Hillary Mason', 4.0, 35, 'DATASCI', 'F'),
  ('Mike Olson', 3.7, 50, 'CS', 'M'),
  ('Mark Zuckerberg', 4.0, 30, 'CS', 'M'),
  ('Sheryl Sandberg', 4.0, 47, 'BUSINESS', 'F'),
  ('Susan Wojcicki', 4.0, 46, 'BUSINESS', 'F'),
  ('Marissa Meyer', 4.0, 45, 'BUSINESS', 'F');

"""
result = sqlite_engine.execute(sql_expr)

Again we see that this query returns nothing:

In [8]:
result.returns_rows
Out[8]:
False

Querying the Table

Now that we have populated the table we can construct a query to extract the results.

In [9]:
sql_expr = """
SELECT * FROM students;
"""
result = sqlite_engine.execute(sql_expr)
In [10]:
result.returns_rows
Out[10]:
True

Iterating the Cursor

The query returns a ResultProxy wrapping a database cursor, which can be used to read rows from the database.

In [11]:
[r for r in result.cursor]
Out[11]:
[('Sergey Brin', 2.8, 40, 'CS', 'M'),
 ('Danah Boyd', 3.9, 35, 'CS', 'F'),
 ('Bill Gates', 1.0, 60, 'CS', 'M'),
 ('Hillary Mason', 4.0, 35, 'DATASCI', 'F'),
 ('Mike Olson', 3.7, 50, 'CS', 'M'),
 ('Mark Zuckerberg', 4.0, 30, 'CS', 'M'),
 ('Sheryl Sandberg', 4.0, 47, 'BUSINESS', 'F'),
 ('Susan Wojcicki', 4.0, 46, 'BUSINESS', 'F'),
 ('Marissa Meyer', 4.0, 45, 'BUSINESS', 'F')]

However, reading the cursor advances it, so the results can only be consumed once:

In [12]:
[r for r in result.cursor]
Out[12]:
[]
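
If we want to hold on to the rows, one option (a minimal sketch reusing the sqlite_engine defined above) is to re-run the query and materialize the results with fetchall before the cursor is consumed:

# Re-run the query and materialize every row up front; fetchall() drains
# the cursor once and returns a list of row tuples that we can reuse.
result = sqlite_engine.execute("SELECT * FROM students;")
rows = result.fetchall()
print(len(rows))    # the list can be iterated over as many times as we like
print(rows[:2])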

Using Pandas to Query the Database

You can also use Pandas to query the database. Here we pass the engine (or a connection) into the pandas.read_sql function:

In [13]:
import pandas as pd

sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[13]:
name gpa age dept gender
0 Sergey Brin 2.8 40 CS M
1 Danah Boyd 3.9 35 CS F
2 Bill Gates 1.0 60 CS M
3 Hillary Mason 4.0 35 DATASCI F
4 Mike Olson 3.7 50 CS M
5 Mark Zuckerberg 4.0 30 CS M
6 Sheryl Sandberg 4.0 47 BUSINESS F
7 Susan Wojcicki 4.0 46 BUSINESS F
8 Marissa Meyer 4.0 45 BUSINESS F

Primary Key Integrity Constraint

What happens if we try to insert another record with the same primary key (name)?

In [14]:
sql_expr = """
INSERT INTO students VALUES ('Bill Gates', 4.0, 60, 'BUSINESS', 'M')
"""
try:
    result = sqlite_engine.execute(sql_expr)
except Exception as e:
    print(e)
(sqlite3.IntegrityError) UNIQUE constraint failed: students.name [SQL: "\nINSERT INTO students VALUES ('Bill Gates', 4.0, 60, 'BUSINESS', 'M')\n"] (Background on this error at: http://sqlalche.me/e/gkpj)

Notice that in the above block of code we use try: and except Exception as e:. This accomplishes two goals:

  1. It catches the exception and prevents the notebook from terminating when the error occurs (we are expecting this error!).
  2. It hides the full stack trace and shows only the important message containing the final error.

Saving a Dataframe to a Database

We can also populate the database using Pandas. In the following we first obtain the Tips dataset from the seaborn visualization library as a dataframe:

In [15]:
import seaborn as sns

tips_df = sns.load_dataset("tips")
tips_df.head()
Out[15]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4

We can then use the DataFrame.to_sql method to put the data in our SQLite database:

In [16]:
tips_df.to_sql("tips", sqlite_engine)
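
Note that to_sql writes the dataframe index as an extra column (which is why an index column shows up in the queries below) and, by default, raises an error if the table already exists. As a sketch of the extra options (not what we run in this notebook), pandas accepts if_exists and index parameters to control both behaviors:

# A sketch only: overwrite the table if it already exists and skip
# writing the dataframe index as a separate column.
# tips_df.to_sql("tips", sqlite_engine, if_exists="replace", index=False)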

We can see that a new table has been added:

In [17]:
sqlite_engine.table_names()
Out[17]:
['students', 'tips']

We can also query the table:

In [18]:
sql_expr = """
SELECT * FROM tips
WHERE size > 4;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[18]:
index total_bill tip sex smoker day time size
0 125 29.80 4.20 Female No Thur Lunch 6
1 141 34.30 6.70 Male No Thur Lunch 6
2 142 41.19 5.00 Male No Thur Lunch 5
3 143 27.05 5.00 Female No Thur Lunch 6
4 155 29.85 5.14 Female No Sun Dinner 5
5 156 48.17 5.00 Male No Sun Dinner 6
6 185 20.69 5.00 Male No Sun Dinner 5
7 187 30.46 2.00 Male Yes Sun Dinner 5
8 216 28.15 3.00 Male Yes Sat Dinner 5

Exploring the Schema

Standard SQL does not provide a portable mechanism for accessing the schema of every database management system. However, sqlalchemy provides a simple abstraction layer. I found the SQLAlchemy Quick Reference Sheet to be very helpful.

In [19]:
inspector = sqlalchemy.inspect(sqlite_engine)
inspector.get_table_names()
Out[19]:
['students', 'tips']
In [20]:
# Get column information
for col in inspector.get_columns("students"):
    print(col)
{'name': 'name', 'type': TEXT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 1}
{'name': 'gpa', 'type': FLOAT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}
{'name': 'age', 'type': INTEGER(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}
{'name': 'dept', 'type': TEXT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}
{'name': 'gender', 'type': CHAR(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}
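
The inspector exposes other schema details as well. As a small sketch, we can ask which columns make up a table's primary key:

# Ask the inspector for the primary key constraint on the students table.
inspector.get_pk_constraint("students")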

Dropping Tables

The DROP TABLE command is used to remove tables from the database (be careful!):

In [21]:
sql_expr = """
DROP TABLE tips;
"""
sqlite_engine.execute(sql_expr)
Out[21]:
<sqlalchemy.engine.result.ResultProxy at 0x1a1024cba8>

Notice that the tips table no longer exists:

In [22]:
sqlite_engine.table_names()
Out[22]:
['students']

UPDATE values

What is Bill Gates' GPA?

In [23]:
sql_expr = """
SELECT * FROM students
WHERE name LIKE '%Bill%' -- LIKE does SQL pattern matching ('%' matches any sequence of characters)
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[23]:
name gpa age dept gender
0 Bill Gates 1.0 60 CS M

Wow, Bill has a low GPA; let's lend him a hand.

In [24]:
sql_expr = """
UPDATE students
SET gpa = 1.0 + gpa
WHERE LOWER(name) = 'bill gates';
"""
sqlite_engine.execute(sql_expr)
Out[24]:
<sqlalchemy.engine.result.ResultProxy at 0x1a1024c828>

And let's check the table now:

In [25]:
sql_expr = """
SELECT * FROM students
WHERE name LIKE '%Bill%' -- LIKE does SQL pattern matching ('%' matches any sequence of characters)
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[25]:
name gpa age dept gender
0 Bill Gates 2.0 60 CS M

Important Note

In the update statement we decide which rows get updated based entirely on the values in each row, as checked by the where clause. There is no notion of any information outside the values in the row--e.g. there are no "object identifiers" or "row numbers"... everything is just the data and only the data.
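
For example, because updates are driven purely by the row values matched by the WHERE clause, a single statement can touch many rows at once. The following is an illustration only (we do not execute it here, so later outputs are unaffected); it would bump the GPA of every CS student currently below 3.0:

# Illustration only: every row whose values satisfy the predicate is updated;
# there is no notion of "the third row" or any other positional identifier.
sql_expr = """
UPDATE students
SET gpa = gpa + 0.1
WHERE dept = 'CS' AND gpa < 3.0;
"""
# sqlite_engine.execute(sql_expr)  # not executed in this notebook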

Integrity Constraints

The integrity constraints we imposed earlier can be used to improve data quality.

What would happen if Mark logged into the database and tried to give himself a 5.0?

In [26]:
try:
    sql_expr = """
        UPDATE students
        SET gpa = 5.0
        WHERE LOWER(name) LIKE '%zuck%';
        """
    sqlite_engine.execute(sql_expr)
except Exception as e:
    print(e)
(sqlite3.IntegrityError) CHECK constraint failed: students [SQL: "\n        UPDATE students\n        SET gpa = 5.0\n        WHERE LOWER(name) LIKE '%zuck%';\n        "] (Background on this error at: http://sqlalche.me/e/gkpj)

The above code fails. Why? (Check the CHECK constraint we placed on gpa.)

Deleting Records

We can delete rows in much the same way we update rows:

In [27]:
sql_expr = """
DELETE FROM students 
    WHERE name = 'Sergey Brin'
"""
sqlite_engine.execute(sql_expr)
Out[27]:
<sqlalchemy.engine.result.ResultProxy at 0x10d64ed68>

Notice we can rerun the above command multiple times. Why?

In [28]:
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[28]:
name gpa age dept gender
0 Danah Boyd 3.9 35 CS F
1 Bill Gates 2.0 60 CS M
2 Hillary Mason 4.0 35 DATASCI F
3 Mike Olson 3.7 50 CS M
4 Mark Zuckerberg 4.0 30 CS M
5 Sheryl Sandberg 4.0 47 BUSINESS F
6 Susan Wojcicki 4.0 46 BUSINESS F
7 Marissa Meyer 4.0 45 BUSINESS F

Restoring Sergey

In [29]:
sql_expr = """
INSERT INTO students VALUES
    ('Sergey Brin', 4.0, 40, 'CS', 'M');
"""
sqlite_engine.execute(sql_expr)
Out[29]:
<sqlalchemy.engine.result.ResultProxy at 0x1a1025fb00>

The above computation cannot be run more than once:

In [30]:
try:
    sql_expr = """
    INSERT INTO students VALUES
        ('Sergey Brin', 4.0, 40, 'CS', 'M');
    """
    sqlite_engine.execute(sql_expr)
except Exception as e:
    print(e)
(sqlite3.IntegrityError) UNIQUE constraint failed: students.name [SQL: "\n    INSERT INTO students VALUES\n        ('Sergey Brin', 4.0, 40, 'CS', 'M');\n    "] (Background on this error at: http://sqlalche.me/e/gkpj)

Notice that the order of the records has changed. We actually have no guarantee about where Sergey is inserted in the table.

In [31]:
sql_expr = """
SELECT * FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[31]:
name gpa age dept gender
0 Danah Boyd 3.9 35 CS F
1 Bill Gates 2.0 60 CS M
2 Hillary Mason 4.0 35 DATASCI F
3 Mike Olson 3.7 50 CS M
4 Mark Zuckerberg 4.0 30 CS M
5 Sheryl Sandberg 4.0 47 BUSINESS F
6 Susan Wojcicki 4.0 46 BUSINESS F
7 Marissa Meyer 4.0 45 BUSINESS F
8 Sergey Brin 4.0 40 CS M

SELECT Queries

Now let's start looking at some slightly more interesting queries. The canonical SQL query block includes the following clauses, in the order they appear. Square brackets indicate optional clauses.

SELECT ...
  FROM ...
[WHERE ...]
[GROUP BY ...]
[HAVING ...]
[ORDER BY ...]
[LIMIT ...];

Query blocks can reference one or more tables, and be nested in various ways. Before we worry about multi-table queries or nested queries, we'll work our way through examples that exercise all of these clauses on a single table.
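
As a preview, here is a sketch of a single query against the students table that exercises every clause in the template; we unpack each clause in the sections that follow:

# One query touching every optional clause: filter rows, group them,
# filter the groups, order the output, and truncate it.
sql_expr = """
SELECT dept, AVG(gpa) AS avg_gpa, COUNT(*) AS count
FROM students
WHERE gender = 'M'
GROUP BY dept
HAVING COUNT(*) >= 1
ORDER BY avg_gpa DESC
LIMIT 5;
"""
pd.read_sql(sql_expr, sqlite_engine)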

The SELECT LIST

The SELECT list determines which columns to include in the output.

In [32]:
sql_expr = """
SELECT name
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[32]:
name
0 Bill Gates
1 Danah Boyd
2 Hillary Mason
3 Marissa Meyer
4 Mark Zuckerberg
5 Mike Olson
6 Sergey Brin
7 Sheryl Sandberg
8 Susan Wojcicki

Functions in the Selection List

SQL has a wide range of functions that can be applied to each attribute in the select list. Notice that we can alias (rename) the columns with AS. The complete list of built-in PostgreSQL functions is available here.

In [33]:
sql_expr = """
SELECT UPPER(name) AS n, LOWER(dept) as d, gpa/4.0 AS GPA_ratio
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[33]:
n d GPA_ratio
0 DANAH BOYD cs 0.975
1 BILL GATES cs 0.500
2 HILLARY MASON datasci 1.000
3 MIKE OLSON cs 0.925
4 MARK ZUCKERBERG cs 1.000
5 SHERYL SANDBERG business 1.000
6 SUSAN WOJCICKI business 1.000
7 MARISSA MEYER business 1.000
8 SERGEY BRIN cs 1.000

Unfortunately, SQLite offers a relatively limited set of built-in functions. However, most modern, full-featured database management systems offer a rich set of transformations.
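
That said, SQLite does ship with a core set of scalar functions such as LENGTH, SUBSTR, and ROUND. A small sketch:

# A few of the scalar functions that SQLite supports out of the box.
sql_expr = """
SELECT name, LENGTH(name) AS name_length, ROUND(gpa) AS gpa_rounded
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)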

Selecting Distinct Rows

As we know, SQL is a multiset logic, preserving the meaning of the number of duplicates in query results. Sometimes, however, we don't want to keep the duplicates; we want to eliminate them. This is done simply by adding the keyword DISTINCT after the SELECT keyword:

In [34]:
sql_expr = """
SELECT DISTINCT dept
FROM students;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[34]:
dept
0 CS
1 DATASCI
2 BUSINESS

Which rows are used when taking the distinct entries? Does it really matter?

The WHERE Clause

The WHERE clause determines which rows of the input to include by specifying a predicate (a boolean expression). Rows (tuples) that satisfy this expression are returned.

In [35]:
sql_expr = """
SELECT name, gpa
FROM students
WHERE dept = 'CS'
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[35]:
name gpa
0 Danah Boyd 3.9
1 Bill Gates 2.0
2 Mike Olson 3.7
3 Mark Zuckerberg 4.0
4 Sergey Brin 4.0

And of course we can specify both rows and columns explicitly. If we have a primary key, we can filter things down to even the cell level via a select list of one column, and a where clause checking equality on the primary key columns:

In [36]:
sql_expr = """
SELECT gpa
FROM students
WHERE name = 'Bill Gates';
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[36]:
gpa
0 2.0

SQL is Closed Over Tables!

Note that even this "single-celled" response still has the uniform data type of a relation.

SQL expressions take in tables and always produce tables. How does this compare to Pandas?
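
For contrast, here is a quick Pandas sketch: depending on how we index, Pandas can hand back a DataFrame, a Series, or a bare scalar, whereas a SQL query always yields a relation.

# In Pandas the result type depends on how we slice the dataframe.
df = pd.read_sql("SELECT * FROM students;", sqlite_engine)
print(type(df[["gpa"]]))   # DataFrame (table-like)
print(type(df["gpa"]))     # Series (a single column)
print(type(df.loc[df["name"] == "Bill Gates", "gpa"].iloc[0]))  # a plain scalar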

Now that you can slice and dice tables into columns, rows and cells, you have enough knowledge to poke around in a database. Let's move on to skills that you'll need as a data scientist.

Group By Aggregation

GROUP BY aggregation in SQL is a lot like groupby in Pandas. SQL provides a family of aggregate functions for use in the select list. In the simplest form, queries with aggregates in the select list generate a single row of output, with each aggregate function summarizing all of the input rows. You can have many aggregate functions in your select list.

A list of built-in aggregate functions in PostgreSQL is here.

In the following we compute the average GPA as well as the number of students in each department:

In [37]:
sql_expr = """
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*) as count
FROM students
GROUP BY dept
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[37]:
dept avg_gpa count
0 BUSINESS 4.00 3
1 CS 3.52 5
2 DATASCI 4.00 1

We can use the HAVING clause to apply a predicate to groups.

In [38]:
sql_expr = """
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*)
FROM students
GROUP BY dept
HAVING COUNT(*) >= 2
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[38]:
dept avg_gpa COUNT(*)
0 BUSINESS 4.00 3
1 CS 3.52 5
In [39]:
sql_expr = """
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*) as count
FROM students
WHERE gender = 'F'
GROUP BY dept
HAVING COUNT(*) >= 2
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[39]:
dept avg_gpa count
0 BUSINESS 4.0 3

Ordering the output

As a nicety, SQL allows you to order your output rows in either ascending (ASC) or descending (DESC) order of the values in one or more columns. For example:

In [40]:
sql_expr = """
SELECT *
FROM students
ORDER BY gpa;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[40]:
name gpa age dept gender
0 Bill Gates 2.0 60 CS M
1 Mike Olson 3.7 50 CS M
2 Danah Boyd 3.9 35 CS F
3 Hillary Mason 4.0 35 DATASCI F
4 Mark Zuckerberg 4.0 30 CS M
5 Sheryl Sandberg 4.0 47 BUSINESS F
6 Susan Wojcicki 4.0 46 BUSINESS F
7 Marissa Meyer 4.0 45 BUSINESS F
8 Sergey Brin 4.0 40 CS M
In [41]:
sql_expr = """
SELECT *
FROM students
ORDER BY gpa, age;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[41]:
name gpa age dept gender
0 Bill Gates 2.0 60 CS M
1 Mike Olson 3.7 50 CS M
2 Danah Boyd 3.9 35 CS F
3 Mark Zuckerberg 4.0 30 CS M
4 Hillary Mason 4.0 35 DATASCI F
5 Sergey Brin 4.0 40 CS M
6 Marissa Meyer 4.0 45 BUSINESS F
7 Susan Wojcicki 4.0 46 BUSINESS F
8 Sheryl Sandberg 4.0 47 BUSINESS F
In [42]:
sql_expr = """
SELECT *
FROM students
ORDER BY gpa DESC, age ASC;
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[42]:
name gpa age dept gender
0 Mark Zuckerberg 4.0 30 CS M
1 Hillary Mason 4.0 35 DATASCI F
2 Sergey Brin 4.0 40 CS M
3 Marissa Meyer 4.0 45 BUSINESS F
4 Susan Wojcicki 4.0 46 BUSINESS F
5 Sheryl Sandberg 4.0 47 BUSINESS F
6 Danah Boyd 3.9 35 CS F
7 Mike Olson 3.7 50 CS M
8 Bill Gates 2.0 60 CS M

LIMIT Clause

The LIMIT clause limits the number of rows returned.

In [43]:
sql_expr = """
SELECT * FROM students 
LIMIT 3
"""
pd.read_sql(sql_expr, sqlite_engine)
Out[43]:
name gpa age dept gender
0 Danah Boyd 3.9 35 CS F
1 Bill Gates 2.0 60 CS M
2 Hillary Mason 4.0 35 DATASCI F

Why do we use the LIMIT clause?

Often the database we are querying is massive, and retrieving the entire table while debugging a query can be costly in time and system resources. However, we should avoid using LIMIT when constructing a sample of the data.

Which rows are returned?

That depends on the order of the rows, which is arbitrary beyond anything specified by an ORDER BY clause.

Is this a random sample?

No, be careful!
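
If we actually want a (pseudo-)random sample rather than whatever rows happen to come first, one sketch in SQLite is to shuffle with RANDOM() before limiting. (Note that this still scans and sorts the whole table, so it can be expensive on large data.)

# Shuffle the rows pseudo-randomly, then keep the first three.
sql_expr = """
SELECT * FROM students
ORDER BY RANDOM()
LIMIT 3;
"""
pd.read_sql(sql_expr, sqlite_engine)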

If this is funny, you are getting it

Little Bobby "Drop Tables" (a nod to the xkcd comic "Exploits of a Mom" about SQL injection)

... it's funny, believe me.