import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import plotly.graph_objs as go
from scipy.optimize import minimize
import sklearn.linear_model as lm
# plt.rcParams['figure.figsize'] = (4, 4)
plt.rcParams['figure.dpi'] = 150
plt.rcParams['lines.linewidth'] = 3
sns.set()
In this lecture, we will look at data from the 2017-18 NBA season.
df = pd.read_csv('nba.csv')
df.head()
We are eventually going to want to perform binary classification, which is where we predict a 1 or 0. A reasonable thing to want to do given this data is to predict whether or not a team wins. Right now, the `WL` column consists of `"W"` and `"L"`.
df['WL']
Let's fix that, so that wins are encoded as `1` and losses are encoded as `0`.
df["WON"] = df["WL"]
df["WON"] = df["WON"].replace("W", 1)
df["WON"] = df["WON"].replace("L", 0)
df.head(5)
There is a row for each team and each game in this dataset. It contains the `FG_PCT` (field goal percentage) for each team per game.
df['FG_PCT']
Let's compute the field goal percentage difference between the two teams in a single game. We will then use this value to predict whether or not a team wins, given its field goal percentage difference.
This data cleaning and EDA is not the point of this lecture, but you may want to come back later and try to understand it.
# each GAME_ID appears in two rows, one per team; take the first listed team...
one_team = df.groupby("GAME_ID").first()
# ...and the second listed team, its opponent
opponent = df.groupby("GAME_ID").last()
# join the two halves of each game side by side
games = one_team.merge(opponent, left_index = True, right_index = True,
                       suffixes = ["", "_OPP"])
games["FG_PCT_DIFF"] = games["FG_PCT"] - games["FG_PCT_OPP"]
games['WON'] = games['WL'].replace({'W': 1, 'L': 0})
games = games[['TEAM_NAME', 'MATCHUP', 'WON', 'FG_PCT_DIFF']]
games.head()
Let's start by looking at a `sns.jointplot` of `FG_PCT_DIFF` and `WON`.
sns.jointplot(data = games, x = "FG_PCT_DIFF", y = "WON");
A reasonable thing to do here might be to model the probability of winning, given `FG_PCT_DIFF`.
We already know how to use ordinary least squares, right? Why not use it here?
We'll also jitter the data, to get a better picture of what it looks like. Note, though, that the line of best fit being drawn is computed on the original, non-jittered data.
sns.jointplot(data = games, x = "FG_PCT_DIFF", y = "WON",
              y_jitter = 0.1, kind="reg", ci=False,
              joint_kws={'line_kws':{'color':'green'}});
The green line drawn is a valid model. It is the line that minimizes MSE for this set of $x$ (`FG_PCT_DIFF`) and $y$ (`WON`) data.
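Though the lecture doesn't do this explicitly, we can recover the green line's parameters ourselves. Here is a minimal sketch using the `sklearn.linear_model` module we imported as `lm`:
# a sketch: fit the same least squares line explicitly
model = lm.LinearRegression()
model.fit(games[["FG_PCT_DIFF"]], games["WON"])
model.intercept_, model.coef_  # the intercept and slope of the green line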
But there are some issues. For one, the line's outputs aren't bounded between 0 and 1, so they can't be read as probabilities. It's also very sensitive to outliers; watch what happens when we corrupt a single row:
games2 = games.copy()
# replace one row with an extreme outlier: FG_PCT_DIFF of 120
games2.iloc[0] = ['hello', 'hello', 1, 120]
sns.jointplot(data = games2, x = "FG_PCT_DIFF", y = "WON",
              y_jitter = 0.1, kind="reg", ci=False,
              joint_kws={'line_kws':{'color':'green'}});
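The first issue is also easy to check numerically. Assuming the `model` fit in the sketch above, the least squares line happily produces "probabilities" outside of $[0, 1]$ once we move past the range of the observed data:
# a sketch: least squares predictions are not valid probabilities;
# these two predictions should land below 0 and above 1, respectively
model.predict(pd.DataFrame({"FG_PCT_DIFF": [-0.4, 0.4]}))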
We need a better model. Let's try and replicate the graph of averages from Lecture 12, on Simple Linear Regression. Recall, we binned the $x$-axis into windows and, for each window, plotted the mean of the $y$ values that fell into it.
We will do the same thing here, albeit with slightly different code. Here, we will formally partition the $x$-axis into 20 bins.
bins = pd.cut(games["FG_PCT_DIFF"], 20)
bins
games["bin"] = [(b.left + b.right) / 2 for b in bins]
games["bin"]
games
We now know which `bin` each game belongs to. We can plot the average `WON` for each bin.
win_rates_by_bin = games.groupby("bin")["WON"].mean()
win_rates_by_bin
plt.plot(win_rates_by_bin, 'r')
sns.jointplot(data = games, x = "FG_PCT_DIFF", y = "WON",
              y_jitter = 0.1, kind="reg", ci=False,
              joint_kws={'line_kws':{'color':'green'}});
plt.plot(win_rates_by_bin, 'r', linewidth = 5);
It seems like our red graph of averages does a much better job of matching the data than our simple linear regression line.
What is this graph of averages plotting? Since the $y$-axis contains only 0s and 1s, and we took the mean of the $y$-values in each bin for a given $x$, the graph of averages is plotting the proportion of games a team won, given their `FG_PCT_DIFF`. Remember, `WON = 1` each time a team won.
Logistic regression aims to model the probability of an observation belonging to class 1, given some set of features.
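As a preview (a sketch using scikit-learn, not how the lecture builds the model), `lm.LogisticRegression` fits exactly this kind of model. Compare its fitted coefficient to the hand-tuned curve below:
# a preview sketch: fit a logistic regression model with scikit-learn
clf = lm.LogisticRegression()
clf.fit(games[["FG_PCT_DIFF"]], games["WON"])
clf.intercept_, clf.coef_  # compare the coefficient to the 30 we eyeball below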
def sigma(t):
    # the logistic (sigmoid) function: maps any real number into (0, 1)
    return 1 / (1 + np.exp(-t))
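A couple of quick sanity checks that follow directly from the formula: $\sigma(0) = 0.5$, and $\sigma(t)$ approaches 1 for large positive $t$ and 0 for large negative $t$:
sigma(0), sigma(100), sigma(-100)  # 0.5, then values very close to 1 and 0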
plt.plot(win_rates_by_bin, 'r', linewidth = 5);
x = win_rates_by_bin.index
plt.plot(x, sigma(x * 30), 'black', linewidth = 5);
plt.xlabel('FG_PCT_DIFF')
plt.ylabel('WON');
What is this mystery `sigma` function, and why does `sigma(x * 30)` match our graph of averages so well? Well... we're getting there.
For now, consider this question: what do the win probabilities in `win_rates_by_bin` look like if we convert them to odds, and then take their log?
The odds of an event are defined as the probability that it happens divided by the probability that it doesn't happen.
If some event happens with probability $p$, then $\text{odds}(p) = \frac{p}{1-p}$.
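For example, an event that happens with probability $p = 0.75$ has odds $\frac{0.75}{0.25} = 3$, i.e. "3 to 1".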
odds_by_bin = win_rates_by_bin / (1 - win_rates_by_bin)
odds_by_bin
If we plot the odds of these probabilities, they look exponential:
plt.plot(odds_by_bin);
But if we take the log of these odds:
plt.plot(np.log(odds_by_bin));
We notice that the log-odds grow linearly with $x$.
In the lecture slides, we formalize what this means, and how it allows us to arrive at the `sigma` function above.
In the slides, we show that our model is

$$P(Y = 1 | x) = \sigma(x^T \theta)$$

where

$$\sigma(t) = \frac{1}{1 + e^{-t}}$$
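As a quick preview of the derivation in the slides: if we assume the log-odds are linear in $x$, solving for the probability recovers exactly this $\sigma$:

$$\log \frac{p}{1-p} = x^T \theta \implies \frac{p}{1-p} = e^{x^T \theta} \implies p = \frac{e^{x^T \theta}}{1 + e^{x^T \theta}} = \frac{1}{1 + e^{-x^T \theta}} = \sigma(x^T \theta)$$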
Let's explore the shape of the logistic function, $\sigma$.
First, the vanilla curve $\sigma(x)$:
x = np.linspace(-5,5,50)
plt.plot(x, sigma(x));
plt.xlabel('x')
plt.ylabel(r'$\frac{1}{1 + e^{-x}}$');
Now, we look at $\sigma(\theta_1 x)$, for several values of $\theta_1$:
def flatten(li):
    # flatten one level of nesting, e.g. a 2D array of axes into a flat list
    return [item for sub in li for item in sub]
bs = [-2, -1, -0.5, 2, 1, 0.5]
xs = np.linspace(-10, 10, 100)
fig, axes = plt.subplots(2, 3, sharex=True, sharey=True, figsize=(10, 6))
for ax, b in zip(flatten(axes), bs):
    ys = sigma(xs * b)
    ax.plot(xs, ys)
    ax.set_title(r'$ \theta_1 = $' + str(b))
# add a big axes, hide frame
fig.add_subplot(111, frameon=False)
# hide ticks and tick labels of the big axes
plt.tick_params(labelcolor='none', top=False, bottom=False,
                left=False, right=False)
plt.grid(False)
plt.xlabel('$x$')
plt.ylabel(r'$ \frac{1}{1+\exp(-\theta_1 \cdot x)} $')
plt.tight_layout()
plt.savefig('sigmoids.png')
Let's explore the shape of $\sigma(\theta_0 + \theta_1x)$, for different values of $\theta_0, \theta_1$. There's quite a bit going on here, so let's use `plotly`.
fig = go.Figure()
for theta1 in [-1, 1, 5]:
    for theta0 in [-2, 0, 2]:
        fig.add_trace(go.Scatter(name=f"{theta0} + {theta1} x",
                                 x=xs, y=sigma(theta0 + theta1 * xs)))
fig
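Notice that the two parameters play different roles: $\theta_0$ shifts the curve horizontally (its midpoint sits where $\theta_0 + \theta_1 x = 0$), while $\theta_1$ controls how steep the transition is, flipping its direction when negative.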