import numpy as np
import pandas as pd
## Plotly plotting support
import plotly.plotly as py
# import plotly.offline as py
# py.init_notebook_mode()
import plotly.graph_objs as go
import plotly.figure_factory as ff
# Make the notebook deterministic
np.random.seed(42)
In this notebook we will explore a key part of data science: feature engineering, the process of transforming the representation of model inputs to enable better model approximation. Feature engineering enables you to encode non-numeric features so they can be used as inputs to standard models, capture domain knowledge about the problem, and transform complex relationships into simpler ones that our models can approximate.
In the supervised learning setting we are given $(X,Y)$ pairs with the goal of learning the mapping from $X$ to $Y$. For example, given pairs of square footage and price we want to learn a function that captures (or at least approximates) the relationship between square feet and price. Our functional approximation is some form of typically parametric mapping from some domain to some range:
In this class we will focus on Multiple Regression in which we consider mappings from potentially high-dimensional input spaces onto the real line (i.e., $y \in \mathbb{R}$):
It is worth noting that this is distinct from the confusingly named Multivariate Regression, in which we predict multiple response values (e.g., $y \in \mathbb{R}^q$).
Suppose we are given the following table:
Our goal is to learn a function that approximates the relationship between the blue and red columns. Let's assume the range, "Ratings", is the real numbers (this may be a problem if ratings are between [0, 5], but more on that later).
What is the domain of this function?
The schema of the relational model provides one possible answer:
RatingsData(uid AS INTEGER, age AS FLOAT,
state AS STRING, hasBought AS BOOLEAN,
review AS STRING, rating AS FLOAT)
Which would suggest that the domain is then:
$$ \textbf{Domain} = \mathbb{Z} \times \mathbb{R} \times \mathbb{S} \times \mathbb{B} \times \mathbb{S} \times \mathbb{R} $$

Unfortunately, the techniques we have discussed so far, and most of the techniques in machine learning and statistics, operate on real-valued vector inputs $x \in \mathbb{R}^d$ (or for the statisticians $x \in \mathbb{R}^p$).
Moreover, many of these techniques, especially the linear models we have been studying, assume the inputs are continuous variables in which the relative magnitude of the feature encodes information about the response variable.
In the following we define several basic transformations to encode features as real numbers.
Our first step as feature engineers is to translate our data into a form that encodes each feature as a continuous variable.
The uid Feature

The uid was likely used to join the user information (e.g., age and state) with some Reviews table. The uid presents several questions: Is the uid a number? Does the uid reveal information about the rating? There are several answers:
Although numbers, identifiers are typically categorical (like strings) and as a consequence the magnitude has little meaning. In these settings we would either drop or one-hot encode the uid. We will return to feature dropping and one-hot encoding in a moment.

There are scenarios where the magnitude of the numerical uid value contains important information. When user ids are created in consecutive order, larger user ids would imply more recent users. In these cases we might want to interpret the uid feature as a real number.
While uncommon, there are certain scenarios where manually dropping features might be helpful:

when the feature does not contain information associated with the prediction task. Dropping uninformative features can help to address over-fitting, an issue we will discuss in great detail soon.

when the feature is not available at prediction time. For example, the feature might contain information collected after the user entered a rating. This is a common scenario in time-series analysis.
However in the absence of substantial domain knowledge, we would prefer to use algorithmic techniques to help eliminate features. We will discuss this more when we return to regularization.
The age Feature

The age feature encodes the user's age. This is already a continuous real number so no additional feature transformations are required. However, as we will soon see, we may introduce additional related features (e.g., indicators for various age groups or non-linear transformations).
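For example, here is a minimal sketch (using a made-up column of ages rather than our table) of deriving an indicator feature for one age group:

# Hypothetical ages; in practice this would be the age column of our data
ages = pd.Series([15., 22., 41., 17.])
# 0/1 indicator feature for users under 18
is_minor = (ages < 18).astype(float)
is_minor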
The state Feature

The state feature is a string encoding the category (one of the 50 states). How do we meaningfully encode such a feature as one or more real numbers?

We could enumerate the states in alphabetical order AL=0, AK=1, ..., WY=49. This is a form of dictionary encoding which maps each category to an integer. However, this would likely be a poor feature encoding since the magnitude provides little information about the rating.
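As a minimal sketch (using a hypothetical three-state subset rather than all 50 states), a dictionary encoding is just a mapping from each category to its position in an agreed-upon ordering:

# Hypothetical subset of states, ordered alphabetically by state name
states_in_order = ["AL", "AK", "AZ"]
state_codes = {s: i for i, s in enumerate(states_in_order)}
state_codes  # {'AL': 0, 'AK': 1, 'AZ': 2}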
Alternatively, we might enumerate the states based on their geographic region (e.g., lower numbers for coastal states). While this alternative dictionary encoding may provide some information, there is a better way to encode categorical features for machine learning algorithms.
One-Hot encoding, sometimes also called dummy encoding, is a simple mechanism to encode categorical data as real numbers such that the magnitude of each dimension is meaningful. Suppose a feature can take on $k$ distinct values (e.g., $k=50$ for the 50 states in the United States). For each distinct possible value a new feature (dimension) is created. For each record, all the new features are set to zero except the one corresponding to the value in the original feature.
The following is a relatively inefficient (why?) implementation:
def one_hot_encoding(x, categories):
    # Map each category to its position in the encoding
    dictionary = dict(zip(categories, range(len(categories))))
    # All zeros except a one at the position of x's category
    enc = np.zeros(len(categories))
    enc[dictionary[x]] = 1.0
    return enc
categories = ["cat", "dog", "apple"]
one_hot_encoding("dog", categories)
Here we create a toy dataframe of pets including their name and kind:
df = pd.DataFrame({
"name": ["Goldy", "Scooby", "Brian", "Francine", "Goldy"],
"kind": ["Fish", "Dog", "Dog", "Cat", "Dog"],
"age": [0.5, 7., 3., 10., 1.]
}, columns = ["name", "kind", "age"])
df
Pandas has a built-in function to construct one-hot encodings called get_dummies:
pd.get_dummies(df['kind'])
pd.get_dummies(df)
Issue: While the pandas.get_dummies function is very convenient and even retains meaningful column labels, it has one key downside: get_dummies does not take a dictionary of possible values, and so it will not produce the same encoding when applied to multiple dataframes with different values. This can be a big issue when rendering predictions on a new dataset.
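The following sketch (using hypothetical dataframes, not our pets data) illustrates the problem: the columns produced depend on the values that happen to appear in each dataframe.

# Two dataframes containing the same feature but different observed values
df_a = pd.DataFrame({"kind": ["Dog", "Cat", "Dog"]})
df_b = pd.DataFrame({"kind": ["Dog", "Fish"]})
# The resulting encodings have different (incompatible) columns
print(pd.get_dummies(df_a["kind"]).columns.tolist())  # ['Cat', 'Dog']
print(pd.get_dummies(df_b["kind"]).columns.tolist())  # ['Dog', 'Fish']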
Scikit-learn is a widely used machine learning package in Python and provides several implementations of feature encoders for categorical data.
The DictVectorizer encodes dictionaries by taking keys that map to strings and applying a one-hot encoding.
from sklearn.feature_extraction import DictVectorizer
vec_enc = DictVectorizer()
vec_enc.fit(df.to_dict(orient='records'))
vec_enc.transform(df.to_dict(orient='records')).toarray()
vec_enc.get_feature_names()
We can apply the dictionary vectorizer to new data:
vec_enc.transform([
{"kind": "Cat", "name": "Goldy", "age": 35},
{"kind": "Bird", "name": "Fluffy"},
{"breed": "Chihuahua", "name": "Goldy"},
]).toarray()
Notice that the second record {"kind": "Bird", "name": "Fluffy"} has invalid (previously unseen) categories and missing fields, and its encoding is entirely zero. Is this reasonable?
OneHotEncoder

The basic sklearn OneHotEncoder encodes a column of integers corresponding to the category values. Therefore, we first need to dictionary encode the string values.
# Convert the "kind" column into a category column
kind_codes = (
df['kind'].astype("category", categories=["Cat", "Dog","Fish"])
.cat.codes # Extract the category codes
)
kind_codes
from sklearn.preprocessing import OneHotEncoder
# Build an instance of the encoder
onehot = OneHotEncoder()
# Construct an integer column vector from the 'kind_codes' column
column_vec_kinds = np.array([kind_codes.values]).T
# Fit the encoder (which can be reused to transform other data)
onehot.fit(column_vec_kinds)
# Transform the column vector
onehot.transform(column_vec_kinds).toarray()
While one-hot encoding is the standard mechanism for encoding categorical data, there are a few issues to keep in mind:

it may generate too many dimensions/features

all possible values must be known in advance

missing values are captured by a zero in all dummy features, which may or may not be reasonable
The hasBought Feature

The hasBought feature is a boolean (0/1) valued feature, but it can have missing values. There are a few options for encoding hasBought:

Interpret directly as numbers. If there were no missing values then the booleans are typically treated directly as continuous values.

Apply one-hot encoding. This would create two new features, hasBought=True and hasBought=False. This is probably the most general encoding but suffers from increased complexity.

1/-1 Encoding. Another common encoding for booleans with missing values maps True to 1, False to -1, and missing to 0.
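Here is a minimal sketch of the 1/-1 encoding, assuming a hypothetical hasBought column stored as a pandas Series with missing values represented as NaN:

# Hypothetical hasBought values with a missing entry
has_bought = pd.Series([True, False, np.nan, True])
# Map True -> 1, False -> -1, and fill missing values with 0
has_bought_enc = has_bought.map({True: 1.0, False: -1.0}).fillna(0.0)
has_bought_enc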
The review Feature

Encoding text as a real-valued feature is especially challenging, and many of the standard transformations are lossy. Whereas all of the earlier transformations (e.g., one-hot encoding and Boolean representations) preserve the information in the feature, most of the techniques for encoding text destroy information about the word order and in many cases key parts of the grammar.

Here we will discuss two widely used representations of text: the bag-of-words encoding and the N-gram encoding.

Both of these encoding strategies are closely related to one-hot encoding: a dummy feature is created for every word (or sequence of words), and multiple dummy features can have counts greater than zero.
The bag-of-words encoding is widely used and a standard representation for text in many of the popular text clustering algorithms. The following is a simple illustration of the bag-of-words encoding:
Notice that:

the words is and about in isolation contain very little information about the meaning of the sentence. Such words are commonly treated as stop-words and removed. Here is a good list of stop-words in many languages.

the meaning is carried by the words fun, machines, and learning, but the word order is lost. Though there are many possible meanings (learning machines have fun learning, or learning about machines is fun learning, ...), the counts alone cannot distinguish them.

storing a 0 for every word that is not in each record would be incredibly inefficient, which is why sparse representations are typically used.

Why is it called a bag-of-words? A bag is another term for a multiset: an unordered collection which may contain multiple instances of each element.
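As a tiny illustration (with a made-up sentence and no stop-word removal), a Python Counter is effectively a multiset and gives the bag-of-words counts directly:

from collections import Counter
# Count each word; the order of the words is discarded
bag = Counter("learning about machines is fun and learning is fun".split())
bag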
When Professor Gonzalez was a graduate student at Carnegie Mellon University, he and several other computer scientists created the following art piece on display at the Gates Center:
The N-Gram encoding is a generalization of the bag-of-words encoding designed to capture limited ordering information. Consider the following passage of text:
The book was not well written but I did enjoy it.
If we re-arrange the words we can also write:
The book was well written but I did not enjoy it.
Moreover, local word order can be important when making decisions about text. The n-gram encoding captures local word order by defining counts over sliding windows. In the following example a bi-gram ($n=2$) encoding is constructed:
The above n-gram would be encoded in the sparse vector:
Notice that the n-gram captures key pieces of sentiment information: "well written" and "not enjoy".
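As a minimal sketch (scikit-learn's CountVectorizer, used below, does this for us), bi-grams can be extracted by pairing each word with the word that follows it:

# Slide a window of size two across the word sequence
words = "The book was not well written but I did enjoy it".split()
bigrams = list(zip(words[:-1], words[1:]))
bigrams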
N-grams are often used for other types of sequence data beyond text. For example, n-grams can be used to encode genomic data, protein sequences, and click logs.
N-Gram Issues: as $n$ increases, the number of possible n-grams grows rapidly, so the encoding becomes very high-dimensional and sparse, and many n-grams may appear only a handful of times.
frost_text = [x for x in """
Some say the world will end in fire,
Some say in ice.
From what I've tasted of desire
I hold with those who favor fire.
""".split("\n") if len(x) > 0]
frost_text
from sklearn.feature_extraction.text import CountVectorizer
# Construct the tokenizer with English stop words
bow = CountVectorizer(stop_words="english")
# fit the model to the passage
bow.fit(frost_text)
# Print the words that are kept
print("\nWords:",
list(zip(range(0,len(bow.get_feature_names())),bow.get_feature_names())))
print("\nSentence Encoding: \n")
# Print the encoding of each line
for (s, r) in zip(frost_text, bow.transform(frost_text)):
print(s)
print(r)
print("------------------")
# Construct the tokenizer with English stop words
bigram = CountVectorizer(stop_words="english", ngram_range=(1, 2))
# fit the model to the passage
bigram.fit(frost_text)
# Print the words that are kept
print("\nWords:",
list(zip(range(0,len(bigram.get_feature_names())), bigram.get_feature_names())))
print("\nSentence Encoding: \n")
# Print the encoding of each line
for (s, r) in zip(frost_text, bigram.transform(frost_text)):
print(s)
print(r)
print("------------------")
If we are encoding text in a particular domain (e.g., processing insurance claims) it is likely that there will be frequent terms (e.g., insurance or claim) that provide little information. However, because these terms occur frequently they can present challenges to some modeling techniques. In these cases, additional scaling may be applied to transform the bag-of-words or n-gram vectors to emphasize the more informative terms. One of the most common scaling techniques is term frequency-inverse document frequency (TF-IDF), which emphasizes words that are unique to a particular record. Because the notation is confusing, I have provided a pseudo-code implementation. However, you should use a more efficient sparse implementation like those provided in scikit-learn.
def tfidf(X):
    """
    Input: X is a bag of words matrix (rows=records, cols=terms)
    """
    (ndocs, nwords) = X.shape
    # Term frequency: normalize each row by the total number of words in that record
    tf = X / X.sum(axis=1)[:, np.newaxis]
    # Inverse document frequency: total documents divided by the number of
    # documents containing each term
    idf = ndocs / (X > 0).sum(axis=0)
    return tf * np.log(idf)
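For reference, here is a minimal sketch using scikit-learn's sparse implementation. Note that TfidfVectorizer uses a slightly different (smoothed) IDF formula and normalizes each row by default, so its output will not match the pseudo-code above exactly:

from sklearn.feature_extraction.text import TfidfVectorizer
# Tokenize, remove English stop words, and apply TF-IDF scaling
tfidf_enc = TfidfVectorizer(stop_words="english")
frost_tfidf = tfidf_enc.fit_transform(frost_text)
frost_tfidf.toarray()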
These transformations are especially important when computing similarities between vector encodings of text. We will not cover them further in DS100, but it is worth knowing that they exist.
Most machine learning (ML) and statistics techniques operate on multivariate real-valued domains (i.e., vectors). As a consequence, we need methods to encode non-continuous datatypes into meaningful continuous forms. We discussed one-hot (dummy) encoding for categorical data and the bag-of-words and n-gram encodings (optionally with TF-IDF scaling) for text.
We will now explore how feature transformations can be used to capture domain knowledge and encode complex relationships.
In addition to transforming categorical and text features to real-valued representations, we can also often improve model performance through the use of additional feature transformations. Let's start with a simple toy example.
To illustrate the potential for feature transformations consider the following synthetic dataset:
train_data = pd.read_csv("toy_training_data.csv")
print(train_data.describe())
train_data.head()
Goal: As usual we would like to learn a model that predicts $Y$ given $X$.
What does this relationship between $X \rightarrow Y$ look like?
train_points = go.Scatter(name = "Training Data",
x = train_data['X'], y = train_data['Y'],
mode = 'markers')
# layout = go.Layout(autosize=False, width=800, height=600)
py.iplot(go.Figure(data=[train_points]),
filename="L19_b_p1")
How would you describe this data?
For the remainder of this lecture we will focus on fitting least squares linear regression models. Recall that linear regression models are functions of the form:
\begin{align}\large f_\theta(x) = x^T \theta = \sum_{j=1}^p \theta_j x_j \end{align}

and least squares implies a loss function of the form:
\begin{align} \large L_\mathcal{D}(\theta) = \frac{1}{n} \sum_{i=1}^n \left(y_i - f_\theta(x_i) \right)^2 \end{align}

In the previous lecture, we derived the normal equations which define the loss-minimizing $\hat{\theta}$:
\begin{align}
\large \hat{\theta} & \large = \arg\min_\theta L_\mathcal{D}(\theta) \\
& \large = \arg\min_\theta \frac{1}{n} \sum_{i=1}^n \left(y_i - x_i^T \theta \right)^2 \\
& \large = \left( X^T X \right)^{-1} X^T Y
\end{align}

In this lecture we will use the scikit-learn linear_model package to compute the normal equations. This package supports a wide range of generalized linear models. For those who are interested in studying machine learning, I would encourage you to skim through the descriptions of the various models in the linear_model package. These are the foundation of most practical applications of supervised machine learning.
from sklearn import linear_model
The following block of code creates an instance of the least squares linear regression model and then fits that model to the training data.
line_reg = linear_model.LinearRegression(fit_intercept=True)
# Fit the model to the data
line_reg.fit(train_data[['X']], train_data['Y'])
Notice: In the above code block we explicitly added a bias (intercept) term by setting fit_intercept=True. Therefore we will not need to add an additional constant feature.
To plot the model we will predict the value of $Y$ for a range of $X$ values. I will call these query points X_query.
X_query = np.linspace(-10, 10, 500)
Use the regression model to render predictions at each X_query point.
# Note that np.linspace returns a 1d vector therefore
# we must transform it into a 2d column vector
line_Yhat_query = line_reg.predict(
np.reshape(X_query, (len(X_query),1)))
To plot the residual we will also predict the $Y$ value for all the training points:
line_Yhat = line_reg.predict(train_data[['X']])
The following visualization code constructs the regression line as well as the residual plot.
# Define the least squares regression line
basic_line = go.Scatter(name = r"$\theta x$", x=X_query.T,
y=line_Yhat_query)
# Define the residual line segments, a separate line for each
# training point
residual_lines = [
go.Scatter(x=[x,x], y=[y,yhat],
mode='lines', showlegend=False,
line=dict(color='black', width = 0.5))
for (x, y, yhat) in zip(train_data['X'], train_data['Y'], line_Yhat)
]
# Combine the plot elements
py.iplot([train_points, basic_line] + residual_lines, filename="L19_b_p2")
To help answer these questions it can often be helpful to plot the residuals in a residual plot. A residual plot shows the difference between the predicted and the observed values as a function of a particular covariate (here the $X$ dimension). The residual plot can help reveal patterns in the residuals that might support additional modeling.
residuals = line_Yhat - train_data['Y']
# Plot.ly plotting code
py.iplot(go.Figure(
data = [dict(x=train_data['X'], y=residuals, mode='markers')],
layout = dict(title="Residual Plot", xaxis=dict(title="X"),
yaxis=dict(title="Residual"))
), filename="L19_b_p3")
Do we see a pattern?
To better visualize the pattern we could apply another regression package. In the following plotting code we call a more sophisticated regression package, sklearn.kernel_ridge, to estimate a smoothed approximation to the residuals.
from sklearn.kernel_ridge import KernelRidge
# Use Kernelized Ridge Regression with Radial Basis Functions to
# compute a smoothed estimator. Later in this notebook we will
# actually implement part of this ...
clf = KernelRidge(kernel='rbf', alpha=2)
clf.fit(train_data[['X']], residuals)
residual_smoothed = clf.predict(np.reshape(X_query, (len(X_query),1)))
# Plot the residuals with a kernel smoothing curve
py.iplot(go.Figure(
data = [dict(name = "Residuals", x=train_data['X'], y=residuals,
mode='markers'),
dict(name = "Smoothed Approximation",
x=X_query, y=residual_smoothed,
line=dict(dash="dash"))],
layout = dict(title="Residual Plot", xaxis=dict(title="X"),
yaxis=dict(title="Residual"))
), filename="L19_b_p4")
Again, the above plot suggests a cyclic pattern in the residuals. In higher dimensional settings, or when many features are binary indicators (e.g., one-hot encodings), it becomes difficult to interpret residual plots. In these cases we may instead examine the distribution of the residuals to identify skew and outliers.
py.iplot(ff.create_distplot([residuals], group_labels=['Residuals']),
filename="L19_b_p5")
From the above plot we see that there are several large outliers and there appears to be a gap just above 0.
So we have what appears to be a non-linear cyclic structure.
py.iplot(go.Figure(
data = [
dict(name = "Residuals", x=train_data['X'], y=residuals,
mode='markers'),
dict(name = "Smoothed Approximation",
x=X_query, y=residual_smoothed,
line=dict(dash="dash"))],
layout = dict(title="Residual Plot", xaxis=dict(title="X"),
yaxis=dict(title="Residual"))
), filename="L19_b_p4")
Question: Can we fit this non-linear cyclic structure with a linear model?
Let's return to what it means to be a linear model:
$$\large f_\theta(x) = x^T \theta = \sum_{j=1}^p x_j \theta_j $$

In what sense is the above model linear? Is it linear in the features $x$? Linear in the parameters $\theta$? Linear in both?

The answer is yes to all the above questions!
Consider the following alternative model formulation:
$$\large f_\theta\left( \phi(x) \right) = \phi(x)^T \theta = \sum_{j=1}^{k} \phi(x)_j \theta_j $$

where $\phi_j$ is an arbitrary function from $x\in \mathbb{R}^p$ to $\phi(x)_j \in \mathbb{R}$ and we define $k$ of these functions. We often refer to these functions $\phi_j$ as feature functions or basis functions, and their design plays a critical role in both how we capture prior knowledge and our ability to fit complicated data.
Feature functions can be used to capture domain knowledge about how the inputs relate to the response. For example:
Suppose I had data about customer purchases and I wanted to estimate their income:
\begin{align} \phi(\text{date}, \text{lat}, \text{lon}, \text{amount})_1 &= \textbf{isWinter}(\text{date}) \\ \phi(\text{date}, \text{lat}, \text{lon}, \text{amount})_2 &= \cos\left( \frac{\textbf{Hour}(\text{date})}{12} \pi \right) \\ \phi(\text{date}, \text{lat}, \text{lon}, \text{amount})_3 &= \frac{\text{amount}}{\textbf{avg_spend}[\textbf{ZipCode}[\text{lat}, \text{lon}]]} \\ \phi(\text{date}, \text{lat}, \text{lon}, \text{amount})_4 &= \exp\left(-\textbf{Distance}\left((\text{lat},\text{lon}), \textbf{StoreA}\right)\right)^2 \\ \phi(\text{date}, \text{lat}, \text{lon}, \text{amount})_5 &= \exp\left(-\textbf{Distance}\left((\text{lat},\text{lon}), \textbf{StoreB}\right)\right)^2 \end{align}

Notice that none of the above feature functions depends on the parameters $\theta$; each depends only on the raw inputs (date, lat, lon, amount).
As a consequence, while the model $f_\theta\left( \phi(x) \right)$ is no longer linear in $x$ it is still a linear model because it is linear in $\theta$. This means we can continue to use the normal equations to compute the optimal parameters.
To apply the normal equations we define the transformed feature matrix, whose $i$-th row is the transformed record $\phi(x_i)$:

$$ \large \Phi = \begin{bmatrix} \phi(x_1)^T \\ \vdots \\ \phi(x_n)^T \end{bmatrix} \in \mathbb{R}^{n \times k} $$
Then substituting $\Phi$ for $X$ we obtain the normal equation:
$$ \large \hat{\theta} = \left( \Phi^T \Phi \right)^{-1} \Phi^T Y $$

It is worth noting that the model is also linear in $\Phi$ and that the $\phi_j$ form a new basis (hence the term basis functions) in which the data live. As a consequence we can think of $\phi$ as mapping the data into a new (often higher dimensional) space in which the relationship between $y$ and $\phi(x)$ is defined by a hyperplane.
In our toy data set we observed a cyclic pattern. Here we construct a $\phi$ to capture the cyclic nature of our data and visualize the corresponding hyperplane.
In the following cell we define a function $\phi$ that maps $x\in \mathbb{R}$ to the vector $[x,\sin(x)] \in \mathbb{R}^2$:

$$ \large \phi(x) = [x, \sin(x)] $$

Why not:

$$ \large \phi(x) = [x, \sin(\theta_3 x + \theta_4)] $$

This would no longer be linear in $\theta$. However, in practice we might want to consider a range of $\sin$ basis functions:

$$ \large \phi_{\alpha,\beta}(x) = \sin(\alpha x + \beta) $$

for different values of $\alpha$ and $\beta$. The parameters $\alpha$ and $\beta$ are typically called hyperparameters because (at least in this setting) they are not set automatically through learning.
def sin_phi(x):
return [x, np.sin(x)]
We then compute the matrix $\Phi$ by applying $\phi$ to each row (record) in the matrix $X$.
Phi = np.array([sin_phi(x) for x in train_data['X']])
# Look at a few examples
Phi[:5,]
It is worth noting that in practice we might prefer a more efficient "vectorized" version of the above code:
Phi = np.vstack((train_data['X'], np.sin(train_data['X']))).T
however in this notebook we will use the more explicit for loop notation.
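As a sanity check (a sketch, not how we will typically fit models), we can also solve the normal equations directly with this $\Phi$; since we fit without an intercept below, the result should closely match the coefficients scikit-learn finds.

# Solve (Phi^T Phi) theta = Phi^T Y directly
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ train_data['Y'].values)
theta_hat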
We can again use the scikit-learn package to fit a linear model on the transformed space.
sin_reg = linear_model.LinearRegression(fit_intercept=False)
sin_reg.fit(Phi, train_data['Y'])
# Making predictions at the transformed query points
Phi_query = np.array([sin_phi(x) for x in X_query])
sin_Yhat_query = sin_reg.predict(Phi_query)
# plot the regression line
sin_line = go.Scatter(name = r"$\theta_0 x + \theta_1 \sin(x)$ ",
x=X_query, y=sin_Yhat_query)
# Make predictions at the training points
sin_Yhat = sin_reg.predict(Phi)
# Plot the residual segments
residual_lines = [
go.Scatter(x=[x,x], y=[y,yhat],
mode='lines', showlegend=False,
line=dict(color='black', width = 0.5))
for (x, y, yhat) in zip(train_data['X'], train_data['Y'], sin_Yhat)
]
py.iplot([train_points, sin_line, basic_line] + residual_lines,
filename="L19_b_p10")
Examine the residuals again
sin_Yhat = sin_reg.predict(Phi)
residuals = train_data['Y'] - sin_Yhat
# Use Kernelized Ridge Regression with Radial Basis Functions to
# compute a smoothed estimator.
clf = KernelRidge(kernel='rbf')
clf.fit(train_data[['X']], residuals)
residual_smoothed = clf.predict(np.reshape(X_query, (len(X_query),1)))
# Plot the residuals with a kernel smoothing curve
py.iplot(go.Figure(
data = [dict(name = "Residuals",
x=train_data['X'], y=residuals, mode='markers'),
dict(name = "Smoothed Approximation",
x=X_query, y=residual_smoothed,
line=dict(dash="dash"))],
layout = dict(title="Residual Plot",
xaxis=dict(title="X"), yaxis=dict(title="Residual"))
), filename="L19_b_p11")
Look at the distribution of residuals
py.iplot(ff.create_distplot([residuals], group_labels=['Residuals']),
filename="L19_b_p12")
Recall that earlier the residuals were spread roughly from -10 to 10; now they have become much more concentrated. However, the outliers remain. Is that a problem?
As discussed earlier, the model we just constructed, while non-linear in $x$, is actually a linear model in $\phi(x)$, and we can visualize that linear model's structure in higher dimensions.
# Plot the data in higher dimensions
phi3d = go.Scatter3d(name = "Raw Data",
x = Phi[:,0], y = Phi[:,1], z = train_data['Y'],
mode = 'markers',
marker = dict(size=3),
showlegend=False
)
# Compute the prediction plane
(u,v) = np.meshgrid(np.linspace(-10,10,5), np.linspace(-1,1,5))
coords = np.vstack((u.flatten(),v.flatten())).T
ycoords = coords @ sin_reg.coef_
fit_plane = go.Surface(name = "Fitting Hyperplane",
x = np.reshape(coords[:,0], (5,5)),
y = np.reshape(coords[:,1], (5,5)),
z = np.reshape(ycoords, (5,5)),
opacity = 0.8, cauto = False, showscale = False,
colorscale = [[0, 'rgb(255,0,0)'], [1, 'rgb(255,0,0)']]
)
# Construct residual lines
Yhat = sin_reg.predict(Phi)
residual_lines = [
go.Scatter3d(x=[x[0],x[0]], y=[x[1],x[1]], z=[y, yhat],
mode='lines', showlegend=False,
line=dict(color='black'))
for (x, y, yhat) in zip(Phi, train_data['Y'], Yhat)
]
# Label the axis and orient the camera
layout = go.Layout(
scene=go.Scene(
xaxis=go.XAxis(title='X'),
yaxis=go.YAxis(title='sin(X)'),
zaxis=go.ZAxis(title='Y'),
aspectratio=dict(x=1.,y=1., z=1.),
camera=dict(eye=dict(x=-1, y=-1, z=0))
)
)
py.iplot(go.Figure(data=[phi3d, fit_plane] + residual_lines, layout=layout), filename="L19_b_p14")
Recall that in each stage of the process we have been minimizing the squared prediction error while increasing the sophistication of our model by introducing new features. When plotting the prediction error it is common to compute the root mean squared error (RMSE), which is the square root of the average squared loss over the training data.
$$ \large \textbf{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(Y_i - f_\theta(X_i)\right)^2} $$

def sq_loss(y, yhat):
    # Average squared loss (MSE); we take the square root below to obtain RMSE
    return np.mean((yhat - y)**2)
const_rmse = np.sqrt(sq_loss(train_data['Y'], train_data['Y'].mean()))
line_rmse = np.sqrt(sq_loss(train_data['Y'], line_Yhat))
sin_rmse = np.sqrt(sq_loss(train_data['Y'], sin_Yhat))
py.iplot(go.Figure(data =[go.Bar(
x=[r'$\theta $', r'$\theta x$',
r'$\theta_0 x + \theta_1 \sin(x)$'],
y=[const_rmse, line_rmse, sin_rmse]
)], layout = go.Layout(title="Loss Comparison",
yaxis=dict(title="RMSE"))),
filename="L19_b_p15")
By adding the sine feature function we were able to reduce the prediction error. How could we improve further?
We will now explore a range of generic feature transformations. However before we proceed it is worth contrasting two categories of feature functions and their applications.
Domain Specific Features: In settings where our goal is to understand the model (e.g., identify important features that predict customer churn) we may want to construct meaningful features based on our understanding of the domain.
Generic Features: However, in other settings where our primary goal is to make accurate predictions, we may instead introduce generic feature functions that enable our models to fit and generalize complex relationships.
The first set of generic feature functions we will consider is the polynomial basis:
\begin{align} \phi(x) = [x, x^2, x^3, \ldots, x^k] \end{align}

We can define a generic Python function to implement this basis:
def poly_phi(k):
return lambda x: [x ** i for i in range(1, k+1)]
To simplify the comparison of feature functions we define the following routine:
def evaluate_basis(phi, desc):
# Apply transformation
Phi = np.array([phi(x) for x in train_data['X']])
# Fit a model
reg_model = linear_model.LinearRegression(fit_intercept=False)
reg_model.fit(Phi, train_data['Y'])
# Create plot line
X_test = np.linspace(-10, 10, 1000) # Fine grained test X
Phi_test = np.array([phi(x) for x in X_test])
Yhat_test = reg_model.predict(Phi_test)
line = go.Scatter(name = desc, x=X_test, y=Yhat_test)
# Compute RMSE
Yhat = reg_model.predict(Phi)
rmse = np.sqrt(sq_loss(train_data['Y'], Yhat))
# return results
return (line, rmse, reg_model)
(poly_line, poly_rmse, poly_reg) = (
evaluate_basis(poly_phi(5), "Polynomial")
)
py.iplot(go.Figure(data=[train_points, poly_line, sin_line, basic_line],
layout = go.Layout(xaxis=dict(range=[-10,10]),
yaxis=dict(range=[-25,25]))),
filename="L19_b_p16")
(poly_line, poly_rmse, poly_reg) = (
evaluate_basis(poly_phi(15), "Polynomial")
)
py.iplot(go.Figure(data=[train_points, poly_line, sin_line, basic_line],
layout = go.Layout(xaxis=dict(range=[-10,10]),
yaxis=dict(range=[-25,25]))),
filename="L19_b_p17")
Seems like a pretty reasonable fit. Returning to the RMSE on the training data:
py.iplot([go.Bar(
x=[r'$\theta $', r'$\theta x$',
r'$\theta_0 x + \theta_1 \sin(x)$',
'Polynomial'],
y=[const_rmse, line_rmse, sin_rmse, poly_rmse]
)], filename="L19_b_p18")
This was a slight improvement. Perhaps we should increase to a higher degree polynomial? Why or why not? We will return to this soon.
One of the more widely used generic feature functions are Gaussian radial basis functions. These feature functions take the form:
$$ \phi_{(\lambda, u_1, \ldots, u_k)}(x) = \left[\exp\left( - \frac{\left|\left|x-u_1\right|\right|_2^2}{\lambda} \right), \ldots, \exp\left( - \frac{\left|\left| x-u_k \right|\right|_2^2}{\lambda} \right) \right] $$

The hyperparameters $u_1$ through $u_k$ and $\lambda$ are not optimized with $\theta$ but instead are set externally. In many cases the $u_i$ may correspond to points in the training data. The term $\lambda$ defines the spread of the basis function and determines the "smoothness" of the function $f_\theta(\phi(x))$.
The following is a plot of three radial basis functions centered at $u=2$ with different values of $\lambda$.
def gaussian_rbf(u, lam=1):
return lambda x: np.exp(-(x - u)**2 / lam)
tmpX = np.linspace(-2, 6,100)
py.iplot([
dict(name=r"$\lambda=0.5$", x=tmpX,
y=gaussian_rbf(2, lam=0.5)(tmpX)),
dict(name=r"$\lambda=1$", x=tmpX,
y=gaussian_rbf(2, lam=1.)(tmpX)),
dict(name=r"$\lambda=2$", x=tmpX,
y=gaussian_rbf(2, lam=2.)(tmpX))
], filename="L19_b_p19")
Here we plot 10 uniformly spaced RBF functions with $\lambda=1$:
def rbf_phi(x):
return [gaussian_rbf(u, 1.)(x) for u in np.linspace(-9, 9, 10)]
(rbf_line, rbf_rmse, rbf_reg) = evaluate_basis(rbf_phi, r"RBF")
py.iplot([train_points, rbf_line, poly_line, sin_line, basic_line], filename="L19_b_p20")
def rbf_phi(x):
return [gaussian_rbf(u, 10.)(x) for u in np.linspace(-9, 9, 10)]
(rbf_line, rbf_rmse, rbf_reg) = (
evaluate_basis(rbf_phi, r"RBF")
)
py.iplot([train_points, rbf_line, poly_line, sin_line, basic_line],
filename="L19_b_p21")
Is this a better fit?
py.iplot([go.Bar(
x=[r'$\theta $', r'$\theta x$',
r'$\theta_0 x + \theta_1 \sin(x)$',
r"Polynomial",
r"RBF"],
y=[const_rmse, line_rmse, sin_rmse, poly_rmse, rbf_rmse]
)], filename="L19_b_p23")
def crazy_rbf_phi(x):
return (
[gaussian_rbf(u,1.)(x) for u in np.linspace(-9, 9, 30)]
)
(crazy_rbf_line, crazy_rbf_rmse, crazy_rbf_reg) = (
evaluate_basis(crazy_rbf_phi, "RBF + Crazy")
)
py.iplot([train_points, crazy_rbf_line, poly_line, sin_line, basic_line],
filename="L19_b_p24")
train_bars = go.Bar(name = "Train",
x=[r'$\theta $', r'$\theta x$', r'$\theta_0 x + \theta_1 \sin(x)$',
"Polynomial",
"RBF",
"RBF + Crazy"],
y=[const_rmse, line_rmse, sin_rmse, poly_rmse, rbf_rmse, crazy_rbf_rmse])
py.iplot([train_bars], filename="L19_b_p25")
We started with the objective of minimizing the training loss (error). As we increased the model sophistication by adding features we were able to fit increasingly complex functions to the data and reduce the loss. However, is our ultimate goal to minimize training error?
Ideally we would like to minimize the error we make when making new predictions at unseen values of $X$. One way to evaluate that error is to use a test dataset which is distinct from the dataset used to train the model. Fortunately, we have such a test dataset.
test_data = pd.read_csv("toy_test_data.csv")
test_points = go.Scatter(name = "Test Data", x = test_data['X'], y = test_data['Y'],
mode = 'markers', marker=dict(symbol="cross", color="red"))
py.iplot([train_points, test_points], filename="L19_b_p26")
def test_rmse(phi, reg):
yhat = reg.predict(np.array([phi(x) for x in test_data['X']]))
return np.sqrt(sq_loss(test_data['Y'], yhat))
test_bars = go.Bar(name = "Test",
x=[r'$\theta $', r'$\theta x$', r'$\theta_0 x + \theta_1 \sin(x)$',
"Polynomial",
"RBF",
"RBF + Crazy"],
y=[np.sqrt(sq_loss(test_data['Y'], test_data['Y'].mean())),
test_rmse(lambda x: [x], line_reg),
test_rmse(sin_phi, sin_reg),
test_rmse(poly_phi(15), poly_reg),
test_rmse(rbf_phi, rbf_reg),
test_rmse(crazy_rbf_phi, crazy_rbf_reg)]
)
py.iplot([train_bars, test_bars], filename="L19_b_p27")
What happened here?
This is a very common occurrence in machine learning: as we increased the model complexity, the error on the training data continued to decrease while the error on the test data began to increase. As we increase the expressiveness of our model we begin to over-fit to the variability in our training data. That is, we are learning patterns that do not generalize beyond our training dataset.
Over-fitting is a key challenge in machine learning and statistical inference. At its core is a fundamental trade-off between bias and variance: the desire to explain the training data and yet be robust to variation in the training data.
We will study the bias-variance trade-off more in the next lecture but for now we will focus on the trade-off between under fitting and over fitting:
To manage over-fitting it is essential to split your initial training data into a training and testing dataset.
Before running cross-validation, split the data into train and test subsets (typically a 90-10 split). Do not look at the test data until after selecting your final model.

With the remaining training data:

Split it into $k$ equally sized folds.

For each candidate model (e.g., choice of features or hyperparameters), train on $k-1$ of the folds and evaluate the prediction error on the held-out fold, rotating through all $k$ folds.

Select the model with the lowest average validation error and then retrain it on all of the training data.

A sketch of this workflow appears below.
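The following is a sketch of this workflow using scikit-learn utilities (the split size, model, and basis are illustrative choices, not the only option):

from sklearn.model_selection import train_test_split, cross_val_score
# Hold out a test set (here 10%) before doing any model selection
tr, te = train_test_split(train_data, test_size=0.1, random_state=42)
# Cross-validate a candidate feature set (e.g., the degree-5 polynomial basis)
Phi_tr = np.array([poly_phi(5)(x) for x in tr['X']])
scores = cross_val_score(
    linear_model.LinearRegression(fit_intercept=False),
    Phi_tr, tr['Y'], scoring="neg_mean_squared_error", cv=5)
# Average cross-validated RMSE for this candidate model
np.sqrt(-scores.mean())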
Questions:
We will dig more into the bias-variance trade-off and introduce the concept of regularization to parametrically explore the space of model complexity.