import numpy as np
import pandas as pd
## Plotly plotting support
import plotly.plotly as py
# import plotly.offline as py
# py.init_notebook_mode()
import plotly.graph_objs as go
import plotly.figure_factory as ff
# Make the notebook deterministic
np.random.seed(42)
Notebook created by Joseph E. Gonzalez for DS100.
In the next few notebooks we will explore a key part of data science, feature engineering: the process of transforming the representation of model inputs to enable better model approximation. Feature engineering enables you to:
In the supervised learning setting we are given $(X,Y)$ pairs with the goal of learning the mapping from $X$ to $Y$. For example, given pairs of square footage and price we want to learn a function that captures (or at least approximates) the relationship between square feet and price. Our functional approximation is some form of typically parametric mapping from some domain to some range:
In this class we will focus on Multiple Regression in which we consider mappings from potentially high-dimensional input spaces onto the real line (i.e., $y \in \mathbb{R}$):
It is worth noting that this is distinct from Multivariate Regression in which we are predicting multiple (confusing?) response values (e.g., $y \in \mathbb{R}^q$).
Suppose we are given the following table:
Our goal is to learn a function that approximates the relationship between the blue and red columns. Let's assume the range, "Ratings", is the real numbers (this may be a problem if ratings are between [0, 5], but more on that later).
What is the domain of this function?
The schema of the relational model provides one possible answer:
RatingsData(uid INTEGER, age FLOAT,
state STRING, hasBought BOOLEAN,
review STRING, rating FLOAT)
Which would suggest that the domain is then:
$$ \textbf{Domain} = \mathbb{Z} \times \mathbb{R} \times \mathbb{S} \times \mathbb{B} \times \mathbb{S} \times \mathbb{R} $$

Unfortunately, the techniques we have discussed so far and most of the techniques in machine learning and statistics operate on real-valued vector inputs $x \in \mathbb{R}^d$ (or, for the statisticians, $x \in \mathbb{R}^p$).
Moreover, many of these techniques, especially the linear models we have been studying, assume the inputs are continuous variables in which the relative magnitude of the feature encodes information about the response variable.
In the following we define several basic transformations to encode features as real numbers.
Our first step as feature engineers is to translate our data into a form that encodes each feature as a continuous variable.
The uid Feature

The uid was likely used to join the user information (e.g., age and state) with some Reviews table. The uid presents several questions: What is the meaning of the uid number? Does the uid reveal information about the rating?

There are several answers:
Although numbers, identifiers are typically categorical (like strings) and as a consequence their magnitude has little meaning. In these settings we would either drop or one-hot encode the uid. We will return to feature dropping and one-hot encoding in a moment.
There are scenarios where the magnitude of the numerical uid value contains important information. When user ids are created in consecutive order, larger user ids would imply more recent users. In these cases we might want to interpret the uid feature as a real number.
While uncommon, there are certain scenarios where manually dropping features might be helpful:

when the feature does not contain information associated with the prediction task. Dropping uninformative features can help to address over-fitting, an issue we will discuss in great detail soon.

when the feature is not available at prediction time. For example, the feature might contain information collected after the user entered a rating. This is a common scenario in time-series analysis.
However in the absence of substantial domain knowledge, we would prefer to use algorithmic techniques to help eliminate features. We will discuss this more when we return to regularization.
The age Feature

The age feature encodes the user's age. This is already a continuous real number, so no additional feature transformations are required. However, as we will soon see, we may introduce additional related features (e.g., indicators for various age groups or non-linear transformations), as sketched below.
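For example, here is a minimal sketch of what such derived features might look like (the table, age-group boundaries, and column names are made up for illustration):

import numpy as np
import pandas as pd

# Hypothetical users table with an age column
users = pd.DataFrame({"age": [18., 25., 34., 52., 67.]})

# Indicator features for (made-up) age groups using pd.cut + get_dummies
age_groups = pd.cut(users["age"], bins=[0, 18, 35, 65, 120],
                    labels=["minor", "young_adult", "adult", "senior"])
age_group_indicators = pd.get_dummies(age_groups, prefix="age")

# A simple non-linear transformation of age
users["log_age"] = np.log(users["age"])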
The state Feature

The state feature is a string encoding the category (one of the 50 states). How do we meaningfully encode such a feature as one or more real numbers?
We could enumerate the states in alphabetical order: AK=0, AL=1, ..., WY=49. This is a form of dictionary encoding, which maps each category to an integer (a minimal sketch is given below). However, this would likely be a poor feature encoding since the magnitude provides little information about the rating.
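For concreteness, a sketch of dictionary encoding (only a few states are listed here):

# A minimal sketch of dictionary encoding: map each category (state
# abbreviation) to its index in a fixed ordering.
states = ["AK", "AL", "AR", "AZ", "CA"]  # ... all 50 states in practice
state_to_code = {s: i for i, s in enumerate(states)}
state_to_code["CA"]  # => 4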
Alternatively, we might enumerate the states based on their geographic region (e.g., lower numbers for coastal states). While this alternative dictionary encoding may provide some information, there is a better way to encode categorical features for machine learning algorithms.
One-hot encoding, sometimes also called dummy encoding, is a simple mechanism to encode categorical data as real numbers such that the magnitude of each dimension is meaningful. Suppose a feature can take on $k$ distinct values (e.g., $k=50$ for the 50 states in the United States). For each distinct possible value a new feature (dimension) is created. For each record, all the new features are set to zero except the one corresponding to the value in the original feature.

The term one-hot encoding comes from a digital circuit encoding of a categorical state as a particular "hot" wire:
The following is a relatively inefficient implementation:
def one_hot_encoding(x, categories):
    # Map each category to an index
    dictionary = dict(zip(categories, range(len(categories))))
    # Dense vector of zeros with a single one at the category's index
    enc = np.zeros(len(categories))
    enc[dictionary[x]] = 1.0
    return enc
categories = ["cat", "dog", "apple"]
one_hot_encoding("dog", categories)
Why is this inefficient? Think about a large number of states.
Answer: Here we are using a dense representation, which does not make efficient use of memory; a sparse alternative is sketched below.
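One way to address this (a sketch, not the only approach) is to store only the non-zero entry in a sparse matrix, for example with scipy:

from scipy.sparse import csr_matrix

def one_hot_encoding_sparse(x, categories):
    # Store only the position of the single non-zero entry instead of a
    # dense vector that is mostly zeros.
    dictionary = dict(zip(categories, range(len(categories))))
    return csr_matrix(([1.0], ([0], [dictionary[x]])),
                      shape=(1, len(categories)))

one_hot_encoding_sparse("dog", ["cat", "dog", "apple"]).toarray()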
Here we create a toy dataframe of pets including their name and kind:
df = pd.DataFrame({
"name": ["Goldy", "Scooby", "Brian", "Francine", "Goldy"],
"kind": ["Fish", "Dog", "Dog", "Cat", "Dog"],
"age": [0.5, 7., 3., 10., 1.]
}, columns = ["name", "kind", "age"])
df
Pandas has a built-in function to construct one-hot encodings called get_dummies:
pd.get_dummies(df['kind'])
pd.get_dummies(df)
Issue: While the pandas.get_dummies function is very convenient and even retains meaningful column labels, it has one key downside: get_dummies does not take a dictionary of possible values, and so it will not produce the same encoding when applied to different dataframes with different values. This can be a big issue when rendering predictions on a new dataset, as illustrated below.
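A small illustration of the issue (both dataframes here are made up):

train = pd.DataFrame({"kind": ["Fish", "Dog", "Cat"]})
new   = pd.DataFrame({"kind": ["Dog", "Bird"]})

# The two encodings have different columns (and widths), so a model fit
# on the first cannot be applied directly to the second.
print(pd.get_dummies(train["kind"]).columns.tolist())  # ['Cat', 'Dog', 'Fish']
print(pd.get_dummies(new["kind"]).columns.tolist())    # ['Bird', 'Dog']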
Scikit-learn is a widely used machine learning package in Python and provides several implementations of feature encoders for categorical data.
The DictVectorizer encodes dictionaries by applying a one-hot encoding to the keys that map to string values (numeric values are passed through unchanged).
from sklearn.feature_extraction import DictVectorizer
vec_enc = DictVectorizer()
vec_enc.fit(df.to_dict(orient='records'))
vec_enc.transform(df.to_dict(orient='records')).toarray()
vec_enc.get_feature_names()
We can apply the dictionary vectorizer to new data:
vec_enc.transform([
{"kind": "Cat", "name": "Goldy", "age": 35},
{"kind": "Bird", "name": "Fluffy"},
{"breed": "Chihuahua", "name": "Goldy"},
]).toarray()
Notice that the second record {"kind": "Bird", "name": "Fluffy"} has unseen categories and missing fields, and its encoding is entirely zero. Is this reasonable?
OneHotEncoder

The basic sklearn OneHotEncoder encodes a column of integers corresponding to the category values. Therefore, we first need to dictionary encode the string values.
# Convert the "kind" column into a category column with a fixed set of categories.
# (Recent versions of pandas require a CategoricalDtype rather than the older
#  astype("category", categories=...) form.)
kind_codes = (
    df['kind']
    .astype(pd.CategoricalDtype(categories=["Cat", "Dog", "Fish"]))
    .cat.codes  # Extract the integer category codes
)
kind_codes
from sklearn.preprocessing import OneHotEncoder
# Build an instance of the encoder
onehot = OneHotEncoder()
# Construct an integer column vector from the 'kind_codes' column
column_vec_kinds = np.array([kind_codes.values]).T
# Fit the encoder (which can be reused to transform other data)
onehot.fit(column_vec_kinds)
# Transform the column vector
onehot.transform(column_vec_kinds).toarray()
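As an aside, more recent versions of scikit-learn (0.20 and later, if memory serves) let OneHotEncoder work directly on string columns and control how unseen categories are handled; a sketch:

from sklearn.preprocessing import OneHotEncoder

# Assumes a recent scikit-learn; older versions only accept integer codes.
onehot_str = OneHotEncoder(handle_unknown="ignore")
onehot_str.fit(df[["kind"]])
# Unseen categories (e.g., "Bird") encode to all zeros rather than raising an error.
onehot_str.transform([["Dog"], ["Bird"]]).toarray()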
Suppose you obtain the log of icecream sales for a popular icecream shop.
The data consists of the flavor and topping, the total icecream mass (mass), and the price charged.
icecream = pd.read_csv("icecream_train.csv")
icecream.head()
How would you predict the price of icecream given the flavor, topping, and mass?
Let's start simple and focus on predicting the price from the mass:
from sklearn import linear_model
# Train a linear regression model to predict price from mass
reg_mass = linear_model.LinearRegression()
reg_mass.fit(icecream[['mass']], icecream['price'])
# Make predictions for each of the purchases in our dataset
yhat_mass = reg_mass.predict(icecream[['mass']])
This is a fairly simple one-dimensional problem so we can plot the data.
def plot_fit_line(x, y, model, filename):
    # Data points
    points = go.Scatter(name="Data", x=x, y=y, mode='markers')
    # Predictions along a fine grid of x values
    x_query = np.linspace(np.min(x), np.max(x), 1000)
    y_query = model.predict(np.array([x_query]).T)
    model_line = go.Scatter(name="Model", x=x_query, y=y_query)
    # Residual line segments connecting each observation to its prediction
    yhat = model.predict(np.array([x]).T)
    residual_lines = [
        go.Scatter(x=[xi, xi], y=[yi, yhi],
                   mode='lines', showlegend=False,
                   line=dict(color='black', width=0.5))
        for (xi, yi, yhi) in zip(x, y, yhat)
    ]
    return py.iplot([points, model_line] + residual_lines, filename=filename)
plot_fit_line(icecream['mass'], icecream['price'], reg_mass, "FE_Part1_0")
residual = yhat_mass - icecream['price']
py.iplot(ff.create_distplot([residual], group_labels=['Residuals'], bin_size=0.1), filename="FE_Part1_1")
When examining the prediction error it is common to compute the root mean squared error (RMSE), which is the square root of the average squared loss over the training data:

$$ \large \textbf{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(Y_i - f_\theta(X_i)\right)^2} $$

The RMSE is in the units of $Y$ (in this case price) and is dominated by the points with the largest errors.

Another error metric that is a bit more robust is the median absolute deviation (MAD) error:

$$ \large \textbf{MAD} = \textbf{median}\left(\left|Y_i - f_\theta(X_i)\right|\right) $$

The RMSE metric is closer to our squared-loss objective, while the MAD is closer to an L1 loss and the corresponding Least Absolute Deviation regression, which we have not yet covered.
Let's take a look at both:
def rmse(y, yhat):
return np.sqrt(np.mean((yhat-y)**2))
def mad(y, yhat):
return np.median(np.abs(yhat - y))
print("RMSE:", rmse(icecream['price'], yhat_mass))
print("MAD:", mad(icecream['price'], yhat_mass))
Often a very basic model is enough. However, we notice something interesting.

At the same mass value there appear to be multiple icecream prices.
Why?
Given that we have categorical data, one thing we might do is first try to stratify our analysis. We could look at a subset of the purchases and try to get a better picture of what is happening.

I like chocolate, so I decided to look at just the purchases of chocolate-flavored icecream with chocolate topping.
ind = (icecream['flavor'] == "Chocolate") & (icecream['topping'] == "Chocolate")
reg_chocolate = linear_model.LinearRegression()
reg_chocolate.fit(icecream[ind][['mass']], icecream[ind]['price'])
Let's plot a stratified version of the data
choc_choc_points = (
go.Scatter(name="Chocolate+Chocolate",
x = icecream[ind]['mass'], y = icecream[ind]['price'],
mode='markers',
marker=dict(color="red", symbol="triangle-up", size=10)))
ind_flav = icecream['flavor'] == "Chocolate"
chocolate_points = (
go.Scatter(name="Choc. Flavored",
x = icecream[ind_flav]['mass'], y = icecream[ind_flav]['price'],
mode='markers',
marker=dict(color="red", symbol="circle-open", size=15)))
all_data = (
go.Scatter(name="Data",
x = icecream['mass'], y = icecream['price'], mode='markers',
marker=dict(color="gray")))
x_query = np.linspace(icecream['mass'].min(), icecream['mass'].max(), 500)
line_mass = (
go.Scatter(name="mass Only",
x = x_query, y = reg_mass.predict(np.array([x_query]).T),
line=dict(color="black")))
line_chocolate = (
    go.Scatter(name="Choc.+Choc. Line",
               x = x_query, y = reg_chocolate.predict(np.array([x_query]).T),
               line=dict(color="orange")))
py.iplot([all_data, chocolate_points, choc_choc_points, line_mass, line_chocolate],
         filename="FE_Part1_2")
In the above we plot: all of the data, the chocolate-flavored purchases, the chocolate-flavored purchases with chocolate topping, the mass-only regression line, and the regression line fit only to the chocolate+chocolate purchases.
What do we observe?
The shop may charge customers different prices based on the flavor and topping. How can we incorporate that information?
Let's try constructing one-hot encodings for the flavor and topping features.
one_hot_enc = DictVectorizer()
feature_columns = ["flavor", "topping", "mass"]
one_hot_enc.fit(icecream[feature_columns].to_dict(orient='records'))
one_hot_features = (
one_hot_enc.transform(icecream[feature_columns].to_dict(orient='records'))
)
one_hot_features
Examining a few rows, we see there are multiple one-hot encodings (one for the flavor and one for the topping) alongside the original mass value.
one_hot_features.todense()[:5,:]
Again we fit a model:
# Train a linear regression model to predict price from the one-hot features
one_hot_reg = linear_model.LinearRegression()
one_hot_reg.fit(one_hot_features, icecream['price'])
# Make predictions for each of the purchases in our dataset
yhat_one_hot = one_hot_reg.predict(one_hot_features)
residual = yhat_one_hot - icecream['price']
py.iplot(ff.create_distplot([residual], group_labels=['Residuals'], bin_size=0.01), filename="FE_Part1_3")
py.iplot([
go.Bar(name="mass Only",
x=["RMSE", "MAD"],
y=[rmse(icecream['price'], yhat_mass),
mad(icecream['price'], yhat_mass)]),
go.Bar(name="OneHot + mass",
x=["RMSE", "MAD"],
y=[rmse(icecream['price'], yhat_one_hot),
mad(icecream['price'], yhat_one_hot)])
], filename="FE_Part1_4")
y_vs_yhat = go.Scatter(name="y vs yhat", x=icecream['price'], y=yhat_one_hot, mode='markers')
slope_one = go.Scatter(name="Ideal", x=[0,5], y=[0,5])
layout = go.Layout(xaxis=dict(title="y"), yaxis=dict(title="yhat"))
py.iplot(go.Figure(data=[y_vs_yhat, slope_one], layout=layout),
filename="FE_Part1_5")
Icecream Pricing Model:
$$\large \text{price} = \text{mass} \cdot \theta_\text{flavor} + \theta_\text{topping} $$

Question: How could we encode this model so that we can learn it using linear regression?
Here is a proposal:
\begin{align} \phi\left(\text{mass}, \text{flavor}, \text{topping} \right) & = \left[\text{mass} \cdot \textbf{OneHot}\left(\text{flavor}\right), \textbf{OneHot}\left(\text{topping}\right)\right] \end{align}

To see how this works, let's look at $\theta_\text{topping}$:

\begin{align} \textbf{OneHot}\left(\text{topping}(x)\right) &= \left[\textbf{isSprinkles}(x), \textbf{isFruit}(x), \textbf{isChoc}(x), \textbf{isNuts}(x)\right] \\ \theta_\text{topping} &= \left[\theta_\text{sprinkles}, \theta_\text{fruit}, \theta_\text{choc}, \theta_\text{nuts}\right] \end{align}

If we take the dot product of these two vectors, only the entry corresponding to the record's topping is selected, so the model in effect learns a separate constant offset $\theta$ for each topping. A tiny numeric sketch of this is given below.
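A tiny numeric sketch of this selection effect (all values are made up, and the variable names are hypothetical):

import numpy as np

# One-hot encoding of a record whose topping is "Fruit"
onehot_topping_x = np.array([0., 1., 0., 0.])          # [Sprinkles, Fruit, Choc, Nuts]
theta_topping    = np.array([0.30, 0.55, 0.40, 0.65])  # hypothetical learned weights

# The dot product selects the weight for this record's topping
onehot_topping_x @ theta_topping   # => 0.55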
Here we will construct one-hot encodings for the flavor and the topping in separate calls so we know which columns correspond to each:
flavor_enc = DictVectorizer()
flavor_enc.fit(icecream[["flavor"]].to_dict(orient='records'))
onehot_flavor = flavor_enc.transform(icecream[["flavor"]].to_dict(orient='records'))
topping_enc = DictVectorizer()
topping_enc.fit(icecream[["topping"]].to_dict(orient='records'))
onehot_topping = topping_enc.transform(icecream[["topping"]].to_dict(orient='records'))
To scale the sparse matrix of encodings by the mass, we need to multiply by a sparse diagonal matrix.
import scipy as sp
n = len(icecream['mass'].values)
scaling_matrix = sp.sparse.spdiags(icecream['mass'].values, 0, n, n)
mass_times_flavor = scaling_matrix @ onehot_flavor
Combining the sparse mass_times_flavor columns with the onehot_topping columns, we get a new feature matrix Phi:
Phi = sp.sparse.hstack([mass_times_flavor, onehot_topping])
Phi
Again, let's look at a few examples (in practice you would want to avoid the todense() call):
Phi.todense()[:5,:]
Notice that this time I am removing the intercept (bias) term since I don't believe it should be part of my model.
from sklearn import linear_model
reg_domain_knowledge = linear_model.LinearRegression(fit_intercept=False)
reg_domain_knowledge.fit(Phi, icecream['price'])
yhat_domain_knowledge = reg_domain_knowledge.predict(Phi)
Did we improve the fit?
py.iplot([
go.Bar(name="mass Only",
x=["RMSE", "MAD"],
y=[rmse(icecream['price'], yhat_mass),
mad(icecream['price'], yhat_mass)]),
go.Bar(name="OneHot + mass",
x=["RMSE", "MAD"],
y=[rmse(icecream['price'], yhat_one_hot),
mad(icecream['price'], yhat_one_hot)]),
go.Bar(name="Domain Knowledge",
x=["RMSE", "MAD"],
y=[rmse(icecream['price'], yhat_domain_knowledge),
mad(icecream['price'], yhat_domain_knowledge)])
], filename="FE_Part1_6")
yhat_vs_y = go.Scatter(name="y vs yhat", x=icecream['price'], y=yhat_domain_knowledge, mode='markers')
slope_one = go.Scatter(name="Ideal", x=[0,5], y=[0,5])
layout = go.Layout(xaxis=dict(title="y"), yaxis=dict(title="yhat"))
py.iplot(go.Figure(data=[yhat_vs_y, slope_one], layout=layout),
filename="FE_Part1_7")
While one-hot encoding is the standard mechanism for encoding categorical data, there are a few points to keep in mind:

it may generate too many dimensions/features;
all possible values must be known in advance;
missing values are reasonably captured by a zero in all dummy features;
it can be combined with other features using domain knowledge.
The hasBought Feature

The hasBought feature is a boolean (0/1) valued feature, but it can have missing values.

There are a few options for encoding hasBought:
Interpret directly as numbers. If there were no missing values then the booleans are typically treated directly as continuous values.
Apply one-hot encoding. This would create two new features, hasBought=True and hasBought=False. This is probably the most general encoding but suffers from increased complexity.
1/-1 Encoding. Another common encoding for booleans with missing values is: True → 1, False → -1, and missing → 0 (a minimal sketch is given below).
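A minimal sketch of this encoding in pandas (the column values are made up):

import numpy as np
import pandas as pd

# Hypothetical hasBought column containing a missing value
hasBought = pd.Series([True, False, np.nan, True])

# True -> 1.0, False -> -1.0, missing -> 0.0
hasBought_encoded = hasBought.map({True: 1.0, False: -1.0}).fillna(0.0)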
The review Feature

Encoding text as a real-valued feature is especially challenging, and many of the standard transformations are lossy. Whereas the earlier transformations (e.g., one-hot encoding and Boolean representations) preserve the information in the feature, most of the techniques for encoding text destroy information about the word order and, in many cases, key parts of the grammar.
Here we will discuss two widely used representations of text: the bag-of-words encoding and the N-gram encoding.

Both of these encoding strategies are related to the one-hot encoding, with dummy features created for every word or sequence of words, and with multiple dummy features potentially having counts greater than zero.
The bag-of-words encoding is widely used and a standard representation for text in many of the popular text clustering algorithms. The following is a simple illustration of the bag-of-words encoding:
Notice that words like "is" and "about" in isolation contain very little information about the meaning of the sentence. These stop-words are typically removed; here is a good list of stop-words in many languages. Most of the meaning is carried by the remaining words: "fun", "machines", and "learning". Though even then there are many possible meanings: learning machines have fun learning, or learning about machines is fun learning, ... Finally, storing a 0 for every word that does not appear in each record would be incredibly inefficient, which is why sparse representations are used in practice.

Why is it called a bag-of-words? A bag is another term for a multiset: an unordered collection which may contain multiple instances of each element.
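To make the multiset idea concrete, here is a minimal sketch using a plain Python Counter (the sentence and the tiny stop-word list are made up; the scikit-learn tooling used later is what you would use in practice):

from collections import Counter

sentence = "learning about machines is fun but machines learning is more fun"
stop_words = {"is", "about", "but", "more"}  # tiny illustrative stop-word set

# The bag-of-words is simply a multiset of the remaining words
bag = Counter(w for w in sentence.lower().split() if w not in stop_words)
# Counter({'learning': 2, 'machines': 2, 'fun': 2})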
When professor Gonzalez was a graduate student at Carnegie Mellon University, he and several other computer scientists created the following art piece on display at the Gates Center:
Notice the unordered collection of words in the art piece?
The N-Gram encoding is a generalization of the bag-of-words encoding designed to capture limited ordering information. Consider the following passage of text:
The book was not well written but I did enjoy it.
If we re-arrange the words we can also write:
The book was well written but I did not enjoy it.
Both sentences contain exactly the same words, so their bag-of-words encodings are identical even though their meanings are opposite. Local word order can clearly be important when making decisions about text. The n-gram encoding captures local word order by defining counts over sliding windows. In the following example a bi-gram ($n=2$) encoding is constructed:
The above n-gram would be encoded in the sparse vector:
Notice that the n-gram captures key pieces of sentiment information: "well written" and "not enjoy".
N-grams are often used for other types of sequence data beyond text. For example, n-grams can be used to encode genomic data, protein sequences, and click logs.
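For instance, a quick sketch of extracting character tri-grams from a made-up DNA fragment:

# Character tri-grams (n=3) over a made-up DNA fragment
seq = "ATGCGTAC"
trigrams = [seq[i:i+3] for i in range(len(seq) - 2)]
# ['ATG', 'TGC', 'GCG', 'CGT', 'GTA', 'TAC']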
N-Gram Issues: as n increases, the number of distinct n-grams grows very quickly, so the encoding becomes extremely high-dimensional and sparse, and most n-grams occur only a handful of times.
frost_text = [x for x in """
Some say the world will end in fire,
Some say in ice.
From what I've tasted of desire
I hold with those who favor fire.
""".split("\n") if len(x) > 0]
frost_text
from sklearn.feature_extraction.text import CountVectorizer
# Construct the tokenizer with English stop words
bow = CountVectorizer(stop_words="english")
# fit the model to the passage
bow.fit(frost_text)
# Print the words that are kept
print("Words:",
list(zip(range(0,len(bow.get_feature_names())),bow.get_feature_names())))
print("Sentence Encoding: \n")
# Print the encoding of each line
for (s, r) in zip(frost_text, bow.transform(frost_text)):
print(s)
print(r)
print("------------------")
# Construct the tokenizer to extract unigrams and bigrams (no stop-word removal here)
bigram = CountVectorizer(ngram_range=(1, 2))
# fit the model to the passage
bigram.fit(frost_text)
# Print the words that are kept
print("\nWords:",
list(zip(range(0,len(bigram.get_feature_names())), bigram.get_feature_names())))
print("\nSentence Encoding: \n")
# Print the encoding of each line
for (s, r) in zip(frost_text, bigram.transform(frost_text)):
print(s)
print(r)
print("------------------")
If we are encoding text in a particular domain (e.g., processing insurance claims), it is likely that there will be frequent terms (e.g., insurance or claim) that provide little information. However, because these terms occur frequently they can present challenges to some modeling techniques. In these cases, additional scaling may be applied to transform the bag-of-words or n-gram vectors to emphasize the more informative terms. One of the most common scaling techniques is the term frequency inverse document frequency (TF-IDF), which emphasizes words that are unique to a particular record. Because the notation is confusing, I have provided a pseudocode implementation. However, you should use a more efficient sparse implementation like those provided in scikit-learn.
def tfidf(X):
    """
    Input: X is a bag-of-words matrix (rows=records, cols=terms)
    """
    (ndocs, nwords) = X.shape
    # Term frequency: normalize each row by the total word count of the record
    tf = X / X.sum(axis=1)[:, np.newaxis]
    # Inverse document frequency: total records over records containing each term
    idf = ndocs / (X > 0).sum(axis=0)
    return tf * np.log(idf)
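For reference, here is a sketch using the sparse implementation in scikit-learn (its defaults apply smoothing and normalization, so the numbers will differ slightly from the pseudocode above):

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_enc = TfidfVectorizer(stop_words="english")
tfidf_matrix = tfidf_enc.fit_transform(frost_text)  # sparse (lines x terms) matrix
tfidf_enc.get_feature_names()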
These transformations are especially important when computing similarities between vector encodings of text. We will not cover them in detail in DS100, but it is worth knowing that they exist.
Most machine learning (ML) and statistics techniques operate on multivariate real-valued domains (i.e., vectors). As a consequence, we need methods to encode non-continuous data types into meaningful continuous forms. We discussed: one-hot encoding for categorical data, encodings for Boolean features with missing values, and the bag-of-words and n-gram encodings (optionally rescaled with TF-IDF) for text.
We will now explore how feature transformations can be used to capture domain knowledge and encode complex relationships.