Feature Engineering

In this notebook we will explore a key part of data science: feature engineering, the process of transforming the representation of model inputs to enable better model approximation. Feature engineering enables you to:

  1. encode non-numeric features to be used as inputs to common numeric models
  2. capture domain knowledge (e.g., the perceived loudness of a sound is the log of its intensity; a brief sketch follows this list)
  3. transform complex relationships into simple linear relationships
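As a concrete illustration of point 2, here is a minimal sketch (not part of the original notebook) that adds a log-transformed feature to a made-up intensity measurement; the column names are purely illustrative:

import numpy as np
import pandas as pd

# A toy (made-up) sound-intensity measurement.
sounds = pd.DataFrame({"intensity": [1e-6, 1e-4, 1e-2, 1.0]})

# Domain knowledge: perceived loudness scales with the log of intensity,
# so we add a log-transformed feature for the model to use.
sounds["log_intensity"] = np.log10(sounds["intensity"])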

Overview of Notation for Feature Engineering

The following video provides an overview of the notation for feature engineering:

In [1]:
from IPython.display import YouTubeVideo
YouTubeVideo("ET44iB169no")
Out[1]:

Mapping from Domain to Range

In the past few lectures we have been exploring various models for regression. These are models from some domain to a continuous quantity.

So far we have been interested in modeling relationships from some numerical domain to a continuous quantitative range:

In this class we will focus on Multiple Regression in which we consider mappings from potentially high-dimensional input spaces onto the real line (i.e., $y \in \mathbb{R}$):

It is worth noting that this is distinct from Multivariate Regression, in which we predict multiple response values (e.g., $y \in \mathbb{R}^q$); the similar names are an unfortunate source of confusion.

Standard Imports

As usual, we will import a standard set of functions

In [2]:
import numpy as np
import pandas as pd
In [3]:
import plotly.offline as py
import plotly.express as px
import plotly.graph_objects as go
import plotly.figure_factory as ff
import cufflinks as cf
cf.set_config_file(offline=True, sharing=False, theme='ggplot');
In [4]:
from sklearn.linear_model import LinearRegression

Basic Feature Engineering

The following video walks through some of the basic steps in feature engineering as well as the next section of this notebook.

In [5]:
from IPython.display import YouTubeVideo
YouTubeVideo("moL6aeW94Ps")
Out[5]:

What does it mean to be a linear model?

Linear models are linear combinations of features. These models are therefore linear in the parameters but not necessarily in the underlying data. We can capture non-linear relationships in the data through the use of feature functions:

$$ f_\theta\left( x \right) = \phi(x)^T \theta = \sum_{j=0}^{p} \phi(x)_j \theta_j $$

where $\phi$ is an arbitrary function from $x\in \mathbb{R}^d$ to $\phi(x) \in \mathbb{R}^{p+1}$. Notationally, we might write $\phi$ as a collection of separate feature functions $\phi_j$, each mapping $x\in \mathbb{R}^d$ to $\phi_j(x) \in \mathbb{R}$:

$$ \phi(x) = \left[\phi_0(x), \phi_1(x), \ldots, \phi_p(x) \right] $$

We often refer to these $\phi_j$ as feature functions, and their design plays a critical role both in how we capture prior knowledge and in our ability to fit complicated data.
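As a minimal sketch (not part of the original notebook), here is a hypothetical feature function for one-dimensional inputs with $p = 2$ polynomial features; note the model stays linear in $\theta$ even though it is non-linear in $x$:

import numpy as np

# A hypothetical feature map phi: R -> R^{p+1} with p = 2 (intercept, x, x^2).
def phi(x):
    return np.array([1.0, x, x**2])

# The model is a linear combination of the features: f_theta(x) = phi(x)^T theta.
theta = np.array([0.5, -1.0, 2.0])

def f_theta(x):
    return phi(x) @ theta

f_theta(3.0)  # 0.5 - 3.0 + 18.0 = 15.5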

Modeling Non-linear relationships

To demonstrate the power of feature engineering let's return to our earlier synthetic dataset.

In [6]:
synth_data = pd.read_csv("data/synth_data.csv.zip")
synth_data.head()
Out[6]:
X1 X2 Y
0 -1.254599 4.507143 1.526396
1 2.319939 0.986585 5.190449
2 -3.439814 -3.440055 4.980978
3 -4.419164 3.661761 1.130775
4 1.011150 2.080726 5.849364

This dataset is simple enough that we can easily visualize it.

In [7]:
fig = go.Figure()
data_scatter = go.Scatter3d(x=synth_data["X1"], y=synth_data["X2"], z=synth_data["Y"], 
                            mode="markers",
                            marker=dict(size=2))
fig.add_trace(data_scatter)
fig.update_layout(margin=dict(l=0, r=0, t=0, b=0), 
                  height=600)
fig

Questions:

Is the relationship between $y$ and $x_1$ and $x_2$ linear?


Answer: While the data appear to live near a two-dimensional plane, there is clearly some additional non-linear structure in the data.


Previously we fit a linear model to the data using scikit-learn:

In [8]:
model = LinearRegression()
model.fit(synth_data[["X1", "X2"]], synth_data[["Y"]])
Out[8]:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)

Visualizing the model we obtained:

In [9]:
def plot_plane(f, X, grid_points = 30):
    u = np.linspace(X[:,0].min(),X[:,0].max(), grid_points)
    v = np.linspace(X[:,1].min(),X[:,1].max(), grid_points)
    xu, xv = np.meshgrid(u,v)
    X = np.vstack((xu.flatten(),xv.flatten())).transpose()
    z = f(X)
    return go.Surface(x=xu, y=xv, z=z.reshape(xu.shape),opacity=0.8)
In [10]:
fig = go.Figure()
fig.add_trace(data_scatter)
fig.add_trace(plot_plane(model.predict, synth_data[["X1", "X2"]].to_numpy(), grid_points=5))
fig.update_layout(margin=dict(l=0, r=0, t=0, b=0), 
                  height=600)

This wasn't a bad fit, but there is clearly more structure in the data than the plane captures.

Designing a Better Feature Function

Examining the above data we see that there is some periodic structure. Let's define a feature function that tries to capture this periodic structure. In the following we will add several sine functions at different frequencies and phase offsets. Note that for this to remain a linear model, we cannot make the frequency or phase of the sine function a model parameter. Recall that in previous lectures we did make the frequency and phase parameters of the model, and we then had to use gradient descent to compute the loss-minimizing parameter values.

In [11]:
def phi_periodic(X):
    return np.hstack([
        X,
        np.sin(X),
        np.sin(10*X),
        np.sin(20*X),
        np.sin(X + 1),
        np.sin(10*X + 1),
        np.sin(20*X + 1)
    ])
    

Creating the original $\mathbb{X}$ and $\mathbb{Y}$ matrices

In [12]:
X = synth_data[["X1", "X2"]].to_numpy()
Y = synth_data[["Y"]].to_numpy()

Constructing the $\Phi$ matrix

In [13]:
Phi = phi_periodic(X)
In [14]:
Phi.shape
Out[14]:
(1000, 14)

Fitting the linear model to the transformed features:

In [15]:
model_phi = LinearRegression()
model_phi.fit(Phi, Y)
Out[15]:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
In [16]:
def predict_phi(X):
    return model_phi.predict(phi_periodic(X))
In [17]:
fig = go.Figure()
fig.add_trace(data_scatter)
fig.add_trace(plot_plane(predict_phi, X, grid_points=100))
fig.update_layout(margin=dict(l=0, r=0, t=0, b=0), 
                  height=600)

Examining the model parameters:

In [18]:
model_phi.intercept_ # theta_0
Out[18]:
array([5.00093786])
In [19]:
model_phi.coef_ # theta_1 to theta_14
Out[19]:
array([[ 0.29924649, -0.50316463, -0.63977486,  1.00223915,  0.00835992,
         0.014116  , -0.00336647,  0.00239895,  1.17944458, -0.0058399 ,
         0.00857391, -0.006814  ,  0.0199362 , -0.00281974]])

Feature Engineering

The following video gives an overview of standard feature engineering practices.

In [20]:
YouTubeVideo("y6mxtlWYo54")
Out[20]:

Feature Functions for Categorical or Text Data

Suppose we are given the following table:

Our goal is to learn a function that approximates the relationship between the blue and red columns. Let's assume the range, "Ratings", is the real numbers (this may be a problem if ratings are restricted to [0, 5], but more on that later).

What is the domain of this function?

The schema of the relational model provides one possible answer:

RatingsData(uid INTEGER, age FLOAT, 
            state VARCHAR, hasBought BOOLEAN,
            review VARCHAR, rating FLOAT)

Which would suggest that the domain is then:

integers, real numbers, strings, booleans, and more strings.

Unfortunately, the techniques we have discussed so far and most of the techniques in machine learning and statistics operate on real-valued vector inputs $x \in \mathbb{R}^d$ (or for the statisticians $x \in \mathbb{R}^p$).

Goal:

Moreover, many of these techniques, especially the linear models we have been studying, assume the inputs are quantitative variables in which the relative magnitude of the feature encodes information about the response variable.

In the following we define several basic transformations to encode features as real numbers.

Basic Feature Engineering: Get $\mathbb{R}$

Our first step as feature engineers is to translate our data into a form that encodes each feature as a continuous variable.

The Uninformative Feature: uid

The uid was likely used to join the user information (e.g., age, and state) with some Reviews table. The uid presents several questions:

  • What is the meaning of the uid number?
  • Does the magnitude of the uid reveal information about the rating?

There are several answers:

  1. Although numbers, identifiers are typically categorical (like strings) and as a consequence the magnitude has little meaning. In these settings we would either drop or one-hot encode the uid. We will return to feature dropping and one-hot-encoding in a moment.

  2. There are scenarios where the magnitude of the numerical uid value contains important information. When user ids are created in consecutive order, larger user ids imply more recent users. In these cases we might want to interpret the uid feature as a real number and keep it in our model.

Dropping Features

While uncommon, there are certain scenarios where manually dropping features might be helpful:

  1. when the feature does not contain information associated with the prediction task. Dropping uninformative features can help to address over-fitting, an issue we will discuss in great detail soon.

  2. when the feature is not available at prediction time. For example, the feature might contain information collected after the user entered a rating. This is a common scenario in time-series analysis.

However, in the absence of substantial domain knowledge, we would prefer to use algorithmic techniques to help eliminate features. We will discuss this in more detail when we return to regularization.
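As a minimal sketch (with made-up values shaped like the RatingsData relation above), manually dropping the uid column before modeling might look like:

import pandas as pd

# Made-up rows shaped like the RatingsData relation above.
ratings = pd.DataFrame({
    "uid": [101, 102, 103],
    "age": [25.0, 31.0, 47.0],
    "rating": [4.5, 3.0, 5.0],
})

# Manually drop the (likely uninformative) uid column before modeling.
X = ratings.drop(columns=["uid", "rating"])
y = ratings["rating"]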

The Continuous age Feature

The age feature encodes the user's age. This is already a continuous real number, so no additional feature transformations are required. However, as we will soon see, we may introduce additional related features (e.g., indicators for various age groups or non-linear transformations).
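Continuing the hypothetical ratings sketch above, derived age features might look like the following (the threshold and transformation are illustrative, not prescriptive):

import numpy as np

# An age-group indicator and a non-linear transformation of age.
ratings["age_under_30"] = (ratings["age"] < 30).astype(float)
ratings["log_age"] = np.log(ratings["age"])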

The Categorical state Feature

The state feature is a string encoding the category (one of the 50 states). How do we meaningfully encode such a feature as one or more real-numbers?

We could enumerate the states in alphabetical order: AL=0, AK=1, ..., WY=49. This is a form of dictionary encoding, which maps each category to an integer. However, this would likely be a poor feature encoding since the magnitude provides little information about the rating.

Alternatively, we might enumerate the states based on their geographic region (e.g., lower numbers for coastal states). While this alternative dictionary encoding may provide some information, there is a better way to encode categorical features for machine learning algorithms.
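A minimal sketch of such a dictionary (integer) encoding, using pandas.factorize on a few made-up state abbreviations:

import pandas as pd

# Dictionary encoding: each category is mapped to an arbitrary integer.
states = pd.Series(["CA", "NY", "CA", "TX"])
codes, categories = pd.factorize(states)
codes       # array([0, 1, 0, 2]) -- the magnitudes carry no real meaning
categories  # Index(['CA', 'NY', 'TX'], dtype='object')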

One-Hot Encoding

One-hot encoding, sometimes also called dummy encoding, is a simple mechanism to encode categorical data as real numbers such that the magnitude of each dimension is meaningful. Suppose a feature can take on $k$ distinct values (e.g., $k=50$ for the 50 states in the United States). For each distinct possible value a new feature (dimension) is created. For each record, all the new features are set to zero except the one corresponding to the value in the original feature.
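Before turning to library support, here is a minimal hand-rolled sketch of the mechanism: a feature with $k=3$ distinct values becomes $k$ indicator columns.

import numpy as np

# One-hot encode a toy feature with k = 3 distinct values.
kinds = ["Fish", "Dog", "Dog", "Cat"]
categories = sorted(set(kinds))                      # ['Cat', 'Dog', 'Fish']
one_hot = np.array([[1.0 if k == c else 0.0 for c in categories]
                    for k in kinds])
one_hot  # each row has exactly one 1 (the "hot" entry)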

The term one-hot encoding comes from digital circuits, where a categorical state is encoded as a particular "hot" wire:

One-Hot Encoding in Pandas

Here we create a toy DataFrame of pets including their name and kind:

In [21]:
df = pd.DataFrame({
    "name": ["Goldy", "Scooby", "Brian", "Francine", "Goldy"],
    "kind": ["Fish", "Dog", "Dog", "Cat", "Dog"],
    "age": [0.5, 7., 3., 10., 1.]
}, columns = ["name", "kind", "age"])
df
Out[21]:
name kind age
0 Goldy Fish 0.5
1 Scooby Dog 7.0
2 Brian Dog 3.0
3 Francine Cat 10.0
4 Goldy Dog 1.0

Pandas has a built-in function to construct one-hot encodings called get_dummies:

In [22]:
pd.get_dummies(df['kind'])
Out[22]:
Cat Dog Fish
0 0 0 1
1 0 1 0
2 0 1 0
3 1 0 0
4 0 1 0
In [23]:
pd.get_dummies(df)
Out[23]:
age name_Brian name_Francine name_Goldy name_Scooby kind_Cat kind_Dog kind_Fish
0 0.5 0 0 1 0 0 0 1
1 7.0 0 0 0 1 0 1 0
2 3.0 1 0 0 0 0 1 0
3 10.0 0 1 0 0 1 0 0
4 1.0 0 0 1 0 0 1 0

A serious issue with using Pandas to construct a one-hot encoding is that if we get new data in a different DataFrame, we may get a different encoding (e.g., different columns in a different order). Scikit-learn provides more flexible routines to construct these features.
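A small sketch of the problem (with made-up categories): the same get_dummies call produces different columns when the new data contain a different set of categories.

import pandas as pd

train = pd.DataFrame({"kind": ["Dog", "Cat"]})
new = pd.DataFrame({"kind": ["Dog", "Bird"]})

pd.get_dummies(train)  # columns: kind_Cat, kind_Dog
pd.get_dummies(new)    # columns: kind_Bird, kind_Dog  (kind_Cat is missing!)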

One-Hot Encoding in Scikit-Learn

Scikit-learn also provides several utilities for constructing one-hot encodings. The most basic way to construct a one-hot encoding is with the scikit-learn OneHotEncoder.

In [24]:
from sklearn.preprocessing import OneHotEncoder

oh_enc = OneHotEncoder()

To "learn" the categories we fit the OneHotEncoder to the data:

In [25]:
oh_enc.fit(df[['name', 'kind']])
Out[25]:
OneHotEncoder(categorical_features=None, categories=None, drop=None,
              dtype=<class 'numpy.float64'>, handle_unknown='error',
              n_values=None, sparse=True)

We can get the "names" of the new one-hot-encoding columns which reveal both the source columns and the categories within each column:

In [26]:
oh_enc.get_feature_names()
Out[26]:
array(['x0_Brian', 'x0_Francine', 'x0_Goldy', 'x0_Scooby', 'x1_Cat',
       'x1_Dog', 'x1_Fish'], dtype=object)

We can also construct the OneHotEncoding of the data:

In [27]:
oh_enc.transform(df[['name', 'kind']])
Out[27]:
<5x7 sparse matrix of type '<class 'numpy.float64'>'
	with 10 stored elements in Compressed Sparse Row format>

Notice that the OneHotEncoder produces a sparse output matrix. This is because most of the entries are 0. If we wanted to see the matrix we could do one of the following:

In [28]:
oh_enc.transform(df[['name', 'kind']]).todense()
Out[28]:
matrix([[0., 0., 1., 0., 0., 0., 1.],
        [0., 0., 0., 1., 0., 1., 0.],
        [1., 0., 0., 0., 0., 1., 0.],
        [0., 1., 0., 0., 1., 0., 0.],
        [0., 0., 1., 0., 0., 1., 0.]])
In [29]:
import matplotlib.pyplot as plt
plt.spy(oh_enc.transform(df[['name', 'kind']]))
Out[29]:
<matplotlib.lines.Line2D at 0x7fbbe062ab10>

Another, more general feature transformation is the scikit-learn DictVectorizer. It converts a list of dictionaries into a vector encoding, one-hot encoding categorical (string) values and passing numerical values through unchanged.

In [30]:
from sklearn.feature_extraction import DictVectorizer

vec_enc = DictVectorizer()
In [31]:
vec_enc.fit(df.to_dict(orient='records'))
Out[31]:
DictVectorizer(dtype=<class 'numpy.float64'>, separator='=', sort=True,
               sparse=True)
In [32]:
vec_enc.get_feature_names()
Out[32]:
['age',
 'kind=Cat',
 'kind=Dog',
 'kind=Fish',
 'name=Brian',
 'name=Francine',
 'name=Goldy',
 'name=Scooby']
In [33]:
vec_enc.transform(df.to_dict(orient='records')).todense()
Out[33]:
matrix([[ 0.5,  0. ,  0. ,  1. ,  0. ,  0. ,  1. ,  0. ],
        [ 7. ,  0. ,  1. ,  0. ,  0. ,  0. ,  0. ,  1. ],
        [ 3. ,  0. ,  1. ,  0. ,  1. ,  0. ,  0. ,  0. ],
        [10. ,  1. ,  0. ,  0. ,  0. ,  1. ,  0. ,  0. ],
        [ 1. ,  0. ,  1. ,  0. ,  0. ,  0. ,  1. ,  0. ]])

Applying to new data

When run on new data with unseen categories, the default behavior of the OneHotEncoder is to raise an error, but you can also tell it to ignore these categories:

In [34]:
try:
    oh_enc.transform(np.array([["Cat", "Goldy"],["Bird","Fluffy"]])).todense()
except Exception as e:
    print(e)
Found unknown categories ['Bird', 'Cat'] in column 0 during transform
In [35]:
oh_enc.handle_unknown = 'ignore'
oh_enc.transform(np.array([["Cat", "Goldy"],["Bird","Fluffy"]])).toarray()
Out[35]:
array([[0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0.]])

The DictVectorizer is a bit more permissive by default:

In [36]:
vec_enc.transform([
    {"kind": "Cat", "name": "Goldy", "age": 35},
    {"kind": "Bird", "name": "Fluffy"},
    {"breed": "Chihuahua", "name": "Goldy"},
]).toarray()
Out[36]:
array([[35.,  1.,  0.,  0.,  0.,  0.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.]])

Dealing With Text Features

Encoding text as a real-valued feature is especially challenging, and many of the standard transformations are lossy. Whereas the earlier transformations (e.g., one-hot encoding and Boolean representations) preserve the information in the feature, most of the techniques for encoding text destroy information about word order and, in many cases, key parts of the grammar.

Here we present two widely used representations of text:

  • Bag-of-Words Encoding: encodes text by the frequency of each word
  • N-Gram Encoding: encodes text by the frequency of sequences of words of length $N$

Both of these encoding strategies are related to the one-hot encoding with dummy features created for every word or sequence of words and with multiple dummy features having counts greater than zero.

The Bag-of-Words Encoding

The bag-of-words encoding is widely used and a standard representation for text in many of the popular text clustering algorithms. The following is a simple illustration of the bag-of-words encoding:

Notice

  1. Stop words are removed. Stop-words are words like is and about that in isolation contain very little information about the meaning of the sentence. Lists of stop-words for many languages are readily available.
  2. Word order information is lost. Nonetheless the vector still suggests that the sentence is about fun, machines, and learning. Though there are many possible meanings: learning machines have fun learning, or learning about machines is fun learning, ...
  3. Capitalization and punctuation are typically removed.
  4. Sparse Encoding: is necessary to represent the bag-of-words efficiently. There are millions of possible words (including terminology, names, and misspellings) and so instantiating a 0 for every word that is not in each record would be incredibly inefficient.

Why is it called a bag-of-words? A bag is another term for a multiset: an unordered collection which may contain multiple instances of each element.
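A minimal sketch of a bag (multiset) of words, using a tiny made-up stop-word list rather than a real one:

from collections import Counter

stop_words = {"is", "about", "the"}   # a made-up, tiny stop-word list
sentence = "learning about machines is fun learning"

# The "bag": an unordered collection with counts (a multiset).
bag = Counter(w for w in sentence.lower().split() if w not in stop_words)
bag  # Counter({'learning': 2, 'machines': 1, 'fun': 1})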

Professor Gonzalez is an "artist"

When Professor Gonzalez was a graduate student at Carnegie Mellon University, he and several other computer scientists created the following art piece on display in the Gates Center:

Is this art or science?

Notice

  1. The unordered collection of words in the bag.
  2. The stop words on the floor.
  3. The missing broom. The original sculpture had a broom attached but the janitor got confused ....

Implementing the Bag-of-words Model

We can use sklearn to construct a bag-of-words representation of text

In [37]:
frost_text = [x for x in """
Some say the world will end in fire,
Some say in ice.
From what Ive tasted of desire
I hold with those who favor fire.
""".split("\n") if len(x) > 0]

frost_text
Out[37]:
['Some say the world will end in fire,',
 'Some say in ice.',
 'From what Ive tasted of desire',
 'I hold with those who favor fire.']
In [38]:
from sklearn.feature_extraction.text import CountVectorizer

# Construct the tokenizer with English stop words
bow = CountVectorizer(stop_words="english")

# fit the model to the passage
bow.fit(frost_text)
Out[38]:
CountVectorizer(analyzer='word', binary=False, decode_error='strict',
                dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
                lowercase=True, max_df=1.0, max_features=None, min_df=1,
                ngram_range=(1, 1), preprocessor=None, stop_words='english',
                strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
                tokenizer=None, vocabulary=None)
In [39]:
# Print the words that are kept
print("Words:", list(enumerate(bow.get_feature_names())))
Words: [(0, 'desire'), (1, 'end'), (2, 'favor'), (3, 'hold'), (4, 'ice'), (5, 'ive'), (6, 'say'), (7, 'tasted'), (8, 'world')]
In [40]:
print("Sentence Encoding: \n")
# Print the encoding of each line
for (s, r) in zip(frost_text, bow.transform(frost_text)):
    print(s)
    print(r)
    print("------------------")
Sentence Encoding: 

Some say the world will end in fire,
  (0, 1)	1
  (0, 6)	1
  (0, 8)	1
------------------
Some say in ice.
  (0, 4)	1
  (0, 6)	1
------------------
From what Ive tasted of desire
  (0, 0)	1
  (0, 5)	1
  (0, 7)	1
------------------
I hold with those who favor fire.
  (0, 2)	1
  (0, 3)	1
------------------

The N-Gram Encoding

The N-Gram encoding is a generalization of the bag-of-words encoding designed to capture limited ordering information. Consider the following passage of text:

The book was not well written but I did enjoy it.

If we re-arrange the words we can also write:

The book was well written but I did not enjoy it.

These two sentences have opposite meanings, yet their bag-of-words encodings are identical because the encoding discards word order. Local word order can clearly be important when making decisions about text. The n-gram encoding captures local word order by defining counts over sliding windows. In the following example a bi-gram ($n=2$) encoding is constructed:

The above n-gram would be encoded in the sparse vector:

Notice that the n-gram captures key pieces of sentiment information: "well written" and "not enjoy".

N-grams are often used for other types of sequence data beyond text. For example, n-grams can be used to encode genomic data, protein sequences, and click logs.
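A minimal sketch (not from the notebook) of the sliding-window idea behind bi-grams, in plain Python:

# Extract bi-grams (n = 2) with a sliding window over the tokens.
tokens = "the book was not well written".split()
bigrams = list(zip(tokens, tokens[1:]))
bigrams
# [('the', 'book'), ('book', 'was'), ('was', 'not'),
#  ('not', 'well'), ('well', 'written')]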

N-Gram Issues

  1. The n-gram representation is hyper sparse, and maintaining the dictionary of possible n-grams can be very costly. The hashing trick (see the sketch below) is a popular solution to approximate the sparse n-gram encoding. In the hashing trick each n-gram is mapped to a relatively large (e.g., 32-bit) hash-id and the counts are associated with the hash index without saving the n-gram text in a dictionary. As a consequence, distinct n-grams can collide and be treated as the same feature.
  2. As $N$ increases, the chance of seeing the same n-grams at prediction time decreases rapidly.
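A minimal sketch of the hashing trick using scikit-learn's HashingVectorizer; n_features is kept tiny here purely for illustration (real applications use a much larger value, which makes collisions rare).

from sklearn.feature_extraction.text import HashingVectorizer

# Each unigram/bigram is hashed to one of n_features indices; no vocabulary
# dictionary is stored, so distinct n-grams can collide.
hv = HashingVectorizer(ngram_range=(1, 2), n_features=16)
hv.transform(["learning about machines is fun"]).toarray()

The cells below return to the explicit (dictionary-based) CountVectorizer, this time counting both unigrams and bigrams.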
In [41]:
# Construct a tokenizer that counts unigrams and bigrams (no stop-word removal this time)
bigram = CountVectorizer(ngram_range=(1, 2))
# fit the model to the passage
bigram.fit(frost_text)
Out[41]:
CountVectorizer(analyzer='word', binary=False, decode_error='strict',
                dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
                lowercase=True, max_df=1.0, max_features=None, min_df=1,
                ngram_range=(1, 2), preprocessor=None, stop_words=None,
                strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
                tokenizer=None, vocabulary=None)
In [42]:
# Print the words that are kept
print("\nWords:", 
      list(zip(range(0,len(bigram.get_feature_names())), bigram.get_feature_names())))
Words: [(0, 'desire'), (1, 'end'), (2, 'end in'), (3, 'favor'), (4, 'favor fire'), (5, 'fire'), (6, 'from'), (7, 'from what'), (8, 'hold'), (9, 'hold with'), (10, 'ice'), (11, 'in'), (12, 'in fire'), (13, 'in ice'), (14, 'ive'), (15, 'ive tasted'), (16, 'of'), (17, 'of desire'), (18, 'say'), (19, 'say in'), (20, 'say the'), (21, 'some'), (22, 'some say'), (23, 'tasted'), (24, 'tasted of'), (25, 'the'), (26, 'the world'), (27, 'those'), (28, 'those who'), (29, 'what'), (30, 'what ive'), (31, 'who'), (32, 'who favor'), (33, 'will'), (34, 'will end'), (35, 'with'), (36, 'with those'), (37, 'world'), (38, 'world will')]
In [43]:
print("\nSentence Encoding: \n")
# Print the encoding of each line
for (s, r) in zip(frost_text, bigram.transform(frost_text)):
    print(s)
    print(r)
    print("------------------")
Sentence Encoding: 

Some say the world will end in fire,
  (0, 1)	1
  (0, 2)	1
  (0, 5)	1
  (0, 11)	1
  (0, 12)	1
  (0, 18)	1
  (0, 20)	1
  (0, 21)	1
  (0, 22)	1
  (0, 25)	1
  (0, 26)	1
  (0, 33)	1
  (0, 34)	1
  (0, 37)	1
  (0, 38)	1
------------------
Some say in ice.
  (0, 10)	1
  (0, 11)	1
  (0, 13)	1
  (0, 18)	1
  (0, 19)	1
  (0, 21)	1
  (0, 22)	1
------------------
From what Ive tasted of desire
  (0, 0)	1
  (0, 6)	1
  (0, 7)	1
  (0, 14)	1
  (0, 15)	1
  (0, 16)	1
  (0, 17)	1
  (0, 23)	1
  (0, 24)	1
  (0, 29)	1
  (0, 30)	1
------------------
I hold with those who favor fire.
  (0, 3)	1
  (0, 4)	1
  (0, 5)	1
  (0, 8)	1
  (0, 9)	1
  (0, 27)	1
  (0, 28)	1
  (0, 31)	1
  (0, 32)	1
  (0, 35)	1
  (0, 36)	1
------------------