Dimensionality Reduction

This notebook is a revised version of the outstanding lecture notebook by Professor Hug.

Imports

As in other notebooks, we will use the same set of standard imports.

In [1]:
import numpy as np
import pandas as pd
np.random.seed(23)
In [2]:
import plotly.offline as py
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import plotly.figure_factory as ff
import cufflinks as cf
cf.set_config_file(offline=True, sharing=False, theme='ggplot');
import matplotlib.pyplot as plt
import seaborn as sns
In [3]:
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

Visualizing Relationships

We often create visualizations in order to facilitate exploratory data analysis. For example, we might create scatterplots to explore the relationship between pairs of variables in a dataset. Below, we see weight vs. height.

In [4]:
df = pd.read_csv("data/hongkong_height_weight.csv")
df
Out[4]:
Height Weight
0 65.78331 112.9925
1 71.51521 136.4873
2 69.39874 153.0269
3 68.21660 142.3354
4 67.78781 144.2971
... ... ...
283 66.30582 102.8114
284 68.90702 134.2212
285 65.60543 100.9758
286 68.00189 145.8886
287 69.93715 155.3046

288 rows × 2 columns

In [5]:
px.scatter(df, x="Height", y="Weight")

The dataset below gives the "percentage body fat, age, weight, height, and ten body circumference measurements" for 252 men.

http://jse.amstat.org/v4n1/datasets.johnson.html

For simplicity, we read in only 8 of the provided attributes, yielding the dataframe below.

In [6]:
#http://jse.amstat.org/datasets/fat.txt
df3 = pd.read_fwf("data/fat.dat.txt", colspecs = [(17, 21), (23, 29), (35, 37),
                                             (39, 45), (48, 53), (73, 77),
                                            (80, 85), (88, 93)], header=None, names = ["% fat", "density", "age", "weight", "height", "neck", "chest", "abdomen"])
df3.head()
Out[6]:
% fat density age weight height neck chest abdomen
0 12.3 1.0708 23 154.25 67.75 36.2 93.1 85.2
1 6.1 1.0853 22 173.25 72.25 38.5 93.6 83.0
2 25.3 1.0414 22 154.00 66.25 34.0 95.8 87.9
3 10.4 1.0751 26 184.75 72.25 37.4 101.8 86.4
4 28.7 1.0340 24 184.25 71.25 34.4 97.3 100.0

We see that percentage fat and density in g/cm^3 are almost completely redundant.

In [7]:
px.scatter(df3, x = "% fat", y = "density")

By contrast, while there is a strong correlation between neck and chest measurements, the data is still clearly two dimensional: the points do not collapse onto a single curve.

In [8]:
px.scatter(df3, x = "neck", y = "chest")

Age and height show a small correlation, as people in this dataset seem to get slightly shorter with greater age.

In [9]:
px.scatter(df3, x = "age", y = "height")

We note that there is one outlier: a person listed as only 29.5 inches tall. While an extraordinarily small number of adult males are less than three feet tall, the rest of the measurements for this observation suggest that this height was simply a data entry error.

In [10]:
df3.query("height < 40")
Out[10]:
% fat density age weight height neck chest abdomen
41 32.9 1.025 44 205.0 29.5 36.6 106.0 104.3
In [11]:
df3 = df3.drop(41)
In [12]:
px.scatter(df3, x = "age", y = "height")

We can try to visualize more than 2 attributes at once, but relationships encoded in, e.g., color and dot size are much harder for human readers to see. For example, we saw above that density and % fat are almost entirely redundant, but this relationship is nearly impossible to see when comparing the colors and dot sizes in the plot below.

In [13]:
px.scatter(df3, x = "neck", y = "chest", color="density", size = "% fat")

Seaborn gives us the ability to create a matrix of scatterplots for all possible pairs of variables. This can be useful, though even with only 8 variables it's still difficult to fully digest.

In [14]:
sns.pairplot(df3)
Out[14]:
<seaborn.axisgrid.PairGrid at 0x7f9a28ab7c90>

We can also do this in plotly:

In [15]:
fig = px.scatter_matrix(df3)
fig.update_traces(diagonal_visible=False)

Notice that when you interact with the plotly plot above, you are actually interacting with all of the panels at once. Try selecting a group of points in one panel and noting how they are highlighted in the other panels.

We should note that despite the very strong relationship between % fat and density, the numerical rank of the data matrix is still 8. For the rank to drop to 7, one column would have to be an exact linear combination of the others, not just approximately related. We'll talk about techniques to reduce the dimensionality over the course of this lecture and the next.

In [16]:
np.linalg.matrix_rank(df3)
Out[16]:
8
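
To illustrate this point concretely (a hypothetical sketch): if density were made exactly proportional to % fat, one column would be redundant and the rank would drop to 7.

In [ ]:
# Hypothetical modification: replace density with an exact multiple of % fat.
df_exact = df3.copy()
df_exact["density"] = 2 * df_exact["% fat"]
# One column is now an exact linear combination of another, so the rank drops.
np.linalg.matrix_rank(df_exact)  # expected: 7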

House of Representatives Voting Data

Next, let's consider voting data from the U.S. House of Representatives during September 2019. In this example, our goal will be to try to find clusters of representatives who vote in similar ways. For example, we might expect to find that Democrats and Republicans vote similarly to other members of their party.

In [17]:
import yaml
with open("data/legislators-current.yaml", "r") as f:
    legislators_data = yaml.safe_load(f)
In [18]:
from datetime import datetime
def to_date(s):
    return datetime.strptime(s, '%Y-%m-%d')

legs = pd.DataFrame(
    columns=['leg_id', 'first', 'last', 'gender', 'state', 'chamber', 'party', 'birthday'],
    data=[[x['id']['bioguide'], 
           x['name']['first'],
           x['name']['last'],
           x['bio']['gender'],
           x['terms'][-1]['state'],
           x['terms'][-1]['type'],
           x['terms'][-1]['party'],
           to_date(x['bio']['birthday'])] for x in legislators_data])

legs
Out[18]:
leg_id first last gender state chamber party birthday
0 B000944 Sherrod Brown M OH sen Democrat 1952-11-09
1 C000127 Maria Cantwell F WA sen Democrat 1958-10-13
2 C000141 Benjamin Cardin M MD sen Democrat 1943-10-05
3 C000174 Thomas Carper M DE sen Democrat 1947-01-23
4 C001070 Robert Casey M PA sen Democrat 1960-04-13
... ... ... ... ... ... ... ... ...
530 G000592 Jared Golden M ME rep Democrat 1982-07-25
531 K000395 Fred Keller M PA rep Republican 1965-10-23
532 B001311 Dan Bishop M NC rep Republican 1964-07-01
533 M001210 Gregory Murphy M NC rep Republican 1963-03-05
534 L000594 Kelly Loeffler F GA sen Republican 1970-11-27

535 rows × 8 columns

In [19]:
# September 2019 House of Representatives roll call votes
# Downloaded using https://github.com/eyeseast/propublica-congress
votes = pd.read_csv('data/votes.csv')
votes = votes.astype({"roll call": str}) 
votes.head()
Out[19]:
chamber session roll call member vote
0 House 1 555 A000374 Not Voting
1 House 1 555 A000370 Yes
2 House 1 555 A000055 No
3 House 1 555 A000371 Yes
4 House 1 555 A000372 No
In [20]:
votes.merge(legs, left_on='member', right_on='leg_id').sample(5)
Out[20]:
chamber_x session roll call member vote leg_id first last gender state chamber_y party birthday
10820 House 1 542 M001188 Yes M001188 Grace Meng F NY rep Democrat 1975-10-01
11119 House 1 530 M001206 Yes M001206 Joseph Morelle M NY rep Democrat 1957-04-29
2592 House 1 529 C001123 No C001123 Gilbert Cisneros M CA rep Democrat 1971-02-12
15514 House 1 546 T000193 Yes T000193 Bennie Thompson M MS rep Democrat 1948-01-28
10144 House 1 521 M001158 No M001158 Kenny Marchant M TX rep Republican 1951-02-23
In [21]:
def was_yes(s):
    # Each (member, roll call) group should contain a single vote;
    # return 1 if that vote was "Yes" and 0 otherwise.
    if s.iloc[0] == 'Yes':
        return 1
    else:
        return 0
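
One assumption worth checking before we pivot: was_yes only inspects the first vote in each group, which is safe only if each member cast at most one vote per roll call. A quick sketch of such a check:

In [ ]:
# Largest number of votes recorded for any (member, roll call) pair.
# If this prints 1, taking s.iloc[0] inside was_yes is safe.
votes.groupby(['member', 'roll call']).size().max()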
In [22]:
vote_pivot = votes.pivot_table(index='member', 
                                columns='roll call', 
                                values='vote', 
                                aggfunc=was_yes, 
                                fill_value=0)
print(vote_pivot.shape)
vote_pivot.head()
(441, 41)
Out[22]:
roll call 515 516 517 518 519 520 521 522 523 524 ... 546 547 548 549 550 551 552 553 554 555
member
A000055 1 0 0 0 1 1 0 1 1 1 ... 0 0 1 0 0 1 0 0 1 0
A000367 0 0 0 0 0 0 0 0 0 0 ... 0 1 1 1 1 0 1 1 0 1
A000369 1 1 0 0 1 1 0 1 1 1 ... 0 0 1 0 0 1 0 0 1 0
A000370 1 1 1 1 1 0 1 0 0 0 ... 1 1 1 1 1 0 1 1 1 1
A000371 1 1 1 1 1 0 1 0 0 0 ... 1 1 1 1 1 0 1 1 1 1

5 rows × 41 columns

In [23]:
vote_pivot.shape
Out[23]:
(441, 41)

This data has 441 observations (members of the House of Representatives, including the 6 non-voting members) and 41 dimensions (roll call votes). While politics is quite polarized, none of these columns is an exact linear combination of the others, as the rank computation below shows.

In [24]:
np.linalg.matrix_rank(vote_pivot)
Out[24]:
41

Suppose we want to find clusters of similar voting behavior. We might try reducing our data to only two dimensions and seeing whether clear patterns emerge. Let's start by looking at which votes were most controversial.

In [25]:
np.var(vote_pivot, axis=0).sort_values(ascending = False)
Out[25]:
roll call
555    0.249988
540    0.249896
530    0.249896
542    0.249783
537    0.249783
533    0.249711
534    0.249711
536    0.249711
543    0.249711
550    0.249628
552    0.249536
549    0.249536
546    0.249536
518    0.249433
517    0.249320
547    0.249320
545    0.249063
553    0.248765
525    0.248425
551    0.248240
531    0.247397
524    0.246389
526    0.246111
521    0.246111
529    0.244898
528    0.244230
527    0.243150
520    0.242378
523    0.241144
522    0.231796
539    0.231796
516    0.221461
538    0.216679
544    0.198066
515    0.112546
541    0.095218
554    0.078743
532    0.071153
535    0.051460
519    0.047398
548    0.043295
dtype: float64

We see that roll call 548 had very little variance. According to http://clerk.house.gov/evs/2019/roll548.xml, this roll call was on a resolution concerning the 2019 whistleblower complaint about President Trump and Ukraine. The full text of the House resolution for this roll call can be found at https://www.congress.gov/bill/116th-congress/house-resolution/576/text:

(1) the whistleblower complaint received on August 12, 2019, by the Inspector General of the Intelligence Community shall be transmitted immediately to the Select Committee on Intelligence of the Senate and the Permanent Select Committee on Intelligence of the House of Representatives; and

(2) the Select Committee on Intelligence of the Senate and the Permanent Select Committee on Intelligence of the House of Representatives should be allowed to evaluate the complaint in a deliberate and bipartisan manner consistent with applicable statutes and processes in order to safeguard classified and sensitive information.

We see that 421 members voted for this resolution, while the remaining 20 are recorded as something other than "Yes" (whether by voting no, answering "present," or not voting at all) and are therefore encoded as 0. Clearly, a scatterplot involving this particular dimension isn't going to be useful.

In [26]:
vote_pivot['548'].value_counts()
Out[26]:
1    421
0     20
Name: 548, dtype: int64

By contrast, we saw high variance for most of the other roll call votes. Most of them had variances near 0.25, which is the maximum possible variance for a variable that takes on only the values 0 and 1. Let's consider the two highest variance variables, shown below:

In [27]:
vote_pivot['555'].value_counts()
Out[27]:
1    222
0    219
Name: 555, dtype: int64
In [28]:
vote_pivot['530'].value_counts()
Out[28]:
1    225
0    216
Name: 530, dtype: int64
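
As a quick check of the variance claim above (a small sketch): for a 0/1 column in which a fraction p of the entries are 1, the population variance is p(1 - p), which is at most 0.25 and is achieved when p = 0.5.

In [ ]:
# Fraction of "Yes" votes on roll call 555, and the implied variance p(1 - p).
p = vote_pivot['555'].mean()
p * (1 - p)  # expected to match the variance of roll call 555 computed earlier, about 0.249988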

Let's use these as our two dimensions for our scatterplot and see what happens.

In [29]:
px.scatter(vote_pivot, x='530', y='555')

By adding a small amount of random noise (jitter) to each vote, we can get rid of the overplotting.

In [30]:
vote_pivot_jittered = vote_pivot.copy()
vote_pivot_jittered.loc[:, '515':'555'] += np.random.random(vote_pivot_jittered.loc[:, '515':'555'].shape) * 0.3;
In [31]:
px.scatter(vote_pivot_jittered, x='530', y='555')

We can also look at this data labeled by party.

In [32]:
vote_pivot_labeled = vote_pivot.reset_index().merge(legs, left_on='member', right_on='leg_id').set_index('member')
vote_pivot_labeled.head(5)
Out[32]:
515 516 517 518 519 520 521 522 523 524 ... 554 555 leg_id first last gender state chamber party birthday
member
A000055 1 0 0 0 1 1 0 1 1 1 ... 1 0 A000055 Robert Aderholt M AL rep Republican 1965-07-22
A000367 0 0 0 0 0 0 0 0 0 0 ... 0 1 A000367 Justin Amash M MI rep Independent 1980-04-18
A000369 1 1 0 0 1 1 0 1 1 1 ... 1 0 A000369 Mark Amodei M NV rep Republican 1958-06-12
A000370 1 1 1 1 1 0 1 0 0 0 ... 1 1 A000370 Alma Adams F NC rep Democrat 1946-05-27
A000371 1 1 1 1 1 0 1 0 0 0 ... 1 1 A000371 Pete Aguilar M CA rep Democrat 1979-06-19

5 rows × 49 columns

In [33]:
vote_pivot_labeled_jittered = vote_pivot_labeled.copy()
vote_pivot_labeled_jittered.loc[:, '515':'555'] += np.random.random(vote_pivot_labeled_jittered.loc[:, '515':'555'].shape) * 0.3;
In [34]:
px.scatter(vote_pivot_labeled_jittered, x='530', y='555', color="party", 
           color_discrete_sequence=["#EF553B", "#00CC96", "#636EFA"])
In [35]:
px.scatter(vote_pivot_labeled_jittered, x='545', y='552', color="party", 
           color_discrete_sequence=["#EF553B", "#00CC96", "#636EFA"])

We see that considering only two votes does seem to do a pretty good job of telling Republicans from Democrats. We'll see in the next lecture how we can do even better using a technique called "Principal Component Analysis," or PCA.

Before we can get there, we'll need to spend some time reviewing key linear algebra principles.

Matrix Operations in Python

In [36]:
# Each row is one observation with two attributes (perhaps an age and a height in inches).
age_and_height = np.array([[182, 28], [399, 30], [725, 33]])
In [37]:
# Keeps both columns and appends a third equal to the second column divided by 12 (e.g. inches to feet).
M = np.array([[1, 0, 0], [0, 1, 1/12]])
In [38]:
age_and_height @ M
Out[38]:
array([[182.        ,  28.        ,   2.33333333],
       [399.        ,  30.        ,   2.5       ],
       [725.        ,  33.        ,   2.75      ]])
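
One way to read this result (an interpretation, since the columns are not labeled above): the third output column is just the second input column divided by 12, e.g. converting inches to feet. A quick check:

In [ ]:
result = age_and_height @ M
# The appended third column equals the second original column scaled by 1/12.
np.allclose(result[:, 2], age_and_height[:, 1] / 12)  # expected: True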

Singular Value Decomposition Experiment

Manual Decomposition

In the table below, we have the width, height, area, and perimeter of 100 rectangles stored in a dataframe.

In [39]:
rectangle = pd.read_csv("data/rectangle_data.csv")
rectangle
Out[39]:
width height area perimeter
0 8 6 48 28
1 2 4 8 12
2 1 3 3 8
3 9 3 27 24
4 9 8 72 34
... ... ... ... ...
95 8 5 40 26
96 8 7 56 30
97 1 4 4 10
98 1 6 6 14
99 2 6 12 16

100 rows × 4 columns

Naturally, the perimeter is just twice the width plus twice the height. Thus, if we create a new dataframe that has only the width, height, and area...

In [40]:
rectangle_no_perim = rectangle[["width", "height", "area"]]
rectangle_no_perim.head(5)
Out[40]:
width height area
0 8 6 48
1 2 4 8
2 1 3 3
3 9 3 27
4 9 8 72

... then we can recover the perimeter by multiplying this matrix by

1   0   0   2
0   1   0   2
0   0   1   0
In [41]:
transform_3D_to_4D = [[1, 0, 0, 2], [0, 1, 0, 2], [0, 0, 1, 0]]

We can recover the original four-column dataframe by multiplying by this transformation matrix and renaming the columns:

In [42]:
dfprod = (rectangle_no_perim @ transform_3D_to_4D)
dfprod.columns = ["width", "height", "area", "perimeter"]
dfprod
Out[42]:
width height area perimeter
0 8 6 48 28
1 2 4 8 12
2 1 3 3 8
3 9 3 27 24
4 9 8 72 34
... ... ... ... ...
95 8 5 40 26
96 8 7 56 30
97 1 4 4 10
98 1 6 6 14
99 2 6 12 16

100 rows × 4 columns
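
Since every entry involved is an integer, we can check (a quick sketch) that this reconstruction matches the original dataframe exactly:

In [ ]:
# Element-wise comparison of the reconstructed data and the original data.
(dfprod.values == rectangle.values).all()  # expected: True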

Singular Value Decomposition Example

Singular value decomposition is a numerical technique to (among other things) automatically uncover such redundancies. Given an input matrix $X$, SVD computes matrices $U$, $\Sigma$, and $V^T$ such that $X = U \Sigma V^T$.

In [43]:
u, s, vt = np.linalg.svd(rectangle, full_matrices = False)

As we did before with our manual decomposition, we can recover our original rectangle data by multiplying the three return values of this function back together.

In [44]:
pd.DataFrame(u * s @ vt).head(4)
Out[44]:
0 1 2 3
0 8.0 6.0 48.0 28.0
1 2.0 4.0 8.0 12.0
2 1.0 3.0 3.0 8.0
3 9.0 3.0 27.0 24.0

The two key pieces of the decomposition are $U\Sigma$ and $V^T$, which we can think of for now as analogous to our 'data' and 'transformation operation' from our manual decomposition earlier.

Let's start by looking at $U\Sigma$, which we can compute with the Python expression u * s (broadcasting multiplies each column j of u by s[j]).

In [45]:
pd.DataFrame(u*s)
Out[45]:
0 1 2 3
0 -56.309268 4.083696 -0.767969 1.834525e-15
1 -13.925871 -5.615924 1.591069 -3.365812e-15
2 -7.388370 -5.110893 1.513530 -7.044536e-17
3 -36.844432 -4.800059 -3.800959 -2.037419e-16
4 -79.472605 13.002698 0.186598 -2.667095e-16
... ... ... ... ...
95 -48.593653 1.109493 -1.557528 -1.099493e-16
96 -64.024883 7.057900 0.021591 3.360505e-17
97 -9.433844 -6.241127 2.247428 1.125378e-16
98 -13.524792 -8.501595 3.715225 -1.724475e-16
99 -19.636860 -6.703696 3.074769 -1.633136e-16

100 rows × 4 columns

Similarly, we can look at vt.

In [46]:
pd.DataFrame(vt)
Out[46]:
0 1 2 3
0 -0.146436 -0.129942 -8.100201e-01 -0.552756
1 -0.192736 -0.189128 5.863482e-01 -0.763727
2 -0.704957 0.709155 7.951614e-03 0.008396
3 -0.666667 -0.666667 -1.305762e-16 0.333333

The automatic decomposition returned by the svd function looks quite different than what we got when we manually decomposed our data into "data" and "operations". That is, vt is a bunch of seemingly arbitrary numbers instead of the rather simple:

1   0   0   2
0   1   0   2
0   0   1   0

Similarly, if we look at the shapes of $U\Sigma$ and $V^T$, we see that they are bigger than in our manual decomposition. Specifically, $U\Sigma$ still has 4 columns, meaning that each observation is 4 dimensional. Furthermore, rather than our transformation operation $V^T$ being 3x4, it is 4x4, meaning that it maps 4 dimensional inputs back to 4 dimensions.

This seems problematic, because our goal in using SVD was to find a transformation operation that takes 3D inputs and maps them up to 4 dimensions.

Luckily, if we look carefully at $U\Sigma$, we see that the last attribute of each observation is very close to 0.

In [47]:
pd.DataFrame(u * s)
Out[47]:
0 1 2 3
0 -56.309268 4.083696 -0.767969 1.834525e-15
1 -13.925871 -5.615924 1.591069 -3.365812e-15
2 -7.388370 -5.110893 1.513530 -7.044536e-17
3 -36.844432 -4.800059 -3.800959 -2.037419e-16
4 -79.472605 13.002698 0.186598 -2.667095e-16
... ... ... ... ...
95 -48.593653 1.109493 -1.557528 -1.099493e-16
96 -64.024883 7.057900 0.021591 3.360505e-17
97 -9.433844 -6.241127 2.247428 1.125378e-16
98 -13.524792 -8.501595 3.715225 -1.724475e-16
99 -19.636860 -6.703696 3.074769 -1.633136e-16

100 rows × 4 columns
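
We can confirm numerically (a small sketch) that this entire fourth column is zero up to floating point error:

In [ ]:
# Largest magnitude in the fourth column of U * Sigma -- essentially zero.
np.abs((u * s)[:, 3]).max()  # expected: around 1e-14 or smaller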

Thus, it makes sense that we remove the last column of $U \Sigma$.

In [48]:
u = u[:, 0:3]
s = s[0:3]
pd.DataFrame(u * s)
Out[48]:
0 1 2
0 -56.309268 4.083696 -0.767969
1 -13.925871 -5.615924 1.591069
2 -7.388370 -5.110893 1.513530
3 -36.844432 -4.800059 -3.800959
4 -79.472605 13.002698 0.186598
... ... ... ...
95 -48.593653 1.109493 -1.557528
96 -64.024883 7.057900 0.021591
97 -9.433844 -6.241127 2.247428
98 -13.524792 -8.501595 3.715225
99 -19.636860 -6.703696 3.074769

100 rows × 3 columns

Similarly, because the observations are now 3D, we should remove the last row of $V^T$, since we want to use $V^T$ to map our now 3D data into 4D (rather than expecting 4D input data).

In [49]:
vt = vt[0:3, :]

After removing the redundant portions of $U\Sigma$ and $V^T$, we can verify that multiplying them together again yields our original array.

In [50]:
pd.DataFrame(u * s @ vt)
Out[50]:
0 1 2 3
0 8.0 6.0 48.0 28.0
1 2.0 4.0 8.0 12.0
2 1.0 3.0 3.0 8.0
3 9.0 3.0 27.0 24.0
4 9.0 8.0 72.0 34.0
... ... ... ... ...
95 8.0 5.0 40.0 26.0
96 8.0 7.0 56.0 30.0
97 1.0 4.0 4.0 10.0
98 1.0 6.0 6.0 14.0
99 2.0 6.0 12.0 16.0

100 rows × 4 columns

Properties of the Singular Value Decomposition

The reason that $U \Sigma$ and $V^T$ look so different from the results of our manual decomposition is a consequence of how singular value decomposition is defined. Specifically, given $X$, SVD computes $U$, $\Sigma$, and $V$ such that:

  1. $\Sigma$ is a diagonal matrix containing the singular values of X.
  2. $U$ and $V$ are matrices whose columns form an orthonormal set.
  3. $X = U \Sigma V^T$

That is, there are infinitely many pairs of matrices $A$ and $B$ such that $X = AB$. In our example above, we created $A$ and $B$ through manual calculation and insight (recognizing that the perimeter is $2w + 2h$). SVD finds such a factorization automatically, and the result has the specific properties enumerated above.
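
To see this non-uniqueness concretely (a small sketch reusing the matrices from our manual decomposition): scaling one factor up and the other down by the same constant leaves the product unchanged.

In [ ]:
A = np.array(rectangle_no_perim)
B = np.array(transform_3D_to_4D)
# (2A)(B/2) = AB, so this alternative factorization gives the same product.
np.allclose(A @ B, (2 * A) @ (B / 2))  # expected: True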

In [51]:
u, s, vt = np.linalg.svd(rectangle, full_matrices = False)

The Singular Values Matrix $\Sigma$

The middle piece of the decomposition is $\Sigma$, a diagonal matrix containing the singular values. We won't say a lot about what they mean today, but we will note that if the matrix has rank r, then only the first r singular values will be non-zero, and the rest will be zero.

In [52]:
s
Out[52]:
array([3.62932568e+02, 6.29904732e+01, 2.56544651e+01, 4.29718368e-15])
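
We can cross-check this claim (a quick sketch): the rectangle data has rank 3, matching the three non-negligible singular values above; the fourth, about 4e-15, is zero up to floating point error.

In [ ]:
np.linalg.matrix_rank(rectangle)  # expected: 3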

Python returns the singular values in a slightly funny format (as a 1D array rather than a diagonal matrix). To get them into matrix form, we can use np.diag(s).

In [53]:
np.diag(s)
Out[53]:
array([[3.62932568e+02, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
       [0.00000000e+00, 6.29904732e+01, 0.00000000e+00, 0.00000000e+00],
       [0.00000000e+00, 0.00000000e+00, 2.56544651e+01, 0.00000000e+00],
       [0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 4.29718368e-15]])

If we do this, then we can compute the original matrix using only the matrix multiplication operator. That is, rather than writing u * s @ vt, we can do u @ np.diag(s) @ vt.

In [54]:
(u @ np.diag(s) @ vt)[0:5, :]
Out[54]:
array([[ 8.,  6., 48., 28.],
       [ 2.,  4.,  8., 12.],
       [ 1.,  3.,  3.,  8.],
       [ 9.,  3., 27., 24.],
       [ 9.,  8., 72., 34.]])
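
A quick check (a small sketch) that the two forms agree: broadcasting u * s scales column j of u by s[j], which is exactly what u @ np.diag(s) does.

In [ ]:
np.allclose(u * s @ vt, u @ np.diag(s) @ vt)  # expected: True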

The fact that the last singular value is zero is why we were able to remove the last column of $U$ and the last row of $V^T$. That is, since the last singular value is (numerically) zero, the last column of $U \Sigma$ is all zeros up to floating point error, and we can simply remove it.

In [55]:
pd.DataFrame(u @ np.diag(s))
Out[55]:
0 1 2 3
0 -56.309268 4.083696 -0.767969 1.834525e-15
1 -13.925871 -5.615924 1.591069 -3.365812e-15
2 -7.388370 -5.110893 1.513530 -7.044536e-17
3 -36.844432 -4.800059 -3.800959 -2.037419e-16
4 -79.472605 13.002698 0.186598 -2.667095e-16
... ... ... ... ...
95 -48.593653 1.109493 -1.557528 -1.099493e-16
96 -64.024883 7.057900 0.021591 3.360505e-17
97 -9.433844 -6.241127 2.247428 1.125378e-16
98 -13.524792 -8.501595 3.715225 -1.724475e-16
99 -19.636860 -6.703696 3.074769 -1.633136e-16

100 rows × 4 columns

In [56]:
(u[:, :-1] @ np.diag(s[:-1]) @ vt[:-1, :])[0:5, :]
Out[56]:
array([[ 8.,  6., 48., 28.],
       [ 2.,  4.,  8., 12.],
       [ 1.,  3.,  3.,  8.],
       [ 9.,  3., 27., 24.],
       [ 9.,  8., 72., 34.]])

$U$ and $V^T$

Let's verify that the columns of $U$ are orthonormal. If so, any column dotted with itself should be 1, and any column dotted with any other column should be 0.

In [57]:
np.dot(u[:, 0], u[:, 0])
Out[57]:
1.0000000000000009
In [58]:
np.dot(u[:, 1], u[:, 1])
Out[58]:
1.0000000000000007
In [59]:
np.dot(u[:, 2], u[:, 2])
Out[59]:
1.0000000000000002
In [60]:
for i in range(0, u.shape[1]):
    print(f"column {i} dotted with itself is {np.dot(u[:, i], u[:, i])}")
column 0 dotted with itself is 1.0000000000000009
column 1 dotted with itself is 1.0000000000000007
column 2 dotted with itself is 1.0000000000000002
column 3 dotted with itself is 1.0
In [61]:
for i in range(0, u.shape[1]):
    for j in range(i + 1, u.shape[1]):
        print(f"column {i} dotted with column {j} is {np.dot(u[:, i], u[:, j])}")
column 0 dotted with column 1 is -6.938893903907228e-17
column 0 dotted with column 2 is -2.3592239273284576e-16
column 0 dotted with column 3 is 1.3183898417423734e-16
column 1 dotted with column 2 is 6.245004513516506e-17
column 1 dotted with column 3 is 2.7755575615628914e-17
column 2 dotted with column 3 is 3.469446951953614e-17

Let's now look at $V$.

In [62]:
v = vt.T
In [63]:
for i in range(0, v.shape[1]):
    print(f"column {i} of V dotted with itself is {np.dot(v[:, i], v[:, i])}")
column 0 of V dotted with itself is 0.9999999999999999
column 1 of V dotted with itself is 0.9999999999999999
column 2 of V dotted with itself is 1.0000000000000002
column 3 of V dotted with itself is 1.0000000000000002
In [64]:
for i in range(0, v.shape[1]):
    for j in range(i + 1, v.shape[1]):
        print(f"column {i} of V dotted with column {j} is {np.dot(v[:, i], v[:, j])}")
column 0 of V dotted with column 1 is -8.326672684688674e-17
column 0 of V dotted with column 2 is -9.367506770274758e-17
column 0 of V dotted with column 3 is 5.551115123125783e-17
column 1 of V dotted with column 2 is -1.5352302762394743e-16
column 1 of V dotted with column 3 is -1.1102230246251565e-16
column 2 of V dotted with column 3 is 5.0306980803327406e-17

We can also see that the transpose of v is its inverse, and that the transpose of u acts as a left inverse (u itself is not square): both $U^T U$ and $V^T V$ are the identity matrix.

In [65]:
u.T @ u
Out[65]:
array([[ 1.00000000e+00, -5.55111512e-17, -2.15105711e-16,
         1.38777878e-16],
       [-5.55111512e-17,  1.00000000e+00,  6.93889390e-18,
         4.16333634e-17],
       [-2.15105711e-16,  6.93889390e-18,  1.00000000e+00,
         3.46944695e-17],
       [ 1.38777878e-16,  4.16333634e-17,  3.46944695e-17,
         1.00000000e+00]])
In [66]:
v.T @ v
Out[66]:
array([[ 1.00000000e+00, -1.11022302e-16, -9.71445147e-17,
         6.93889390e-17],
       [-1.11022302e-16,  1.00000000e+00, -1.66533454e-16,
        -8.32667268e-17],
       [-9.71445147e-17, -1.66533454e-16,  1.00000000e+00,
         5.55111512e-17],
       [ 6.93889390e-17, -8.32667268e-17,  5.55111512e-17,
         1.00000000e+00]])
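
As a compact confirmation (a small sketch), both products are the 4x4 identity matrix up to floating point error.

In [ ]:
np.allclose(u.T @ u, np.eye(4)), np.allclose(v.T @ v, np.eye(4))  # expected: (True, True)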