CS231n assignment 1

2019-04-20
School courses / Image Recognition

Introduction

This assignment implements the following algorithms:

  • k-Nearest Neighbor (kNN)
  • Support Vector Machine (SVM)
  • Softmax classifier
  • Two-Layer Neural Network
  • Higher Level Representations: Image Features

k-Nearest Neighbor (kNN)

The data loading is already handled in knn.ipynb: 5000 images from the CIFAR-10 dataset are used for training and 500 for testing. Each image has shape (32, 32, 3), where 3 is the number of RGB channels.

The 10 classes of the CIFAR-10 dataset

We implement three versions of kNN: a two-loop, a one-loop, and a fully vectorized (no-loop) version. The code lives in cs231n/classifiers/k_nearest_neighbor.py.

compute_distances_two_loops

First, the two-loop version. X is the input test data of shape (num_test, D), and the output dists is a NumPy array of shape (num_test, num_train), where dists[i, j] is the Euclidean distance between the i-th test point and the j-th training point.

def compute_distances_two_loops(self, X):
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            dists[i][j] = np.sqrt(np.sum((X[i] - self.X_train[j]) ** 2))

    return dists
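
As a quick sanity check (not part of the assignment), the output can be compared against SciPy's cdist. The snippet below is only a sketch with made-up toy data; it assumes KNearestNeighbor is imported as in knn.ipynb.

import numpy as np
from scipy.spatial.distance import cdist
from cs231n.classifiers import KNearestNeighbor

# Toy stand-ins for the flattened CIFAR-10 data
X_train_toy = np.random.randn(50, 3072)
X_test_toy = np.random.randn(5, 3072)

classifier = KNearestNeighbor()
classifier.train(X_train_toy, np.zeros(50, dtype=int))
dists = classifier.compute_distances_two_loops(X_test_toy)

# Should match SciPy's pairwise Euclidean distances
print(np.allclose(dists, cdist(X_test_toy, X_train_toy, 'euclidean')))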

predict_labels

Next we implement predict_labels: find the k training points closest to each test point, then choose the most likely class by majority vote.

def predict_labels(self, dists, k=1):
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in range(num_test):
        # A list of length k storing the labels of the k nearest neighbors to
        # the ith test point.
        closest_y = []

        indices = np.argsort(dists[i])[:k]
        closest_y = self.y_train[indices]

        y_count = {}
        for y in closest_y:
            y_count[y] = y_count.get(y, 0) + 1
        max_value = max(y_count.values())
        candidates = [y for y, v in y_count.items() if v == max_value]
        y_pred[i] = min(candidates)

    return y_pred

The voting in the second half can actually be done in a single line:

y_pred[i] = np.bincount(closest_y).argmax()
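
This works because np.bincount counts how many times each label appears, and argmax returns the smallest index when there is a tie, which matches the min(candidates) tie-breaking above. A tiny illustration with toy labels (not assignment data):

import numpy as np

closest_y = np.array([3, 1, 3, 1, 7])     # labels of the k nearest neighbors
print(np.bincount(closest_y))             # [0 2 0 2 0 0 0 1]
print(np.bincount(closest_y).argmax())    # 1: the tie between labels 1 and 3 goes to the smaller one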

Using the dists we just computed, we can predict labels for the test data and then compare the predictions against the ground-truth labels to get the accuracy.

Accuracy with k = 1:

Got 137 / 500 correct => accuracy: 0.274000

Accuracy with k = 5:

Got 139 / 500 correct => accuracy: 0.278000

compute_distances_one_loop

Next, the one-loop version of kNN:

def compute_distances_one_loop(self, X):
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        dists[i] = np.sqrt(np.sum((X[i] - self.X_train) ** 2, axis=1))

    return dists

compute_distances_no_loops

The idea behind the no-loop version is to expand the squared distance between two points: \((x-y)^2=x^2+y^2-2xy\). The three terms can then be computed with matrix operations and broadcasting.

def compute_distances_no_loops(self, X):
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    dists += np.sqrt(np.sum(self.X_train ** 2, axis=1)
                     + np.sum(X ** 2, axis=1)[:, np.newaxis]
                     - 2 * np.dot(X, self.X_train.T))

    return dists
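
Before timing the three versions, the notebook checks that they all produce the same distance matrix. A minimal version of that check, using the notebook's classifier and X_test (a sketch, not the notebook's exact code):

dists_two = classifier.compute_distances_two_loops(X_test)
dists_one = classifier.compute_distances_one_loop(X_test)
dists_no = classifier.compute_distances_no_loops(X_test)

# All three should agree up to floating-point error
print(np.allclose(dists_two, dists_one), np.allclose(dists_two, dists_no))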

Running time_function shows that the no-loop version is far faster than the other two:

Two loop version took 29.963571 seconds
One loop version took 23.200013 seconds
No loop version took 0.140244 seconds

Cross Validation

We implement cross-validation to search for the best hyperparameter, in this case the value of k.

num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]

X_train_folds = []
y_train_folds = []

X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)

k_to_accuracies = {}

for k in k_choices:
    k_to_accuracies[k] = []
    for n in range(num_folds):
        classifier.train(np.concatenate(X_train_folds[:n] + X_train_folds[n+1:]),
                         np.concatenate(y_train_folds[:n] + y_train_folds[n+1:]))
        pred = classifier.predict(X_train_folds[n], k=k)
        num_correct = np.sum(pred == y_train_folds[n])
        k_to_accuracies[k].append(float(num_correct) / len(pred))

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))

Running this code prints the accuracy of each k on each fold.

k = 1, accuracy = 0.263000
k = 1, accuracy = 0.257000
k = 1, accuracy = 0.264000
k = 1, accuracy = 0.278000
k = 1, accuracy = 0.266000
k = 3, accuracy = 0.239000
k = 3, accuracy = 0.249000
k = 3, accuracy = 0.240000
...
k = 100, accuracy = 0.256000
k = 100, accuracy = 0.270000
k = 100, accuracy = 0.263000
k = 100, accuracy = 0.256000
k = 100, accuracy = 0.263000

Visualization

Visualizing the results shows that k = 10 gives the best cross-validation accuracy on the held-out folds of the training data.

cross validation result

After setting k = 10, the accuracy on the test set does improve a bit:

Got 141 / 500 correct => accuracy: 0.282000

Support Vector Machine (SVM)

svm_loss_naive

First, the loop-based version of the SVM loss function.

The input data has dimension D, there are C classes, and each minibatch contains N examples.

W is a weight matrix of shape (D, C), X is a minibatch of data of shape (N, D), and y is an array of shape (N,) containing the training labels.

The SVM loss function is:

\(L_i=\sum_{j\neq y_i}max(0,s_j-s_{y_i}+\Delta)\)

The gradient of the loss function is:

\(\nabla_{w_{y_i}}L_i=-(\sum_{j\neq y_i} 1(w_j^Tx_i-w_{y_i}^Tx_i+\Delta>0))x_i\)

\(\nabla_{w_j}L_i=1(w_j^Tx_i-w_{y_i}^Tx_i+\Delta>0)x_i\)
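
As a small worked example (toy numbers, not from the assignment): if a sample with correct class \(y_i=0\) gets scores \(s=[3.2,\ 5.1,\ -1.7]\) and \(\Delta=1\), then \(L_i=max(0,\ 5.1-3.2+1)+max(0,\ -1.7-3.2+1)=2.9+0=2.9\); only the class whose margin is violated contributes to the loss and to the gradient.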

def svm_loss_naive(W, X, y, reg):
    dW = np.zeros(W.shape)  # initialize the gradient as zero

    # compute the loss and the gradient
    num_classes = W.shape[1]
    num_train = X.shape[0]
    loss = 0.0
    for i in range(num_train):
        scores = X[i].dot(W)
        correct_class_score = scores[y[i]]
        for j in range(num_classes):
            if j == y[i]:
                continue
            margin = scores[j] - correct_class_score + 1  # note delta = 1
            if margin > 0:
                loss += margin
                dW[:, j] += X[i]
                dW[:, y[i]] -= X[i]

    # Right now the loss is a sum over all training examples, but we want it
    # to be an average instead so we divide by num_train.
    loss /= num_train
    dW /= num_train

    # Add regularization to the loss.
    loss += reg * np.sum(W * W)
    dW += reg * W

    return loss, dW
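
The notebook verifies this analytic gradient with a numeric gradient check. The sketch below is a self-contained central-difference version of that idea, with toy shapes and reg set to 0; it assumes svm_loss_naive is importable (e.g. from cs231n.classifiers.linear_svm).

import numpy as np

def numeric_grad(f, W, h=1e-5):
    # Central-difference approximation of df/dW, one entry at a time
    grad = np.zeros_like(W)
    it = np.nditer(W, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old = W[idx]
        W[idx] = old + h; fp = f(W)
        W[idx] = old - h; fm = f(W)
        W[idx] = old
        grad[idx] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

# Toy problem: D = 4 features, C = 3 classes, N = 5 samples
np.random.seed(0)
W = 0.01 * np.random.randn(4, 3)
X = np.random.randn(5, 4)
y = np.random.randint(3, size=5)

loss, dW = svm_loss_naive(W, X, y, reg=0.0)
dW_num = numeric_grad(lambda w: svm_loss_naive(w, X, y, reg=0.0)[0], W)
print(np.max(np.abs(dW - dW_num)))   # should be very small, e.g. < 1e-6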

svm_loss_vectorized

NumPy vectorized operations are used to speed up the computation.

def svm_loss_vectorized(W, X, y, reg):
    """
    Structured SVM loss function, vectorized implementation.

    Inputs and outputs are the same as svm_loss_naive.
    """
    loss = 0.0
    dW = np.zeros(W.shape)  # initialize the gradient as zero

    num_train = X.shape[0]
    scores = X.dot(W)
    correct_class_score = scores[np.arange(num_train), y].reshape(-1, 1)
    margin = scores - correct_class_score + 1
    margin[np.arange(num_train), y] = 0
    loss += margin[margin > 0].sum() / num_train
    loss += reg * np.sum(W * W)

    counts = (margin > 0).astype(int)
    counts[range(num_train), y] = -np.sum(counts, axis=1)
    dW += np.dot(X.T, counts) / num_train + reg * W

    return loss, dW
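
The notebook also checks that the vectorized loss and gradient match the naive version. A quick comparison sketch, reusing the toy W, X, y from the gradient-check sketch above:

loss_naive, dW_naive = svm_loss_naive(W, X, y, reg=5e1)
loss_vec, dW_vec = svm_loss_vectorized(W, X, y, reg=5e1)
print(abs(loss_naive - loss_vec))                      # should be ~0
print(np.linalg.norm(dW_naive - dW_vec, ord='fro'))    # should be ~0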

Stochastic Gradient Descent (SGD)

cs231n/classifiers/linear_classifier.py

Stochastic gradient descent uses only a small batch of the data to compute the loss and gradient for each update of W, which greatly reduces the amount of computation per step.

def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
          batch_size=200, verbose=False):
    num_train, dim = X.shape
    num_classes = np.max(y) + 1  # assume y takes values 0...K-1 where K is number of classes
    if self.W is None:
        # lazily initialize W
        self.W = 0.001 * np.random.randn(dim, num_classes)

    # Run stochastic gradient descent to optimize W
    loss_history = []
    for it in range(num_iters):
        X_batch = None
        y_batch = None

        # Sample a minibatch (with replacement, which is faster than without)
        idx = np.random.choice(np.arange(num_train), batch_size)
        X_batch, y_batch = X[idx], y[idx]

        # evaluate loss and gradient
        loss, grad = self.loss(X_batch, y_batch, reg)
        loss_history.append(loss)

        self.W -= learning_rate * grad

        if verbose and it % 100 == 0:
            print('iteration %d / %d: loss %f' % (it, num_iters, loss))

    return loss_history
def predict(self, X):
    y_pred = np.zeros(X.shape[0])
    y_pred = X.dot(self.W).argmax(axis=1)

    return y_pred

Parameter Tuning

Grid search is used to find the hyperparameters (learning rate and regularization strength).

learning_rates = [1e-7, 5e-5]
regularization_strengths = [2.5e4, 5e4]

results = {}
best_val = -1    # The highest validation accuracy that we have seen so far.
best_svm = None  # The LinearSVM object that achieved the highest validation rate.

for lr in learning_rates:
    for reg in regularization_strengths:
        print("hyperparameter tuning: lr={}, reg={}".format(lr, reg))
        svm = LinearSVM()
        tic = time.time()
        loss_hist = svm.train(X_train, y_train, learning_rate=lr, reg=reg,
                              num_iters=1500, verbose=True)

        y_train_pred = svm.predict(X_train)
        train_acc = np.mean(y_train == y_train_pred)
        y_val_pred = svm.predict(X_val)
        val_acc = np.mean(y_val == y_val_pred)

        results[(lr, reg)] = (train_acc, val_acc)

        if val_acc > best_val:
            best_val = val_acc
            best_svm = svm

        print('-' * 40)


# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))

print('best validation accuracy achieved during cross-validation: %f' % best_val)

Visualizing the learned weights:

learned_weight

Softmax classifier

As with the SVM, two versions are implemented:

softmax_loss_naive

W, X, and y are the same as for the SVM; the only difference is that the softmax classifier's loss function is the cross-entropy loss instead of the hinge loss:

\(L_i=-log(\dfrac{e^{f_{y_i}}}{\sum_j e^{f_j}})\)

which can also be written as

\(L_i=-f_{y_i}+log\sum_j e^{f_j}\)
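
Writing \(p_j=\dfrac{e^{f_j}}{\sum_k e^{f_k}}\) for the softmax probabilities, the gradient of this loss with respect to the scores is \(\dfrac{\partial L_i}{\partial f_j}=p_j-1(j=y_i)\), which is exactly the (probability minus indicator) term that both implementations below multiply by \(x_i\) to build dW.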

def softmax_loss_naive(W, X, y, reg):
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)

    num_classes = W.shape[1]
    num_train = X.shape[0]
    for i in range(num_train):
        scores = X[i].dot(W)
        scores -= np.max(scores)  # shift scores for numerical stability
        correct_class_score = scores[y[i]]
        exp_sum = np.sum(np.exp(scores))
        loss += -np.log(np.exp(correct_class_score) / exp_sum)
        for j in range(num_classes):
            dW[:, j] += (np.exp(scores[j]) / exp_sum - (y[i] == j)) * X[i]

    loss /= num_train
    dW /= num_train

    loss += reg * np.sum(W * W)
    dW += reg * W

    return loss, dW

softmax_loss_vectorized

def softmax_loss_vectorized(W, X, y, reg):
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)

    num_train = X.shape[0]
    scores = X.dot(W)
    scores -= np.max(scores, axis=1, keepdims=True)
    prob = np.exp(scores) / np.sum(np.exp(scores), axis=1, keepdims=True)
    loss += np.sum(-np.log(prob[np.arange(num_train), y]))

    ind = np.zeros_like(prob)
    ind[np.arange(num_train), y] = 1
    dW = X.T.dot(prob - ind)

    loss = loss / num_train + reg * np.sum(W * W)
    dW = dW / num_train + reg * W

    return loss, dW
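
Both implementations subtract the row-wise maximum from the scores before exponentiating. This does not change the resulting probabilities but prevents overflow in np.exp; a small illustration with made-up scores:

import numpy as np

scores = np.array([1000.0, 1001.0, 1002.0])
naive = np.exp(scores) / np.sum(np.exp(scores))   # overflows to inf/inf -> nan
shifted = np.exp(scores - scores.max())
stable = shifted / shifted.sum()                  # approximately [0.090, 0.245, 0.665]
print(naive, stable)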

Hyperparameter search again uses grid search, exactly as in the SVM part, so it is not repeated here.

Two-Layer Neural Network

Forward Pass

We implement a two-layer neural network with a ReLU nonlinearity.

The forward pass is computed as:

\(h_1=ReLU(X\cdot W_1+b_1)\)

\(S=h_1\cdot W_2+b_2\)

def loss(self, X, y=None, reg=0.0):
    # Unpack variables from the params dictionary
    W1, b1 = self.params['W1'], self.params['b1']
    W2, b2 = self.params['W2'], self.params['b2']
    N, D = X.shape

    # Compute the forward pass
    scores = None

    relu = lambda x: np.maximum(0, x)
    h1 = relu(X.dot(W1) + b1)
    scores = h1.dot(W2) + b2

    # If the targets are not given then jump out, we're done
    if y is None:
        return scores

Compute Loss

We use the softmax classifier loss: \(L_i=-log(\dfrac{e^{f_{y_i}}}{\sum_j e^{f_j}})\)

# Compute the loss
loss = None

scores_ = scores - np.max(scores, axis=1, keepdims=True)
prob = np.exp(scores_) / np.sum(np.exp(scores_), axis=1, keepdims=True)
loss = np.sum(-np.log(prob[np.arange(N), y]))
loss = loss / N + 0.5 * reg * (np.sum(W1 * W1) + np.sum(W2 * W2))

Backpropagation

Compute the partial derivatives of the loss with respect to each parameter: \(\dfrac{\partial L}{\partial W_2},\dfrac{\partial L}{\partial b_2},\dfrac{\partial L}{\partial W_1},\dfrac{\partial L}{\partial b_1}\)

The four derivatives are derived below:

\(\mathbf{\dfrac{\partial L}{\partial W_2}}=\dfrac{\partial L}{\partial S}\dfrac{\partial S}{\partial W_2}=h_1^\top\cdot dS\)

where \(dS\) is the derivative of the softmax (cross-entropy) loss with respect to the scores:

\(dS_j=\begin{cases} \dfrac{e^{s_j}}{\sum_k e^{s_k}} - 1, & j=y_i \\ \dfrac{e^{s_j}}{\sum_k e^{s_k}}, & j\neq y_i \end{cases}\)

\(\mathbf{\dfrac{\partial L}{\partial b_2}}=\dfrac{\partial L}{\partial S}\dfrac{\partial S}{\partial b_2}=dS\)

\(\mathbf{\dfrac{\partial L}{\partial W_1}}=\dfrac{\partial L}{\partial S}\dfrac{\partial S}{\partial h_1}\dfrac{\partial h_1}{\partial W_1}=dS\cdot W_2^\top\cdot \dfrac{\partial h_1}{\partial W_1}\)

\(\dfrac{\partial h_1}{\partial W_1}=\begin{cases} 0, & h_1=0 \\ x^\top, & h_1 > 0 \end{cases}\)

because \(ReLU(x)=max(0,x)\), so \(\dfrac{\partial ReLU}{\partial x}=\begin{cases} 0, & x<0 \\ 1, & x>0 \end{cases}\)

\(\mathbf{\dfrac{\partial L}{\partial b_1}}=\dfrac{\partial L}{\partial S}\dfrac{\partial S}{\partial h_1}\dfrac{\partial h_1}{\partial b_1}=dS\cdot W_2^\top\cdot\dfrac{\partial h_1}{\partial b_1}\)

# Backward pass: compute gradients
grads = {}

dS = prob
dS[np.arange(N), y] -= 1
dS /= N

grads['W2'] = h1.T.dot(dS) + reg * W2
grads['b2'] = np.sum(dS, axis=0)

dh1 = dS.dot(W2.T) * (h1 > 0)
grads['W1'] = X.T.dot(dh1) + reg * W1
grads['b1'] = np.sum(dh1, axis=0)

return loss, grads
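
As with the linear classifiers, the notebook checks these gradients numerically. A rough sketch of that check, reusing a central-difference helper like numeric_grad from the SVM section; the TwoLayerNet constructor arguments (input_size, hidden_size, output_size, std) are assumed from cs231n/classifiers/neural_net.py:

np.random.seed(0)
net = TwoLayerNet(4, 10, 3, std=1e-1)   # tiny network, sizes are illustrative
X = np.random.randn(5, 4)
y = np.random.randint(3, size=5)

loss, grads = net.loss(X, y, reg=0.05)
for name in ['W1', 'b1', 'W2', 'b2']:
    # numeric_grad perturbs net.params[name] in place, so net.loss sees the change
    f = lambda p: net.loss(X, y, reg=0.05)[0]
    num = numeric_grad(f, net.params[name])
    rel_err = np.max(np.abs(num - grads[name]) /
                     (np.abs(num) + np.abs(grads[name]) + 1e-8))
    print(name, rel_err)                # should be very small (well below 1e-6)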

Higher Level Representations: Image Features

Compute each image's Histogram of Oriented Gradients (HOG) as features and feed them to the models for training. The code for this part is already provided; we only need to tune the models' hyperparameters until the accuracy reaches the expected level.
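
Since the assignment ships its own feature-extraction code, nothing has to be written here. Purely as an illustration of what a HOG descriptor is, here is a sketch using scikit-image (this is not the assignment's implementation, and the parameters are arbitrary):

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

image = np.random.rand(32, 32, 3)   # stand-in for one CIFAR-10 image
descriptor = hog(rgb2gray(image),
                 orientations=9,
                 pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2))
print(descriptor.shape)             # a 1-D feature vector describing edge orientations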