markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
Global Params
|
import numpy as np

num_trials = 100000 # Num trials at each sigma
sigmas = np.linspace(0.125, 0.5, 4)
|
_____no_output_____
|
Apache-2.0
|
special_orthogonalization/svd_vs_gs_simulations.ipynb
|
wy-go/google-research
|
Gaussian Noise

Here we generate a noise matrix with iid Gaussian entries drawn from $\sigma N(0,1)$. The "Frobenius Error Diff" plot shows the distributions of the error differences $\|A - \textrm{GS}(\tilde A)\|_F^2 - \|A - \textrm{SVD}(\tilde A)\|_F^2$ for different values of $\sigma$. The "Geodesic Error Diff" plot shows the analogous data, but in terms of the geodesic error.
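For intuition, a single Gaussian-noise trial can be sketched as below. This is not the notebook's `run_expt` (which is defined elsewhere); the helper names are illustrative, SVD$^+$ is assumed to be the special-orthogonal Procrustes projection, and GS$^+$ is assumed to Gram-Schmidt the first two columns and complete the frame with a cross product.

```python
import numpy as np

def svd_plus(m):
    # Illustrative: project m onto SO(3) via SVD, flipping the sign of the
    # last singular direction if needed to keep the determinant +1.
    u, _, vt = np.linalg.svd(m)
    d = np.sign(np.linalg.det(u @ vt))
    return u @ np.diag([1.0, 1.0, d]) @ vt

def gs_plus(m):
    # Illustrative: Gram-Schmidt the first two columns, cross product for the third.
    b1 = m[:, 0] / np.linalg.norm(m[:, 0])
    b2 = m[:, 1] - (b1 @ m[:, 1]) * b1
    b2 /= np.linalg.norm(b2)
    return np.stack([b1, b2, np.cross(b1, b2)], axis=1)

sigma = 0.25
a = svd_plus(np.random.randn(3, 3))           # a random ground-truth rotation A
a_tilde = a + sigma * np.random.randn(3, 3)   # noisy observation A~
err_gs = np.sum((a - gs_plus(a_tilde))**2)    # ||A - GS(A~)||_F^2
err_svd = np.sum((a - svd_plus(a_tilde))**2)  # ||A - SVD(A~)||_F^2
print(err_gs - err_svd)                       # one sample of the Frobenius error diff
```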
|
(all_errs_svd, all_errs_gs,
all_geo_errs_svd, all_geo_errs_gs,
all_noise_norms, all_noise_sq_norms
) = run_expt(sigmas, num_trials, noise_type='gaussian')
plt.plot(sigmas,
3*sigmas**2,
'--b',
label='3 $\\sigma^2$')
plt.errorbar(sigmas,
all_errs_svd.mean(axis=1),
color='b',
label='E[$\\|\\|\\mathrm{SVD}^+(M) - R\\|\\|_F^2]$')
plt.plot(sigmas, 6*sigmas**2,
'--r',
label='6 $\\sigma^2$')
plt.errorbar(sigmas,
all_errs_gs.mean(axis=1),
color='r',
label='E[$\\|\\|\\mathrm{GS}^+(M) - R\\|\\|_F^2$]')
plt.xlabel('$\\sigma$')
plt.legend(loc='upper left')
make_diff_plot(all_errs_svd, all_errs_gs, sigmas, title='Gaussian Noise', ytitle='Frobenius Error Diff', xtitle='$\\sigma$')
make_diff_plot(all_geo_errs_svd, all_geo_errs_gs, sigmas, title='Gaussian Noise', ytitle='Geodesic Error Diff', xtitle='$\\sigma$')
|
_____no_output_____
|
Apache-2.0
|
special_orthogonalization/svd_vs_gs_simulations.ipynb
|
wy-go/google-research
|
Uniform Noise

Here, the noise matrix is constructed with iid entries drawn from $\sigma \textrm{Unif}(-1, 1)$.
|
(all_errs_svd, all_errs_gs,
all_geo_errs_svd, all_geo_errs_gs,
all_noise_norms, all_noise_sq_norms
) = run_expt(sigmas, num_trials, noise_type='uniform')
make_diff_plot(all_errs_svd, all_errs_gs, sigmas, title='Uniform Noise', ytitle='Frobenius Error Diff', xtitle='$\\phi$')
make_diff_plot(all_geo_errs_svd, all_geo_errs_gs, sigmas, title='Uniform Noise', ytitle='Geodesic Error Diff', xtitle='$\\phi$')
|
_____no_output_____
|
Apache-2.0
|
special_orthogonalization/svd_vs_gs_simulations.ipynb
|
wy-go/google-research
|
Rotation Noise
|
(all_errs_svd, all_errs_gs,
all_geo_errs_svd, all_geo_errs_gs,
all_noise_norms, all_noise_sq_norms
) = run_expt(sigmas, num_trials, noise_type='rotation')
make_diff_plot(all_errs_svd, all_errs_gs, sigma_to_kappa(sigmas), title='Rotation Noise', ytitle='Frobenius Error Diff', xtitle='$\\kappa$')
make_diff_plot(all_geo_errs_svd, all_geo_errs_gs, sigma_to_kappa(sigmas), title='Rotation Noise', ytitle='Geodesic Error Diff', xtitle='$\\kappa$')
|
_____no_output_____
|
Apache-2.0
|
special_orthogonalization/svd_vs_gs_simulations.ipynb
|
wy-go/google-research
|
Character-Level LSTM in PyTorch

In this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina. **This model will be able to generate new text based on the text from the book!**

This network is based on Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Below is the general architecture of the character-wise RNN.

First let's load in our required resources for data loading and model creation.
|
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Load in Data

Then, we'll load the Anna Karenina text file and convert it into integers for our network to use.
|
# open text file and read in data as `text`
with open('data/anna.txt', 'r') as f:
text = f.read()
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Let's check out the first 100 characters and make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.
|
text[:100]
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Tokenization

In the cells below, I'm creating a couple of **dictionaries** to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network.
|
# encode the text and map each character to an integer and vice versa
# we create two dictionaries:
# 1. int2char, which maps integers to characters
# 2. char2int, which maps characters to unique integers
chars = tuple(set(text))
int2char = dict(enumerate(chars))
char2int = {ch: ii for ii, ch in int2char.items()}
# encode the text
encoded = np.array([char2int[ch] for ch in text])
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
And we can see those same characters from above, encoded as integers.
|
encoded[:100]
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Pre-processing the data

As you can see in our char-RNN image above, our LSTM expects an input that is **one-hot encoded**, meaning that each character is converted into an integer (via our created dictionary) and *then* converted into a column vector where only its corresponding integer index will have the value of 1 and the rest of the vector will be filled with 0's. Since we're one-hot encoding the data, let's make a function to do that!
|
def one_hot_encode(arr, n_labels):
# Initialize the encoded array
one_hot = np.zeros((arr.size, n_labels), dtype=np.float32)
# Fill the appropriate elements with ones
one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.
# Finally reshape it to get back to the original array
one_hot = one_hot.reshape((*arr.shape, n_labels))
return one_hot
# check that the function works as expected
test_seq = np.array([[3, 5, 1]])
one_hot = one_hot_encode(test_seq, 8)
print(one_hot)
|
[[[0. 0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 0.]]]
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Making training mini-batches

To train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:

In this example, we'll take the encoded characters (passed in as the `arr` parameter) and split them into multiple sequences, given by `batch_size`. Each of our sequences will be `seq_length` long.

Creating Batches

**1. The first thing we need to do is discard some of the text so we only have completely full mini-batches.**

Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences in a batch) and $M$ is the `seq_length`, or number of time steps in a sequence. Then, to get the total number of batches, $K$, that we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`: $N \times M \times K$.

**2. After that, we need to split `arr` into $N$ batches.**

You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences in a batch, so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M \cdot K)$.

**3. Now that we have this array, we can iterate through it to get our mini-batches.**

The idea is that each batch is an $N \times M$ window on the $N \times (M \cdot K)$ array. For each subsequent batch, the window moves over by `seq_length`. We also want to create both the input and target arrays. Remember that the targets are just the inputs shifted over by one character. The way I like to do this window is to use `range` to take steps of size `seq_length` from $0$ to `arr.shape[1]`, the total number of tokens in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `seq_length` wide.

> **TODO:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code into here; **type out the solution code yourself.**
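As a quick sanity check on the arithmetic above, here is a purely illustrative example (the numbers are made up and unrelated to the actual text):

```python
# Illustrative numbers: 1000 encoded characters, N = 8 sequences per batch,
# M = 50 time steps per sequence.
arr_len, N, M = 1000, 8, 50
chars_per_batch = N * M           # 400 characters in each batch
K = arr_len // chars_per_batch    # 2 full batches fit
keep = N * M * K                  # keep only the first 800 characters
# Reshaping the kept characters to (N, M * K) gives an 8 x 100 array;
# each mini-batch is then an 8 x 50 window that slides over by seq_length.
```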
|
def get_batches(arr, batch_size, seq_length):
'''Create a generator that returns batches of size
batch_size x seq_length from arr.
Arguments
---------
arr: Array you want to make batches from
batch_size: Batch size, the number of sequences per batch
seq_length: Number of encoded chars in a sequence
'''
batch_size_total = batch_size * seq_length
# total number of batches we can make
n_batches = len(arr)//batch_size_total
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size_total]
# Reshape into batch_size rows
arr = arr.reshape((batch_size, -1))
# iterate through the array, one sequence at a time
for n in range(0, arr.shape[1], seq_length):
# The features
x = arr[:, n:n+seq_length]
# The targets, shifted by one
y = np.zeros_like(x)
try:
y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n+seq_length]
except IndexError:
y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0]
yield x, y
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Test Your Implementation

Now I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 8 and 50 sequence steps.
|
batches = get_batches(encoded, 8, 50)
x, y = next(batches)
# printing out the first 10 items in a sequence
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
|
x
[[54 68 75 18 59 45 37 33 66 48]
[25 46 26 33 59 68 75 59 33 75]
[45 26 70 33 46 37 33 75 33 13]
[25 33 59 68 45 33 39 68 53 45]
[33 25 75 51 33 68 45 37 33 59]
[39 43 25 25 53 46 26 33 75 26]
[33 82 26 26 75 33 68 75 70 33]
[20 27 7 46 26 25 77 81 76 33]]
y
[[68 75 18 59 45 37 33 66 48 48]
[46 26 33 59 68 75 59 33 75 59]
[26 70 33 46 37 33 75 33 13 46]
[33 59 68 45 33 39 68 53 45 13]
[25 75 51 33 68 45 37 33 59 45]
[43 25 25 53 46 26 33 75 26 70]
[82 26 26 75 33 68 75 70 33 25]
[27 7 46 26 25 77 81 76 33 11]]
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
If you implemented `get_batches` correctly, the above output should look something like
```
x
 [[25  8 60 11 45 27 28 73  1  2]
 [17  7 20 73 45  8 60 45 73 60]
 [27 20 80 73  7 28 73 60 73 65]
 [17 73 45  8 27 73 66  8 46 27]
 [73 17 60 12 73  8 27 28 73 45]
 [66 64 17 17 46  7 20 73 60 20]
 [73 76 20 20 60 73  8 60 80 73]
 [47 35 43  7 20 17 24 50 37 73]]

y
 [[ 8 60 11 45 27 28 73  1  2  2]
 [ 7 20 73 45  8 60 45 73 60 45]
 [20 80 73  7 28 73 60 73 65  7]
 [73 45  8 27 73 66  8 46 27 65]
 [17 60 12 73  8 27 28 73 45 27]
 [64 17 17 46  7 20 73 60 20 80]
 [76 20 20 60 73  8 60 80 73 17]
 [35 43  7 20 17 24 50 37 73 36]]
```
although the exact numbers may be different. Check to make sure the data is shifted over one step for `y`.

---

Defining the network with PyTorch

Below is where you'll define the network.

Next, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. You've also been given a method for predicting characters.

Model Structure

In `__init__` the suggested structure is as follows:
* Create and store the necessary dictionaries (this has been done for you)
* Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size `n_hidden`, a number of layers `n_layers`, a dropout probability `drop_prob`, and a batch_first boolean (True, since we are batching)
* Define a dropout layer with `drop_prob`
* Define a fully-connected layer with params: input size `n_hidden` and output size (the number of characters)
* Finally, initialize the weights (again, this has been given)

Note that some parameters have been named and given in the `__init__` function, and we use them and store them by doing something like `self.drop_prob = drop_prob`.

---

LSTM Inputs/Outputs

You can create a basic [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) as follows
```python
self.lstm = nn.LSTM(input_size, n_hidden, n_layers, dropout=drop_prob, batch_first=True)
```
where `input_size` is the number of characters this cell expects to see as sequential input, and `n_hidden` is the number of units in the hidden layers in the cell. And we can add dropout by adding a dropout parameter with a specified probability; this will automatically add dropout to the inputs or outputs. Finally, in the `forward` function, we can stack up the LSTM cells into layers using `.view`. With this, you pass in a list of cells and it will send the output of one cell into the next cell.

We also need to create an initial hidden state of all zeros. This is done like so
```python
self.init_hidden()
```
|
# check if GPU is available
train_on_gpu = torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU!')
else:
print('No GPU available, training on CPU; consider making n_epochs very small.')
class CharRNN(nn.Module):
def __init__(self, tokens, n_hidden=256, n_layers=2,
drop_prob=0.5, lr=0.001):
super().__init__()
self.drop_prob = drop_prob
self.n_layers = n_layers
self.n_hidden = n_hidden
self.lr = lr
# creating character dictionaries
self.chars = tokens
self.int2char = dict(enumerate(self.chars))
self.char2int = {ch: ii for ii, ch in self.int2char.items()}
## TODO: define the LSTM
self.lstm = nn.LSTM(len(self.chars), n_hidden, n_layers,
dropout=drop_prob, batch_first=True)
## TODO: define a dropout layer
self.dropout = nn.Dropout(drop_prob)
## TODO: define the final, fully-connected output layer
self.fc = nn.Linear(n_hidden, len(self.chars))
def forward(self, x, hidden):
''' Forward pass through the network.
These inputs are x, and the hidden/cell state `hidden`. '''
## TODO: Get the outputs and the new hidden state from the lstm
r_output, hidden = self.lstm(x, hidden)
## TODO: pass through a dropout layer
out = self.dropout(r_output)
# Stack up LSTM outputs using view
# you may need to use contiguous to reshape the output
out = out.contiguous().view(-1, self.n_hidden)
## TODO: put x through the fully-connected layer
out = self.fc(out)
# return the final output and the hidden state
return out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x n_hidden,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(),
weight.new(self.n_layers, batch_size, self.n_hidden).zero_())
return hidden
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Time to train

The train function gives us the ability to set the number of epochs, the learning rate, and other parameters.

Below we're using an Adam optimizer and cross entropy loss since we are looking at character class scores as output. We calculate the loss and perform backpropagation, as usual!

A couple of details about training:
* Within the batch loop, we detach the hidden state from its history, this time setting it equal to a new *tuple* variable, because an LSTM has a hidden state that is a tuple of the hidden and cell states.
* We use [`clip_grad_norm_`](https://pytorch.org/docs/stable/_modules/torch/nn/utils/clip_grad.html) to help prevent exploding gradients.
|
def train(net, data, epochs=10, batch_size=10, seq_length=50, lr=0.001, clip=5, val_frac=0.1, print_every=10):
''' Training a network
Arguments
---------
net: CharRNN network
data: text data to train the network
epochs: Number of epochs to train
batch_size: Number of mini-sequences per mini-batch, aka batch size
seq_length: Number of character steps per mini-batch
lr: learning rate
clip: gradient clipping
val_frac: Fraction of data to hold out for validation
print_every: Number of steps for printing training and validation loss
'''
net.train()
opt = torch.optim.Adam(net.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()
# create training and validation data
val_idx = int(len(data)*(1-val_frac))
data, val_data = data[:val_idx], data[val_idx:]
if(train_on_gpu):
net.cuda()
counter = 0
n_chars = len(net.chars)
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
for x, y in get_batches(data, batch_size, seq_length):
counter += 1
# One-hot encode our data and make them Torch tensors
x = one_hot_encode(x, n_chars)
inputs, targets = torch.from_numpy(x), torch.from_numpy(y)
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
net.zero_grad()
# get the output from the model
output, h = net(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output, targets.view(batch_size*seq_length).long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
opt.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for x, y in get_batches(val_data, batch_size, seq_length):
# One-hot encode our data and make them Torch tensors
x = one_hot_encode(x, n_chars)
x, y = torch.from_numpy(x), torch.from_numpy(y)
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
inputs, targets = x, y
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output, targets.view(batch_size*seq_length).long())
val_losses.append(val_loss.item())
net.train() # reset to train mode after iterating through validation data
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.4f}...".format(loss.item()),
"Val Loss: {:.4f}".format(np.mean(val_losses)))
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Instantiating the model

Now we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batch sizes, and start training!
|
# define and print the net
n_hidden=512
n_layers=2
net = CharRNN(chars, n_hidden, n_layers)
print(net)
batch_size = 128
seq_length = 100
n_epochs = 20 # start smaller if you are just testing initial behavior
# train the model
train(net, encoded, epochs=n_epochs, batch_size=batch_size, seq_length=seq_length, lr=0.001, print_every=10)
|
Epoch: 1/20... Step: 10... Loss: 3.2482... Val Loss: 3.2114
Epoch: 1/20... Step: 20... Loss: 3.1410... Val Loss: 3.1354
Epoch: 1/20... Step: 30... Loss: 3.1360... Val Loss: 3.1238
Epoch: 1/20... Step: 40... Loss: 3.1139... Val Loss: 3.1195
Epoch: 1/20... Step: 50... Loss: 3.1408... Val Loss: 3.1170
Epoch: 1/20... Step: 60... Loss: 3.1161... Val Loss: 3.1144
Epoch: 1/20... Step: 70... Loss: 3.1051... Val Loss: 3.1113
Epoch: 1/20... Step: 80... Loss: 3.1133... Val Loss: 3.1029
Epoch: 1/20... Step: 90... Loss: 3.1048... Val Loss: 3.0833
Epoch: 1/20... Step: 100... Loss: 3.0508... Val Loss: 3.0351
Epoch: 1/20... Step: 110... Loss: 2.9844... Val Loss: 2.9579
Epoch: 1/20... Step: 120... Loss: 2.8520... Val Loss: 2.8698
Epoch: 1/20... Step: 130... Loss: 2.7709... Val Loss: 2.7311
Epoch: 2/20... Step: 140... Loss: 2.7156... Val Loss: 2.6316
Epoch: 2/20... Step: 150... Loss: 2.5914... Val Loss: 2.5473
Epoch: 2/20... Step: 160... Loss: 2.5292... Val Loss: 2.4892
Epoch: 2/20... Step: 170... Loss: 2.4596... Val Loss: 2.4423
Epoch: 2/20... Step: 180... Loss: 2.4390... Val Loss: 2.4093
Epoch: 2/20... Step: 190... Loss: 2.3830... Val Loss: 2.3769
Epoch: 2/20... Step: 200... Loss: 2.3723... Val Loss: 2.3445
Epoch: 2/20... Step: 210... Loss: 2.3436... Val Loss: 2.3148
Epoch: 2/20... Step: 220... Loss: 2.2939... Val Loss: 2.2818
Epoch: 2/20... Step: 230... Loss: 2.2846... Val Loss: 2.2509
Epoch: 2/20... Step: 240... Loss: 2.2627... Val Loss: 2.2227
Epoch: 2/20... Step: 250... Loss: 2.1919... Val Loss: 2.1996
Epoch: 2/20... Step: 260... Loss: 2.1661... Val Loss: 2.1744
Epoch: 2/20... Step: 270... Loss: 2.1747... Val Loss: 2.1523
Epoch: 3/20... Step: 280... Loss: 2.1612... Val Loss: 2.1336
Epoch: 3/20... Step: 290... Loss: 2.1422... Val Loss: 2.1000
Epoch: 3/20... Step: 300... Loss: 2.1086... Val Loss: 2.0798
Epoch: 3/20... Step: 310... Loss: 2.0786... Val Loss: 2.0613
Epoch: 3/20... Step: 320... Loss: 2.0523... Val Loss: 2.0378
Epoch: 3/20... Step: 330... Loss: 2.0238... Val Loss: 2.0222
Epoch: 3/20... Step: 340... Loss: 2.0444... Val Loss: 1.9995
Epoch: 3/20... Step: 350... Loss: 2.0152... Val Loss: 1.9814
Epoch: 3/20... Step: 360... Loss: 1.9430... Val Loss: 1.9665
Epoch: 3/20... Step: 370... Loss: 1.9763... Val Loss: 1.9481
Epoch: 3/20... Step: 380... Loss: 1.9566... Val Loss: 1.9345
Epoch: 3/20... Step: 390... Loss: 1.9172... Val Loss: 1.9153
Epoch: 3/20... Step: 400... Loss: 1.9015... Val Loss: 1.9021
Epoch: 3/20... Step: 410... Loss: 1.9104... Val Loss: 1.8867
Epoch: 4/20... Step: 420... Loss: 1.9027... Val Loss: 1.8719
Epoch: 4/20... Step: 430... Loss: 1.8848... Val Loss: 1.8577
Epoch: 4/20... Step: 440... Loss: 1.8724... Val Loss: 1.8445
Epoch: 4/20... Step: 450... Loss: 1.8158... Val Loss: 1.8321
Epoch: 4/20... Step: 460... Loss: 1.7973... Val Loss: 1.8220
Epoch: 4/20... Step: 470... Loss: 1.8302... Val Loss: 1.8081
Epoch: 4/20... Step: 480... Loss: 1.8078... Val Loss: 1.7975
Epoch: 4/20... Step: 490... Loss: 1.8182... Val Loss: 1.7851
Epoch: 4/20... Step: 500... Loss: 1.8034... Val Loss: 1.7736
Epoch: 4/20... Step: 510... Loss: 1.7886... Val Loss: 1.7640
Epoch: 4/20... Step: 520... Loss: 1.8058... Val Loss: 1.7547
Epoch: 4/20... Step: 530... Loss: 1.7575... Val Loss: 1.7445
Epoch: 4/20... Step: 540... Loss: 1.7292... Val Loss: 1.7370
Epoch: 4/20... Step: 550... Loss: 1.7692... Val Loss: 1.7253
Epoch: 5/20... Step: 560... Loss: 1.7331... Val Loss: 1.7184
Epoch: 5/20... Step: 570... Loss: 1.7250... Val Loss: 1.7056
Epoch: 5/20... Step: 580... Loss: 1.6994... Val Loss: 1.6949
Epoch: 5/20... Step: 590... Loss: 1.6999... Val Loss: 1.6887
Epoch: 5/20... Step: 600... Loss: 1.6930... Val Loss: 1.6822
Epoch: 5/20... Step: 610... Loss: 1.6770... Val Loss: 1.6757
Epoch: 5/20... Step: 620... Loss: 1.6782... Val Loss: 1.6705
Epoch: 5/20... Step: 630... Loss: 1.7051... Val Loss: 1.6594
Epoch: 5/20... Step: 640... Loss: 1.6475... Val Loss: 1.6531
Epoch: 5/20... Step: 650... Loss: 1.6617... Val Loss: 1.6462
Epoch: 5/20... Step: 660... Loss: 1.6298... Val Loss: 1.6378
Epoch: 5/20... Step: 670... Loss: 1.6466... Val Loss: 1.6343
Epoch: 5/20... Step: 680... Loss: 1.6483... Val Loss: 1.6273
Epoch: 5/20... Step: 690... Loss: 1.6326... Val Loss: 1.6203
Epoch: 6/20... Step: 700... Loss: 1.6298... Val Loss: 1.6155
Epoch: 6/20... Step: 710... Loss: 1.6189... Val Loss: 1.6099
Epoch: 6/20... Step: 720... Loss: 1.6038... Val Loss: 1.6019
Epoch: 6/20... Step: 730... Loss: 1.6189... Val Loss: 1.5949
Epoch: 6/20... Step: 740... Loss: 1.5844... Val Loss: 1.5916
Epoch: 6/20... Step: 750... Loss: 1.5705... Val Loss: 1.5838
Epoch: 6/20... Step: 760... Loss: 1.6029... Val Loss: 1.5829
Epoch: 6/20... Step: 770... Loss: 1.5919... Val Loss: 1.5786
Epoch: 6/20... Step: 780... Loss: 1.5683... Val Loss: 1.5708
Epoch: 6/20... Step: 790... Loss: 1.5492... Val Loss: 1.5678
Epoch: 6/20... Step: 800... Loss: 1.5784... Val Loss: 1.5631
Epoch: 6/20... Step: 810... Loss: 1.5611... Val Loss: 1.5589
Epoch: 6/20... Step: 820... Loss: 1.5152... Val Loss: 1.5521
Epoch: 6/20... Step: 830... Loss: 1.5756... Val Loss: 1.5487
Epoch: 7/20... Step: 840... Loss: 1.5236... Val Loss: 1.5427
Epoch: 7/20... Step: 850... Loss: 1.5457... Val Loss: 1.5427
Epoch: 7/20... Step: 860... Loss: 1.5223... Val Loss: 1.5339
Epoch: 7/20... Step: 870... Loss: 1.5323... Val Loss: 1.5283
Epoch: 7/20... Step: 880... Loss: 1.5344... Val Loss: 1.5250
Epoch: 7/20... Step: 890... Loss: 1.5340... Val Loss: 1.5217
Epoch: 7/20... Step: 900... Loss: 1.5128... Val Loss: 1.5206
Epoch: 7/20... Step: 910... Loss: 1.4882... Val Loss: 1.5201
Epoch: 7/20... Step: 920... Loss: 1.5208... Val Loss: 1.5138
Epoch: 7/20... Step: 930... Loss: 1.4947... Val Loss: 1.5096
Epoch: 7/20... Step: 940... Loss: 1.4995... Val Loss: 1.5051
Epoch: 7/20... Step: 950... Loss: 1.5136... Val Loss: 1.5007
Epoch: 7/20... Step: 960... Loss: 1.5143... Val Loss: 1.4966
Epoch: 7/20... Step: 970... Loss: 1.5095... Val Loss: 1.5004
Epoch: 8/20... Step: 980... Loss: 1.4829... Val Loss: 1.4945
Epoch: 8/20... Step: 990... Loss: 1.4891... Val Loss: 1.4878
Epoch: 8/20... Step: 1000... Loss: 1.4794... Val Loss: 1.4834
Epoch: 8/20... Step: 1010... Loss: 1.5210... Val Loss: 1.4804
Epoch: 8/20... Step: 1020... Loss: 1.4882... Val Loss: 1.4778
Epoch: 8/20... Step: 1030... Loss: 1.4722... Val Loss: 1.4736
Epoch: 8/20... Step: 1040... Loss: 1.4865... Val Loss: 1.4733
Epoch: 8/20... Step: 1050... Loss: 1.4553... Val Loss: 1.4747
Epoch: 8/20... Step: 1060... Loss: 1.4647... Val Loss: 1.4654
Epoch: 8/20... Step: 1070... Loss: 1.4727... Val Loss: 1.4644
Epoch: 8/20... Step: 1080... Loss: 1.4652... Val Loss: 1.4622
Epoch: 8/20... Step: 1090... Loss: 1.4416... Val Loss: 1.4591
Epoch: 8/20... Step: 1100... Loss: 1.4400... Val Loss: 1.4560
Epoch: 8/20... Step: 1110... Loss: 1.4567... Val Loss: 1.4523
Epoch: 9/20... Step: 1120... Loss: 1.4561... Val Loss: 1.4521
Epoch: 9/20... Step: 1130... Loss: 1.4460... Val Loss: 1.4495
Epoch: 9/20... Step: 1140... Loss: 1.4466... Val Loss: 1.4437
Epoch: 9/20... Step: 1150... Loss: 1.4679... Val Loss: 1.4423
Epoch: 9/20... Step: 1160... Loss: 1.4279... Val Loss: 1.4398
Epoch: 9/20... Step: 1170... Loss: 1.4303... Val Loss: 1.4372
Epoch: 9/20... Step: 1180... Loss: 1.4196... Val Loss: 1.4382
Epoch: 9/20... Step: 1190... Loss: 1.4541... Val Loss: 1.4338
Epoch: 9/20... Step: 1200... Loss: 1.4059... Val Loss: 1.4305
Epoch: 9/20... Step: 1210... Loss: 1.4142... Val Loss: 1.4277
Epoch: 9/20... Step: 1220... Loss: 1.4176... Val Loss: 1.4261
Epoch: 9/20... Step: 1230... Loss: 1.4006... Val Loss: 1.4275
Epoch: 9/20... Step: 1240... Loss: 1.4079... Val Loss: 1.4239
Epoch: 9/20... Step: 1250... Loss: 1.4157... Val Loss: 1.4224
Epoch: 10/20... Step: 1260... Loss: 1.4191... Val Loss: 1.4196
Epoch: 10/20... Step: 1270... Loss: 1.4144... Val Loss: 1.4178
Epoch: 10/20... Step: 1280... Loss: 1.4276... Val Loss: 1.4137
Epoch: 10/20... Step: 1290... Loss: 1.4112... Val Loss: 1.4160
Epoch: 10/20... Step: 1300... Loss: 1.3895... Val Loss: 1.4108
Epoch: 10/20... Step: 1310... Loss: 1.4017... Val Loss: 1.4084
Epoch: 10/20... Step: 1320... Loss: 1.3792... Val Loss: 1.4094
Epoch: 10/20... Step: 1330... Loss: 1.3848... Val Loss: 1.4071
Epoch: 10/20... Step: 1340... Loss: 1.3680... Val Loss: 1.4056
Epoch: 10/20... Step: 1350... Loss: 1.3753... Val Loss: 1.4014
Epoch: 10/20... Step: 1360... Loss: 1.3737... Val Loss: 1.3971
Epoch: 10/20... Step: 1370... Loss: 1.3583... Val Loss: 1.4007
Epoch: 10/20... Step: 1380... Loss: 1.4051... Val Loss: 1.3960
Epoch: 10/20... Step: 1390... Loss: 1.4199... Val Loss: 1.3956
Epoch: 11/20... Step: 1400... Loss: 1.4129... Val Loss: 1.3954
Epoch: 11/20... Step: 1410... Loss: 1.4208... Val Loss: 1.3943
Epoch: 11/20... Step: 1420... Loss: 1.4071... Val Loss: 1.3881
Epoch: 11/20... Step: 1430... Loss: 1.3801... Val Loss: 1.3923
Epoch: 11/20... Step: 1440... Loss: 1.4088... Val Loss: 1.3927
Epoch: 11/20... Step: 1450... Loss: 1.3344... Val Loss: 1.3870
Epoch: 11/20... Step: 1460... Loss: 1.3599... Val Loss: 1.3864
Epoch: 11/20... Step: 1470... Loss: 1.3470... Val Loss: 1.3850
Epoch: 11/20... Step: 1480... Loss: 1.3596... Val Loss: 1.3819
Epoch: 11/20... Step: 1490... Loss: 1.3603... Val Loss: 1.3798
Epoch: 11/20... Step: 1500... Loss: 1.3483... Val Loss: 1.3807
Epoch: 11/20... Step: 1510... Loss: 1.3253... Val Loss: 1.3807
Epoch: 11/20... Step: 1520... Loss: 1.3710... Val Loss: 1.3751
Epoch: 12/20... Step: 1530... Loss: 1.4196... Val Loss: 1.3775
Epoch: 12/20... Step: 1540... Loss: 1.3718... Val Loss: 1.3752
Epoch: 12/20... Step: 1550... Loss: 1.3842... Val Loss: 1.3743
Epoch: 12/20... Step: 1560... Loss: 1.3866... Val Loss: 1.3698
Epoch: 12/20... Step: 1570... Loss: 1.3444... Val Loss: 1.3744
Epoch: 12/20... Step: 1580... Loss: 1.3167... Val Loss: 1.3729
Epoch: 12/20... Step: 1590... Loss: 1.3057... Val Loss: 1.3692
Epoch: 12/20... Step: 1600... Loss: 1.3297... Val Loss: 1.3698
Epoch: 12/20... Step: 1610... Loss: 1.3380... Val Loss: 1.3704
Epoch: 12/20... Step: 1620... Loss: 1.3254... Val Loss: 1.3650
Epoch: 12/20... Step: 1630... Loss: 1.3539... Val Loss: 1.3628
Epoch: 12/20... Step: 1640... Loss: 1.3310... Val Loss: 1.3656
Epoch: 12/20... Step: 1650... Loss: 1.3040... Val Loss: 1.3641
Epoch: 12/20... Step: 1660... Loss: 1.3597... Val Loss: 1.3606
Epoch: 13/20... Step: 1670... Loss: 1.3311... Val Loss: 1.3615
Epoch: 13/20... Step: 1680... Loss: 1.3349... Val Loss: 1.3575
Epoch: 13/20... Step: 1690... Loss: 1.3168... Val Loss: 1.3589
Epoch: 13/20... Step: 1700... Loss: 1.3228... Val Loss: 1.3540
Epoch: 13/20... Step: 1710... Loss: 1.2991... Val Loss: 1.3595
Epoch: 13/20... Step: 1720... Loss: 1.3131... Val Loss: 1.3567
Epoch: 13/20... Step: 1730... Loss: 1.3383... Val Loss: 1.3541
Epoch: 13/20... Step: 1740... Loss: 1.3161... Val Loss: 1.3528
Epoch: 13/20... Step: 1750... Loss: 1.2798... Val Loss: 1.3588
Epoch: 13/20... Step: 1760... Loss: 1.3097... Val Loss: 1.3541
Epoch: 13/20... Step: 1770... Loss: 1.3252... Val Loss: 1.3523
Epoch: 13/20... Step: 1780... Loss: 1.3103... Val Loss: 1.3512
Epoch: 13/20... Step: 1790... Loss: 1.2921... Val Loss: 1.3480
Epoch: 13/20... Step: 1800... Loss: 1.3165... Val Loss: 1.3468
Epoch: 14/20... Step: 1810... Loss: 1.3175... Val Loss: 1.3458
Epoch: 14/20... Step: 1820... Loss: 1.3055... Val Loss: 1.3433
Epoch: 14/20... Step: 1830... Loss: 1.3234... Val Loss: 1.3466
Epoch: 14/20... Step: 1840... Loss: 1.2678... Val Loss: 1.3471
Epoch: 14/20... Step: 1850... Loss: 1.2659... Val Loss: 1.3489
Epoch: 14/20... Step: 1860... Loss: 1.3215... Val Loss: 1.3451
Epoch: 14/20... Step: 1870... Loss: 1.3197... Val Loss: 1.3400
Epoch: 14/20... Step: 1880... Loss: 1.3095... Val Loss: 1.3424
Epoch: 14/20... Step: 1890... Loss: 1.3336... Val Loss: 1.3434
Epoch: 14/20... Step: 1900... Loss: 1.3067... Val Loss: 1.3394
Epoch: 14/20... Step: 1910... Loss: 1.3049... Val Loss: 1.3380
Epoch: 14/20... Step: 1920... Loss: 1.3004... Val Loss: 1.3395
Epoch: 14/20... Step: 1930... Loss: 1.2739... Val Loss: 1.3377
Epoch: 14/20... Step: 1940... Loss: 1.3157... Val Loss: 1.3353
Epoch: 15/20... Step: 1950... Loss: 1.2943... Val Loss: 1.3358
Epoch: 15/20... Step: 1960... Loss: 1.2895... Val Loss: 1.3343
Epoch: 15/20... Step: 1970... Loss: 1.2929... Val Loss: 1.3315
Epoch: 15/20... Step: 1980... Loss: 1.2891... Val Loss: 1.3335
Epoch: 15/20... Step: 1990... Loss: 1.2827... Val Loss: 1.3371
Epoch: 15/20... Step: 2000... Loss: 1.2699... Val Loss: 1.3355
Epoch: 15/20... Step: 2010... Loss: 1.2878... Val Loss: 1.3301
Epoch: 15/20... Step: 2020... Loss: 1.3037... Val Loss: 1.3344
Epoch: 15/20... Step: 2030... Loss: 1.2671... Val Loss: 1.3342
Epoch: 15/20... Step: 2040... Loss: 1.2919... Val Loss: 1.3325
Epoch: 15/20... Step: 2050... Loss: 1.2736... Val Loss: 1.3303
Epoch: 15/20... Step: 2060... Loss: 1.2852... Val Loss: 1.3279
Epoch: 15/20... Step: 2070... Loss: 1.2926... Val Loss: 1.3213
Epoch: 15/20... Step: 2080... Loss: 1.2809... Val Loss: 1.3224
Epoch: 16/20... Step: 2090... Loss: 1.2955... Val Loss: 1.3228
Epoch: 16/20... Step: 2100... Loss: 1.2697... Val Loss: 1.3226
Epoch: 16/20... Step: 2110... Loss: 1.2672... Val Loss: 1.3233
Epoch: 16/20... Step: 2120... Loss: 1.2819... Val Loss: 1.3234
Epoch: 16/20... Step: 2130... Loss: 1.2560... Val Loss: 1.3248
Epoch: 16/20... Step: 2140... Loss: 1.2631... Val Loss: 1.3232
Epoch: 16/20... Step: 2150... Loss: 1.2937... Val Loss: 1.3202
Epoch: 16/20... Step: 2160... Loss: 1.2618... Val Loss: 1.3238
Epoch: 16/20... Step: 2170... Loss: 1.2633... Val Loss: 1.3237
Epoch: 16/20... Step: 2180... Loss: 1.2604... Val Loss: 1.3224
Epoch: 16/20... Step: 2190... Loss: 1.2807... Val Loss: 1.3211
Epoch: 16/20... Step: 2200... Loss: 1.2664... Val Loss: 1.3189
Epoch: 16/20... Step: 2210... Loss: 1.2232... Val Loss: 1.3150
Epoch: 16/20... Step: 2220... Loss: 1.2737... Val Loss: 1.3184
Epoch: 17/20... Step: 2230... Loss: 1.2517... Val Loss: 1.3176
Epoch: 17/20... Step: 2240... Loss: 1.2480... Val Loss: 1.3181
Epoch: 17/20... Step: 2250... Loss: 1.2364... Val Loss: 1.3136
Epoch: 17/20... Step: 2260... Loss: 1.2542... Val Loss: 1.3144
Epoch: 17/20... Step: 2270... Loss: 1.2624... Val Loss: 1.3179
Epoch: 17/20... Step: 2280... Loss: 1.2746... Val Loss: 1.3178
Epoch: 17/20... Step: 2290... Loss: 1.2668... Val Loss: 1.3142
Epoch: 17/20... Step: 2300... Loss: 1.2300... Val Loss: 1.3199
Epoch: 17/20... Step: 2310... Loss: 1.2596... Val Loss: 1.3183
Epoch: 17/20... Step: 2320... Loss: 1.2488... Val Loss: 1.3139
Epoch: 17/20... Step: 2330... Loss: 1.2533... Val Loss: 1.3163
Epoch: 17/20... Step: 2340... Loss: 1.2689... Val Loss: 1.3139
Epoch: 17/20... Step: 2350... Loss: 1.2705... Val Loss: 1.3107
Epoch: 17/20... Step: 2360... Loss: 1.2696... Val Loss: 1.3130
Epoch: 18/20... Step: 2370... Loss: 1.2372... Val Loss: 1.3079
Epoch: 18/20... Step: 2380... Loss: 1.2402... Val Loss: 1.3094
Epoch: 18/20... Step: 2390... Loss: 1.2515... Val Loss: 1.3089
Epoch: 18/20... Step: 2400... Loss: 1.2753... Val Loss: 1.3081
Epoch: 18/20... Step: 2410... Loss: 1.2641... Val Loss: 1.3094
Epoch: 18/20... Step: 2420... Loss: 1.2459... Val Loss: 1.3057
Epoch: 18/20... Step: 2430... Loss: 1.2597... Val Loss: 1.3067
Epoch: 18/20... Step: 2440... Loss: 1.2370... Val Loss: 1.3081
Epoch: 18/20... Step: 2450... Loss: 1.2314... Val Loss: 1.3043
Epoch: 18/20... Step: 2460... Loss: 1.2521... Val Loss: 1.3043
Epoch: 18/20... Step: 2470... Loss: 1.2417... Val Loss: 1.3069
Epoch: 18/20... Step: 2480... Loss: 1.2324... Val Loss: 1.3054
Epoch: 18/20... Step: 2490... Loss: 1.2297... Val Loss: 1.3019
Epoch: 18/20... Step: 2500... Loss: 1.2282... Val Loss: 1.3038
Epoch: 19/20... Step: 2510... Loss: 1.2360... Val Loss: 1.3056
Epoch: 19/20... Step: 2520... Loss: 1.2464... Val Loss: 1.3032
Epoch: 19/20... Step: 2530... Loss: 1.2523... Val Loss: 1.2980
Epoch: 19/20... Step: 2540... Loss: 1.2623... Val Loss: 1.3016
Epoch: 19/20... Step: 2550... Loss: 1.2296... Val Loss: 1.3026
Epoch: 19/20... Step: 2560... Loss: 1.2345... Val Loss: 1.2996
Epoch: 19/20... Step: 2570... Loss: 1.2265... Val Loss: 1.2992
Epoch: 19/20... Step: 2580... Loss: 1.2649... Val Loss: 1.2984
Epoch: 19/20... Step: 2590... Loss: 1.2177... Val Loss: 1.2993
Epoch: 19/20... Step: 2600... Loss: 1.2174... Val Loss: 1.2952
Epoch: 19/20... Step: 2610... Loss: 1.2284... Val Loss: 1.2975
Epoch: 19/20... Step: 2620... Loss: 1.2137... Val Loss: 1.2962
Epoch: 19/20... Step: 2630... Loss: 1.2231... Val Loss: 1.2972
Epoch: 19/20... Step: 2640... Loss: 1.2337... Val Loss: 1.2998
Epoch: 20/20... Step: 2650... Loss: 1.2263... Val Loss: 1.2995
Epoch: 20/20... Step: 2660... Loss: 1.2451... Val Loss: 1.2973
Epoch: 20/20... Step: 2670... Loss: 1.2533... Val Loss: 1.2932
Epoch: 20/20... Step: 2680... Loss: 1.2300... Val Loss: 1.2944
Epoch: 20/20... Step: 2690... Loss: 1.2325... Val Loss: 1.2981
Epoch: 20/20... Step: 2700... Loss: 1.2327... Val Loss: 1.2951
Epoch: 20/20... Step: 2710... Loss: 1.2025... Val Loss: 1.2988
Epoch: 20/20... Step: 2720... Loss: 1.2114... Val Loss: 1.2968
Epoch: 20/20... Step: 2730... Loss: 1.2085... Val Loss: 1.2936
Epoch: 20/20... Step: 2740... Loss: 1.2006... Val Loss: 1.2926
Epoch: 20/20... Step: 2750... Loss: 1.2099... Val Loss: 1.2921
Epoch: 20/20... Step: 2760... Loss: 1.2045... Val Loss: 1.2917
Epoch: 20/20... Step: 2770... Loss: 1.2393... Val Loss: 1.2932
Epoch: 20/20... Step: 2780... Loss: 1.2661... Val Loss: 1.2952
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Getting the best model

To set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting, so you can increase the size of the network.

Hyperparameters

Here are the hyperparameters for the network.

In defining the model:
* `n_hidden` - The number of units in the hidden layers.
* `n_layers` - Number of hidden LSTM layers to use.

We assume that dropout probability and learning rate will be kept at the default, in this example.

And in training:
* `batch_size` - Number of sequences running through the network in one pass.
* `seq_length` - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is typically a good number here.
* `lr` - Learning rate for training

Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).

> Tips and Tricks
>
> Monitoring Validation Loss vs. Training Loss
>
> If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
>
> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)
>
> Approximate number of parameters
>
> The two most important parameters that control the model are `n_hidden` and `n_layers`. I would advise that you always use `n_layers` of either 2/3. The `n_hidden` can be adjusted based on how much data you have. The two important quantities to keep track of here are:
>
> - The number of parameters in your model. This is printed when you start training.
> - The size of your dataset. 1MB file is approximately 1 million characters.
>
> These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
>
> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `n_hidden` larger.
> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
>
> Best models strategy
>
> The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
>
> It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
>
> By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.

Checkpoint

After training, we'll save the model so we can load it again later if we need to. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters.
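As a small aside (not part of the original notebook), one way to check the parameter count Karpathy refers to above is shown below; it assumes the `net` and `text` objects from the earlier cells are still in scope.

```python
# Compare the number of trainable parameters with the dataset size in characters.
n_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print(f'{n_params:,} trainable parameters vs. {len(text):,} characters of text')
```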
|
# change the name, for saving multiple files
model_name = 'rnn_20_epoch.net'
checkpoint = {'n_hidden': net.n_hidden,
'n_layers': net.n_layers,
'state_dict': net.state_dict(),
'tokens': net.chars}
with open(model_name, 'wb') as f:
torch.save(checkpoint, f)
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
---

Making Predictions

Now that the model is trained, we'll want to sample from it and make predictions about next characters! To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text!

A note on the `predict` function

The output of our RNN is from a fully-connected layer and it outputs a **distribution of next-character scores**.

> To actually get the next character, we apply a softmax function, which gives us a *probability* distribution that we can then sample to predict the next character.

Top K sampling

Our predictions come from a categorical probability distribution over all the possible characters. We can make the sampled text more reasonable (and less variable) by only considering the $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text. Read more about [topk, here](https://pytorch.org/docs/stable/torch.html#torch.topk).
|
def predict(net, char, h=None, top_k=None):
''' Given a character, predict the next character.
Returns the predicted character and the hidden state.
'''
# tensor inputs
x = np.array([[net.char2int[char]]])
x = one_hot_encode(x, len(net.chars))
inputs = torch.from_numpy(x)
if(train_on_gpu):
inputs = inputs.cuda()
# detach hidden state from history
h = tuple([each.data for each in h])
# get the output of the model
out, h = net(inputs, h)
# get the character probabilities
p = F.softmax(out, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# get top characters
if top_k is None:
top_ch = np.arange(len(net.chars))
else:
p, top_ch = p.topk(top_k)
top_ch = top_ch.numpy().squeeze()
# select the likely next character with some element of randomness
p = p.numpy().squeeze()
char = np.random.choice(top_ch, p=p/p.sum())
# return the encoded value of the predicted char and the hidden state
return net.int2char[char], h
|
_____no_output_____
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Priming and generating text

Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general, the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from.
|
def sample(net, size, prime='The', top_k=None):
if(train_on_gpu):
net.cuda()
else:
net.cpu()
net.eval() # eval mode
# First off, run through the prime characters
chars = [ch for ch in prime]
h = net.init_hidden(1)
for ch in prime:
char, h = predict(net, ch, h, top_k=top_k)
chars.append(char)
# Now pass in the previous character and get a new one
for ii in range(size):
char, h = predict(net, chars[-1], h, top_k=top_k)
chars.append(char)
return ''.join(chars)
print(sample(net, 1000, prime='Anna', top_k=5))
|
Anna had so that an enter strength to be says off and he cared to be an unmarrely sister.
The children are saying in a place. A smile of their secretary and the sense of a condition. He saw that the princess was the same, the peaciting of his
briderous country second still. That she had seen him a little as it was the simminest that he had not been
the simple of
the passion to see his finger, and
his brother and the points he heard this place which
he was not
sense. All had sent him that he could he concealed the steps and that he was to be patied,
so much at hands, at the servants who had said something with the
chair.
"This is a solitat matter?"
"It's not thinking in the more the point is and that he's talking of the drinking of the
crain. If I was a memory. Have you
seen my to thousard more
characteribries, and this, and would be the framing of the most careful towards me, to the country too that they did nothind when she could not see him. What is
it you want a conviluated more to mo
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
Loading a checkpoint
|
# Here we have loaded in a model that trained over 20 epochs `rnn_20_epoch.net`
with open('rnn_20_epoch.net', 'rb') as f:
checkpoint = torch.load(f)
loaded = CharRNN(checkpoint['tokens'], n_hidden=checkpoint['n_hidden'], n_layers=checkpoint['n_layers'])
loaded.load_state_dict(checkpoint['state_dict'])
# Sample using a loaded model
print(sample(loaded, 2000, top_k=5, prime="And Levin said"))
|
And Levin said those second portryit on the contrast.
"What is it?" said Stepan Arkadyevitch,
letting up his
shirt and talking to her face. And he had
not speak to Levin that his head on the round stop and
trouble
to be faint, as he
was not a man who was said, she was the setter times that had been before so much talking in the steps of the door, his force to think of their sense of the sendence, both always bowing about in the country and the same time of her character and all at him with his face, and went out of her hand, sitting down beside
the clothes, and
the
same
single mind and when they seemed to a strange of his
brother's.
And he
was so meched the paints was so standing the man had been a love was the man, and stopped at once in the first step. But he was
a change to
do. The sound of the partice say a construnting his
steps and telling a single camp of the
ready and three significance of the same forest.
"Yes, but you see it." He carried his face and the condition in their carriage to her, and to go, she
said that had been talking of his forest, a strange world, when Levin came the conversation as sense of her son, and he could not see him to hive answer, which had been saking when at tomere within the
counting her face that he was serenely from her she took a counting, there
was the since he
had too wearted and seemed to her," said the member of the cannors in the steps to his
word.
The moss of the convincing it had been drawing up the people that there was nothing without this way or a single wife as he did not hear
him or that he was not seeing that she would be a court of the sound of some sound of the position, and to spartly she
could
see her and a sundroup times there was nothing this
father and as she stoop serious in the sound, was a steps of the master, a few sistersily play of his husband. The crowd had no carreated herself, and truets, and shaking up, the pases, and the moment that he was not at the marshal, and the starling the secret were stopping to be
|
MIT
|
recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb
|
danielbank/deep-learning-v2-pytorch
|
1 - Sequence to Sequence Learning with Neural Networks

In this series we'll be building a machine learning model to go from one sequence to another, using PyTorch and torchtext. This will be done on German to English translations, but the models can be applied to any problem that involves going from one sequence to another, such as summarization, i.e. going from a sequence to a shorter sequence in the same language.

In this first notebook, we'll start simple to understand the general concepts by implementing the model from the [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) paper.

Introduction

The most common sequence-to-sequence (seq2seq) models are *encoder-decoder* models, which commonly use a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single vector. In this notebook, we'll refer to this single vector as a *context vector*. We can think of the context vector as being an abstract representation of the entire input sentence. This vector is then *decoded* by a second RNN which learns to output the target (output) sentence by generating it one word at a time.

The above image shows an example translation. The input/source sentence, "guten morgen", is passed through the embedding layer (yellow) and then input into the encoder (green). We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of the sentence, respectively. At each time-step, the input to the encoder RNN is both the embedding, $e$, of the current word, $e(x_t)$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder RNN outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. The RNN can be represented as a function of both $e(x_t)$ and $h_{t-1}$:

$$h_t = \text{EncoderRNN}(e(x_t), h_{t-1})$$

We're using the term RNN generally here; it could be any recurrent architecture, such as an *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit).

Here, we have $X = \{x_1, x_2, ..., x_T\}$, where $x_1 = \text{<sos>}, x_2 = \text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.

Once the final word, $x_T$, has been passed into the RNN via the embedding layer, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.

Now that we have our context vector, $z$, we can start decoding it to get the output/target sentence, "good morning". Again, we append start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder RNN (blue) is the embedding, $d$, of the current word, $d(y_t)$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. Thus, similar to the encoder, we can represent the decoder as:

$$s_t = \text{DecoderRNN}(d(y_t), s_{t-1})$$

Although the input/source embedding layer, $e$, and the output/target embedding layer, $d$, are both shown in yellow in the diagram, they are two different embedding layers with their own parameters.

In the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\hat{y}_t$.

$$\hat{y}_t = f(s_t)$$

The words in the decoder are always generated one after another, with one per time-step. We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$, and sometimes use the word predicted by our decoder, $\hat{y}_{t-1}$. This is called *teacher forcing*; see a bit more info about it [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/).

When training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference it is common to keep generating words until the model outputs an `<eos>` token or after a certain number of words have been generated.

Once we have our predicted target sentence, $\hat{Y} = \{ \hat{y}_1, \hat{y}_2, ..., \hat{y}_T \}$, we compare it against our actual target sentence, $Y = \{ y_1, y_2, ..., y_T \}$, to calculate our loss. We then use this loss to update all of the parameters in our model.

Preparing Data

We'll be coding up the models in PyTorch and using torchtext to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data.
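As a rough preview of the pieces described above, here is a minimal, illustrative encoder/decoder sketch. The class names, the single-layer GRU, and the shapes are assumptions made for this sketch only; the actual models are built later in the notebook.

```python
import torch.nn as nn

class EncoderSketch(nn.Module):
    # h_t = EncoderRNN(e(x_t), h_{t-1}); the final hidden state is the context z.
    def __init__(self, input_dim, emb_dim, hid_dim):
        super().__init__()
        self.embedding = nn.Embedding(input_dim, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim)

    def forward(self, src):
        embedded = self.embedding(src)   # [src_len, batch_size, emb_dim]
        _, hidden = self.rnn(embedded)   # hidden is z = h_T
        return hidden

class DecoderSketch(nn.Module):
    # s_t = DecoderRNN(d(y_t), s_{t-1}); y_hat_t = f(s_t)
    def __init__(self, output_dim, emb_dim, hid_dim):
        super().__init__()
        self.embedding = nn.Embedding(output_dim, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim)
        self.fc_out = nn.Linear(hid_dim, output_dim)

    def forward(self, trg_token, hidden):
        embedded = self.embedding(trg_token.unsqueeze(0))  # one time-step at a time
        output, hidden = self.rnn(embedded, hidden)
        return self.fc_out(output.squeeze(0)), hidden
```

The encoder's final hidden state is handed to the decoder as its initial hidden state, and the decoder is then called one token at a time, exactly as the equations above describe.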
|
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.legacy.datasets import Multi30k
from torchtext.legacy.data import Field, BucketIterator
import spacy
import numpy as np
import random
import math
import time
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
We'll set the random seeds for deterministic results.
|
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. "good morning!" becomes ["good", "morning", "!"]. We'll start talking about the sentences being a sequence of tokens from now on, instead of saying they're a sequence of words. What's the difference? Well, "good" and "morning" are both words and tokens, but "!" is a token, not a word.

spaCy has a model for each language ("de_core_news_sm" for German and "en_core_web_sm" for English) which needs to be loaded so we can access the tokenizer of each model.

**Note**: the models must first be downloaded using the following on the command line:
```
python -m spacy download en_core_web_sm
python -m spacy download de_core_news_sm
```
We load the models as such:
|
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
Next, we create the tokenizer functions. These can be passed to torchtext and will take in the sentence as a string and return the sentence as a list of tokens.

In the paper we are implementing, they find it beneficial to reverse the order of the input, which they believe "introduces many short term dependencies in the data that make the optimization problem much easier". We copy this by reversing the German sentence after it has been transformed into a list of tokens.
|
def tokenize_de(text):
"""
Tokenizes German text from a string into a list of strings (tokens) and reverses it
"""
return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
"""
Tokenizes English text from a string into a list of strings (tokens)
"""
return [tok.text for tok in spacy_en.tokenizer(text)]
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
torchtext's `Field`s handle how data should be processed. All of the possible arguments are detailed [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61). We set the `tokenize` argument to the correct tokenization function for each, with German being the `SRC` (source) field and English being the `TRG` (target) field. The field also appends the "start of sequence" and "end of sequence" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase.
|
SRC = Field(tokenize = tokenize_de,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize = tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
|
/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/field.py:150: UserWarning: Field class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.
warnings.warn('{} class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.'.format(self.__class__.__name__), UserWarning)
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
Next, we download and load the train, validation and test data. The dataset we'll be using is the [Multi30k dataset](https://github.com/multi30k/dataset). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence. `exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target.
|
train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
fields = (SRC, TRG))
|
/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/example.py:78: UserWarning: Example class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.
warnings.warn('Example class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.', UserWarning)
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
We can double check that we've loaded the right number of examples:
|
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
|
Number of training examples: 29000
Number of validation examples: 1014
Number of testing examples: 1000
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
We can also print out an example, making sure the source sentence is reversed:
|
print(vars(train_data.examples[0]))
|
{'src': ['.', 'büsche', 'vieler', 'nähe', 'der', 'in', 'freien', 'im', 'sind', 'männer', 'weiße', 'junge', 'zwei'], 'trg': ['two', 'young', ',', 'white', 'males', 'are', 'outside', 'near', 'many', 'bushes', '.']}
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
The period is at the beginning of the German (src) sentence, so it looks like the sentence has been correctly reversed.Next, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer). The vocabularies of the source and target languages are distinct.Using the `min_freq` argument, we only allow tokens that appear at least 2 times to appear in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token.It is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents "information leakage" into our model, giving us artificially inflated validation/test scores.
|
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
|
Unique tokens in source (de) vocabulary: 7853
Unique tokens in target (en) vocabulary: 5893
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
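A small sketch of how the vocabularies can be inspected, assuming `build_vocab` has been called as above: the first few entries of `itos` are the special tokens, `stoi` maps tokens to indexes, and `freqs` holds the raw training-set counts.

```python
print(TRG.vocab.itos[:10])              # first ten tokens: <unk>, <pad>, <sos>, <eos>, then frequent words
print(TRG.vocab.stoi['<sos>'])          # index of the start-of-sequence token
print(TRG.vocab.freqs.most_common(5))   # five most frequent English tokens in the training set
```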
The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary. We also need to define a `torch.device`. This is used to tell torchText to put the tensors on the GPU or not. We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator.When we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, torchText iterators handle this for us! We use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences.
|
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
|
/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/iterator.py:48: UserWarning: BucketIterator class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.
warnings.warn('{} class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.'.format(self.__class__.__name__), UserWarning)
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
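To see what the iterators produce, here is a minimal sketch (assuming the iterators defined above) that pulls a single batch and prints the tensor shapes; note that the source and target lengths vary from batch to batch.

```python
batch = next(iter(train_iterator))
print(batch.src.shape)   # [src len, batch size]
print(batch.trg.shape)   # [trg len, batch size]
```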
Building the Seq2Seq ModelWe'll be building our model in three parts. The encoder, the decoder and a seq2seq model that encapsulates the encoder and decoder and will provide a way to interface with each. EncoderFirst, the encoder, a 2 layer LSTM. The paper we are implementing uses a 4-layer LSTM, but in the interest of training time we cut this down to 2-layers. The concept of multi-layer RNNs is easy to expand from 2 to 4 layers. For a multi-layer RNN, the input sentence, $X$, after being embedded goes into the first (bottom) layer of the RNN and hidden states, $H=\{h_1, h_2, ..., h_T\}$, output by this layer are used as inputs to the RNN in the layer above. Thus, representing each layer with a superscript, the hidden states in the first layer are given by:$$h_t^1 = \text{EncoderRNN}^1(e(x_t), h_{t-1}^1)$$The hidden states in the second layer are given by:$$h_t^2 = \text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$$Using a multi-layer RNN also means we'll also need an initial hidden state as input per layer, $h_0^l$, and we will also output a context vector per layer, $z^l$.Without going into too much detail about LSTMs (see [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) blog post to learn more about them), all we need to know is that they're a type of RNN which instead of just taking in a hidden state and returning a new hidden state per time-step, also take in and return a *cell state*, $c_t$, per time-step.$$\begin{align*}h_t &= \text{RNN}(e(x_t), h_{t-1})\\(h_t, c_t) &= \text{LSTM}(e(x_t), h_{t-1}, c_{t-1})\end{align*}$$We can just think of $c_t$ as another type of hidden state. Similar to $h_0^l$, $c_0^l$ will be initialized to a tensor of all zeros. Also, our context vector will now be both the final hidden state and the final cell state, i.e. $z^l = (h_T^l, c_T^l)$.Extending our multi-layer equations to LSTMs, we get:$$\begin{align*}(h_t^1, c_t^1) &= \text{EncoderLSTM}^1(e(x_t), (h_{t-1}^1, c_{t-1}^1))\\(h_t^2, c_t^2) &= \text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2))\end{align*}$$Note how only our hidden state from the first layer is passed as input to the second layer, and not the cell state.So our encoder looks something like this: We create this in code by making an `Encoder` module, which requires we inherit from `torch.nn.Module` and use the `super().__init__()` as some boilerplate code. The encoder takes the following arguments:- `input_dim` is the size/dimensionality of the one-hot vectors that will be input to the encoder. This is equal to the input (source) vocabulary size.- `emb_dim` is the dimensionality of the embedding layer. This layer converts the one-hot vectors into dense vectors with `emb_dim` dimensions. - `hid_dim` is the dimensionality of the hidden and cell states.- `n_layers` is the number of layers in the RNN.- `dropout` is the amount of dropout to use. This is a regularization parameter to prevent overfitting. Check out [this](https://www.coursera.org/lecture/deep-neural-network/understanding-dropout-YaGbR) for more details about dropout.We aren't going to discuss the embedding layer in detail during these tutorials. All we need to know is that there is a step before the words - technically, the indexes of the words - are passed into the RNN, where the words are transformed into vectors. 
To read more about word embeddings, check these articles: [1](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/), [2](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html), [3](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), [4](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/). The embedding layer is created using `nn.Embedding`, the LSTM with `nn.LSTM` and a dropout layer with `nn.Dropout`. Check the PyTorch [documentation](https://pytorch.org/docs/stable/nn.html) for more about these.One thing to note is that the `dropout` argument to the LSTM is how much dropout to apply between the layers of a multi-layer RNN, i.e. between the hidden states output from layer $l$ and those same hidden states being used for the input of layer $l+1$.In the `forward` method, we pass in the source sentence, $X$, which is converted into dense vectors using the `embedding` layer, and then dropout is applied. These embeddings are then passed into the RNN. As we pass a whole sequence to the RNN, it will automatically do the recurrent calculation of the hidden states over the whole sequence for us! Notice that we do not pass an initial hidden or cell state to the RNN. This is because, as noted in the [documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM), if no hidden/cell state is passed to the RNN, it will automatically create an initial hidden/cell state as a tensor of all zeros. The RNN returns: `outputs` (the top-layer hidden state for each time-step), `hidden` (the final hidden state for each layer, $h_T$, stacked on top of each other) and `cell` (the final cell state for each layer, $c_T$, stacked on top of each other).As we only need the final hidden and cell states (to make our context vector), `forward` only returns `hidden` and `cell`. The sizes of each of the tensors are left as comments in the code. In this implementation `n_directions` will always be 1, however note that bidirectional RNNs (covered in tutorial 3) will have `n_directions` as 2.
|
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src len, batch size, emb dim]
outputs, (hidden, cell) = self.rnn(embedded)
#outputs = [src len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden, cell
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
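Before moving on, a minimal shape check of the encoder can be useful. This is only a sketch with small, hypothetical sizes (a vocabulary of 100 tokens, embedding dim 8, hidden dim 16); it assumes the `Encoder` class defined above.

```python
enc_test = Encoder(input_dim=100, emb_dim=8, hid_dim=16, n_layers=2, dropout=0.5)
dummy_src = torch.randint(0, 100, (7, 4))   # [src len = 7, batch size = 4]
hidden, cell = enc_test(dummy_src)
print(hidden.shape, cell.shape)             # both: torch.Size([2, 4, 16]) = [n layers, batch size, hid dim]
```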
DecoderNext, we'll build our decoder, which will also be a 2-layer (4 in the paper) LSTM.The `Decoder` class does a single step of decoding, i.e. it outputs a single token per time-step. The first layer will receive a hidden and cell state from the previous time-step, $(s_{t-1}^1, c_{t-1}^1)$, and feeds it through the LSTM with the current embedded token, $y_t$, to produce a new hidden and cell state, $(s_t^1, c_t^1)$. The subsequent layers will use the hidden state from the layer below, $s_t^{l-1}$, and the previous hidden and cell states from their layer, $(s_{t-1}^l, c_{t-1}^l)$. This provides equations very similar to those in the encoder.$$\begin{align*}(s_t^1, c_t^1) = \text{DecoderLSTM}^1(d(y_t), (s_{t-1}^1, c_{t-1}^1))\\(s_t^2, c_t^2) = \text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))\end{align*}$$Remember that the initial hidden and cell states to our decoder are our context vectors, which are the final hidden and cell states of our encoder from the same layer, i.e. $(s_0^l,c_0^l)=z^l=(h_T^l,c_T^l)$.We then pass the hidden state from the top layer of the RNN, $s_t^L$, through a linear layer, $f$, to make a prediction of what the next token in the target (output) sequence should be, $\hat{y}_{t+1}$. $$\hat{y}_{t+1} = f(s_t^L)$$The arguments and initialization are similar to the `Encoder` class, except we now have an `output_dim` which is the size of the vocabulary for the output/target. There is also the addition of the `Linear` layer, used to make the predictions from the top layer hidden state.Within the `forward` method, we accept a batch of input tokens, previous hidden states and previous cell states. As we are only decoding one token at a time, the input tokens will always have a sequence length of 1. We `unsqueeze` the input tokens to add a sentence length dimension of 1. Then, similar to the encoder, we pass through an embedding layer and apply dropout. This batch of embedded tokens is then passed into the RNN with the previous hidden and cell states. This produces an `output` (hidden state from the top layer of the RNN), a new `hidden` state (one for each layer, stacked on top of each other) and a new `cell` state (also one per layer, stacked on top of each other). We then pass the `output` (after getting rid of the sentence length dimension) through the linear layer to receive our `prediction`. We then return the `prediction`, the new `hidden` state and the new `cell` state.**Note**: as we always have a sequence length of 1, we could use `nn.LSTMCell`, instead of `nn.LSTM`, as it is designed to handle a batch of inputs that aren't necessarily in a sequence. `nn.LSTMCell` is just a single cell and `nn.LSTM` is a wrapper around potentially multiple cells. Using the `nn.LSTMCell` in this case would mean we don't have to `unsqueeze` to add a fake sequence length dimension, but we would need one `nn.LSTMCell` per layer in the decoder and to ensure each `nn.LSTMCell` receives the correct initial hidden state from the encoder. All of this makes the code less concise - hence the decision to stick with the regular `nn.LSTM`.
|
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.output_dim = output_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#n directions in the decoder will both always be 1, therefore:
#hidden = [n layers, batch size, hid dim]
#context = [n layers, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
#output = [seq len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#seq len and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [n layers, batch size, hid dim]
#cell = [n layers, batch size, hid dim]
prediction = self.fc_out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
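The decoder can be checked in the same way. A sketch with hypothetical sizes, feeding zero-initialized hidden and cell states through a single decoding step of the `Decoder` class above:

```python
dec_test = Decoder(output_dim=100, emb_dim=8, hid_dim=16, n_layers=2, dropout=0.5)
dummy_input = torch.randint(0, 100, (4,))   # [batch size = 4]
h0 = torch.zeros(2, 4, 16)                  # [n layers, batch size, hid dim]
c0 = torch.zeros(2, 4, 16)
prediction, hidden, cell = dec_test(dummy_input, h0, c0)
print(prediction.shape)                     # torch.Size([4, 100]) = [batch size, output dim]
```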
Seq2SeqFor the final part of the implementation, we'll implement the seq2seq model. This will handle: - receiving the input/source sentence- using the encoder to produce the context vectors - using the decoder to produce the predicted output/target sentenceOur full model will look like this:The `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists).For this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case; we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we did something like having a different number of layers then we would need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the decoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? Etc.Our `forward` method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded, $\hat{y}_{t+1}=f(s_t^L)$. With probability equal to the teacher forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence. The first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\hat{Y}$.We then feed the input/source sentence, `src`, into the encoder and receive our final hidden and cell states.The first input to the decoder is the start of sequence (`<sos>`) token. As our `trg` tensor already has the `<sos>` token appended (all the way back when we defined the `init_token` in our `TRG` field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`max_len`), so we loop that many times. The last token input into the decoder is the one **before** the `<eos>` token - the `<eos>` token is never input into the decoder. During each iteration of the loop, we:- pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder- receive a prediction, next hidden state and next cell state ($\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder- place our prediction, $\hat{y}_{t+1}$/`output` in our tensor of predictions, $\hat{Y}$/`outputs`- decide if we are going to "teacher force" or not - if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]` - if we don't, the next `input` is the predicted next token in the sequence, $\hat{y}_{t+1}$/`top1`, which we get by doing an `argmax` over the output tensor Once we've made all of our predictions, we return our tensor full of predictions, $\hat{Y}$/`outputs`.**Note**: our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros.
So our `trg` and `outputs` look something like:$$\begin{align*}\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\\text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]\end{align*}$$Later on when we calculate the loss, we cut off the first element of each tensor to get:$$\begin{align*}\text{trg} = [&y_1, y_2, y_3, <eos>]\\\text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]\end{align*}$$
|
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
#insert input token embedding, previous hidden and previous cell states
#receive output tensor (predictions) and new hidden and cell states
output, hidden, cell = self.decoder(input, hidden, cell)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
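Putting the two together, a small sketch (again with hypothetical sizes, assuming the classes defined above) shows the shape of the tensor returned by `Seq2Seq.forward`:

```python
tiny_enc = Encoder(100, 8, 16, 2, 0.5)
tiny_dec = Decoder(100, 8, 16, 2, 0.5)
tiny_model = Seq2Seq(tiny_enc, tiny_dec, torch.device('cpu'))
src = torch.randint(0, 100, (7, 4))    # [src len, batch size]
trg = torch.randint(0, 100, (9, 4))    # [trg len, batch size]
out = tiny_model(src, trg, teacher_forcing_ratio=0.5)
print(out.shape)                       # torch.Size([9, 4, 100]) = [trg len, batch size, trg vocab size]
```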
Training the Seq2Seq ModelNow that we have our model implemented, we can begin training it. First, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. The embedding dimensions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same. We then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`.
|
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = Seq2Seq(enc, dec, device).to(device)
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\mathcal{U}(-0.08, 0.08)$.We initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`.
|
def init_weights(m):
for name, param in m.named_parameters():
nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
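As a quick sanity check (a sketch, assuming the initialization cell above has run), the parameters should now all lie within $[-0.08, 0.08]$:

```python
w = model.encoder.embedding.weight.data
print(w.min().item(), w.max().item())   # both values should fall inside [-0.08, 0.08]
```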
We also define a function that will calculate the number of trainable parameters in the model.
|
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
|
The model has 13,898,501 trainable parameters
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam.
|
optimizer = optim.Adam(model.parameters())
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions. Our loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token.
|
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
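A tiny illustration of `ignore_index` (a sketch using arbitrary token indices and random logits, assuming `criterion` and `TRG_PAD_IDX` from above): the middle position, whose target is the padding index, contributes nothing to the averaged loss.

```python
logits = torch.randn(3, len(TRG.vocab))       # fake predictions for 3 target positions
targets = torch.tensor([5, TRG_PAD_IDX, 7])   # arbitrary token indices with a pad in the middle
print(criterion(logits, targets))             # averaged over the 2 non-pad positions only
```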
Next, we'll define our training loop. First, we'll set the model into "training mode" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator.As stated before, our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:$$\begin{align*}\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\\text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]\end{align*}$$Here, when we calculate the loss, we cut off the first element of each tensor to get:$$\begin{align*}\text{trg} = [&y_1, y_2, y_3, <eos>]\\\text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]\end{align*}$$At each iteration:- get the source and target sentences from the batch, $X$ and $Y$- zero the gradients calculated from the last batch- feed the source and target into the model to get the output, $\hat{Y}$- as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view` - we slice off the first column of the output and target tensors as mentioned above- calculate the gradients with `loss.backward()`- clip the gradients to prevent them from exploding (a common issue in RNNs)- update the parameters of our model by doing an optimizer step- sum the loss value to a running totalFinally, we return the loss that is averaged over all batches.
|
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.We must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used).We use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up. The iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use its own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment.
|
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
Next, we'll create a function that we'll use to tell us how long an epoch takes.
|
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
|
_____no_output_____
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
We can finally start training our model!At each epoch, we'll be checking if our model has achieved the best validation loss so far. If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters used to achieve the best validation loss. We'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.
|
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
|
/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/batch.py:23: UserWarning: Batch class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.
warnings.warn('{} class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.'.format(self.__class__.__name__), UserWarning)
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
We'll load the parameters (`state_dict`) that gave our model the best validation loss and run the model on the test set.
|
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
|
| Test Loss: 3.951 | Test PPL: 52.001 |
|
MIT
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
RCXD/pytorch-seq2seq
|
IBM Quantum Challenge Fall 2021 Challenge 3: Classify images with quantum machine learning We recommend that you switch to **light** workspace theme under the Account menu in the upper right corner for optimal experience. IntroductionMachine learning is a technology that has attracted a great deal of attention due to its high performance and versatility. In fact, it has been put to practical use in many industries with the recent development of algorithms and the increase of computational resources. A typical example is computer vision, where machine learning is now able to classify images with the same or better accuracy than humans. For example, the ability to automatically classify clothing images has made online shopping for clothes more convenient.The application of quantum computation to machine learning has recently been shown to have the potential for even greater capabilities. Various algorithms have been proposed for quantum machine learning, such as the quantum support vector machine (QSVM) and quantum generative adversarial networks (QGANs). In this challenge, you will use QSVM to tackle the clothing image classification task.QSVM is a quantum version of the support vector machine (SVM), a classical machine learning algorithm. There are various approaches to QSVM: some aim to accelerate computation assuming fault-tolerant quantum computers, while others aim to achieve higher expressive power assuming noisy, near-term devices. In this challenge, we will focus on the latter, and the details will be explained later.For this implementation of QSVM, you will be able to make choices on how you want to compose your quantum model, in particular focusing on the quantum feature map. This is motivated by the tradeoff that a more complex feature map would have greater representation power but be more susceptible to noise, which could be especially critical when using noisy, near-term devices.Many of the concepts that appear in this challenge are explained in the 2021 Qiskit Global Summer School (QGSS). The materials and lecture videos are available, and it is recommended that you study them as well. Refer to the links in each part for the corresponding lectures. Challenge**Goal**Implement a QSVM model for multiclass classification and predict labels accurately. **Plan**First, you will learn how to construct QSVM for binary classification of a simple dataset. Then apply what you have learned to a more complex problem, 3-class classification of a different dataset.**1. Tutorial - QSVM for binary classification of MNIST:** familiarize yourself with a typical workflow for QSVM and find the best combination of dimensions/feature maps.**2. Challenge - QSVM for 3-class classification of Fashion-MNIST:** implement a 3-class classifier using binary QSVM classifiers. Perform a similar investigation as in the first part to find the best combination of dimensions/feature maps. Achieve better accuracy with smaller feature map circuits.Before you begin, we recommend watching the [**Qiskit Machine Learning Demo Session with Anton Dekusar**](https://youtu.be/claoY57eVIc?t=1814) and checking out the corresponding [**demo notebook**](https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/tree/main/qiskit-machine-learning) to learn how to do classifications using QSVM.
|
# General imports
import os
import gzip
import numpy as np
import matplotlib.pyplot as plt
from pylab import cm
import warnings
warnings.filterwarnings("ignore")
# scikit-learn imports
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
# Qiskit imports
from qiskit import Aer, execute
from qiskit.circuit import QuantumCircuit, Parameter, ParameterVector
from qiskit.circuit.library import PauliFeatureMap, ZFeatureMap, ZZFeatureMap
from qiskit.circuit.library import TwoLocal, NLocal, RealAmplitudes, EfficientSU2
from qiskit.circuit.library import HGate, RXGate, RYGate, RZGate, CXGate, CRXGate, CRZGate
from qiskit_machine_learning.kernels import QuantumKernel
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
Part 1: Tutorial - QSVM for binary classification of MNISTIn this part, you will apply QSVM to the binary classification of handwritten numbers 4 and 9. Through this tutorial, you will learn the workflow of applying QSVM to binary classification. Find better combinations and achieve higher accuracy.Related QGSS material:- [**Lab 3**](https://www.youtube.com/watch?v=GVhCOTzAkCM&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=17) 1. Data preparationThe data we are going to work with at the beginning is a small subset of the well known handwritten digits dataset, which is available publicly. We will be aiming to differentiate between '4' and '9'. There are a total of 100 samples in the dataset. Of these, eighty are labeled training data, and the remaining twenty are unlabeled test data. Each sample is a 28x28 image of a digit, collapsed into an array, where each element is an integer between 0 (white) and 255 (black). To use the dataset for quantum classification, we need to scale the range to between -1 and 1, and reduce the dimensionality to the number of qubits we want to use (here N_DIM=5).
|
# Load MNIST dataset
DATA_PATH = './resources/ch3_part1.npz'
data = np.load(DATA_PATH)
sample_train = data['sample_train']
labels_train = data['labels_train']
sample_test = data['sample_test']
# Split train data
sample_train, sample_val, labels_train, labels_val = train_test_split(
sample_train, labels_train, test_size=0.2, random_state=42)
# Visualize samples
fig = plt.figure()
LABELS = [4, 9]
num_labels = len(LABELS)
for i in range(num_labels):
ax = fig.add_subplot(1, num_labels, i+1)
img = sample_train[labels_train==LABELS[i]][0].reshape((28, 28))
ax.imshow(img, cmap="Greys")
# Standardize
ss = StandardScaler()
sample_train = ss.fit_transform(sample_train)
sample_val = ss.transform(sample_val)
sample_test = ss.transform(sample_test)
# Reduce dimensions
N_DIM = 5
pca = PCA(n_components=N_DIM)
sample_train = pca.fit_transform(sample_train)
sample_val = pca.transform(sample_val)
sample_test = pca.transform(sample_test)
# Normalize
mms = MinMaxScaler((-1, 1))
sample_train = mms.fit_transform(sample_train)
sample_val = mms.transform(sample_val)
sample_test = mms.transform(sample_test)
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
2. Data EncodingWe will take the classical data and encode it to the quantum state space using a quantum feature map. The choice of which feature map to use is important and may depend on the given dataset we want to classify. Here we'll look at the feature maps available in Qiskit, before selecting and customising one to encode our data. 2.1 Quantum Feature MapsAs the name suggests, a quantum feature map $\phi(\mathbf{x})$ is a map from the classical feature vector $\mathbf{x}$ to the quantum state $|\Phi(\mathbf{x})\rangle\langle\Phi(\mathbf{x})|$. This is facilitated by applying the unitary operation $\mathcal{U}_{\Phi(\mathbf{x})}$ on the initial state $|0\rangle^{n}$ where _n_ is the number of qubits being used for encoding.The following feature maps currently available in Qiskit are those introduced in [**_Havlicek et al_. Nature **567**, 209-212 (2019)**](https://www.nature.com/articles/s41586-019-0980-2), in particular the `ZZFeatureMap` is conjectured to be hard to simulate classically and can be implemented as short-depth circuits on near-term quantum devices.- [**`PauliFeatureMap`**](https://qiskit.org/documentation/stubs/qiskit.circuit.library.PauliFeatureMap.html)- [**`ZZFeatureMap`**](https://qiskit.org/documentation/stubs/qiskit.circuit.library.ZZFeatureMap.html)- [**`ZFeatureMap`**](https://qiskit.org/documentation/stubs/qiskit.circuit.library.ZFeatureMap.html)The `PauliFeatureMap` is defined as:```pythonPauliFeatureMap(feature_dimension=None, reps=2, entanglement='full', paulis=None, data_map_func=None, parameter_prefix='x', insert_barriers=False)```and describes the unitary operator of depth $d$:$$ \mathcal{U}_{\Phi(\mathbf{x})}=\prod_d U_{\Phi(\mathbf{x})}H^{\otimes n},\ U_{\Phi(\mathbf{x})}=\exp\left(i\sum_{S\subseteq[n]}\phi_S(\mathbf{x})\prod_{k\in S} P_i\right), $$which contains layers of Hadamard gates interleaved with entangling blocks, $U_{\Phi(\mathbf{x})}$, encoding the classical data as shown in circuit diagram below for $d=2$.Within the entangling blocks, $U_{\Phi(\mathbf{x})}$: $P_i \in \{ I, X, Y, Z \}$ denotes the Pauli matrices, the index $S$ describes connectivities between different qubits or datapoints: $S \in \{\binom{n}{k}\ combinations,\ k = 1,... n \}$, and by default the data mapping function $\phi_S(\mathbf{x})$ is $$\phi_S:\mathbf{x}\mapsto \Bigg\{\begin{array}{ll} x_i & \mbox{if}\ S=\{i\} \\ (\pi-x_i)(\pi-x_j) & \mbox{if}\ S=\{i,j\} \end{array}$$when $k = 1, P_0 = Z$, this is the `ZFeatureMap`: $$\mathcal{U}_{\Phi(\mathbf{x})} = \left( \exp\left(i\sum_j \phi_{\{j\}}(\mathbf{x}) \, Z_j\right) \, H^{\otimes n} \right)^d.$$which is defined as:```pythonZFeatureMap(feature_dimension, reps=2, data_map_func=None, insert_barriers=False)```
|
# 3 features, depth 2
map_z = ZFeatureMap(feature_dimension=3, reps=2)
map_z.decompose().draw('mpl')
|
/Users/scapape/miniconda3/envs/qiskit_env/lib/python3.8/site-packages/sympy/core/expr.py:2451: SymPyDeprecationWarning:
expr_free_symbols method has been deprecated since SymPy 1.9. See
https://github.com/sympy/sympy/issues/21494 for more info.
SymPyDeprecationWarning(feature="expr_free_symbols method",
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
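The `data_map_func` argument lets you replace the default data mapping $\phi_S(\mathbf{x})$ described above. As a sketch, the following reproduces that default mapping explicitly and passes it in; the function name is ours, not part of Qiskit.

```python
from functools import reduce

def custom_data_map(x):
    # x_i for single-qubit terms, product of (pi - x_i) for multi-qubit terms (the default mapping)
    return x[0] if len(x) == 1 else reduce(lambda m, n: m * n, np.pi - x)

map_custom = ZZFeatureMap(feature_dimension=3, reps=1, data_map_func=custom_data_map)
map_custom.decompose().draw('mpl')
```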
Note the lack of entanglement in this feature map; this means that it is simple to simulate classically and will not provide quantum advantage. When $k = 2, P_0 = Z, P_1 = ZZ$, this is the `ZZFeatureMap`: $$\mathcal{U}_{\Phi(\mathbf{x})} = \left( \exp\left(i\sum_{jk} \phi_{\{j,k\}}(\mathbf{x}) \, Z_j \otimes Z_k\right) \, \exp\left(i\sum_j \phi_{\{j\}}(\mathbf{x}) \, Z_j\right) \, H^{\otimes n} \right)^d.$$ which is defined as:```pythonZZFeatureMap(feature_dimension, reps=2, entanglement='full', data_map_func=None, insert_barriers=False)```
|
# 3 features, depth 1, linear entanglement
map_zz = ZZFeatureMap(feature_dimension=3, reps=1, entanglement='linear')
map_zz.decompose().draw('mpl')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
Note that there is entanglement in this feature map, and we can define the entanglement map:
|
# 3 features, depth 1, circular entanglement
map_zz = ZZFeatureMap(feature_dimension=3, reps=1, entanglement='circular')
map_zz.decompose().draw('mpl')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
We can customise the Pauli gates in the feature map, for example, $P_0 = X, P_1 = Y, P_2 = ZZ$:$$\mathcal{U}_{\Phi(\mathbf{x})} = \left( \exp\left(i\sum_{jk} \phi_{\{j,k\}}(\mathbf{x}) \, Z_j \otimes Z_k\right) \, \exp\left(i\sum_{j} \phi_{\{j\}}(\mathbf{x}) \, Y_j\right) \, \exp\left(i\sum_j \phi_{\{j\}}(\mathbf{x}) \, X_j\right) \, H^{\otimes n} \right)^d.$$
|
# 3 features, depth 1
map_pauli = PauliFeatureMap(feature_dimension=3, reps=1, paulis = ['X', 'Y', 'ZZ'])
map_pauli.decompose().draw('mpl')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
The [`NLocal`](https://qiskit.org/documentation/stubs/qiskit.circuit.library.NLocal.html) and [`TwoLocal`](https://qiskit.org/documentation/stubs/qiskit.circuit.library.TwoLocal.html) functions in Qiskit's circuit library can also be used to create parameterised quantum circuits as feature maps. ```pythonTwoLocal(num_qubits=None, reps=3, rotation_blocks=None, entanglement_blocks=None, entanglement='full', skip_unentangled_qubits=False, skip_final_rotation_layer=False, parameter_prefix='θ', insert_barriers=False, initial_state=None)``````pythonNLocal(num_qubits=None, reps=1, rotation_blocks=None, entanglement_blocks=None, entanglement=None, skip_unentangled_qubits=False, skip_final_rotation_layer=False, overwrite_block_parameters=True, parameter_prefix='θ', insert_barriers=False, initial_state=None, name='nlocal')```Both functions create parameterised circuits of alternating rotation and entanglement layers. In both layers, parameterised circuit-blocks act on the circuit in a defined way. In the rotation layer, the blocks are applied stacked on top of each other, while in the entanglement layer according to the entanglement strategy. Each layer is repeated a number of times, and by default a final rotation layer is appended.In `NLocal`, the circuit blocks can have arbitrary sizes (smaller equal to the number of qubits in the circuit), while in `TwoLocal`, the rotation layers are single qubit gates applied on all qubits and the entanglement layer uses two-qubit gates.For example, here is a `TwoLocal` circuit, with $R_y$ and $R_Z$ gates in the rotation layer and $CX$ gates in the entangling layer with circular entanglement:
|
twolocal = TwoLocal(num_qubits=3, reps=2, rotation_blocks=['ry','rz'],
entanglement_blocks='cx', entanglement='circular', insert_barriers=True)
twolocal.decompose().draw('mpl')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
and the equivalent `NLocal` circuit:
|
twolocaln = NLocal(num_qubits=3, reps=2,
rotation_blocks=[RYGate(Parameter('a')), RZGate(Parameter('a'))],
entanglement_blocks=CXGate(),
entanglement='circular', insert_barriers=True)
twolocaln.decompose().draw('mpl')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
Let's encode the first training sample using the `PauliFeatureMap`:
|
print(f'First training data: {sample_train[0]}')
encode_map = PauliFeatureMap(feature_dimension=N_DIM, reps=1, paulis = ['X', 'Y', 'ZZ'])
encode_circuit = encode_map.bind_parameters(sample_train[0])
encode_circuit.decompose().draw(output='mpl')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
**Challenge 3a**Construct a feature map to encode 5-dimensional data, using 'ZZFeatureMap' with 3 repetitions, 'circular' entanglement and the rest as default. Submission format:```pythonex3a_fmap = ZZFeatureMap(...)```
|
##############################
# Provide your code here
ex3a_fmap = ZZFeatureMap(feature_dimension=N_DIM,
reps=3,
entanglement='circular',
data_map_func=None,
insert_barriers=False)
##############################
# Check your answer and submit using the following code
from qc_grader import grade_ex3a
grade_ex3a(ex3a_fmap)
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
2.2 Quantum Kernel EstimationA quantum feature map, $\phi(\mathbf{x})$, naturally gives rise to a quantum kernel, $k(\mathbf{x}_i,\mathbf{x}_j)= \phi(\mathbf{x}_j)^\dagger\phi(\mathbf{x}_i)$, which can be seen as a measure of similarity: $k(\mathbf{x}_i,\mathbf{x}_j)$ is large when $\mathbf{x}_i$ and $\mathbf{x}_j$ are close. When considering finite data, we can represent the quantum kernel as a matrix: $K_{ij} = \left| \langle \phi^\dagger(\mathbf{x}_j)| \phi(\mathbf{x}_i) \rangle \right|^{2}$. We can calculate each element of this kernel matrix on a quantum computer by calculating the transition amplitude:$$\left| \langle \phi^\dagger(\mathbf{x}_j)| \phi(\mathbf{x}_i) \rangle \right|^{2} = \left| \langle 0^{\otimes n} | \mathbf{U_\phi^\dagger}(\mathbf{x}_j) \mathbf{U_\phi}(\mathbf{x_i}) | 0^{\otimes n} \rangle \right|^{2}$$assuming the feature map is a parameterized quantum circuit, which can be described as a unitary transformation $\mathbf{U_\phi}(\mathbf{x})$ on $n$ qubits. This provides us with an estimate of the quantum kernel matrix, which we can then use in a kernel machine learning algorithm, such as support vector classification.As discussed in [***Havlicek et al*. Nature 567, 209-212 (2019)**](https://www.nature.com/articles/s41586-019-0980-2), quantum kernel machine algorithms only have the potential of quantum advantage over classical approaches if the corresponding quantum kernel is hard to estimate classically. As we will see later, the hardness of estimating the kernel with classical resources is of course only a necessary and not always sufficient condition to obtain a quantum advantage. However, it was proven recently in [***Liu et al.* arXiv:2010.02174 (2020)**](https://arxiv.org/abs/2010.02174) that learning problems exist for which learners with access to quantum kernel methods have a quantum advantage over all classical learners.With our training and testing datasets ready, we set up the `QuantumKernel` class with the PauliFeatureMap, and use the `BasicAer` `statevector_simulator` to estimate the training and testing kernel matrices.
|
pauli_map = PauliFeatureMap(feature_dimension=N_DIM, reps=1, paulis = ['X', 'Y', 'ZZ'])
pauli_kernel = QuantumKernel(feature_map=pauli_map, quantum_instance=Aer.get_backend('statevector_simulator'))
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
Let's calculate the transition amplitude between the first and second training data samples, one of the entries in the training kernel matrix.
|
print(f'First training data : {sample_train[0]}')
print(f'Second training data: {sample_train[1]}')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
First we create and draw the circuit:
|
pauli_circuit = pauli_kernel.construct_circuit(sample_train[0], sample_train[1])
pauli_circuit.decompose().decompose().draw(output='mpl')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
The parameters in the gates are a little difficult to read, but notice how the circuit is symmetrical, with one half encoding one of the data samples, the other half encoding the other. We then simulate the circuit. We will use the `qasm_simulator` since the circuit contains measurements, but increase the number of shots to reduce the effect of sampling noise.
|
backend = Aer.get_backend('qasm_simulator')
job = execute(pauli_circuit, backend, shots=8192,
seed_simulator=1024, seed_transpiler=1024)
counts = job.result().get_counts(pauli_circuit)
counts['0'*N_DIM]
counts
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
The transition amplitude is the proportion of counts in the zero state:
|
print(f"Transition amplitude: {counts['0'*N_DIM]/sum(counts.values())}")
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
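For comparison, the same kernel entry can be computed exactly (without sampling noise) using the statevector-backed `pauli_kernel` defined above; this is only a cross-check sketch.

```python
exact_entry = pauli_kernel.evaluate(x_vec=np.array([sample_train[0]]),
                                    y_vec=np.array([sample_train[1]]))
print(exact_entry)   # a 1x1 matrix; should be close to the shot-based estimate above
```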
This process is then repeated for each pair of training data samples to fill in the training kernel matrix, and between each training and testing data sample to fill in the testing kernel matrix. Note that each matrix is symmetric, so to reduce computation time, only half the entries are calculated explicitly. Here we compute and plot the training and testing kernel matrices:
|
matrix_train = pauli_kernel.evaluate(x_vec=sample_train)
matrix_val = pauli_kernel.evaluate(x_vec=sample_val, y_vec=sample_train)
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].imshow(np.asmatrix(matrix_train),
interpolation='nearest', origin='upper', cmap='Blues')
axs[0].set_title("training kernel matrix")
axs[1].imshow(np.asmatrix(matrix_val),
interpolation='nearest', origin='upper', cmap='Reds')
axs[1].set_title("validation kernel matrix")
plt.show()
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
**Challenge 3b**Calculate the transition amplitude between $x = (-0.5, -0.4, 0.3, 0, -0.9)$ and $y = (0, -0.7, -0.3, 0, -0.4)$ using the 'ZZFeatureMap' with 3 repetitions, 'circular' entanglement and the rest as default. Use the 'qasm_simulator' with 'shots=8192', 'seed_simulator=1024' and 'seed_transpiler=1024'.
|
sample_train[0]
np.array([-0.5,-0.4,0.3,0,-0.9])
x = [-0.5, -0.4, 0.3, 0, -0.9]
y = [0, -0.7, -0.3, 0, -0.4]
##############################
# Provide your code here
pauli_map = ZZFeatureMap(feature_dimension=N_DIM,
reps=3,
entanglement='circular',
data_map_func=None,
insert_barriers=False)
pauli_kernel = QuantumKernel(feature_map=pauli_map, quantum_instance=Aer.get_backend('statevector_simulator'))
pauli_circuit = pauli_kernel.construct_circuit(x, y)
backend = Aer.get_backend('qasm_simulator')
job = execute(pauli_circuit, backend, shots=8192,
seed_simulator=1024, seed_transpiler=1024)
counts = job.result().get_counts(pauli_circuit)
ex3b_amp = counts['0'*N_DIM]/sum(counts.values())
##############################
# Check your answer and submit using the following code
from qc_grader import grade_ex3b
grade_ex3b(ex3b_amp)
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
Related QGSS materials:- [**Kernel Trick (Lecture 6.1)**](https://www.youtube.com/watch?v=m6EzmYsEOiI&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=14)- [**Kernel Trick (Lecture 6.2)**](https://www.youtube.com/watch?v=zw3JYUrS-v8&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=15) 2.3 Quantum Support Vector Machine (QSVM)Introduced in [***Havlicek et al*. Nature 567, 209-212 (2019)**](https://www.nature.com/articles/s41586-019-0980-2), the quantum kernel support vector classification algorithm consists of these steps: 1. Build the train and test quantum kernel matrices. 1. For each pair of datapoints in the training dataset $\mathbf{x}_{i},\mathbf{x}_j$, apply the feature map and measure the transition probability: $ K_{ij} = \left| \langle 0 | \mathbf{U}^\dagger_{\Phi(\mathbf{x_j})} \mathbf{U}_{\Phi(\mathbf{x_i})} | 0 \rangle \right|^2 $. 2. For each training datapoint $\mathbf{x_i}$ and testing point $\mathbf{y_j}$, apply the feature map and measure the transition probability: $ K_{ij} = \left| \langle 0 | \mathbf{U}^\dagger_{\Phi(\mathbf{y_j})} \mathbf{U}_{\Phi(\mathbf{x_i})} | 0 \rangle \right|^2 $.2. Use the train and test quantum kernel matrices in a classical support vector machine classification algorithm.The `scikit-learn` `svc` algorithm allows us to [**define a custom kernel**](https://scikit-learn.org/stable/modules/svm.htmlcustom-kernels) in two ways: by providing the kernel as a callable function or by precomputing the kernel matrix. We can do either of these using the `QuantumKernel` class in Qiskit.The following code takes the training and testing kernel matrices we calculated earlier and provides them to the `scikit-learn` `svc` algorithm:
|
pauli_svc = SVC(kernel='precomputed')
pauli_svc.fit(matrix_train, labels_train)
pauli_score = pauli_svc.score(matrix_val, labels_val)
print(f'Precomputed kernel classification test score: {pauli_score*100}%')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
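Alternatively, here is a sketch of the callable-kernel option mentioned above: the quantum kernel's `evaluate` method is passed directly to `SVC`, and scikit-learn then calls the quantum kernel internally during `fit` and `score` (which can be slow, since the kernel is re-evaluated on the fly). It uses whichever quantum kernel object is currently in scope.

```python
callable_svc = SVC(kernel=pauli_kernel.evaluate)
callable_svc.fit(sample_train, labels_train)
callable_score = callable_svc.score(sample_val, labels_val)
print(f'Callable kernel classification test score: {callable_score*100}%')
```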
Related QGSS materials:- [**Classical SVM (Lecture 4.2)**](https://www.youtube.com/watch?v=lpPij21jnZ4&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=9)- [**Quantum Classifier (Lecture 5.1)**](https://www.youtube.com/watch?v=-sxlXNz7ZxU&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=11) Part 2: Challenge - QSVM for 3-class classification of Fashion-MNISTIn this part, you will use what your have learned so far to implement 3-class classification of clothing images and work on improving its accuracy. **Challenge 3c****Goal**: Implement a 3-class classifier using QSVM and achieve 70% accuracy on clothing image dataset with smaller feature map circuits.**Dataset**: Fashion-MNIST clothing image dataset. There are following three dataset in this challnge. - Train: Both images and labels are given.- Public test: Images are given and labels are hidden.- Private test: Both images and labels are hidden. Grading will be performed on both public test and private test data. The purpose of this is to make sure that quantum methods are used, so that cheating is not possible. How to implement a multi-class classifier using binary classifiersSo far, you have learned how to implement binary classification with QSVM. Now, how can you scale it up to multi-class classification? There are two approaches to do so. One is the One-vs-Rest approach, and the other is the One-vs-One approach.1. One-vs-Rest: In this approach, multi-class classification is achieved by combining classifiers for each class that classifies the class as positive and the others as negative. Since one classifier is required for each class, the total number of classifiers required for N-class classification is N. The advantage is that fewer classifiers are needed, and the disadvantage is that the labels are likely to be imbalanced in each classification.2. One-vs-One: In this approach, multi-class classification is achieved by combining classifiers for each pair of two classes, where one is positive and the other is negative. Since one classifier is required for each label pair, the total number of classifiers required for N-class classification is N(N-1)/2. The advantage is that labels are less likely to be imbalanced in each classification, and the disadvantage is that the number of classifiers required is larger.Both approaches can be used to solve this problem, but here you will be given hints based on the One-vs-Rest approach. Please follow the hints to solve it.Figure via [cc.gatech.edu](https://www.cc.gatech.edu/classes/AY2016/cs4476_fall/results/proj4/html/jnanda3/index.html) 1. Data preparationThe data we are working with here is a small subset of clothing image dataset called Fashion-MNIST, which is a variant of the MNIST dataset. We aim to classify the following labels.- label 0: T-shirt/top- label 2: pullover- label 3: dressFirst, let's load the dataset and display one image for each class.
|
# Load the Fashion-MNIST subset used in this challenge
DATA_PATH = './resources/ch3_part2.npz'
data = np.load(DATA_PATH)
sample_train = data['sample_train']
labels_train = data['labels_train']
sample_test = data['sample_test']
# Split train data
sample_train, sample_val, labels_train, labels_val = train_test_split(
sample_train, labels_train, test_size=0.2, random_state=42)
# Visualize samples
fig = plt.figure()
LABELS = [0, 2, 3]
num_labels = len(LABELS)
for i in range(num_labels):
ax = fig.add_subplot(1, num_labels, i+1)
img = sample_train[labels_train==LABELS[i]][0].reshape((28, 28))
ax.imshow(img, cmap="Greys")
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
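As a side note on the One-vs-Rest idea described above, scikit-learn also ships a generic wrapper that performs the per-class splits for you. This is only an illustrative sketch: `matrix_train_3c` and `matrix_val_3c` are placeholder names for quantum kernel matrices built from the 3-class samples loaded above, and are not defined anywhere in this notebook.

```python
from sklearn.multiclass import OneVsRestClassifier

# Sketch only: OneVsRestClassifier fits one "label vs rest" SVC per class,
# which is the same idea the cells below implement by hand.
# matrix_train_3c / matrix_val_3c are hypothetical precomputed quantum kernel matrices.
ovr = OneVsRestClassifier(SVC(kernel='precomputed', probability=True))
ovr.fit(matrix_train_3c, labels_train)
print(ovr.predict(matrix_val_3c))
```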
Then, preprocess the dataset in the same way as before.- Standardization- PCA- NormalizationNote that you can change the number of features here by changing N_DIM.
|
# Standardize
standard_scaler = StandardScaler()
sample_train = standard_scaler.fit_transform(sample_train)
sample_val = standard_scaler.transform(sample_val)
sample_test = standard_scaler.transform(sample_test)
# Reduce dimensions
N_DIM = 5
pca = PCA(n_components=N_DIM)
sample_train = pca.fit_transform(sample_train)
sample_val = pca.transform(sample_val)
sample_test = pca.transform(sample_test)
# Normalize
min_max_scaler = MinMaxScaler((-1, 1))
sample_train = min_max_scaler.fit_transform(sample_train)
sample_val = min_max_scaler.transform(sample_val)
sample_test = min_max_scaler.transform(sample_test)
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
2. ModelingBased on the One-vs-Rest approach, you need to create the following three QSVM binary classifiers- the label 0 and the rest- the label 2 and the rest- the label 3 and the restHere is the first one as a hint. 2.1: Label 0 vs RestCreate new labels with label 0 as positive(1) and the rest as negative(0) as follows.
|
labels_train_0 = np.where(labels_train==0, 1, 0)
labels_val_0 = np.where(labels_val==0, 1, 0)
print(f'Original validation labels: {labels_val}')
print(f'Validation labels for 0 vs Rest: {labels_val_0}')
|
Original validation labels: [3 3 2 0 3 0 3 2 3 2 2 3 2 2 2 3 0 2 3 3]
Validation labels for 0 vs Rest: [0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0]
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
See that only the places where the original label was 0 are set to 1. Next, construct a binary classifier using QSVM as before. Note that PauliFeatureMap is used in this hint, but you can use a different feature map.
|
pauli_map_0 = PauliFeatureMap(feature_dimension=N_DIM, reps=2, paulis = ['X', 'Y', 'ZZ'])
pauli_kernel_0 = QuantumKernel(feature_map=pauli_map_0, quantum_instance=Aer.get_backend('statevector_simulator'))
pauli_svc_0 = SVC(kernel='precomputed', probability=True)
matrix_train_0 = pauli_kernel_0.evaluate(x_vec=sample_train)
pauli_svc_0.fit(matrix_train_0, labels_train_0)
matrix_val_0 = pauli_kernel_0.evaluate(x_vec=sample_val, y_vec=sample_train)
pauli_score_0 = pauli_svc_0.score(matrix_val_0, labels_val_0)
print(f'Accuracy of discriminating between label 0 and others: {pauli_score_0*100}%')
# Variant 1: a lighter ZZ feature map for label 0 (this kernel/SVC pair is submitted below)
map_0 = ZZFeatureMap(feature_dimension=N_DIM, reps=1, entanglement='linear')
kernel_0 = QuantumKernel(feature_map=map_0, quantum_instance=Aer.get_backend('statevector_simulator'))
svc_0 = SVC(kernel='precomputed', probability=True)
matrix_train_0 = kernel_0.evaluate(x_vec=sample_train)
svc_0.fit(matrix_train_0, labels_train_0)
# Validate with the same ZZ kernel the classifier was trained on
matrix_val_0 = kernel_0.evaluate(x_vec=sample_val, y_vec=sample_train)
zz_score_0 = svc_0.score(matrix_val_0, labels_val_0)
print(f'Accuracy of discriminating between label 0 and others (ZZ map): {zz_score_0*100}%')
|
Accuracy of discriminating between label 0 and others: 75.0%
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
You can see that the QSVM binary classifier is able to distinguish between label 0 and the rest with reasonable accuracy. Finally, for each test datapoint, calculate the probability that it has label 0. This can be obtained with the ```predict_proba``` method.
|
matrix_test_0 = pauli_kernel_0.evaluate(x_vec=sample_test, y_vec=sample_train)
pred_0 = pauli_svc_0.predict_proba(matrix_test_0)[:, 1]
print(f'Probability of label 0: {np.round(pred_0, 2)}')
|
Probability of label 0: [0.31 0.32 0.25 0.46 0.21 0.3 0.24 0.23 0.34 0.51 0.38 0.3 0.22 0.26
0.41 0.49 0.38 0.47 0.33 0.22]
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
These probabilities are important clues for multiclass classification. Obtain the probabilities for the remaining two labels in the same way. 2.2: Label 2 vs RestBuild a binary classifier using QSVM and get the probability of label 2 for the test dataset.
|
labels_train_2 = np.where(labels_train==2, 1, 0)
labels_val_2 = np.where(labels_val==2, 1, 0)
print(f'Original validation labels: {labels_val}')
print(f'Validation labels for 2 vs Rest: {labels_val_2}')
pauli_map_2 = PauliFeatureMap(feature_dimension=N_DIM, reps=2, paulis = ['X', 'Y', 'ZZ'])
pauli_kernel_2 = QuantumKernel(feature_map=pauli_map_2, quantum_instance=Aer.get_backend('statevector_simulator'))
pauli_svc_2 = SVC(kernel='precomputed', probability=True)
matrix_train_2 = pauli_kernel_2.evaluate(x_vec=sample_train)
pauli_svc_2.fit(matrix_train_2, labels_train_2)
matrix_val_2 = pauli_kernel_2.evaluate(x_vec=sample_val, y_vec=sample_train)
pauli_score_2 = pauli_svc_2.score(matrix_val_2, labels_val_2)
print(f'Accuracy of discriminating between label 2 and others: {pauli_score_2*100}%')
# Variant 2: a lighter ZZ feature map for label 2 (this kernel/SVC pair is submitted below)
map_2 = ZZFeatureMap(feature_dimension=N_DIM, reps=1, entanglement='linear')
kernel_2 = QuantumKernel(feature_map=map_2, quantum_instance=Aer.get_backend('statevector_simulator'))
svc_2 = SVC(kernel='precomputed', probability=True)
matrix_train_2 = kernel_2.evaluate(x_vec=sample_train)
svc_2.fit(matrix_train_2, labels_train_2)
# Validate with the same ZZ kernel the classifier was trained on
matrix_val_2 = kernel_2.evaluate(x_vec=sample_val, y_vec=sample_train)
zz_score_2 = svc_2.score(matrix_val_2, labels_val_2)
print(f'Accuracy of discriminating between label 2 and others (ZZ map): {zz_score_2*100}%')
##############################
# Provide your code here
matrix_test_2 = pauli_kernel_2.evaluate(x_vec=sample_test, y_vec=sample_train)
pred_2 = pauli_svc_2.predict_proba(matrix_test_2)[:, 1]
##############################
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
2.3 Label 3 vs RestBuild a binary classifier using QSVM and get the probability of label 3 for the test dataset.
|
labels_train_3 = np.where(labels_train==3, 1, 0)
labels_val_3 = np.where(labels_val==3, 1, 0)
print(f'Original validation labels: {labels_val}')
print(f'Validation labels for 3 vs Rest: {labels_val_3}')
pauli_map_3 = PauliFeatureMap(feature_dimension=N_DIM, reps=2, paulis = ['X', 'Y', 'ZZ'])
pauli_kernel_3 = QuantumKernel(feature_map=pauli_map_3, quantum_instance=Aer.get_backend('statevector_simulator'))
pauli_svc_3 = SVC(kernel='precomputed', probability=True)
matrix_train_3 = pauli_kernel_3.evaluate(x_vec=sample_train)
pauli_svc_3.fit(matrix_train_3, labels_train_3)
matrix_val_3 = pauli_kernel_3.evaluate(x_vec=sample_val, y_vec=sample_train)
pauli_score_3 = pauli_svc_3.score(matrix_val_3, labels_val_3)
print(f'Accuracy of discriminating between label 3 and others: {pauli_score_3*100}%')
# Variant 3: a lighter ZZ feature map for label 3 (this kernel/SVC pair is submitted below)
map_3 = ZZFeatureMap(feature_dimension=N_DIM, reps=1, entanglement='linear')
kernel_3 = QuantumKernel(feature_map=map_3, quantum_instance=Aer.get_backend('statevector_simulator'))
svc_3 = SVC(kernel='precomputed', probability=True)
matrix_train_3 = kernel_3.evaluate(x_vec=sample_train)
svc_3.fit(matrix_train_3, labels_train_3)
# Validate with the same ZZ kernel the classifier was trained on
matrix_val_3 = kernel_3.evaluate(x_vec=sample_val, y_vec=sample_train)
zz_score_3 = svc_3.score(matrix_val_3, labels_val_3)
print(f'Accuracy of discriminating between label 3 and others (ZZ map): {zz_score_3*100}%')
##############################
# Provide your code here
matrix_test_3 = pauli_kernel_3.evaluate(x_vec=sample_test, y_vec=sample_train)
pred_3 = pauli_svc_3.predict_proba(matrix_test_3)[:, 1]
##############################
print(f'Probability of label 0: {np.round(pred_0, 2)}')
print(f'Probability of label 2: {np.round(pred_2, 2)}')
print(f'Probability of label 3: {np.round(pred_3, 2)}')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
3. PredictionLastly, make a final prediction based on the probability of each label. The prediction you submit should be in the following format.
|
sample_pred = np.load('./resources/ch3_part2_sub.npy')
print(f'Sample prediction: {sample_pred}')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
In order to understand how to make predictions for multiclass classification, let's begin with the case of making predictions for just two labels, label 2 and label 3. If the probabilities for a certain datapoint are as follows, label 2 should be considered the most plausible.- probability of label 2: 0.7- probability of label 3: 0.2You can implement this with the ```np.where``` function. (Of course, you can use different methods.)
|
pred_2_ex = np.array([0.7])
pred_3_ex = np.array([0.2])
pred_test_ex = np.where((pred_2_ex > pred_3_ex), 2, 3)
print(f'Prediction: {pred_test_ex}')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
You can apply this method as is to multiple datapoints. If the second datapoint has the following probabilities for each label, it should be classified as label 3.- probability of label 2: 0.1- probability of label 3: 0.6
|
pred_2_ex = np.array([0.7, 0.1])
pred_3_ex = np.array([0.2, 0.6])
pred_test_ex = np.where((pred_2_ex > pred_3_ex), 2, 3)
print(f'Prediction: {pred_test_ex}')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
This method can be extended to make predictions for 3-class classification.Implement such an extended method and make the final 3-class predictions.
|
##############################
# Provide your code here
# For every test point, pick the label whose one-vs-rest classifier
# assigned it the highest probability.
probs = np.vstack([pred_0, pred_2, pred_3])             # shape: (3, n_test)
pred_test = np.array(LABELS)[np.argmax(probs, axis=0)]  # LABELS = [0, 2, 3]
##############################
print(f'Prediction: {pred_test}')
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
4. Submission **Challenge 3c****Submission**: Submit the following 11 items.- **pred_test**: prediction for the public test dataset- **sample_train**: train data used to obtain kernels- **standard_scaler**: the one used to standardize data- **pca**: the one used to reduce dimensions- **min_max_scaler**: the one used to normalize data- **kernel_0**: the kernel for the "label 0 vs rest" classifier- **kernel_2**: the kernel for the "label 2 vs rest" classifier- **kernel_3**: the kernel for the "label 3 vs rest" classifier- **svc_0**: the SVC trained to classify "label 0 vs rest"- **svc_2**: the SVC trained to classify "label 2 vs rest"- **svc_3**: the SVC trained to classify "label 3 vs rest"**Criteria**: Accuracy of 70% or better on both public and private test data.**Score**: Solutions that pass the criteria will be scored as follows. The smaller this final score is, the better.1. Each feature map gets transpiled with: - basis_gates=['u1', 'u2', 'u3', 'cx'] - optimization_level=02. Calculate the cost for each transpiled circuit: cost = 10 * cx + (u1 + u2 + u3)3. The sum of the costs will be the final score.Again, the prediction you submit should be in the following format.- prediction for the public test data (**sample_test**)- type: numpy.ndarray- shape: (20,)
|
print(f'Sample prediction: {sample_pred}')
# Check your answer and submit using the following code
from qc_grader import grade_ex3c
grade_ex3c(pred_test, sample_train,
standard_scaler, pca, min_max_scaler,
kernel_0, kernel_2, kernel_3,
svc_0, svc_2, svc_3)
|
_____no_output_____
|
Apache-2.0
|
content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb
|
scapape/ibm-quantum-challenge-fall-2021
|
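For reference, the transpilation cost described in the scoring rule above can be estimated locally before submitting. A rough sketch (the grader's exact procedure may differ), shown for one of the submitted feature maps:

```python
from qiskit import transpile

# Estimate the challenge score for map_0; repeat for map_2 and map_3 and
# sum the three costs. This mirrors the scoring rule stated above.
tc = transpile(map_0, basis_gates=['u1', 'u2', 'u3', 'cx'], optimization_level=0)
ops = tc.count_ops()
circuit_cost = 10 * ops.get('cx', 0) + ops.get('u1', 0) + ops.get('u2', 0) + ops.get('u3', 0)
print(f'Estimated cost for map_0: {circuit_cost}')
```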
Session 2 - Training a Network w/ TensorflowAssignment: Teach a Deep Neural Network to PaintParag K. MitalCreative Applications of Deep Learning w/ TensorflowKadenze AcademyCADLThis work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Learning Goals* Learn how to create a Neural Network* Learn to use a neural network to paint an image* Apply creative thinking to the inputs, outputs, and definition of a network Outline- [Assignment Synopsis](assignment-synopsis)- [Part One - Fully Connected Network](part-one---fully-connected-network) - [Instructions](instructions) - [Code](code) - [Variable Scopes](variable-scopes)- [Part Two - Image Painting Network](part-two---image-painting-network) - [Instructions](instructions-1) - [Preparing the Data](preparing-the-data) - [Cost Function](cost-function) - [Explore](explore) - [A Note on Crossvalidation](a-note-on-crossvalidation)- [Part Three - Learning More than One Image](part-three---learning-more-than-one-image) - [Instructions](instructions-2) - [Code](code-1)- [Part Four - Open Exploration \(Extra Credit\)](part-four---open-exploration-extra-credit)- [Assignment Submission](assignment-submission)This next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you "run" it (use "shift+enter")!
|
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
# and a library for displaying images from ipython, which
# we will use to display the gif
try:
from libs import utils, gif
import IPython.display as ipyd
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Assignment SynopsisIn this assignment, we're going to create our first neural network capable of taking any two continuous values as inputs. Those two values will go through a series of multiplications, additions, and nonlinearities, coming out of the network as 3 outputs. Remember from the last homework, we used convolution to filter an image so that the representations in the image were accentuated. We're not going to be using convolution w/ Neural Networks until the next session, but we're effectively doing the same thing here: using multiplications to accentuate the representations in our data, in order to minimize whatever our cost function is. To find out what those multiplications need to be, we're going to use Gradient Descent and Backpropagation, which will take our cost, and find the appropriate updates to all the parameters in our network to best optimize the cost. In the next session, we'll explore much bigger networks and convolution. This "toy" network is really to help us get up and running with neural networks, and aid our exploration of the different components that make up a neural network. You will be expected to explore manipulations of the neural networks in this notebook as much as possible to help aid your understanding of how they effect the final result.We're going to build our first neural network to understand what color "to paint" given a location in an image, or the row, col of the image. So in goes a row/col, and out goes a R/G/B. In the next lesson, we'll learn what this network is really doing is performing regression. For now, we'll focus on the creative applications of such a network to help us get a better understanding of the different components that make up the neural network. You'll be asked to explore many of the different components of a neural network, including changing the inputs/outputs (i.e. the dataset), the number of layers, their activation functions, the cost functions, learning rate, and batch size. You'll also explore a modification to this same network which takes a 3rd input: an index for an image. This will let us try to learn multiple images at once, though with limited success.We'll now dive right into creating deep neural networks, and I'm going to show you the math along the way. Don't worry if a lot of it doesn't make sense, and it really takes a bit of practice before it starts to come together. Part One - Fully Connected Network InstructionsCreate the operations necessary for connecting an input to a network, defined by a `tf.Placeholder`, to a series of fully connected, or linear, layers, using the formula: $$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$where $\textbf{H}$ is an output layer representing the "hidden" activations of a network, $\phi$ represents some nonlinearity, $\textbf{X}$ represents an input to that layer, $\textbf{W}$ is that layer's weight matrix, and $\textbf{b}$ is that layer's bias. If you're thinking, what is going on? Where did all that math come from? Don't be afraid of it. Once you learn how to "speak" the symbolic representation of the equation, it starts to get easier. And once we put it into practice with some code, it should start to feel like there is some association with what is written in the equation, and what we've written in code. Practice trying to say the equation in a meaningful way: "The output of a hidden layer is equal to some input multiplied by another matrix, adding some bias, and applying a non-linearity". 
Or perhaps: "The hidden layer is equal to a nonlinearity applied to an input multiplied by a matrix and adding some bias". Explore your own interpretations of the equation, or ways of describing it, and it starts to become much, much easier to apply the equation.The first thing that happens in this equation is the input matrix $\textbf{X}$ is multiplied by another matrix, $\textbf{W}$. This is the most complicated part of the equation. It's performing matrix multiplication, as we've seen from last session, and is effectively scaling and rotating our input. The bias $\textbf{b}$ allows for a global shift in the resulting values. Finally, the nonlinearity of $\phi$ allows the input space to be nonlinearly warped, allowing it to express a lot more interesting distributions of data. Have a look below at some common nonlinearities. If you're unfamiliar with looking at graphs like this, it is common to read the horizontal axis as X, as the input, and the vertical axis as Y, as the output.
|
xs = np.linspace(-6, 6, 100)
plt.plot(xs, np.maximum(xs, 0), label='relu')
plt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')
plt.plot(xs, np.tanh(xs), label='tanh')
plt.xlabel('Input')
plt.xlim([-6, 6])
plt.ylabel('Output')
plt.ylim([-1.5, 1.5])
plt.title('Common Activation Functions/Nonlinearities')
plt.legend(loc='lower right')
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Remember, having series of linear followed by nonlinear operations is what makes neural networks expressive. By stacking a lot of "linear" + "nonlinear" operations in a series, we can create a deep neural network! Have a look at the output ranges of the above nonlinearity when considering which nonlinearity seems most appropriate. For instance, the `relu` is always above 0, but does not saturate at any value above 0, meaning it can be anything above 0. That's unlike the `sigmoid` which does saturate at both 0 and 1, meaning its values for a single output neuron will always be between 0 and 1. Similarly, the `tanh` saturates at -1 and 1.Choosing between these is often a matter of trial and error. Though you can make some insights depending on your normalization scheme. For instance, if your output is expected to be in the range of 0 to 1, you may not want to use a `tanh` function, which ranges from -1 to 1, but likely would want to use a `sigmoid`. Keep the ranges of these activation functions in mind when designing your network, especially the final output layer of your network. CodeIn this section, we're going to work out how to represent a fully connected neural network with code. First, create a 2D `tf.placeholder` called $\textbf{X}$ with `None` for the batch size and 2 features. Make its `dtype` `tf.float32`. Recall that we use the dimension of `None` for the batch size dimension to say that this dimension can be any number. Here is the docstring for the `tf.placeholder` function, have a look at what args it takes:Help on function placeholder in module `tensorflow.python.ops.array_ops`:```pythonplaceholder(dtype, shape=None, name=None)``` Inserts a placeholder for a tensor that will be always fed. **Important**: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`. For example:```pythonx = tf.placeholder(tf.float32, shape=(1024, 1024))y = tf.matmul(x, x)with tf.Session() as sess: print(sess.run(y)) ERROR: will fail because x was not fed. rand_array = np.random.rand(1024, 1024) print(sess.run(y, feed_dict={x: rand_array})) Will succeed.``` Args: dtype: The type of elements in the tensor to be fed. shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape. name: A name for the operation (optional). Returns: A `Tensor` that may be used as a handle for feeding a value, but not evaluated directly. TODO! COMPLETE THIS SECTION!
|
# Create a placeholder with None x 2 dimensions of dtype tf.float32, and name it "X":
X = ...
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
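A possible completion of the TODO cell above (the shape and dtype come straight from the instructions):

```python
# None lets the batch dimension take any size; each observation has 2 features.
X = tf.placeholder(dtype=tf.float32, shape=[None, 2], name='X')
```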
Now multiply the tensor using a new variable, $\textbf{W}$, which has 2 rows and 20 columns, so that when it is left-multiplied by $\textbf{X}$, the output of the multiplication is None x 20, giving you 20 output neurons. Recall that the `tf.matmul` function takes two arguments, the left hand ($\textbf{X}$) and right hand side ($\textbf{W}$) of a matrix multiplication.To create $\textbf{W}$, you will use `tf.get_variable` to create a matrix which is `2 x 20` in dimension. Look up the docstrings of functions `tf.get_variable` and `tf.random_normal_initializer` to get familiar with these functions. There are many options we will ignore for now. Just be sure to set the `name`, `shape` (this is the one that has to be [2, 20]), `dtype` (i.e. tf.float32), and `initializer` (the `tf.random_normal_initializer` you should create) when creating your $\textbf{W}$ variable with `tf.get_variable(...)`.For the random normal initializer, often the mean is set to 0, and the standard deviation is set based on the number of neurons. But that really depends on the inputs and outputs of your network, how you've "normalized" your dataset, what your nonlinearity/activation function is, and what your expected range of inputs/outputs are. Don't worry about the values for the initializer for now, as this part will take a bit more experimentation to understand better!This part is to encourage you to learn how to look up the documentation on Tensorflow, ideally using `tf.get_variable?` in the notebook. If you are really stuck, just scroll down a bit and I've shown you how to use it. TODO! COMPLETE THIS SECTION!
|
W = tf.get_variable(...
h = tf.matmul(...
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
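One possible completion of the TODO above; the standard deviation of 0.1 is just a reasonable small value for this toy example, not a prescribed choice:

```python
# 2 input neurons fully connected to 20 output neurons.
W = tf.get_variable(
    name='W', shape=[2, 20], dtype=tf.float32,
    initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))
h = tf.matmul(X, W)   # shape: [None, 20]
```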
And add to this result another new variable, $\textbf{b}$, which has [20] dimensions. These values will be added to every output neuron after the multiplication above. Instead of the `tf.random_normal_initializer` that you used for creating $\textbf{W}$, now use the `tf.constant_initializer`. Often for bias, you'll set the constant bias initialization to 0 or 1.TODO! COMPLETE THIS SECTION!
|
b = tf.get_variable(...
h = tf.nn.bias_add(...
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
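A matching completion for the bias:

```python
# One bias value per output neuron, initialized to 0.
b = tf.get_variable(
    name='b', shape=[20], dtype=tf.float32,
    initializer=tf.constant_initializer(0.0))
h = tf.nn.bias_add(h, b)
```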
So far we have done:$$\textbf{X}\textbf{W} + \textbf{b}$$Finally, apply a nonlinear activation to this output, such as `tf.nn.relu`, to complete the equation:$$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$TODO! COMPLETE THIS SECTION!
|
h = ...
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
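And the nonlinearity; relu is one reasonable choice here:

```python
# H = relu(XW + b)
h = tf.nn.relu(h)
```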
Now that we've done all of this work, let's stick it inside a function. I've already done this for you and placed it inside the `utils` module under the function name `linear`. We've already imported the `utils` module so we can call it like so, `utils.linear(...)`. The docstring is copied below, and the code itself. Note that this function is slightly different to the one in the lecture. It does not require you to specify `n_input`, and the input `scope` is called `name`. It also has a few more extras in there including automatically converting a 4-d input tensor to a 2-d tensor so that you can fully connect the layer with a matrix multiply (don't worry about what this means if it doesn't make sense!).```pythonutils.linear??``````pythondef linear(x, n_output, name=None, activation=None, reuse=None): """Fully connected layer Parameters ---------- x : tf.Tensor Input tensor to connect n_output : int Number of output neurons name : None, optional Scope to apply Returns ------- op : tf.Tensor Output of fully connected layer. """ if len(x.get_shape()) != 2: x = flatten(x, reuse=reuse) n_input = x.get_shape().as_list()[1] with tf.variable_scope(name or "fc", reuse=reuse): W = tf.get_variable( name='W', shape=[n_input, n_output], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer()) b = tf.get_variable( name='b', shape=[n_output], dtype=tf.float32, initializer=tf.constant_initializer(0.0)) h = tf.nn.bias_add( name='h', value=tf.matmul(x, W), bias=b) if activation: h = activation(h) return h, W``` Variable ScopesNote that since we are using `variable_scope` and explicitly telling the scope which name we would like, if there is *already* a variable created with the same name, then Tensorflow will raise an exception! If this happens, you should consider one of three possible solutions:1. If this happens while you are interactively editing a graph, you may need to reset the current graph:```python tf.reset_default_graph()```You should really only have to use this if you are in an interactive console! If you are creating Python scripts to run via command line, you should really be using solution 3 listed below, and be explicit with your graph contexts! 2. If this happens and you were not expecting any name conflicts, then perhaps you had a typo and created another layer with the same name! That's a good reason to keep useful names for everything in your graph!3. More likely, you should be using context managers when creating your graphs and running sessions. This works like so: ```python g = tf.Graph() with tf.Session(graph=g) as sess: Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu) ``` or: ```python g = tf.Graph() with tf.Session(graph=g) as sess, g.as_default(): Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu) ``` You can now write the same process as the above steps by simply calling:
|
h, W = utils.linear(
x=X, n_output=20, name='linear', activation=tf.nn.relu)
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Part Two - Image Painting Network InstructionsFollow along the steps below, first setting up input and output data of the network, $\textbf{X}$ and $\textbf{Y}$. Then work through building the neural network which will try to compress the information in $\textbf{X}$ through a series of linear and non-linear functions so that whatever it is given as input, it minimized the error of its prediction, $\hat{\textbf{Y}}$, and the true output $\textbf{Y}$ through its training process. You'll also create an animated GIF of the training which you'll need to submit for the homework!Through this, we'll explore our first creative application: painting an image. This network is just meant to demonstrate how easily networks can be scaled to more complicated tasks without much modification. It is also meant to get you thinking about neural networks as building blocks that can be reconfigured, replaced, reorganized, and get you thinking about how the inputs and outputs can be anything you can imagine. Preparing the DataWe'll follow an example that Andrej Karpathy has done in his online demonstration of "image inpainting". What we're going to do is teach the network to go from the location on an image frame to a particular color. So given any position in an image, the network will need to learn what color to paint. Let's first get an image that we'll try to teach a neural network to paint.TODO! COMPLETE THIS SECTION!
|
# First load an image
img = ...
# Be careful with the size of your image.
# Try a fairly small image to begin with,
# then come back here and try larger sizes.
img = imresize(img, (100, 100))
plt.figure(figsize=(5, 5))
plt.imshow(img)
# Make sure you save this image as "reference.png"
# and include it in your zipped submission file
# so we can tell what image you are trying to paint!
plt.imsave(fname='reference.png', arr=img)
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
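One possible way to fill in the image-loading line, using a sample picture that ships with scikit-image (already imported above as `data`); any RGB image of your own would do:

```python
# Sample RGB image from scikit-image; swap in e.g. plt.imread('myphoto.png')
# for your own picture.
img = data.astronaut()
```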
In the lecture, I showed how to aggregate the pixel locations and their colors using a loop over every pixel position. I put that code into a function `split_image` below. Feel free to experiment with other features for `xs` or `ys`.
|
def split_image(img):
# We'll first collect all the positions in the image in our list, xs
xs = []
# And the corresponding colors for each of these positions
ys = []
# Now loop over the image
for row_i in range(img.shape[0]):
for col_i in range(img.shape[1]):
# And store the inputs
xs.append([row_i, col_i])
# And outputs that the network needs to learn to predict
ys.append(img[row_i, col_i])
# we'll convert our lists to arrays
xs = np.array(xs)
ys = np.array(ys)
return xs, ys
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Let's use this function to create the inputs (xs) and outputs (ys) to our network as the pixel locations (xs) and their colors (ys):
|
xs, ys = split_image(img)
# and print the shapes
xs.shape, ys.shape
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Also remember, we should normalize our input values!TODO! COMPLETE THIS SECTION!
|
# Normalize the input (xs) using its mean and standard deviation
xs = ...
# Just to make sure you have normalized it correctly:
print(np.min(xs), np.max(xs))
assert(np.min(xs) > -3.0 and np.max(xs) < 3.0)
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Similarly for the output:
|
print(np.min(ys), np.max(ys))
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
We'll normalize the output using a simpler normalization method, since we know the values range from 0-255:
|
ys = ys / 255.0
print(np.min(ys), np.max(ys))
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Scaling the image values like this has the advantage that it is still interpretable as an image, unlike if we have negative values.What we're going to do is use regression to predict the value of a pixel given its (row, col) position. So the input to our network is `X = (row, col)` value. And the output of the network is `Y = (r, g, b)`.We can get our original image back by reshaping the colors back into the original image shape. This works because the `ys` are still in order:
|
plt.imshow(ys.reshape(img.shape))
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
But when we give inputs of (row, col) to our network, it won't know what order they are, because we will randomize them. So it will have to *learn* what color value should be output for any given (row, col).Create 2 placeholders of `dtype` `tf.float32`: one for the input of the network, a `None x 2` dimension placeholder called $\textbf{X}$, and another for the true output of the network, a `None x 3` dimension placeholder called $\textbf{Y}$.TODO! COMPLETE THIS SECTION!
|
# Let's reset the graph:
tf.reset_default_graph()
# Create a placeholder of None x 2 dimensions and dtype tf.float32
# This will be the input to the network which takes the row/col
X = tf.placeholder(...
# Create the placeholder, Y, with 3 output dimensions instead of 2.
# This will be the output of the network, the R, G, B values.
Y = tf.placeholder(...
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
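A possible completion for the two placeholders:

```python
X = tf.placeholder(dtype=tf.float32, shape=[None, 2], name='X')   # row, col
Y = tf.placeholder(dtype=tf.float32, shape=[None, 3], name='Y')   # r, g, b
```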
Now create a deep neural network that takes your network input $\textbf{X}$ of 2 neurons, multiplies it by a linear and non-linear transformation which makes its shape [None, 20], meaning it will have 20 output neurons. Then repeat the same process again to give you 20 neurons again, and then again and again until you've done 6 layers of 20 neurons. Then finally one last layer which will output 3 neurons, your predicted output, which I've been denoting mathematically as $\hat{\textbf{Y}}$, for a total of 6 hidden layers, or 8 layers total including the input and output layers. Mathematically, we'll be creating a deep neural network that looks just like the previous fully connected layer we've created, but with a few more connections. So recall the first layer's connection is:\begin{align}\textbf{H}_1=\phi(\textbf{X}\textbf{W}_1 + \textbf{b}_1) \\\end{align}So the next layer will take that output, and connect it up again:\begin{align}\textbf{H}_2=\phi(\textbf{H}_1\textbf{W}_2 + \textbf{b}_2) \\\end{align}And same for every other layer:\begin{align}\textbf{H}_3=\phi(\textbf{H}_2\textbf{W}_3 + \textbf{b}_3) \\\textbf{H}_4=\phi(\textbf{H}_3\textbf{W}_4 + \textbf{b}_4) \\\textbf{H}_5=\phi(\textbf{H}_4\textbf{W}_5 + \textbf{b}_5) \\\textbf{H}_6=\phi(\textbf{H}_5\textbf{W}_6 + \textbf{b}_6) \\\end{align}Including the very last layer, which will be the prediction of the network:\begin{align}\hat{\textbf{Y}}=\phi(\textbf{H}_6\textbf{W}_7 + \textbf{b}_7)\end{align}Remember if you run into issues with variable scopes/names, that you cannot recreate a variable with the same name! Revisit the section on Variable Scopes if you get stuck with name issues.TODO! COMPLETE THIS SECTION!
|
# We'll create 6 hidden layers. Let's create a variable
# to say how many neurons we want for each of the layers
# (try 20 to begin with, then explore other values)
n_neurons = ...
# Create the first linear + nonlinear layer which will
# take the 2 input neurons and fully connects it to 20 neurons.
# Use the `utils.linear` function to do this just like before,
# but also remember to give names for each layer, such as
# "1", "2", ... "5", or "layer1", "layer2", ... "layer6".
h1, W1 = ...
# Create another one:
h2, W2 = ...
# and four more (or replace all of this with a loop if you can!):
h3, W3 = ...
h4, W4 = ...
h5, W5 = ...
h6, W6 = ...
# Now, make one last layer to make sure your network has 3 outputs:
Y_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')
assert(X.get_shape().as_list() == [None, 2])
assert(Y_pred.get_shape().as_list() == [None, 3])
assert(Y.get_shape().as_list() == [None, 3])
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
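One possible completion of the whole cell, written as a loop as the comment in it suggests; it replaces the per-layer `h1`…`h6` variables with a single running tensor, so run it instead of (not after) the cell above to avoid variable-scope name clashes. The 20 neurons per hidden layer is just a starting point:

```python
n_neurons = 20
current_input = X
for layer_i in range(1, 7):
    # Each pass adds one fully connected layer of n_neurons with a relu.
    current_input, _ = utils.linear(
        current_input, n_neurons,
        activation=tf.nn.relu,
        name='layer{}'.format(layer_i))
# Final layer maps to the 3 output color channels.
Y_pred, W7 = utils.linear(current_input, 3, activation=None, name='pred')
assert(X.get_shape().as_list() == [None, 2])
assert(Y_pred.get_shape().as_list() == [None, 3])
assert(Y.get_shape().as_list() == [None, 3])
```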
Cost FunctionNow we're going to work on creating a `cost` function. The cost should represent how much `error` there is in the network, and provide the optimizer this value to help it train the network's parameters using gradient descent and backpropagation.Let's say our error is `E`, then the cost will be:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \textbf{E}_b$$where the error is measured as, e.g.:$$\textbf{E} = \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$Don't worry if this scares you. This is mathematically expressing the same concept as: "the cost of an actual $\textbf{Y}$, and a predicted $\hat{\textbf{Y}}$ is equal to the mean across batches, of which there are $\text{B}$ total batches, of the sum of distances across $\text{C}$ color channels of every predicted output and true output". Basically, we're trying to see on average, or at least within a single minibatches average, how wrong was our prediction? We create a measure of error for every output feature by squaring the predicted output and the actual output it should have, i.e. the actual color value it should have output for a given input pixel position. By squaring it, we penalize large distances, but not so much small distances.Consider how the square function (i.e., $f(x) = x^2$) changes for a given error. If our color values range between 0-255, then a typical amount of error would be between $0$ and $128^2$. For example if my prediction was (120, 50, 167), and the color should have been (0, 100, 120), then the error for the Red channel is (120 - 0) or 120. And the Green channel is (50 - 100) or -50, and for the Blue channel, (167 - 120) = 47. When I square this result, I get: (120)^2, (-50)^2, and (47)^2. I then add all of these and that is my error, $\textbf{E}$, for this one observation. But I will have a few observations per minibatch. So I add all the error in my batch together, then divide by the number of observations in the batch, essentially finding the mean error of my batch. Let's try to see what the square in our measure of error is doing graphically.
|
error = np.linspace(0.0, 128.0**2, 100)
loss = error**2.0
plt.plot(error, loss)
plt.xlabel('error')
plt.ylabel('loss')
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
This is known as the $l_2$ (pronounced el-two) loss. It doesn't penalize small errors as much as it does large errors. This is easier to see when we compare it with another common loss, the $l_1$ (el-one) loss. It is linear in error, by taking the absolute value of the error. We'll compare the $l_1$ loss with normalized values from $0$ to $1$. So instead of having $0$ to $255$ for our RGB values, we'd have $0$ to $1$, simply by dividing our color values by $255.0$.
|
error = np.linspace(0.0, 1.0, 100)
plt.plot(error, error**2, label='l_2 loss')
plt.plot(error, np.abs(error), label='l_1 loss')
plt.xlabel('error')
plt.ylabel('loss')
plt.legend(loc='lower right')
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
So unlike the $l_2$ loss, the $l_1$ loss is really quickly upset if there is *any* error at all: as soon as error moves away from $0.0$, to $0.1$, the $l_1$ loss is $0.1$. But the $l_2$ loss is $0.1^2 = 0.01$. Having a stronger penalty on smaller errors often leads to what the literature calls "sparse" solutions, since it favors activations that try to explain as much of the data as possible, rather than a lot of activations that do a sort of good job, but when put together, do a great job of explaining the data. Don't worry about what this means if you are unfamiliar with Machine Learning. There is a lot of literature surrounding each of these loss functions that we won't have time to get into, but look them up if they interest you.During the lecture, we've seen how to create a cost function using Tensorflow. To create an $l_2$ loss function, you can for instance use tensorflow's `tf.squared_difference` or for an $l_1$ loss function, `tf.abs`. You'll need to refer to the `Y` and `Y_pred` variables only, and your resulting cost should be a single value. Try creating the $l_1$ loss to begin with, and come back here after you have trained your network, to compare the performance with an $l_2$ loss.The equation for computing cost I mentioned above is more succinctly written as, for the $l_2$ norm:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$For the $l_1$ norm, we'd have:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} \text{abs}(\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})$$Remember, to understand this equation, try to say it out loud: the $cost$ given two variables, $\textbf{Y}$, the actual output we want the network to have, and $\hat{\textbf{Y}}$ the predicted output from the network, is equal to the mean across $\text{B}$ batches of the sum, over the $\text{C}$ color channels, of the distance between the actual and predicted outputs. If you're still unsure, refer to the lecture where I've computed this, or scroll down a bit to where I've included the answer.TODO! COMPLETE THIS SECTION!
|
# first compute the error, the inner part of the summation.
# This should be the l1-norm or l2-norm of the distance
# between each color channel.
error = ...
assert(error.get_shape().as_list() == [None, 3])
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
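A possible completion of this and the next two TODO cells, using the $l_1$ error as suggested (swap `tf.abs` for `tf.squared_difference` to try the $l_2$ version):

```python
error = tf.abs(Y - Y_pred)            # per-channel distance, shape [None, 3]
sum_error = tf.reduce_sum(error, 1)   # sum over the 3 color channels, shape [None]
cost = tf.reduce_mean(sum_error)      # mean over the batch, a single scalar
```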
TODO! COMPLETE THIS SECTION!
|
# Now sum the error for each feature in Y.
# If Y is [Batch, Features], the sum should be [Batch]:
sum_error = ...
assert(sum_error.get_shape().as_list() == [None])
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
TODO! COMPLETE THIS SECTION!
|
# Finally, compute the cost, as the mean error of the batch.
# This should be a single value.
cost = ...
assert(cost.get_shape().as_list() == [])
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
We now need an `optimizer` which will take our `cost` and a `learning_rate`, which says how far along the gradient to move. This optimizer calculates all the gradients in our network with respect to the `cost` variable and updates all of the weights in our network using backpropagation. We'll then create mini-batches of our training data and run the `optimizer` using a `session`.TODO! COMPLETE THIS SECTION!
|
# Refer to the help for the function
optimizer = tf.train....minimize(cost)
# Create parameters for the number of iterations to run for (< 100)
n_iterations = ...
# And how much data is in each minibatch (< 500)
batch_size = ...
# Then create a session
sess = tf.Session()
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
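One possible completion; Adam is the optimizer the `train` function further below uses, and the parameter values stay within the suggested ranges (vary them when exploring):

```python
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
n_iterations = 50    # < 100, as suggested
batch_size = 250     # < 500, as suggested
```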
We'll now train our network! The code below should do this for you if you've setup everything else properly. Please read through this and make sure you understand each step! Note that this can take a VERY LONG time depending on the size of your image (make it < 100 x 100 pixels), the number of neurons per layer (e.g. < 30), the number of layers (e.g. < 8), and number of iterations (< 1000). Welcome to Deep Learning :)
|
# Initialize all your variables and run the operation with your session
sess.run(tf.initialize_all_variables())
# Optimize over a few iterations, each time following the gradient
# a little at a time
imgs = []
costs = []
gif_step = n_iterations // 10
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
training_cost = sess.run(
[cost, optimizer],
feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})[0]
    # Also, every gif_step iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
img = np.clip(ys_pred.reshape(img.shape), 0, 1)
imgs.append(img)
# Plot the cost over time
fig, ax = plt.subplots(1, 2)
ax[0].plot(costs)
ax[0].set_xlabel('Iteration')
ax[0].set_ylabel('Cost')
ax[1].imshow(img)
fig.suptitle('Iteration {}'.format(it_i))
plt.show()
# Save the images as a GIF
_ = gif.build_gif(imgs, saveto='single.gif', show_gif=False)
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Let's now display the GIF we've just created:
|
ipyd.Image(url='single.gif?{}'.format(np.random.rand()),
height=500, width=500)
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
ExploreGo back over the previous cells and exploring changing different parameters of the network. I would suggest first trying to change the `learning_rate` parameter to different values and see how the cost curve changes. What do you notice? Try exponents of $10$, e.g. $10^1$, $10^2$, $10^3$... and so on. Also try changing the `batch_size`: $50, 100, 200, 500, ...$ How does it effect how the cost changes over time?Be sure to explore other manipulations of the network, such as changing the loss function to $l_2$ or $l_1$. How does it change the resulting learning? Also try changing the activation functions, the number of layers/neurons, different optimizers, and anything else that you may think of, and try to get a basic understanding on this toy problem of how it effects the network's training. Also try comparing creating a fairly shallow/wide net (e.g. 1-2 layers with many neurons, e.g. > 100), versus a deep/narrow net (e.g. 6-20 layers with fewer neurons, e.g. < 20). What do you notice? A Note on CrossvalidationThe cost curve plotted above is only showing the cost for our "training" dataset. Ideally, we should split our dataset into what are called "train", "validation", and "test" sets. This is done by taking random subsets of the entire dataset. For instance, we partition our dataset by saying we'll only use 80% of it for training, 10% for validation, and the last 10% for testing. Then when training as above, you would only use the 80% of the data you had partitioned, and then monitor accuracy on both the data you have used to train, but also that new 10% of unseen validation data. This gives you a sense of how "general" your network is. If it is performing just as well on that 10% of data, then you know it is doing a good job. Finally, once you are done training, you would test one last time on your "test" dataset. Ideally, you'd do this a number of times, so that every part of the dataset had a chance to be the test set. This would also give you a measure of the variance of the accuracy on the final test. If it changes a lot, you know something is wrong. If it remains fairly stable, then you know that it is a good representation of the model's accuracy on unseen data.We didn't get a chance to cover this in class, as it is less useful for exploring creative applications, though it is very useful to know and to use in practice, as it avoids overfitting/overgeneralizing your network to all of the data. Feel free to explore how to do this on the application above! Part Three - Learning More than One Image InstructionsWe're now going to make use of our Dataset from Session 1 and apply what we've just learned to try and paint every single image in our dataset. How would you guess is the best way to approach this? We could for instance feed in every possible image by having multiple row, col -> r, g, b values. So for any given row, col, we'd have 100 possible r, g, b values. This likely won't work very well as there are many possible values a pixel could take, not just one. What if we also tell the network *which* image's row and column we wanted painted? We're going to try and see how that does.You can execute all of the cells below unchanged to see how this works with the first 100 images of the celeb dataset. But you should replace the images with your own dataset, and vary the parameters of the network to get the best results!I've placed the same code for running the previous algorithm into two functions, `build_model` and `train`. 
You can directly call the function `train` with a 4-d array of images shaped as N x H x W x C, and it will collect all of the points of every image and try to predict the output colors of those pixels, just like before. The only difference now is that you are able to try this with a few images at a time. There are a few ways we could have tried to handle multiple images. The way I've shown in the `train` function is to include an additional input neuron for *which* image it is. So as well as receiving the row and column, the network will also receive as input which image it is as a number. This should help the network to better distinguish the patterns it uses, as it has knowledge that helps it separate its process based on which image is fed as input.
|
def build_model(xs, ys, n_neurons, n_layers, activation_fn,
final_activation_fn, cost_type):
xs = np.asarray(xs)
ys = np.asarray(ys)
if xs.ndim != 2:
raise ValueError(
'xs should be an n_observations x n_features, ' +
'or a 2-dimensional array.')
if ys.ndim != 2:
raise ValueError(
'ys should be an n_observations x n_features, ' +
'or a 2-dimensional array.')
n_xs = xs.shape[1]
n_ys = ys.shape[1]
X = tf.placeholder(name='X', shape=[None, n_xs],
dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, n_ys],
dtype=tf.float32)
current_input = X
for layer_i in range(n_layers):
current_input = utils.linear(
current_input, n_neurons,
activation=activation_fn,
name='layer{}'.format(layer_i))[0]
Y_pred = utils.linear(
current_input, n_ys,
activation=final_activation_fn,
name='pred')[0]
if cost_type == 'l1_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.abs(Y - Y_pred), 1))
elif cost_type == 'l2_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.squared_difference(Y, Y_pred), 1))
else:
raise ValueError(
'Unknown cost_type: {}. '.format(
cost_type) + 'Use only "l1_norm" or "l2_norm"')
return {'X': X, 'Y': Y, 'Y_pred': Y_pred, 'cost': cost}
def train(imgs,
learning_rate=0.0001,
batch_size=200,
n_iterations=10,
gif_step=2,
n_neurons=30,
n_layers=10,
activation_fn=tf.nn.relu,
final_activation_fn=tf.nn.tanh,
cost_type='l2_norm'):
N, H, W, C = imgs.shape
all_xs, all_ys = [], []
for img_i, img in enumerate(imgs):
xs, ys = split_image(img)
all_xs.append(np.c_[xs, np.repeat(img_i, [xs.shape[0]])])
all_ys.append(ys)
xs = np.array(all_xs).reshape(-1, 3)
xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)
ys = np.array(all_ys).reshape(-1, 3)
ys = ys / 127.5 - 1
g = tf.Graph()
with tf.Session(graph=g) as sess:
model = build_model(xs, ys, n_neurons, n_layers,
activation_fn, final_activation_fn,
cost_type)
optimizer = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(model['cost'])
sess.run(tf.initialize_all_variables())
gifs = []
costs = []
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
training_cost = 0
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size:
(batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
cost = sess.run(
[model['cost'], optimizer],
feed_dict={model['X']: xs[idxs_i],
model['Y']: ys[idxs_i]})[0]
training_cost += cost
print('iteration {}/{}: cost {}'.format(
it_i + 1, n_iterations, training_cost / n_batches))
            # Also, every gif_step iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = model['Y_pred'].eval(
feed_dict={model['X']: xs}, session=sess)
img = ys_pred.reshape(imgs.shape)
gifs.append(img)
return gifs
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
CodeBelow, I've shown code for loading the first 100 celeb files. Run through the next few cells to see how this works with the celeb dataset, and then come back here and replace the `imgs` variable with your own set of images. For instance, you can try your entire sorted dataset from Session 1 as an N x H x W x C array. Explore!TODO! COMPLETE THIS SECTION!
|
celeb_imgs = utils.get_celeb_imgs()
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(celeb_imgs).astype(np.uint8))
# It doesn't have to be 100 images, explore!
imgs = np.array(celeb_imgs).copy()
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Explore changing the parameters of the `train` function and your own dataset of images. Note, you do not have to use the dataset from the last assignment! Explore different numbers of images, whatever you prefer.TODO! COMPLETE THIS SECTION!
|
# Change the parameters of the train function and
# explore changing the dataset
gifs = train(imgs=imgs)
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|
Now we'll create a gif out of the training process. Be sure to call this 'multiple.gif' for your homework submission:
|
montage_gifs = [np.clip(utils.montage(
(m * 127.5) + 127.5), 0, 255).astype(np.uint8)
for m in gifs]
_ = gif.build_gif(montage_gifs, saveto='multiple.gif')
|
_____no_output_____
|
Apache-2.0
|
session-2/session-2.ipynb
|
takitsuba/kadenze_cadl
|